Live Streaming over USB on Ubuntu and Linux, NVIDIA Jetson

Hi Craig,

Just to update you on some progress (today was the first day I had time to look at the camera).

I updated the firmware and got the gst_viewer code to run out of the box on an x86 machine with Ubuntu 20.04. The loopback didn’t work (I’m not sure why).

Next I ran the gst_viewer sample code on the Nitrogen6x board running Ubuntu 18.04 and got no feed, with the following error:

start, hit any key to stop
[INFO] bitstreamMode 1, chromaInterleave 0, mapType 0, tiled2LinearEnable 0
Error: Internal data stream error.
XIO: fatal IO error 22 (Invalid argument) on X server “:0.0”
after 8 requests (8 known processed) with 0 events remaining.

Then I set the resolution in the source code to THETAUVC_MODE_FHD_2997 as you had suggested and I got a streamed feed. SUCCESS!!! ...but the latency was about 2 seconds. After 90 seconds of feed, the latency was up to 10 seconds. Perhaps there is some way in the code to fix this. A 2 second lag would likely not kill us in our application, but it would have to be fairly static.

The loopback did not work: "Could not open device /dev/video1 for reading and writing." I am unsure what’s going on here. The kernel module was loaded, etc. I am uncertain how to be sure the correct /dev/videoX is being selected. Not sure how to proceed here or if it’s worth it.
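One way to narrow this down is to confirm which /dev/videoX node v4l2loopback actually created. `v4l2-ctl --list-devices` groups nodes by driver, and the loopback entry is labelled "Dummy video device" by default. A minimal sketch of picking out the loopback node (the device listing below is a hypothetical example, not output from your board):

```shell
# Hypothetical `v4l2-ctl --list-devices` output: a webcam on video0 and
# the v4l2loopback node on video1.
sample='Integrated Camera (usb-0000:00:14.0-8):
    /dev/video0

Dummy video device (0x0000) (platform:v4l2loopback-000):
    /dev/video1'

# Take the device node on the line after the v4l2loopback entry
loopback_dev=$(printf '%s\n' "$sample" | grep -A1 v4l2loopback | grep -o '/dev/video[0-9]*')
echo "$loopback_dev"
```

On a real system you would pipe `v4l2-ctl --list-devices` itself into the same grep, then point the v4l2sink device at whatever node comes back.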

I tried the libptp examples and was able to set state parameters on the camera, capture images and copy them off the camera. In some settings this might even work for us…I will have to do some more testing.

I could not try streaming over Wi-Fi, because I could not load the RTSP plug-in onto the THETA V; that requires a Windows or Mac box and I don’t have one of those here at the moment.

So, some success. If there is a way to stabilize the latency in the gst feed, that would be great. If there were a way to speed up the still image capture/transfer, that would be great too. If you have ideas about the loopback, let me know.



I can see some artifacts in the stitching which are not fantastic. Is there a way to re-run some calibration for this? Or is this unavoidable? Or is this unit defective?

The source code for the Science Arts RTSP plug-in is below.

You can compile it on a Linux machine with Android Studio and install it with adb. As that may be a bit of a hassle, you may want to wait until you get access to a Windows or Mac. An English translation of the README with additional information is available here.

If the feed is showing a 2 second latency that keeps increasing, there might be a QoS issue.

Can you let us know if you have qos=false on the pipeline?

if (strcmp(cmd_name, "gst_loopback") == 0)
    pipe_proc = "decodebin ! autovideoconvert ! "
        "video/x-raw,format=I420 ! identity drop-allocation=true !"
        "v4l2sink device=/dev/video0 qos=false sync=false";

The video device is specified on line 190 in the example below. Change /dev/video1 to /dev/video0 if you only have one camera on the system.

For v4l2loopback, can you see the kernel module?

$ sudo modprobe v4l2loopback
[sudo] password for craig: 
$ lsmod
Module                  Size  Used by
v4l2loopback           40960  0
btrfs                1253376  0
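
As a side note, v4l2loopback accepts module options that can matter for this use case. A hedged sketch (these option values are common suggestions, not settings verified on your board): video_nr pins the node number, and exclusive_caps=1 is often needed for consumers such as OpenCV.

```shell
# Reload v4l2loopback with explicit options (requires root):
#   video_nr=0        pin the loopback node to /dev/video0
#   exclusive_caps=1  only advertise capture capabilities once a producer
#                     has attached, which some readers (e.g. OpenCV) expect
sudo modprobe -r v4l2loopback
sudo modprobe v4l2loopback video_nr=0 exclusive_caps=1
```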

Does lsusb show 05ca:2712 when the camera is streaming?
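To make that check concrete, here is a sketch of matching on the streaming-mode ID. The lsusb line below is a hypothetical example; on a real system you would pipe `lsusb` itself into the grep.

```shell
# 05ca:2712 is the vendor:product ID the THETA reports while live streaming.
# The sample line stands in for real `lsusb` output.
sample='Bus 001 Device 004: ID 05ca:2712 Ricoh Co., Ltd'
if printf '%s\n' "$sample" | grep -q 'ID 05ca:2712'; then
  echo "THETA detected in streaming mode"
fi
```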

The stitching is off if you place an object close to the camera in the stream. There is no way to calibrate the stitching.

With the THETA V, the fastest you can take pictures is around 4 seconds apart. If you disable stitching with a plug-in, you can get it down to 1 or 2 seconds. The image will be dual-fisheye.

With modest effort, you can test the MotionJPEG stream using the Amelia Viewer by Jake Kenin.

GitHub - codetricity/amelia_viewer: Jake Kenin’s viewer for RICOH THETA V LiveView to browser or headset using Electron. It is possible that the .deb binary still works: Releases · codetricity/amelia_viewer · GitHub

With qos=false, it’s possible things will start working.

Hi Craig,

You said “If you disable stitching with a plug-in you can get it down to 1 or 2 seconds. The image will be dual-fisheye.”

Is there a way to do this and pull the images off over USB with ptpcam or some other command, at 2 Hz or so? I saw someone asking about this in a discussion from 2018.

Thanks for your help.




I believe that in order to get the fastest speed, you need to use a plug-in and disable the stitching in the plug-in.

When you use the plug-in, you cannot use the USB API.

The plug-in itself does not provide an external WebAPI to allow you to download the image.

As the plug-in is an Android app, you can write your own HTTP server inside the plug-in and make the image available or push it to another device. No one has done this as far as I know. It is just a theoretical possibility.

If you can find a Windows/Mac machine you can try running the RTSP plug-in inside the camera and then have the Nitrogen6x board pick it up with Wi-Fi. You could then use the Janus Gateway to get it to another computer if you want a human to view it.

This may not be the end solution, but at least you can get the stream onto your device for processing and testing.

I know of one company that is streaming with a USB cable and a Raspberry Pi inside of a remote drone. I will talk to @jcasman about asking them if they had to modify the Linux kernel. I think there might be a patch to the Linux kernel for more h.264 hardware driver support. It’s possible that the Nitrogen6x board needs a specific driver. The RPi solution is not directly relevant to you, but it could serve as a baseline process for getting the driver working on a board that isn’t documented on this forum.


Community member Shun Yamashita of fulldepth is using an RPi4 to stream the THETA to a Windows PC. I updated the Linux streaming community document with additional information. I’ll put the relevant pieces here for convenience.


  • The Raspberry Pi is inside a drone and connected to the Z1 with a USB cable. Raspberry Pi OS is using the GStreamer sample code and driver from this site
  • The RPi4 transmits the video stream from the drone to a Windows PC on a boat. I’m not sure what the connection is between the RPi4 and the Windows PC, whether it is a physical cable or some type of radio transmitter
  • The RPi4 sends the video stream as UDP/RTP


This is the modification to the pipeline in gst/gst_viewer.c:

src.pipeline = gst_parse_launch(
        " appsrc name=ap ! queue ! h264parse ! queue"
        " ! rtph264pay ! udpsink host= port=9000",


The RPi4 has 4GB of RAM.
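The receiving side isn’t shown in the thread. Here is a hypothetical matching receiver for the pipeline above, assuming the sender fills in `host=` with the PC’s address and keeps port 9000 (payload=96 is rtph264pay’s default):

```shell
# Receive the RTP/H.264 stream sent by the modified gst_viewer pipeline.
# The caps string tells udpsrc how to interpret the incoming RTP payload.
gst-launch-1.0 udpsrc port=9000 \
    caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" \
  ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink sync=false
```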

Update on Long-term Streaming at 4K

I have a long update on continuous live streaming in the community document, available here.

To make the information easier to find for the casual reader, I will also put some key points of the update here.

Long-Term Streaming

With firmware 1.60.1 or newer, the Z1 can stream indefinitely. The battery will charge while streaming at 4K. To stream indefinitely, you need the proper equipment.

The USB port supplying charge to the RICOH THETA needs to supply approximately 900mA of current.

In my tests, most USB 3.0, 3.1, and 3.2 ports on Linux computers did not supply the required electrical current.

If your computer does not supply 900mA while streaming data, you will need to use a powered hub with the proper specification.

There are different standards within Battery Charging 1.2 of the USB electrical specification. You will need BC 1.2 CDP to provide 1.5A of data-plus-charge capacity. The THETA Z1 will only consume 0.9A of the 1.5A capacity.

Battery Charging 1.2 table

It’s likely that USB Type-C (not USB 3.0 with a USB-C connector) and USB PD can also deliver over 900mA, but I did not test these. Note that my Asus Zephyrus laptop has USB-C connector ports directly on the laptop body, but these physical ports comply with the USB 3.2 specification, not USB-C. USB 3.2 does not require USB Power Delivery.

From the table below, it would appear that USB 3.2 Gen 2 should deliver the required electrical current. However, I wasn’t able to keep the Z1 charged indefinitely at 4K with my ROG Zephyrus G14 GA401.

Here are the specifications for my laptop.

Zephyrus laptop specifications

Long-term Streaming Platform Tests

  • Acer Predator 300 laptop with onboard USB 3.1 ports: success, battery charged while streaming
  • Jetson Nano with external powered USB hub with BC 1.2: success, battery charged
  • Jetson Nano using onboard USB 3 ports: fail, battery drained
  • Desktop computer with Intel X99 Wellsburg motherboard and USB 3.1 ports: fail, battery drained
  • Asus Zephyrus laptop with USB 3.2 ports: fail, battery drained
  • Desktop computer with Intel B85 motherboard and USB 3.0 ports: fail, battery drained

Hi Craig,

Once we create a virtual device ourselves (i.e., modprobe v4l2loopback; ref: GitHub - umlaeute/v4l2loopback: v4l2-loopback device) and then run gst_loopback, it works.

So we were wondering why THETA’s gst_loopback depends on the GPU model. Since gst_viewer has been working, we thought THETA’s loopback could work through a GStreamer pipeline (which is what lines 188-191 of gst_viewer.c describe).

Hope to hear your expertise (since we are getting new GPU models, we hope it will still work :sweat_smile:)

– Luke

PS: Our testing GPU is NVIDIA-SMI 440.118.02 | Driver Version: 440.118.02 | CUDA Version: 10.2
Volatile GPU-Util is 2%.

I’m not sure why the libuvc-theta-sample has problems with some video cards when used with v4l2loopback.

I’ve heard from the developer of the application that there are problems with support of some platforms when using v4l2loopback.

Do you get any error to the console when you run gst_loopback?

I’ve been able to clear up many of the problems I encountered by editing the gstreamer pipeline in the C code. The README that the developer wrote provides the hint that some of the problems could be resolved with the pipeline.

I had some success on x86 with an old GTX 950 GPU by using the nvdec and glimagesink elements described in this article.

Optimization - RICOH THETA Development on Linux

I don’t know if it would help, but it is something different to try.

The document above has this pipeline as a working example.

    if (strcmp(cmd_name, "gst_loopback") == 0)
    // original pipeline
        // pipe_proc = "decodebin ! autovideoconvert ! "
        //  "video/x-raw,format=I420 ! identity drop-allocation=true !"
        //  "v4l2sink device=/dev/video2 qos=false sync=false";
        //modified pipeline below
        pipe_proc = "nvdec ! gldownload ! videoconvert n-thread=0 ! "
            "video/x-raw,format=I420 ! identity drop-allocation=true !"
            "v4l2sink device=/dev/video2 qos=false sync=false";     

Hi, may I ask a question about /dev/video0?

I’m using a THETA Z1 on a Jetson Xavier NX with a USB cable.
After following the tutorials, the gst_viewer sample streams well.

However, I failed to make the camera work as /dev/video0.
I can see the camera listed in devices as video0:
crw-rw----+ 1 root video 81, 0 Mar 15 14:14 /dev/video0

Python failed to detect the device: “VIDEOIO ERROR: V4L2: Could not obtain specifics of capture window. VIDEOIO ERROR: V4L: can’t open camera by index 0”

And gst_loopback is not working; the error info is “Segmentation fault (core dumped)”.

Is there any information about this? I think it may be a Xavier NX issue.


Confirm the v4l2loopback kernel module is loaded

With lsmod, do you see v4l2loopback?

Module                  Size  Used by
bnep                   16562  2
zram                   26166  4
overlay                48691  0
spidev                 13282  0
v4l2loopback           37383  0

Specify /dev/video0 in gst_viewer source

Did you change the v4l2sink device to /dev/video0, since you only have one camera attached?

For Xavier

On the Jetson Xavier, automatic plugin selection in GStreamer does not seem to work well; replacing “decodebin ! autovideosink sync=false” with “nvv4l2decoder ! nv3dsink sync=false” will solve the problem.
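If you want to script that edit, the replacement can be applied with sed. A sketch demonstrating the substitution on a throwaway copy of the string (in the real repo the string is part of the pipe_proc assignment in gst/gst_viewer.c):

```shell
# Apply the suggested Xavier change to a scratch copy of the pipeline string
tmp=$(mktemp)
echo 'decodebin ! autovideosink sync=false' > "$tmp"
sed -i 's|decodebin ! autovideosink sync=false|nvv4l2decoder ! nv3dsink sync=false|' "$tmp"
result=$(cat "$tmp")
rm -f "$tmp"
echo "$result"
```

Running the same sed against the real file (and rebuilding) is equivalent to editing it by hand.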


Please post again after you confirm the three steps above. If you still have problems, please continue to post.

Make sure the camera is turned on with the word “LIVE” on the front of the camera. You must press the physical power button of the camera. It will not turn on automatically.


Hi, Craig,

Really appreciate your reply,

Yes, I finished all of the above before I asked the question.
The THETA Z1 is the only camera on the Jetson Xavier NX.
v4l2loopback is in the lsmod list; /dev/video0 is specified in gst_viewer and the decoder is specified as well, and gst_viewer streams perfectly.

remotes/origin/HEAD → origin/theta_uvc
Already on ‘theta_uvc’
Your branch is up to date with ‘origin/theta_uvc’.

However, I’m still not able to access the camera via gst_loopback or Python cv2.VideoCapture(0):
“Segmentation fault (core dumped)” and “VIDEOIO ERROR: V4L: can’t open camera by index 0”

I guess there must be some part I am missing.
I also tested my THETA V, and the problem is exactly the same.
Only gst_viewer works with the live stream.

Thanks and best wishes.

Do you have any output from v4l2-ctl --list-formats-ext --device /dev/video0 ?

v4l2-ctl --list-formats-ext --device  /dev/video0
    Type: Video Capture

    [0]: 'YU12' (Planar YUV 4:2:0)
        Size: Discrete 1920x960
            Interval: Discrete 0.033s (30.000 fps)

Please list the version of Linux you are using on the Xavier platform.

It should work.


Hi, Craig,
Thanks for your kind help; it worked.
I had previously changed the decoder for gst_loopback to be the same as in gst_viewer. That is not necessary, so I changed it back.

Also, video capture with H.264 works well on the Xavier NX;
however, the raw file (lossless Huffman) loses a lot of frames.



Thank you for reporting back on your success. It helps everyone.

Community member @snafu666 is also using Huffman encoding. You may be able to exchange information with him.

When losing frames, are you saving to file or displaying to screen?

Live Streaming over USB on Ubuntu and Linux, NVIDIA Jetson - #143 by snafu666

If saving to file, snafu666 indicated that the bandwidth of the storage might be a factor.

He’s using a Jetson Xavier (I think the NX model).

Live Streaming over USB on Ubuntu and Linux, NVIDIA Jetson - #122 by snafu666

This is the pipeline posted by snafu666 in the link above:

gst-launch-1.0 v4l2src device=/dev/video99 ! video/x-raw,framerate=30/1 \
! videoconvert \
! videoscale \
! avenc_huffyuv \
! avimux \
! filesink location=raw.hfyu

I do not know if he was getting frame loss. I did a similar test on x86 and I don’t recall significant frame loss.

I’m curious: what is the use case for lossless Huffman? Do you need this for distance calculation or object detection?


Hi, Craig,

Thanks for your reply.

I tried to grab from the THETA Z1 in 4K and save it to a file with lossless Huffman; the frame loss is significant here on my Xavier NX.

However, H.264 doesn’t provide enough quality (although with no frame loss). My main usage is face detection in 4K, possibly in a multiple-person environment with a bit of distance from the camera.

I finally used the following H.265 encode pipeline to save to a file, which provides both good quality in 4K and fast speed (no frame loss):

gst-launch-1.0 v4l2src num-buffers=600 device=/dev/video0 ! video/x-raw ! nvvidconv ! nvv4l2h265enc ! h265parse ! qtmux ! filesink location=test.mp4 -e



Thank you for this explanation of H.265 versus H.264 versus lossless Huffman. I did not know the H.265 could provide higher quality than H.264.

I added your explanation and pipeline to the community document to make it easier to find. I attributed it to Nakamura_Lab.

I’m assuming that you’re processing the stream through something like OpenCV and that is why you need to use v4l2loopback to send the video stream to /dev/video0. If people are just saving the video stream to file without processing, they can modify the C code and put the gstreamer pipeline directly in the C code to avoid the overhead of v4l2loopback. It’s also possible to write your own application using thetauvc.c and thetauvc.h using the gst_viewer.c example as a base and access the camera and the OpenCV libraries directly.


I haven’t done any raw captures recently. Quite honestly, my use of raw Huffman is strictly to reduce bandwidth on the disk interface. It’s not so much of an issue going to the NVMe SSD. Also, one consideration is that raw does consume a lot of disk space; Huffman encoding helps, to a point. Frame drops weren’t a problem for me.


Thanks @craig,
The information you’ve given me is always quite helpful. I want to use video0 because the previous project was written in Python, and I just want to continue using it for convenience.

@snafu666, thanks for your information. I wrote the video to an SD card; that may be the main reason for the frame loss.

craig: “I did not know the H.265 could provide higher quality than H.264.”

Indeed, I’m not quite sure about the theory as a beginner. Maybe they are not so different.

Here is the screenshot comparison from my result.
The H.264 pipeline I’m using is exactly snafu666’s.
I’m not quite sure why the blocking artifacts are so obvious with H.264.

H.264 (snapshot of partial view in 4K)

H.265 (snapshot of partial view in 4K)

PS: The next post will be the 200th one :slight_smile:

For libuvc-theta-sample, I can run gst_viewer successfully on Ubuntu 18.04. But when I run gst_loopback, the terminal gives me the following error:
start, hit any key to stop
Error: Cannot identify device ‘/dev/video1’.

The following is my computer’s information:
My OS: Ubuntu 18.04

Does anyone know how to solve this problem?

Assuming you have close to the highest-quality microSD card you can get, it might be worth trying an NVMe SSD. Although I haven’t tried this myself, I read on the NVIDIA forums that it is preferable to save video files to an external SSD. As snafu666 indicated, the frame drops might be due to bandwidth problems with the microSD card.
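One cheap way to test the storage-bandwidth theory before buying anything is to measure sequential write speed on the current recording target. A rough sketch (the 64 MB size is arbitrary; run it once on the microSD mount and once on any SSD you can attach, then compare the MB/s figures):

```shell
# Rough write-bandwidth check for the recording target. conv=fsync forces
# the data to storage before dd reports, so the number reflects the device
# rather than the page cache. 4K lossless Huffman needs far more sustained
# bandwidth than H.264/H.265, so a slow card shows up quickly here.
target=./ddtest.bin           # change to a path on the device under test
dd if=/dev/zero of="$target" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$target"
```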

If I had the same problem as you, I would rig up an NVMe SSD as it is a cheap and easy test. I don’t know if @snafu666 was booting the Linux OS from the NVMe SSD or if he used it only as the media storage device on the Xavier NX.

The NVMe storage seems like good bang for the yen.
