Live Streaming over USB on Ubuntu and Linux, NVIDIA Jetson

Update on power supply to Z1 from USB hub.

A community member let me know that his tests showed that a Linux USB 3 port may be limited to 400 mA to 450 mA when live streaming with the Z1. This would result in a slow decrease in the Z1 battery if you're streaming at 4K. This power-draw limitation may be specific to the Z1 on a Linux USB 3 port.

The solution is to use a powered hub.

I’m still trying to test this myself.

Thank you Craig. I use the THETA V for coloring the point cloud of a Velodyne VLP-16. So, I need to get pictures from the THETA V every few milliseconds and save them to a directory as JPEG or any other format.
Can gstreamer do this job?

Hi, I am not that experienced with gstreamer myself. I think other people on this forum have applications similar to yours. I hope they can help test the gstreamer plugin for the THETA and post recipes or examples of use.

This pipeline was contributed by @snafu666, using lossless Huffman encoding on an NVIDIA Jetson with the loopback. The input portion would need to be modified.

gst-launch-1.0 v4l2src device=/dev/video99 ! video/x-raw,framerate=30/1 \
! videoconvert \
! videoscale \
! avenc_huffyuv \
! avimux \
! filesink location=raw.hfyu
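
To address the earlier question about saving frames to a directory as JPEG, a variant along the following lines may work. This is an untested sketch: the loopback device number, frame rate, and file name pattern are assumptions to adjust for your setup.

gst-launch-1.0 v4l2src device=/dev/video99 ! video/x-raw,framerate=30/1 \
! videoconvert \
! jpegenc \
! multifilesink location=frame%05d.jpg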

I think we can use the MTP protocol to get photos from the THETA V. This may be the easy way to solve the problem.


Thanks for posting about libmtp. I hadn’t seen that project before. I was previously using libptp2 and libgphoto2.
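
If PTP access works for your use case, the gphoto2 command-line tool (built on libgphoto2) may be the quickest way to pull files off the camera. I have not verified this exact sequence with the THETA V, so treat it as a sketch:

   # list cameras detected over USB
   gphoto2 --auto-detect

   # list and download the files stored on the camera
   gphoto2 --list-files
   gphoto2 --get-all-files

   # trigger a still capture and download the resulting image
   gphoto2 --capture-image-and-download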

Update on Power Supply for Live Streaming on NVIDIA Jetson and Linux x86

Workarounds for the low-current problem on Linux USB 3 ports.

If the UVC device driver (uvcvideo) is loaded as a kernel module and you don't use any UVC device other than the THETA, disable the UVC driver by adding it to the driver blacklist.
e.g.

   sudo sh -c "echo blacklist uvcvideo >> /etc/modprobe.d/blacklist.conf"

or
Before connecting the THETA, disable the autosuspend feature of the USB driver by writing -1 to /sys/module/usbcore/parameters/autosuspend.

e.g.

  sudo sh -c "echo -1 > /sys/module/usbcore/parameters/autosuspend"
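
Note that the echo above does not survive a reboot. One way to make it permanent on a GRUB-based Ubuntu system is to pass the same value as a kernel parameter (a sketch; "quiet splash" stands in for whatever is already in your file):

   # /etc/default/grub
   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash usbcore.autosuspend=-1"

   # then regenerate the GRUB configuration and reboot
   sudo update-grub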

A community member reports that an Intel PC and the Jetson Nano work fine.

Hi everyone, I have a critical issue when trying to launch v4l2loopback with two THETA V cameras (on my Jetson AGX Xavier).
I built the kernel and instantiated two video devices. When I launch the first camera, everything is fine, but the second one only works about one time in five; it shows "can't open theta". I have to unplug and replug the cameras several times to make them work in parallel. Any ideas why this happens?

Thank you in advance.

This is difficult for most people to test because they need two cameras and have to modify the C code. I haven’t tested this yet. However, I have seen a demo of it working. I don’t have a solution yet.

I noticed that there is a new commit to gstthetauvc


It's possible; it handles multiple devices.
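
To see whether the build you have actually exposes a way to pick a specific camera, gst-inspect-1.0 lists the element's properties. The element name thetauvcsrc comes from the plugin; I have not confirmed the device-selection property names myself, so check the output:

   # after building and installing gstthetauvc
   gst-inspect-1.0 thetauvcsrc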


Hi Craig,
I have a new THETA Z1 and I am trying to stream video from it to my laptop using Python OpenCV, and integrate it with ROS later. So far I am able to stream the live video using gst_viewer. However, this doesn't meet our requirement. Could you please let us know if it is possible to stream directly without using additional hardware?

I have followed these instructions: ros_theta_z and ros_theta_s.

$ v4l2-ctl --list-devices
Dummy video device (0x0000) (platform:v4l2loopback-000):
/dev/video1

BisonCam, NB Pro: BisonCam, NB (usb-0000:00:14.0-6):
/dev/video0

As I am not able to see the THETA Z1 in my device list, could you please help me get it to show up in the video device list? Thanks in advance!

What version of Ubuntu/ROS are you using?

It’s possible to stream the Z1 to Linux x86 and NVIDIA Jetson without additional hardware.

$ v4l2-ctl --list-formats-ext --device  /dev/video1
ioctl: VIDIOC_ENUM_FMT
    Type: Video Capture

    [0]: 'YU12' (Planar YUV 4:2:0)
        Size: Discrete 1920x960
            Interval: Discrete 0.033s (30.000 fps)

NOTE: your resolution will be in 4K. I have mine set to 2K for testing.
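
With this approach the camera is read over libusb rather than through the kernel's uvcvideo driver, so the THETA itself will not appear in v4l2-ctl --list-devices; what shows up is the v4l2loopback "Dummy video device" once gst_loopback is running. A rough sketch of the usual sequence (the device number is an example; gst_loopback writes to whatever device is set in its source):

   # load the loopback kernel module
   sudo modprobe v4l2loopback

   # start piping the THETA stream into the loopback device
   ./gst_loopback

   # in another terminal, the camera now appears as the loopback device
   v4l2-ctl --list-devices
   v4l2-ctl --list-formats-ext --device /dev/video1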

Did you go to the site below, enter an email and review the documentation on v4l2loopback?

RICOH THETA360.guide Independent developer community

If you have problems, please post again.

I tried this and changed video99 to video0. The file raw.hfyu grows very slowly, about 1.2 MB every 10 seconds. And after I use Ctrl+C to end gst-launch-1.0, I cannot open raw.hfyu.

BTW, I'm using an Intel NUC mini PC (x86, no NVIDIA GPU), as it is the only device on which I can use gst_loopback and gst_viewer. gst_viewer runs smoothly, but gst_loopback runs at less than 0.1 fps (one frame every 10 seconds).

This is so frustrating with the THETA Z1; I might use the THETA S instead.

Did the file not open with VLC on Linux?
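
Also worth trying: avimux only writes the AVI index when the pipeline receives an end-of-stream, so stopping gst-launch-1.0 without one can leave the file unplayable. Adding -e (--eos-on-shutdown) makes Ctrl+C send EOS before shutting down. A sketch with the same pipeline, using the device you changed it to:

gst-launch-1.0 -e v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1 \
! videoconvert \
! videoscale \
! avenc_huffyuv \
! avimux \
! filesink location=raw.hfyu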

In the video below, I completely purged the NVIDIA drivers from my system that has an NVIDIA graphics card in it and covered the entire process using X11 nouveau. As a further test, I used Wayland and not X11. You don’t need to use Wayland, but the build process for libuvc-theta and libuvc-theta-sample may help. I couldn’t use gst_loopback with Wayland in my first test. I don’t recommend you use Wayland at the moment.

Dear Craig

I gave up on the THETA Z1 with my NUC. Now, using my old THETA S, I can successfully output images and video using ROS + OpenCV. But there is another problem.

The video is dual-fisheye. It cannot be converted to equirectangular by the RICOH THETA basic app; the conversion gets stuck at 100% no matter how small the video is.

The images and the video have no metadata. The images can be converted, but the video cannot be opened.
Do you have any idea how I can transform this dual-fisheye video into equirectangular?

Oh, I found that changing the OpenCV encoding method from mp4v to avc1 solved the problem, and the THETA software can open the video.
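
For reference, re-encoding a file that was already written with the mp4v codec is also possible outside OpenCV. A sketch with ffmpeg (assuming ffmpeg is installed; I have not tested the result with the THETA desktop software):

   # check which codec the file was written with
   ffprobe -v error -select_streams v:0 -show_entries stream=codec_name input.mp4

   # re-encode to H.264 so software that rejects mp4v can open it
   ffmpeg -i input.mp4 -c:v libx264 -pix_fmt yuv420p output_h264.mp4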

But the problem is still the lack of metadata, which leads to a stitching error in the middle.

(The THETA is placed on its side rather than upright, for less vibration.)

Do you have any information about how I can get the metadata from the THETA S while live streaming?


nickel110 updated gstthetauvc

I have not tested it yet.
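
For anyone who wants to try it, the source element should slot into a standard pipeline roughly like this. This is a sketch using generic GStreamer elements rather than the recommended pipeline from the repository README, so check the README for your hardware first:

gst-launch-1.0 thetauvcsrc \
! queue \
! h264parse \
! decodebin \
! autovideoconvert \
! autovideosink sync=false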

Have you looked at this ROS package? I’m not sure if it has the metadata, but it may have some tools that can help you.

I am having exactly the same issue on the Jetson Nano B01. Any luck with this?

Jetson Nano B01
Ubuntu 18.04.5 LTS

My end goal is to live stream video in a browser and eventually run a TensorFlow model.

I think the original poster got it working, but was having problems saving the frames to disk.

Are you getting an error message?

What model THETA are you using, V or Z1?

I am using a Jetson Nano B01 with no problems on JetPack 4.4, which I think is Ubuntu 18.04.

Hi Craig,

Thanks for replying. I tried this on two systems and hit two different issues.

Ricoh Theta Z1 (51GB)

Laptop - Ubuntu 18.04 with built-in camera

  1. Followed the instructions to install all the required libraries

  2. When I run ./gst_viewer, it works

  3. When I run ./gst_loopback (see the note after these steps):

    ./gst_loopback
    start, hit any key to stop
    Error: Device ‘/dev/video1’ is not an output device.
    stop

  4. ls -ls /dev/vid*

     0 crw-rw----+ 1 root video 81,   0 Jun  7 11:42 /dev/video0
     0 crw-rw----+ 1 root video 81,   1 Jun  7 11:42 /dev/video1
    
     Note: this does not identify the RICOH camera
    
     Step 1: verify usb device ID
     Bus 002 Device 020: ID 05ca:2715 Ricoh Co., Ltd 
    
     Step 2: verify kernel module
    
     Module                  Size  Used by
     v4l2loopback           45056  0
    
    
     Step 3: verify device
    
     ls /dev/vide*
     /dev/video0  /dev/video1  /dev/video2
    
     added  video2
      v4l2-ctl --list-formats-ext --device  /dev/video2
     ioctl: VIDIOC_ENUM_FMT
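
Note on the "not an output device" error above: /dev/video0 and /dev/video1 appear to belong to the built-in camera on this laptop, so the v4l2loopback node ends up at /dev/video2 while gst_loopback is still writing to /dev/video1. A rough way to line the two up, assuming the libuvc-theta-sample layout (device numbers are examples):

   # reload the loopback module at a known, free node number
   sudo modprobe -r v4l2loopback
   sudo modprobe v4l2loopback video_nr=2 exclusive_caps=1 card_label="THETA"

   # gst_loopback writes to the device named in the v4l2sink element inside
   # gst/gst_viewer.c; change it to /dev/video2 and rebuild
   grep -n "v4l2sink" gst/gst_viewer.c
   make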
    

Now when I run VLC, it only shows the first frame and does not show the video.


import cv2

cap = cv2.VideoCapture(2)

# Check if the webcam is opened correctly
if not cap.isOpened():
    raise IOError("Cannot open webcam")

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
    cv2.imshow('Input', frame)

    c = cv2.waitKey(1)
    if c == 27:
        break

cap.release()
cv2.destroyAllWindows()


When I use simple OpenCV code (just to capture at index 2), I get the error below:

python3 simple.py
[ WARN:0] global /tmp/pip-req-build-1syr35c1/opencv/modules/videoio/src/cap_v4l.cpp (1004) tryIoctl VIDEOIO(V4L2:/dev/video2): select() timeout.
Traceback (most recent call last):
File “simple.py”, line 11, in
frame = cv2.resize(frame, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
cv2.error: OpenCV(4.5.1) /tmp/pip-req-build-1syr35c1/opencv/modules/imgproc/src/resize.cpp:4051: error: (-215:Assertion failed) !ssize.empty() in function ‘resize’

Second System
System 2: Jetson Nano
Ubuntu 18.04 LTS
Getting the error below:

./gst_loopback
start, hit any key to stop
Error: Cannot identify device ‘/dev/vidoe0’.
stop

Step 1: verify usb device ID
Bus 002 Device 004: ID 05ca:2715 Ricoh Co., Ltd

Step 2: verify kernel module
Module Size Used by
v4l2loopback 42767 0

Step 3: verify device
king@king-desktop:~/dev/ricoh/libuvc-theta-sample/gst$ v4l2-ctl --list-formats-ext --device /dev/video0
ioctl: VIDIOC_ENUM_FMT

./gst_loopback
start, hit any key to stop
Error: Cannot identify device ‘/dev/vidoe0’.
stop

Please let me know if you need any more information. I need to achieve the following:

  1. Live feed this into my Angular application using Flask or anything else.
  2. Take this live feed and run models on it.

Bharat