Live Streaming over USB on Ubuntu and Linux, NVIDIA Jetson

I would like to know if it is possible to synchronize live streaming from two RICOH THETA Z1 cameras on a Linux distribution. Is it possible to send an external trigger to initiate live streaming? Synchronization is important for any stereo vision application.

Thanks in advance.

You can initiate live streaming with the USB API. With a Jetson Xavier, you can run two 4K streams from the THETA.

What is needed to synchronize the streams? Do you have a working system that uses two normal webcams (not THETAs) on /dev/video0 and /dev/video1?

I am also working on some Python GStreamer projects. Is the code thetauvc.py available somewhere? I may be able to help with it on Linux.

Thanks.

Usually, in multi-camera systems, an external physical device (sometimes the camera itself) sends a physical signal to initiate a capture. This ensures that the captures occur at the same instant. Even with a USB API, I believe absolute synchronization can only be guaranteed by a real-time operating system, not by a normal Ubuntu distribution. Such synchronized data is a necessity for multi-view applications, such as finding correspondences to estimate disparity and, eventually, depth. Any slight difference in the instant of capture could affect the algorithm's output.
I was using two THETA S cameras on an Ubuntu system, where they are detected as /dev/video0 and /dev/video1. However, we were not able to achieve absolute synchronization. We tried to reduce latency and improve synchronization using a VideoCaptureAsync class in OpenCV along with ApproximateTimeSynchronizer in ROS. This still does not achieve absolute synchronization, as we have no control over when the camera initiates a capture.
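The ApproximateTimeSynchronizer approach can be illustrated with a small, self-contained sketch. This is not the ROS message_filters implementation; pair_frames and its tolerance parameter are hypothetical names. The idea is simply to match each frame from one stream with the nearest-in-time frame from the other and drop pairs whose skew is too large:

```python
# Sketch of ApproximateTimeSynchronizer-style frame pairing for two
# unsynchronized cameras. Names here are illustrative, not the ROS API:
# pair each timestamp from stream A with the nearest timestamp from
# stream B, and drop pairs whose skew exceeds a tolerance.
from bisect import bisect_left

def pair_frames(ts_a, ts_b, max_skew=0.010):
    """Pair timestamps (seconds) from two streams.

    ts_a and ts_b must be sorted ascending. Returns a list of (ta, tb)
    pairs whose absolute skew is within max_skew.
    """
    pairs = []
    for ta in ts_a:
        i = bisect_left(ts_b, ta)
        # Candidates: the neighbour on each side of the insertion point.
        candidates = ts_b[max(i - 1, 0):i + 1]
        if not candidates:
            continue
        tb = min(candidates, key=lambda t: abs(t - ta))
        if abs(tb - ta) <= max_skew:
            pairs.append((ta, tb))
    return pairs

# Two 30 fps streams offset by 4 ms pair up; a 20 ms offset does not
# survive the 10 ms tolerance.
a = [i / 30.0 for i in range(5)]
print(pair_frames(a, [t + 0.004 for t in a]))  # five pairs
print(pair_frames(a, [t + 0.020 for t in a]))  # []
```

Even with perfect pairing logic, the residual skew is bounded only by the cameras' free-running frame clocks, which is exactly why a hardware trigger or RTOS is needed for absolute synchronization.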

I do not think that the solution in this thread with libuvc-theta will be able to meet the requirements for VideoCaptureAsync.

Ideas for further searching:

  • look for anyone using libuvc (not libuvc-theta) with synchronization. Can a UVC 1.5 camera provide the synchronization signal?
  • look at another project that streams two physical Android devices into another computer. The THETA V/Z1 runs Android 7 internally. You may be able to write your own pipeline inside the camera

The V/Z1 plug-in also supports NDK. The internal OS is Android on SnapDragon ARM from Qualcomm.

Can you use two stereo cameras from another system for disparity and depth, and then use a third camera (the THETA) for the 360 view and data?

Although using three cameras (2 not-THETA synchronized, and 1 THETA) is more complex, it may be easier to implement.

It seems that you can use two cheap cameras with the Jetson Nano and either a specialized board or an Arduino for the external signal.

Is it possible to use the THETA for general surrounding information and the specialized dual-camera synced setup (not-THETA) for the detailed depth and distance estimation? The 360 view might provide your system with clues about what to focus the stereo cameras on.

The reason we are interested in specifically using RICOH cameras as a stereo setup is to use the entire 360 view provided by the camera to estimate depth. This can only be achieved with omnidirectional cameras such as the RICOH THETA. The figure below shows a 3D reconstruction that we obtained from two RICOH THETA S cameras.


We hope to improve the quality by using the higher resolution offered by the THETA Z1 and by properly synchronizing the images.
In the video you uploaded, the camera hardware has the facility to provide an external trigger and obtain very good synchronization (on the order of nanoseconds). For the RICOH camera, I do not see any such hardware facility. However, the USB API you mentioned could help with synchronization. Since it is essentially a PTP camera, we should be able to synchronize the clocks (on the order of microseconds). Right now, I have contacted another engineer who is more familiar with the protocol. If I find a solution, I will surely share it with the community.
Also, do you know if RICOH cameras use a global shutter or a rolling shutter?


There is only thetauvc.c

libuvc-theta-sample/gst/thetauvc.c at master · ricohapi/libuvc-theta-sample · GitHub

BTW, I was able to compile OpenCV 4.4 on the Nano and use OpenCV with Python accessing the theta on /dev/video0. I’m not familiar with gstreamer and this may not be what you want.

I was also able to use gst-launch from the command line with /dev/video0.

I read this blog and the technique the author used for Python was with gstreamer and opencv.

This video has some information on my initial test

https://youtu.be/At5uMIMfBQY

I’ve since improved the OpenCV Python script performance with the recompile.

The test pipeline is:

$ gst-launch-1.0 -v v4l2src ! videoconvert ! videoscale ! video/x-raw,width=1000,height=500 ! xvimagesink

This is with gst-loopback running.

Also, cuda does appear to be working with OpenCV. I have not run these tests yet, but this looks quite useful.

Now with Canny edge detection in real-time. Code sent to meetup registrants.

Update: Aug 29

I’m having some problems with the nvidia-inference demos streaming from the THETA.

A normal webcam works. I'm trying to reduce the THETA's 4K output to 2K, but can't figure out how to do this. The camera does support streaming at 2K, but I'm not sure how to force it to use the 2K stream for initial testing.

Update Aug 29 night

I can now get a 2K stream to /dev/video0

$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YU12'
	Name        : Planar YUV 4:2:0
		Size: Discrete 1920x960
			Interval: Discrete 0.033s (30.000 fps)

$ v4l2-ctl --list-devices
Dummy video device (0x0000) (platform:v4l2loopback-000):
	/dev/video0

Update Aug 30 morning

I have /dev/video0 working smoothly on x86 with v4l2loopback, thanks to @Yu_You's submission on GitHub. I added this and the following info to the meetup early-access documentation:

  1. video demo and code for DetectNet running on Nvidia Jetson Nano with Z1 using live stream. Good object detection of moving person in real-time
  2. video demo and code for Canny edge detection with Python cv2 module. In real-time with minimal lag
  3. explanation and code snippet to change video resolution from 4K to 2K for testing on Nano

Link to sign up for the meetup is at the Linux Streaming Site. Once you register for the meetup, you’ll see a page immediately with a link to early-access documentation and code. We’re not trying to sell you anything. We’re gating the content so that we can show results to our sponsor, RICOH. This helps us to secure sponsor budget to continue working with the community and produce informal documentation. If you miss the meetup, we’ll put the documentation up on the site in a week or two.

DetectNet running at 23fps. Accurate identification of person and TV.


Update Sept 4 - morning

video demo of DetectNet

x86 test using guvcview

Update Sept 4, 2020
Current plan in order of priority

  • publish meetup-specific documentation and technical QA to people that registered for documentation. Likely next week.
  • retest gphoto2 on x86 or Nano (Ubuntu) with the THETA USB API based on this discussion: Test gp Python module for easier access of THETA USB API from inside of Python programs
  • install separate SSD on x86 and retest gstreamer and graphics system with hardware acceleration and resolution at 4K and 2K
    • if successful, install ROS on x86
  • install ROS on Jetson Nano, ref, run basic tests with tutorial

How do I initiate two live streams simultaneously on a Linux system so that the cameras are detected as two video devices? Can you help me understand where I need to make changes in gst_viewer.c?

I’ve seen a demo of this on Jetson Xavier, but I didn’t try it myself.

I’ll try it on x86. It’s possible someone else will post the solution before I test it.

Here’s my current test plan in order.

  • publish meetup-specific documentation and technical QA to people that registered for documentation. Likely next week.
  • retest gphoto2 on x86 or Nano (Ubuntu) with the THETA USB API based on this discussion: Test gp Python module for easier access of THETA USB API from inside of Python programs
  • test two RICOH THETA cameras on x86 as two video devices. Example /dev/video1 and /dev/video2. If I fail, contact developer from community that built the original demo and ask for advice.
  • install separate SSD on x86 and retest gstreamer and graphics system with hardware acceleration and resolution at 4K and 2K
  • if successful, install ROS on x86
  • install ROS on Jetson Nano, ref, run basic tests with tutorial

I briefly looked at the code; it seems worth a few quick tests before I send a note to the developer.

This section lists the available devices with the "-l" command-line option.

The pipeline is here.

The device is opened here:


For people interested in using RICOH images for applications in ROS and OpenCV, check the following GitHub pages:
  1. https://github.com/RobInLabUJI/ricoh_camera
  2. https://github.com/johnny04thomas/rgbd_stereo_360camera
Hope this helps.


Thanks for this information. I’m going to add this to the meetup archive document that I sent out earlier.

I noticed that you opened an issue on the sequoia-ptpy GitHub repo to try and get PTP working with the RICOH THETA Z1.

Have you evaluated the discussion on using gphoto2 Python bindings to talk to the camera using PTP?

Although the topic title says Mac, one of the guys is using ROS, one is using Raspbian, another person is using Mac, and I'm using Ubuntu. However, I'm having some problems right now using it.

As @mhenrie has provided a working code example, and @NaokiSato102 is working on extending it (last he posted), it might be faster to try the Python gphoto2. mhenrie is using his code with a Z1.

I’m planning to test this myself too. If you’ve already looked at gphoto2 Python module and it doesn’t do what you want it to, I’m going to look at PTPy.

You're welcome. I haven't looked into the gphoto2 Python bindings yet. Thanks for passing along the information. I will have a look in a couple of days.

Regarding dual-cameras on single board computer

Note from Craig: Using the note from the developer, I am going to try a solution. Other people should give it a go too. :slight_smile:

Below is a note from the driver developers

We saw pictures of your demo last year using Jetson Xavier and livestreaming 2 THETAs. Is this demo available publicly? We have several contacts that are looking to stream more than one camera at a time.

Unfortunately, I don't provide a multi-camera demo, but support for multiple cameras is included in thetauvc.c, so you can easily realise it by modifying the gstreamer samples.

i.e.

The following functions in thetauvc.c accept an 'index' or 'serial number' which specifies the THETA to be used.

thetauvc_find_device(uvc_context_t *ctx, uvc_device_t **devh, unsigned int index)

libuvc-theta-sample/gst/thetauvc.c at f8c3caa32bf996b29c741827bd552be605e3e2e2 · ricohapi/libuvc-theta-sample · GitHub

thetauvc_find_device_by_serial(uvc_context_t *ctx, uvc_device_t **devh,const char *serial)

libuvc-theta-sample/gst/thetauvc.c at f8c3caa32bf996b29c741827bd552be605e3e2e2 · ricohapi/libuvc-theta-sample · GitHub

In gst_viewer.c, '0' is hardcoded as the index for thetauvc_find_device(), so it always uses the first THETA found in the device list.

res = thetauvc_find_device(ctx, &dev, 0);

So, you should modify this line to change the index value accordingly, or use thetauvc_find_device_by_serial() and specify the serial number of the THETA to be used.

Please note that the maximum number of concurrent decode sessions depends on the hardware decoder; therefore, a multiple-camera system is not available on all platforms.
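Before modifying the sample for multiple cameras, it can help to confirm how many THETAs the host actually sees. Here is a hedged sketch that counts them by parsing lsusb output; 05ca:2712 is the live-streaming vendor:product id discussed later in this thread, and count_thetas is a hypothetical helper, not part of the sample code:

```python
# Count RICOH THETAs visible on USB by parsing `lsusb` output.
# 05ca:2712 is the vendor:product id a THETA V/Z1 reports in live
# streaming mode (see the lsusb discussion later in this thread);
# adjust if your device reports a different id.
import re
import subprocess

THETA_ID = "05ca:2712"

def count_thetas(lsusb_output, usb_id=THETA_ID):
    """Return the number of lines in lsusb output matching usb_id."""
    return sum(1 for line in lsusb_output.splitlines()
               if re.search(r"ID\s+" + re.escape(usb_id), line))

def main():
    out = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
    print(f"THETAs in live streaming mode: {count_thetas(out)}")

if __name__ == "__main__":
    main()
```

If this reports fewer cameras than expected, check cables and live streaming mode before changing the index in gst_viewer.c.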


Hi, I have purchased the THETA V model.
My system environment is Ubuntu 18.04.
I want to receive image_raw data from the camera through a ROS package,
but the computer cannot find the camera device.
I turned on the camera, changed to live streaming mode, and connected the USB cable.

Is there a way to get raw image data from the THETA V camera via ROS?
I must work through ROS.

I think it is possible to receive raw image data through ROS,
because there is a package for the THETA S model (GitHub - ntrlmt/theta_s_uvc: Ricoh Theta S UVC Ros Node).

Detailed instructions are here:

https://theta360.guide/special/linuxstreaming/

Please post again if you still have questions. We are happy to help.

update Sept 22, 2020

This is in response to your DM about building on x86, Ubuntu 18.04. First, thank you for trying the build and asking your question. It is better if you ask questions in the public forum, as we can leverage the wisdom of the community. Other people are running the software on x86 with Ubuntu 18.04 and ROS. They may be able to help as well.

When you first log into the site, the top video is on building for x86, Ubuntu 20.04. I’ll update the document to make this clearer.

Although the default branch from GitHub is now theta_uvc, make sure that you are using the theta_uvc branch. The video shows an example of using git branch -a. You need to have the development libraries installed for libusb and libjpeg.

Post the output of cmake .. as well as your build error.

Is it possible that you need to install

libusb-1.0-0-dev - userspace USB programming library development files

Maybe this? libjpeg-dev

If you are building the sample app, you need the gstreamer development libraries installed.


Feel free to respond with more questions if you have problems.

I'm trying to follow this guide to run a THETA V on an NVIDIA Xavier. The build works fine, but the sample code doesn't detect the THETA (it just says "THETA not found!"). One thing I noticed is that in lsusb, my device shows up as 05ca:0368, which is different from the guide's.

I tried changing the product id in thetauvc.c to match, but that still didn’t work. Any advice?

Do you have the THETA V in live streaming mode? The word "LIVE" needs to be shown in blue on the body of the camera. If the camera is in LIVE mode, please check the USB cable.

Are you running the sample theta app?

That error message usually only shows up when there is a camera connection problem (like a USB cable that is a little wonky) or the camera is not in live streaming mode.

Also, be aware that for Xavier, the gstreamer plug-in selection isn’t working well, so you need to specify the decoder.
libuvc-theta-sample/gst_viewer.c at f8c3caa32bf996b29c741827bd552be605e3e2e2 · ricohapi/libuvc-theta-sample · GitHub

change
“decodebin ! autovideosink sync=false”
to
“nvv4l2decoder ! nv3dsink sync=false”

However, your problem is likely the cable or the camera mode.

Please post again.

This is with a THETA V.

Post a screenshot of lsusb and the output of your camera firmware version and info.

This is the information on my device.

$ ptpcam --info

Camera information
==================
Model: RICOH THETA V
  manufacturer: Ricoh Company, Ltd.
  serial number: '00105377'
  device version: 3.40.1
  extension ID: 0x00000006
  extension description: (null)
  extension version: 0x006e

this is the lsusb when it is in still image mode:
image

this is the lsusb when it is live streaming mode

The program looks for 2712. So, it’s a problem that you have 0368.

I don't know why it would show 0368. I'm hoping a wonky cable or a firmware upgrade might help.

Craig–

Yes, the camera was not in live streaming mode, thanks for that tip! I got it working, then used the USB interface to keep the camera awake and in live streaming mode. Next, I will modify the libuvc-theta-sample application to send the h264 stream to a remote computer and then use something like http://wiki.ros.org/gscam on that remote machine to bring the data into ROS.

My initial thought was to use udpsink, but I wasn’t able to get a pipeline working on my Xavier. I tried an RTSP server which worked momentarily, but was not stable. I am not a gstreamer expert, so I’m probably configuring something wrong there… Have you done anything like this or know someone who has? Any advice would be much appreciated!

Thanks,

-Zac


Great news about the progress.

Regarding RTSP or a way to get the stream to another server, I believe other people are working on the same problem, but I do not have an answer at the moment. For example, @Yu_You was asking about this. I believe he moved from the RTSP plug-in to the USB cable with libuvc (the technique in this thread). I do not know if he was then able to use something like gst-rtsp-server to get the stream to another computer.

@Hugues is quite far along using the Janus Gateway to get RTP output on IP networks. I don't know how busy he is, but if your firm is working on a big project, it might be worthwhile to consider hiring him as a consultant. He's using his FOX SEWER ROVER in production and he's been freely contributing his knowledge to this group.

Another interesting transmission project is the Lockheed Martin Amelia Drone by @Jake_Kenin, @sjm7783 and others.

There is more information on their project here.

As some of them were in undergraduate school before COVID-19 shut down their project, it might be possible to hire some of the team members as interns.

I'm trying to connect people in parallel with sharing whatever I know, because there has been a surge of activity around live streaming and USB camera control, likely due to the maturation of the technologies and the increase in demand for remote surveillance and analysis.

It seems like many people are working on foundational knowledge, such as transmitting data to a remote server running gscam.

I don't have ROS installed.

Are you saying that you can't get gscam working on the same computer that the THETA is plugged into with a USB cable? The documentation examples use /dev/video*. What is the error message?

Are you using ROS Noetic (Ubuntu 20.04), ROS Melodic (Ubuntu 18.04), or something else? I'm likely going to install ROS at some point in the future and test basic camera functionality.

Craig–

Thanks for the information. After trying various approaches, I’ve settled on a setup that I’m happy with. I’ll document it here for posterity.

Firstly, I modified the pipe_proc line in the libuvc-theta-sample program to put the stream into a udpsink:

pipe_proc = " rtph264pay name=pay0 pt=96 ! udpsink host=127.0.0.1 port=5000 sync=false ";

I then run the test-launch program from the gst-rtsp-server server project with the following pipeline:

./test-launch "( udpsrc port=5000 ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264 ! rtph264depay ! h264parse ! rtph264pay name=pay0 pt=96 )"

I tried various other methods of connecting these two gstreamer processes, including shmsink/shmsrc, but ultimately this one worked the best. At some point in the future I may combine the gst_view and test-launch functionality into one executable and do away with some of the needless complexity.

Finally, I used gscam to bring the stream into ROS with the following command:

GSCAM_CONFIG="rtspsrc location=rtspt://10.0.16.1:8554/test latency=400 drop-on-latency=true ! application/x-rtp, encoding-name=H264 ! rtph264depay ! decodebin ! queue ! videoconvert"  roslaunch gscam_nodelet.launch

Note the "rtspt" protocol; it is not a typo. It forces the RTSP connection to go over TCP. When I used UDP, there were too many artifacts and corrupted frames.

I actually run this last command on a separate machine, just because of my particular network topology. It could be run on the same machine; in fact, that might be necessary, because gscam doesn't seem to handle UDP streams well.

I also tried using OpenCV VideoCapture to get the data into ROS, but that had a couple of issues. There are two VideoCapture backends that seemed appropriate: GStreamer and FFmpeg. It turns out that the OpenCV version packaged with ROS is not built with GStreamer support, so you would have to build OpenCV yourself to use it. For FFmpeg, the version of OpenCV packaged with ROS Melodic is 3.2, which is missing a fairly critical change (https://github.com/opencv/opencv/pull/9292) that allows you to set FFmpeg capture options in an environment variable. I got both of these working by upgrading OpenCV to version 3.4.9 and building from source, but GStreamer had a low framerate (~5fps) and FFmpeg had a lot of corruption and dropped frames (maybe it was stuck using UDP?). So, I decided to stick with gscam for now.
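For anyone who does rebuild OpenCV with GStreamer support, the receive side could be sketched as below. The pipeline string mirrors the gscam config; the host, port, mount point, and latency are illustrative assumptions, and avdec_h264 is swapped in for decodebin so the output caps can be pinned to BGR for OpenCV:

```python
# Sketch of receiving the RTSP stream with OpenCV's GStreamer backend.
# Requires an OpenCV build with GStreamer support (the ROS-packaged
# build discussed above does not have it). Host, port, mount point,
# and latency values are illustrative assumptions.

def rtsp_pipeline(host, port=8554, mount="test", latency=400):
    """Build a GStreamer pipeline string for cv2.VideoCapture."""
    return (
        f"rtspsrc location=rtspt://{host}:{port}/{mount} "
        f"latency={latency} drop-on-latency=true ! "
        "rtph264depay ! h264parse ! avdec_h264 ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink drop=true"
    )

def main():
    # cv2 is imported here so the pure pipeline builder above can be
    # used (and tested) without OpenCV installed.
    import cv2
    cap = cv2.VideoCapture(rtsp_pipeline("10.0.16.1"), cv2.CAP_GSTREAMER)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("theta", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()

if __name__ == "__main__":
    main()
```

As with the gscam config, the rtspt:// scheme forces TCP transport, and drop-on-latency trades completeness for freshness.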

The latency value of 400 worked for me, but should be tuned depending on your network.

Hope this helps you or someone else wanting to use this camera in ROS. So far it looks great and should be perfect for my application. The only negative for me is that I can't control the power state and streaming state programmatically, unless I missed something in the USB API section. For now, I've disabled sleep, so I only have to turn the camera on once my robot is up and turn it off when done.
