Live Streaming over USB on Ubuntu and Linux, NVIDIA Jetson

There is only thetauvc.c

libuvc-theta-sample/gst/thetauvc.c at master · ricohapi/libuvc-theta-sample · GitHub

BTW, I was able to compile OpenCV 4.4 on the Nano and use OpenCV with Python to access the THETA on /dev/video0. I’m not familiar with gstreamer and this may not be what you want.

I was also able to use gst-launch from the command line with /dev/video0.

I read this blog; the technique the author used for Python was based on gstreamer and OpenCV.

This video has some information on my initial test:

https://youtu.be/At5uMIMfBQY

I’ve since improved the OpenCV Python script performance with the recompile.

The test pipeline is:

$ gst-launch-1.0 -v v4l2src ! videoconvert ! videoscale ! video/x-raw,width=1000,height=500 ! xvimagesink

This is with gst_loopback running.

Also, CUDA does appear to be working with OpenCV. I have not run these tests yet, but this looks quite useful.

Now with Canny edge detection in real-time. Code sent to meetup registrants.
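The meetup code isn’t reproduced here, but a minimal sketch of the same idea (assuming gst_loopback is exposing the THETA on /dev/video0 and OpenCV was built with GStreamer support) looks roughly like this:

import cv2

# Scale the 4K equirectangular frames down before display, mirroring the
# gst-launch test pipeline above.
pipeline = (
    "v4l2src device=/dev/video0 ! videoconvert ! videoscale "
    "! video/x-raw,width=1000,height=500,format=BGR ! appsink drop=true"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
# cap = cv2.VideoCapture(0)  # the plain V4L2 backend also works

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    edges = cv2.Canny(frame, 50, 150)   # tune the thresholds for your scene
    cv2.imshow("THETA Canny", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()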

Update: Aug 29

I’m having some problems with the nvidia-inference demos streaming from the THETA.

A normal webcam works. I’m trying to reduce the THETA 4K output to 2K, but can’t figure out how to do this. The camera does support streaming at 2K, but I’m not sure how to force it to use the 2K stream for initial testing.

Update Aug 29 night

I can now get a 2K stream to /dev/video0

$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YU12'
	Name        : Planar YUV 4:2:0
		Size: Discrete 1920x960
			Interval: Discrete 0.033s (30.000 fps)

$ v4l2-ctl --list-devices
Dummy video device (0x0000) (platform:v4l2loopback-000):
	/dev/video0

Update Aug 30 morning

I have /dev/video0 working smoothly on x86 with v4l2loopback thanks to @Yu_You’s submission on GitHub. I added this and the following info to the meetup early-access documentation:

  1. video demo and code for DetectNet running on Nvidia Jetson Nano with Z1 using the live stream. Good object detection of a moving person in real-time (a sketch of this approach follows this list)
  2. video demo and code for Canny edge detection with Python cv2 module. In real-time with minimal lag
  3. explanation and code snippet to change video resolution from 4K to 2K for testing on Nano
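The meetup code itself isn’t reproduced here, but a minimal sketch of the DetectNet approach in item 1, following the standard jetson-inference detectnet.py example (assuming jetson-inference is installed and gst_loopback is exposing the THETA on /dev/video0), looks roughly like this:

import jetson.inference
import jetson.utils

# Load the SSD-Mobilenet-v2 detection network onto the Jetson GPU.
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# /dev/video0 is the v4l2loopback device fed by gst_loopback.
camera = jetson.utils.videoSource("/dev/video0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)   # draws detection overlays on img by default
    display.Render(img)
    display.SetStatus("DetectNet | {:.0f} FPS".format(net.GetNetworkFPS()))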

Link to sign up for the meetup is at the Linux Streaming Site. Once you register for the meetup, you’ll see a page immediately with a link to early-access documentation and code. We’re not trying to sell you anything. We’re gating the content so that we can show results to our sponsor, RICOH. This helps us to secure sponsor budget to continue working with the community and produce informal documentation. If you miss the meetup, we’ll put the documentation up on the site in a week or two.

DetectNet running at 23fps. Accurate identification of person and TV.


Update Sept 4 - morning

video demo of DetectNet

x86 test using guvcview

Update Sept 4, 2020
Current plan in order of priority

  • publish meetup-specific documentation and technical QA to people that registered for documentation. Likely next week.
  • retest gphoto2 on x86 or Nano (Ubuntu) with the THETA USB API based on this discussion: Test gp Python module for easier access of THETA USB API from inside of Python programs
  • install separate SSD on x86 and retest gstreamer and graphics system with hardware acceleration and resolution at 4K and 2K
    • if successful, install ROS on x86
  • install ROS on Jetson Nano, ref, run basic tests with tutorial

How do I initiate two live streams simultaneously on a Linux system so that they are detected as two video devices? Can you help me understand where I need to make changes in gst_viewer.c?

I’ve seen a demo of this on Jetson Xavier, but I didn’t try it myself.

I’ll try it on x86. It’s possible someone else will post the solution before I test it.

Here’s my current test plan in order.

  • publish meetup-specific documentation and technical QA to people that registered for documentation. Likely next week.
  • retest gphoto2 on x86 or Nano (Ubuntu) with the THETA USB API based on this discussion: Test gp Python module for easier access of THETA USB API from inside of Python programs
  • test two RICOH THETA cameras on x86 as two video devices. Example /dev/video1 and /dev/video2. If I fail, contact developer from community that built the original demo and ask for advice.
  • install separate SSD on x86 and retest gstreamer and graphics system with hardware acceleration and resolution at 4K and 2K
  • if successful, install ROS on x86
  • install ROS on Jetson Nano, ref, run basic tests with tutorial

I briefly looked at the code; it seems like it’s worth a few quick tests before I send a note to the developer.

This section lists the available devices with the “-l” command line option.

The pipeline is here.

The device is opened here:


For people interested in using RICOH images for applications in ROS and OpenCV, check the following GitHub pages:
1. https://github.com/RobInLabUJI/ricoh_camera
2. https://github.com/johnny04thomas/rgbd_stereo_360camera
Hope this helps.


Thanks for this information. I’m going to add this to the meetup archive document that I sent out earlier.

I noticed that you opened an issue on the sequoia-ptpy GitHub repo to try and get PTP working with the RICOH THETA Z1.

Have you evaluated the discussion on using gphoto2 Python bindings to talk to the camera using PTP?

Although the topic title says Mac, one of the guys is using ROS, one guy is using Raspbian, another person is using Mac, and I’m using Ubuntu. However, I’m having some problems right now using it.

As @mhenrie has provided a working code example, and @NaokiSato102 is working on extending it (last he posted), it might be faster to try the Python gphoto2. mhenrie is using his code with a Z1.

I’m planning to test this myself too. If you’ve already looked at the gphoto2 Python module and it doesn’t do what you want it to, I’m going to look at PTPy.
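If you want a quick sanity check that the gphoto2 Python bindings can talk to the camera, a minimal sketch (assuming python-gphoto2 and libgphoto2 are installed and the camera is connected over USB) looks something like this:

import gphoto2 as gp

camera = gp.Camera()
camera.init()                        # raises gp.GPhoto2Error if no camera is found
print(str(camera.get_summary()))     # model, serial number, supported operations
camera.exit()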

You’re welcome. I haven’t looked into the gphoto2 Python bindings yet. Thanks for passing along the information. I will have a look in a couple of days.

Regarding dual cameras on a single board computer

Note from Craig: Using the note from the developer, I am going to try a solution. Other people should give it a go too. :slight_smile:

Below is a note from the driver developers

We saw pictures of your demo last year using Jetson Xavier and livestreaming 2 THETAs. Is this demo available publicly? We have several contacts that are looking to stream more than one camera at a time.

Unfortunately, I don’t provide a multi-camera demo, but support for multiple cameras is included in thetauvc.c, so you can easily realise it by modifying the gstreamer samples.

i.e.

The following functions in thetauvc.c accept an ‘index’ or ‘serial number’ which specifies the THETA to be used.

thetauvc_find_device(uvc_context_t *ctx, uvc_device_t **devh, unsigned int index)

libuvc-theta-sample/gst/thetauvc.c at f8c3caa32bf996b29c741827bd552be605e3e2e2 · ricohapi/libuvc-theta-sample · GitHub

thetauvc_find_device_by_serial(uvc_context_t *ctx, uvc_device_t **devh,const char *serial)

libuvc-theta-sample/gst/thetauvc.c at f8c3caa32bf996b29c741827bd552be605e3e2e2 · ricohapi/libuvc-theta-sample · GitHub

In gst_viewer.c, ‘0’ is hardcoded as the index for thetauvc_find_device(), so it always uses the first THETA found in the device list.

res = thetauvc_find_device(ctx, &dev, 0);

So, you should modify this line to change the index value accordingly, or use thetauvc_find_device_by_serial() and specify the serial number of the THETA to be used.

Please note that the maximum number of concurrent decode sessions depends on the hardware decoder, so a multi-camera system is not available on all platforms.


Hi, I have purchased the THETA V model.
My system environment is Ubuntu 18.04.
I want to receive image_raw data from the camera through a ROS package,
but the computer cannot find the camera device.
I turned on the camera, changed to live streaming mode, and connected the USB cable.

Is there a way to get raw image data from the THETA V camera via ROS?
I must work through ROS.

I think it is possible to receive raw image data through ROS,
because there is a package for the THETA S model (GitHub - ntrlmt/theta_s_uvc: Ricoh Theta S UVC Ros Node).

Detailed instructions are here:

https://theta360.guide/special/linuxstreaming/

Please post again if you still have questions. We are happy to help.

update Sept 22, 2020

This is in response to your DM about building on x86, Ubuntu 18.04. First, thank you for trying the build and asking your question. It is better if you ask questions in the public forum, as we can leverage the wisdom of the community. Other people are running the software on x86 with Ubuntu 18.04 and ROS. They may be able to help as well.

When you first log into the site, the top video is on building for x86, Ubuntu 20.04. I’ll update the document to make this clearer.

Although the default branch on GitHub is now theta_uvc, make sure that you are using the theta_uvc branch. The video shows an example of using git branch -a. You need to have the development libraries installed for libusb and libjpeg.

Post the output of cmake .. as well as your build error.

Is it possible that you need to install

libusb-1.0-0-dev - userspace USB programming library development files

Maybe this? libjpeg-dev

If you are building the sample app, you need the gstreamer development libraries installed.


Feel free to respond with more questions if you have problems.

I’m trying to follow this guide to run a THETA V on NVIDIA Xavier. The build works fine, but the sample code doesn’t detect the THETA (it just says “THETA not found!”). One thing I noticed is that in lsusb, my device shows up as 05ca:0368, which is different from the guide’s.

I tried changing the product id in thetauvc.c to match, but that still didn’t work. Any advice?

Do you have the THETA V in live streaming mode? The word “LIVE” needs to be shown in blue on the body of the camera. If the camera is in LIVE mode, please check the USB cable.

Are you running the sample theta app?

That error message usually only shows up when there is a camera connection problem (like the USB cable is a little wonky) or the camera is not in live streaming mode.

Also, be aware that for Xavier, the gstreamer plug-in selection isn’t working well, so you need to specify the decoder.
libuvc-theta-sample/gst_viewer.c at f8c3caa32bf996b29c741827bd552be605e3e2e2 · ricohapi/libuvc-theta-sample · GitHub

change
“decodebin ! autovideosink sync=false”
to
“nvv4l2decoder ! nv3dsink sync=false”

However, your problem is likely the cable or the camera mode.

Please post again.

This is with a THETA V.

Post a screenshot of lsusb and the output of your camera firmware version and info.

This is the information on my device.

$ ptpcam --info

Camera information
==================
Model: RICOH THETA V
  manufacturer: Ricoh Company, Ltd.
  serial number: '00105377'
  device version: 3.40.1
  extension ID: 0x00000006
  extension description: (null)
  extension version: 0x006e

This is the lsusb output when the camera is in still image mode:

This is the lsusb output when it is in live streaming mode:

The program looks for product ID 2712, so it’s a problem that you have 0368.

I don’t know why it would show 0368. I’m hoping a wonky cable or a firmware upgrade might help.

Craig–

Yes, the camera was not in live streaming mode, thanks for that tip! I got that working, then used the USB interface to keep the camera alive and in live streaming mode. Next I will modify the libuvc-theta-sample application to send the H.264 stream to a remote computer and then use something like http://wiki.ros.org/gscam on that remote machine to bring the data into ROS.

My initial thought was to use udpsink, but I wasn’t able to get a pipeline working on my Xavier. I tried an RTSP server which worked momentarily, but was not stable. I am not a gstreamer expert, so I’m probably configuring something wrong there… Have you done anything like this or know someone who has? Any advice would be much appreciated!

Thanks,

-Zac


Great news about the progress.

Regarding RTSP or a way to get the stream to another server, I believe other people are working on the same problem, but I do not have an answer at the moment. For example, @Yu_You was asking about this. I believe he moved from the RTSP plug-in to the USB cable with libuvc (the technique in this thread). I do not know if he was then able to use something like gst-rtsp-server to get the stream to another computer.

@Hugues is quite far along using the Janus Gateway to get RTP output on IP networks. I don’t know how busy he is, but if your firm is working on a big project, it might be worthwhile to consider trying to hire him as a consultant. He’s using his FOX SEWER ROVER in production and he’s been freely contributing his knowledge to this group.

Another interesting transmission project is the Lockheed Martin Amelia Drone by @Jake_Kenin, @sjm7783 and others.

There is more information on their project here.

As some of them were in undergraduate school before COVID-19 shut down their project, it might be possible to hire some of the team members as interns.

I’m trying to connect people, in parallel with sharing whatever I know, because there has been a surge of activity around live streaming and USB camera control, likely due to a maturation of technologies and the increase in demand for remote surveillance and analysis.

It seems like many people are working on foundational knowledge, such as transmitting data to a remote server running gscam.

I don’t have ROS installed.

Are you saying that you can’t get gscam working on the same computer that the THETA is plugged into with a USB cable? The documentation examples are using /dev/video*. What is the error message?

Are you using ROS Noetic (Ubuntu 20.04), ROS Melodic (Ubuntu 18.04), or something else? I’m likely going to install ROS at some point in the future and test basic camera functionality.

Craig–

Thanks for the information. After trying various approaches, I’ve settled on a setup that I’m happy with. I’ll document it here for posterity.

Firstly, I modified the pipe_proc line in the libuvc-theta-sample program to put the stream into a udpsink:

pipe_proc = " rtph264pay name=pay0 pt=96 ! udpsink host=127.0.0.1 port=5000 sync=false ";

I then run the test-launch program from the gst-rtsp-server server project with the following pipeline:

./test-launch "( udpsrc port=5000 ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264 ! rtph264depay ! h264parse ! rtph264pay name=pay0 pt=96 )"

I tried various other methods of connecting these two gstreamer processes, including shmsink/shmsrc, but ultimately this one worked the best. At some point in the future I may combine the gst_view and test-launch functionality into one executable and do away with some of the needless complexity.

Finally, I used gscam to bring the stream into ROS with the following command:

GSCAM_CONFIG="rtspsrc location=rtspt://10.0.16.1:8554/test latency=400 drop-on-latency=true ! application/x-rtp, encoding-name=H264 ! rtph264depay ! decodebin ! queue ! videoconvert"  roslaunch gscam_nodelet.launch

Note the “rtspt” protocol; it is not a typo. It forces the RTSP connection to go over TCP. When I used UDP, there were too many artifacts and corrupted frames.

I actually run this last command on a separate machine, just because of my particular network topology. It could be run on the same machine. In fact, it might be necessary to do so, because gscam doesn’t seem to handle UDP streams well.

I also tried using OpenCV VideoCapture to get the data into ROS, but that had a couple of issues. There are two APIs for VideoCapture that seemed appropriate: GStreamer and FFmpeg. It turns out that the OpenCV version packaged with ROS is not built with GStreamer support, so you would have to build OpenCV yourself to use it. For FFmpeg, the version of OpenCV packaged with ROS Melodic is 3.2, which is missing a fairly critical change here: https://github.com/opencv/opencv/pull/9292 that allows you to set FFmpeg capture options in an environment variable. I got both of these working by upgrading OpenCV to version 3.4.9 and building from source, but GStreamer had a low framerate (~5fps) and FFmpeg had a lot of corruption and dropped frames (maybe it was stuck using UDP?). So, I decided to stick with gscam for now.
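For reference, a VideoCapture attempt along those lines would look roughly like the sketch below (assuming OpenCV built with GStreamer support; this is an illustration, not the exact pipeline I tested):

import cv2

pipeline = (
    "rtspsrc location=rtspt://10.0.16.1:8554/test latency=400 drop-on-latency=true "
    "! rtph264depay ! h264parse ! avdec_h264 ! videoconvert "
    "! video/x-raw,format=BGR ! appsink drop=true"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ok, frame = cap.read()   # frame is a BGR numpy array
    if not ok:
        break
    # hand the frame to ROS (e.g. via cv_bridge) or process it here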

The latency value of 400 worked for me, but should be tuned depending on your network.

Hope this helps you or someone else wanting to use this camera in ROS. So far it looks great and should be perfect for my application. The only negative for me is that I can’t control the power state and streaming state programmatically, unless I missed something in the USB API section. For now I’ve disabled sleep, so I only have to turn it on once my robot is up and turn it off when done.


Love it!

Thanks so much for sharing this.

See this video; I think it is what you want.

More information is here in the camera section.

https://theta360.guide/special/linuxstreaming/

There is sample code for Jetson Nano as well in that document.

If this is what you are looking for, feel free to ask more questions.

Note that I haven’t tested isolating individual USB ports on the Jetson. I don’t know if it is possible to reset just the USB port that the THETA is attached to. On the Raspberry Pi, all the USB ports are reset when the camera is restarted. If you have other devices attached to the USB ports of the Xavier that you need, please adjust accordingly. This is not an approved part of the API. If this is something you need for production, we’ll try and test it more.

If your application can live with sleep and wake, it is better as it is part of the official API.

The API supports switching from image to live streaming and to video file.

Note that if you need to save video to file, there is a third parameter you must set. It’s documented in the guide and also on the USB thread of this forum. I think it is something like adding ,0,0,1 to the end of the hex value for video to file. If you have problems, we can check it.


I have live streaming working on Mac and everything is great, but I am trying it on Ubuntu and it’s not working. At all. It doesn’t recognize that the camera is present.

Is there a way to make the RICOH THETA V work on Ubuntu?

This was an old question that I merged with this topic to help people using the search feature of this forum. Yes, it’s possible. Please review the topic from the top and ask more questions if you have any problems. Hope you’re still using the camera. :slight_smile:

Hi, I made a few small modifications to a GitHub entry to transform equirectangular images/frames into perspective images. This can really help when using existing deep learning models or when creating new models based on regular (non-360) camera footage. The GitHub link can be found here.

Original Images

Transformed image

I will try some of this out on some Mobilenet V2 models later.
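For anyone who wants to try the transform on a single frame first, usage looks roughly like the sketch below. The module and method names follow the widely used fuenwang/Equirec2Perspec interface and are assumptions; the repository linked above may differ slightly:

import cv2
import Equirec2Perspec as E2P   # assumed module name

# Load an equirectangular still from the Z1/V and extract a 90-degree FOV
# perspective view looking 30 degrees to the right of center.
equ = E2P.Equirectangular("equirectangular.jpg")
persp = equ.GetPerspective(90, 30, 0, 720, 1080)   # FOV, yaw, pitch, out height, out width
cv2.imwrite("perspective.jpg", persp)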

Community Ask: Functionalize this in a way that is fast and uses the NVIDIA GPU on the Jetson or Xavier AGX.


Nice. This is great. Thanks for sharing it.

I tested it with Python3 on x86 using the standard opencv shipped with Ubuntu 20.04 and Z1 jpeg images. Works great.

sudo apt install python3-opencv

The only sad thing I noticed is that all the test pictures in my camera are of me sitting in front of a computer. :frowning:

Have you tried it on a video stream without applying Mobilenet v2 analysis? I’m wondering how fast the Equirec2Perspec can handle the frames.

I’m still trying to learn more about this new frontier (for me) of GPU acceleration on the Jetson.

Just to clarify the challenge you’re proposing, do you mean to test OpenCV on NVIDIA Jetson with CUDA acceleration?

As my knowledge is weak, I do not know if I need to modify the Python code to load the CV portions into the GPU or if compiling OpenCV with CUDA support somehow automatically does this for me.

Most of the documentation I can find just focuses on compiling CUDA support into OpenCV.

I’ve compiled OpenCV from source with CUDA support on the Nvidia Jetson, but I’m not sure if I need to do stuff like cv.cuda…

If you have Equirec2Perspec working on a stream, we can also just test it and see what the latency is. If it’s too slow, then we can try and modify the code with the cv.cuda names.
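If explicit cv2.cuda calls do turn out to be necessary, a rough sketch would look like the following (assuming OpenCV was compiled with CUDA support and Python bindings; the API names are from the cv2.cuda module and may need adjusting for your OpenCV version):

import cv2

frame = cv2.imread("frame.jpg")

gpu = cv2.cuda_GpuMat()
gpu.upload(frame)                                   # host -> device copy
gpu_gray = cv2.cuda.cvtColor(gpu, cv2.COLOR_BGR2GRAY)
canny = cv2.cuda.createCannyEdgeDetector(50, 150)   # detector object on the GPU
gpu_edges = canny.detect(gpu_gray)
edges = gpu_edges.download()                        # device -> host copy
cv2.imwrite("edges.jpg", edges)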

Thanks to this topic, I was able to run gst_viewer on my Ubuntu laptop and view streaming video from the THETA V.

Now, I have a question about THETA video streaming over USB.
This may seem like an odd question, but I would like to ask someone for advice if possible.

I’m trying to view the streaming video in ffplay with gst_loopback, v4l2loopback, and ffmpeg, as well as gst_viewer.
You might think, “Why is this guy trying to use ffplay even though gst_viewer is available?” I know, but it would be a long story, so I’ll skip it for now.

Anyway, I connected the THETA, ran gst_loopback, and then ran the following command
(“/dev/video1” is the device file created by v4l2loopback):

ffmpeg -i /dev/video1 -f matroska - | ffplay -

However, as shown in the screenshot, the bit rate indicates N/A and no streaming image was displayed.

I changed /dev/video1 to /dev/video0 (the webcam built into the laptop), ran the same command, and confirmed that the video from the webcam was correctly displayed.
To further confirm, I ran the following command and saved a 10-second video.

ffmpeg -i /dev/video1 -f matroska output.mp4

The length of the output.mp4 was 0 seconds and only one image was stored.

gst_viewer is working fine, so I think I’m missing some settings or making a mistake on the v4l2loopback/ffmpeg side, but I couldn’t find a good solution.
Does anyone have any advice on this?

environment:
camera:THETA V
OS:Ubuntu 18.04 LTS 64bit
CPU:i7-9750H
RAM:16GB
