Live Streaming over USB on Ubuntu and Linux, NVIDIA Jetson

I also had latency when using the Jetson Nano, but it was due to high CPU usage.
When I set the power mode of the Jetson Nano to MAXN (the 10W mode), the latency became small.
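
If it helps, the mode can also be set from the command line (mode ID 0 is MAXN on the Nano):

$ sudo nvpmodel -m 0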

Thanks. I have the nvpmodel set to 0, which is the 10W mode. I have the 5V 4A 20W barrel connector power supply.

$ sudo nvpmodel --query
NVPM WARN: fan mode is not set!
NV Power Mode: MAXN
0
craig@jetson:~$

I do not have a fan on the CPU. There may be thermal throttling. Do you have a fan on your CPU?
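
One way to check is to watch tegrastats while streaming; if the clocks drop as the temperature climbs, the board is throttling:

$ tegrastats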

Also, do you have any luck using C++ OpenCV applications on the Jetson? Do you have any advice?

$ g++ -L/usr/lib/aarch64-linux-gnu -I/usr/include/opencv4 frame.cpp -o frame  -lopencv_videoio
/usr/bin/ld: /tmp/ccPItXPw.o: undefined reference to symbol '_ZN2cv8fastFreeEPv'
//usr/lib/aarch64-linux-gnu/libopencv_core.so.4.1: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
craig@jetson:~/Documents/Development/opencv$

I’d like to compare the C++ and Python module performance, but I can’t seem to build a C++ program.
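
For anyone hitting the same error: “DSO missing from command line” means the link line needs -lopencv_core explicitly (cv::fastFree lives in libopencv_core), and imshow/waitKey additionally need -lopencv_highgui. A link line like this should work with the stock JetPack OpenCV packages:

$ g++ -I/usr/include/opencv4 frame.cpp -o frame -lopencv_core -lopencv_videoio -lopencv_highgui

or, if the opencv4 pkg-config file is installed:

$ g++ frame.cpp -o frame $(pkg-config --cflags --libs opencv4)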

$ cat frame.cpp
#include "opencv2/opencv.hpp"
#include "opencv2/videoio.hpp"


using namespace cv;
int main(int argc, char** argv)
{
    VideoCapture cap;
    // open the default camera, use something different from 0 otherwise;
    // Check VideoCapture documentation.
    if(!cap.open(0))
        return 0;
    for(;;)
    {
          Mat frame;
          cap >> frame;
          if( frame.empty() ) break; // end of video stream
          imshow("this is you, smile! :)", frame);
          if( waitKey(10) == 27 ) break; // stop capturing by pressing ESC
    }
    // the camera will be closed automatically upon exit
    // cap.close();
    return 0;
}
craig@jetson:~/Documents/Development/opencv$

I am running JetPack 4.4.

I would like to get a demo such as OpenPose running. Do you have a suggestion?

This article indicates that I can’t use OpenPose on JetPack 4.4 due to the move to cuDNN 8.0 and Caffe’s lack of support for this newer version of cuDNN.


Question Update: 8/26/2020

I’m now trying to use cv2.VideoWriter on the Jetson, but there seems to be a general problem with writing to a file. If anyone has it working, please let me know the technique.

This guy indicated that he had to recompile OpenCV from source. I have not been able to compile OpenCV from source on the Jetson Nano.

OpenCV Video Write Problem - #2 by DaneLLL - Jetson TX2 - NVIDIA Developer Forums

Several other people are having problems.

cv2.VideoWriter doesn't work well on Jetson Nano - #2 by DaneLLL - Jetson Nano - NVIDIA Developer Forums
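
The workaround that comes up in those threads is to give cv2.VideoWriter a GStreamer pipeline that uses the Jetson hardware encoder instead of the default backend. A minimal sketch, assuming an OpenCV build with GStreamer enabled (check cv2.getBuildInformation()) and the JetPack 4.x encoder elements:

import cv2

# Hardware-encoded H.264 to MP4 on the Jetson via GStreamer.
# Element names (nvvidconv, nvv4l2h264enc) assume JetPack 4.x.
gst_out = ("appsrc ! video/x-raw,format=BGR ! queue ! videoconvert "
           "! video/x-raw,format=BGRx ! nvvidconv "
           "! video/x-raw(memory:NVMM) ! nvv4l2h264enc "
           "! h264parse ! qtmux ! filesink location=test.mp4")

cap = cv2.VideoCapture(0)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, 30.0, (w, h))

for _ in range(300):        # ~10 seconds at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(frame)

writer.release()            # sends EOS so the MP4 gets finalized
cap.release()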

Update on OpenCV 4.4 with CUDA

I managed to compile OpenCV 4.4 on the Nano and install it.

>>> import cv2
>>> cv2.__version__
'4.4.0'
>>> cv2.cuda.printCudaDeviceInfo(0)
*** CUDA Device Query (Runtime API) version (CUDART static linking) *** 

Device count: 1

Device 0: "NVIDIA Tegra X1"
  CUDA Driver Version / Runtime Version          10.20 / 10.20
  CUDA Capability Major/Minor version number:    5.3
  Total amount of global memory:                 3956 MBytes (4148391936 bytes)
  GPU Clock Speed:                               0.92 GHz
  Max Texture Dimension Size (x,y,z)             1D=(65536), 2D=(65536,65536), 3D=(4096,4096,4096)
  Max Layered Texture Size (dim) x layers        1D=(16384) x 2048, 2D=(16384,16384) x 2048
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per block:           1024
  Maximum sizes of each dimension of a block:    1024 x 1024 x 64
  Maximum sizes of each dimension of a grid:     2147483647 x 65535 x 65535
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and execution:                 Yes with 1 copy engine(s)
...
 Compute Mode:
      Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) 

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version  = 10.20, CUDA Runtime Version = 10.20, NumDevs = 1

This repo worked.

I have a 12V fan blowing on the heat sink. The Nano is supposed to use a 5V fan, but the only fan I had in my drawer was this 12V one. It works so far.

I would like to know if it is possible to synchronize live streaming from two RICOH THETA Z1 cameras on a Linux distribution. Is it possible to give an external trigger to initiate live streaming? Synchronization is important for stereo vision applications.

Thanks in advance.

You can initiate live streaming with the USB API. With a Jetson Xavier, you can run two 4K streams from THETAs.

What is needed to synchronize the streams? Do you have a working system that uses two normal webcams (not THETAs) on /dev/video0 and /dev/video1?

I am also working on some Python GStreamer projects. Is the code thetauvc.py available somewhere? It may help on Linux.

Thanks.

Usually, in multi-camera systems, an external physical device (sometimes the camera itself) sends a physical signal to initiate a capture. This ensures that the captures occur at the same instant. Even with a USB API, I believe absolute synchronization can only be guaranteed by a real-time operating system, not a normal Ubuntu distribution. Such synchronized data is a necessity for multi-view applications, such as finding correspondences to estimate disparity and eventually depth. Any slight difference in the instant of capture could affect the algorithm’s output.
I was using two THETA S cameras on an Ubuntu system, where they get detected as /dev/video0 and /dev/video1. But we were not able to achieve absolute synchronization. We tried to reduce latency and improve synchronization using a VideoCaptureAsync class in OpenCV along with ApproximateTimeSynchronizer in ROS. However, this does not achieve absolute synchronization because we have no control over when the camera initiates a capture.
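
The software-side mitigation boils down to OpenCV’s grab()/retrieve() split: grab() is cheap, so calling it back-to-back on both cameras narrows, but cannot eliminate, the capture-time skew. A minimal sketch, assuming the two cameras sit on /dev/video0 and /dev/video1:

import cv2

# grab() latches a frame quickly; retrieve() does the slower decode.
# Grabbing both cameras back-to-back minimizes software-side skew.
cap0 = cv2.VideoCapture(0)
cap1 = cv2.VideoCapture(1)

while True:
    if not (cap0.grab() and cap1.grab()):
        break
    ok0, f0 = cap0.retrieve()
    ok1, f1 = cap1.retrieve()
    if not (ok0 and ok1):
        break
    # f0 and f1 are the closest-in-time pair this approach can produce.

cap0.release()
cap1.release()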

I do not think that the solution in this thread with libuvc-theta will be able to meet the requirements for VideoCaptureAsync.

Ideas for further searching:

  • look for anyone using libuvc (not libuvc-theta) with synchronization. Can a UVC 1.5 camera provide the synchronization signal?
  • look for another project that streams from two physical Android devices into one computer. The THETA V/Z1 runs Android 7 internally. You may be able to write your own pipeline inside the camera

The V/Z1 plug-in system also supports the NDK. The internal OS is Android on a Qualcomm Snapdragon ARM SoC.

Can you use two stereo cameras from another system for disparity and depth, and then use a third camera (the THETA) for the 360 view and data?

Although using three cameras (two synchronized non-THETA cameras and one THETA) is more complex, it may be easier to implement.

It seems that you can use two cheap cameras with the Jetson Nano and either a specialized board or an Arduino for the external signal.

Is it possible to use the THETA for general surrounding information and the specialized dual-camera synced setup (non-THETA) for the detailed depth and distance estimation? The 360 view might give your system clues about where to point the stereo cameras.

The reason we are interested in specifically using RICOH cameras as a stereo setup is to use the entire 360 view provided by the camera to estimate depth. This we can only achieve with omnidirectional cameras such as the RICOH THETA. The figure below shows such a 3D reconstruction that we obtained from two RICOH THETA S cameras.


We hope to improve the quality by using the higher resolution offered by the THETA Z1 and also by properly synchronizing the images.
In the video you uploaded, the camera hardware has a facility to provide an external trigger and obtain very good synchronization (on the order of nanoseconds). For the RICOH camera, I do not see any such hardware facility. But the USB API you mentioned would help with synchronization. Since it is essentially a PTP camera, we should be able to synchronize the clocks (on the order of microseconds). Right now, I have contacted another engineer who is more familiar with the protocol. If I find a solution, I will surely share it with the community.
Also, do you know if RICOH cameras use a global shutter or a rolling shutter?

There is only thetauvc.c

libuvc-theta-sample/gst/thetauvc.c at master · ricohapi/libuvc-theta-sample · GitHub

BTW, I was able to compile OpenCV 4.4 on the Nano and use OpenCV with Python, accessing the THETA on /dev/video0. I’m not familiar with gstreamer, so this may not be what you want.

I was also able to use gst-launch from the command line with /dev/video0.

I read this blog, and the technique the author used for Python was gstreamer with OpenCV.
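
That approach boils down to handing cv2.VideoCapture a GStreamer pipeline string. A minimal sketch against the v4l2loopback device, assuming an OpenCV build with GStreamer support:

import cv2

# Pull frames from the loopback device through GStreamer into OpenCV.
# The loopback publishes planar YUV; videoconvert turns it into BGR.
pipeline = ("v4l2src device=/dev/video0 ! videoconvert "
            "! video/x-raw,format=BGR ! appsink drop=true")
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("theta", frame)
    if cv2.waitKey(1) == 27:   # ESC quits
        break

cap.release()
cv2.destroyAllWindows()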

This video has some information on my initial test

https://youtu.be/At5uMIMfBQY

I’ve since improved the OpenCV Python script performance with the recompile.

The test pipeline is:

$ gst-launch-1.0 -v v4l2src ! videoconvert ! videoscale ! video/x-raw,width=1000,height=500 ! xvimagesink

This is with gst_loopback running.

Also, CUDA does appear to be working with OpenCV. I have not run these tests yet, but this looks quite useful.
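
A quick sanity check that the CUDA module is usable:

>>> import cv2
>>> cv2.cuda.getCudaEnabledDeviceCount()
1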

Now with Canny edge detection in real-time. Code sent to meetup registrants.
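
A rough approximation of that demo is just a couple of lines per frame on top of the capture loop sketched above (the thresholds here are arbitrary):

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    cv2.imshow("edges", edges)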

Update: Aug 29

I’m having some problems with the nvidia-inference demos streaming from the THETA.

A normal webcam works. I’m trying to reduce the THETA’s 4K output to 2K, but can’t figure out how to do this. The camera does support streaming at 2K, but I’m not sure how to force it to use the 2K stream for initial testing.

Update Aug 29 night

I can now get a 2K stream to /dev/video0

$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YU12'
	Name        : Planar YUV 4:2:0
		Size: Discrete 1920x960
			Interval: Discrete 0.033s (30.000 fps)

$ v4l2-ctl --list-devices
Dummy video device (0x0000) (platform:v4l2loopback-000):
	/dev/video0
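
For anyone who needs the same thing: in the sample app the stream format is chosen in gst_viewer.c when thetauvc_get_stream_ctrl_format_size() is called; switching the mode argument from THETAUVC_MODE_UHD_2997 to THETAUVC_MODE_FHD_2997 (both defined in thetauvc.h) is one way to select the 1920x960 stream, assuming the current layout of the ricohapi/libuvc-theta-sample repo.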

Update Aug 30 morning

I have /dev/video0 working smoothly on x86 with v4l2loopback thanks to @Yu_You’s submission on GitHub. I added this and the following info to the meetup early-access documentation:

  1. video demo and code for DetectNet running on Nvidia Jetson Nano with Z1 using live stream. Good object detection of moving person in real-time
  2. video demo and code for Canny edge detection with Python cv2 module. In real-time with minimal lag
  3. explanation and code snippet to change video resolution from 4K to 2K for testing on Nano

Link to sign up for the meetup is at the Linux Streaming Site. Once you register for the meetup, you’ll see a page immediately with a link to early-access documentation and code. We’re not trying to sell you anything. We’re gating the content so that we can show results to our sponsor, RICOH. This helps us to secure sponsor budget to continue working with the community and produce informal documentation. If you miss the meetup, we’ll put the documentation up on the site in a week or two.

DetectNet running at 23fps. Accurate identification of person and TV.

Update Sept 4 - morning

video demo of DetectNet

x86 test using guvcview

Update Sept 4, 2020
Current plan in order of priority

  • publish meetup-specific documentation and technical QA to people that registered for documentation. Likely next week.
  • retest gphoto2 on x86 or Nano (Ubuntu) with THETA USB API based on this discussion, Test gp Python module for easier access of THETA USB API from inside of Python programs
  • install separate SSD on x86 and retest gstreamer and graphics system with hardware acceleration and resolution at 4K and 2K
    • if successful, install ROS on x86
  • install ROS on Jetson Nano, ref, run basic tests with tutorial

How do I initiate two live streams simultaneously on a Linux system so that they are detected as two video devices? Can you help me understand where I need to make changes in gst_viewer.c?

I’ve seen a demo of this on Jetson Xavier, but I didn’t try it myself.

I’ll try it on x86. It’s possible someone else will post the solution before I test it.

Here’s my current test plan in order.

  • publish meetup-specific documentation and technical QA to people that registered for documentation. Likely next week.
  • retest gphoto2 on x86 or Nano (Ubuntu) with THETA USB API based on this discussion, Test gp Python module for easier access of THETA USB API from inside of Python programs
  • test two RICOH THETA cameras on x86 as two video devices, for example /dev/video1 and /dev/video2. If I fail, contact the developer from the community who built the original demo and ask for advice.
  • install separate SSD on x86 and retest gstreamer and graphics system with hardware acceleration and resolution at 4K and 2K
  • if successful, install ROS on x86
  • install ROS on Jetson Nano, ref, run basic tests with tutorial

I briefly looked at the code; it seems worth a few quick tests before I send a note to the developer.

This section lists the available devices with the “-l” command line option.

The pipeline is here.

The device is opened here.

For people interested in using RICOH images for applications in ROS and OpenCV, check the following GitHub pages:
1> https://github.com/RobInLabUJI/ricoh_camera
2> https://github.com/johnny04thomas/rgbd_stereo_360camera
Hope this helps.

Thanks for this information. I’m going to add this to the meetup archive document that I sent out earlier.

I noticed that you opened an issue on the sequoia-ptpy GitHub repo to try and get PTP working with the RICOH THETA Z1.

Have you evaluated the discussion on using gphoto2 Python bindings to talk to the camera using PTP?

Although the topic title says Mac, one of the guys is using ROS, one is using Raspbian, another is using Mac, and I’m using Ubuntu. However, I’m having some problems right now using it.

As @mhenrie has provided a working code example, and @NaokiSato102 is working on extending it (as of his last post), it might be faster to try the Python gphoto2 bindings. mhenrie is using his code with a Z1.

I’m planning to test this myself too. If you’ve already looked at the gphoto2 Python module and it doesn’t do what you want, I’m going to look at PTPy.

You’re welcome. I haven’t looked into the gphoto2 Python bindings yet. Thanks for passing along the information. I will have a look in a couple of days.

Regarding dual cameras on a single-board computer

Note from Craig: Using the note from the developer, I am going to try a solution. Other people should give it a go too.

Below is a note from the driver developer:

We saw pictures of your demo last year using a Jetson Xavier and live streaming two THETAs. Is this demo available publicly? We have several contacts that are looking to stream more than one camera at a time.

Unfortunately, I don’t provide a multi-camera demo, but support for multiple cameras is included in thetauvc.c, so you can easily realise it by modifying the gstreamer samples.

i.e.

The following functions in thetauvc.c accept an ‘index’ or ‘serial number’ which specifies the THETA to be used.

thetauvc_find_device(uvc_context_t *ctx, uvc_device_t **devh, unsigned int index)

libuvc-theta-sample/gst/thetauvc.c at f8c3caa32bf996b29c741827bd552be605e3e2e2 · ricohapi/libuvc-theta-sample · GitHub

thetauvc_find_device_by_serial(uvc_context_t *ctx, uvc_device_t **devh,const char *serial)

libuvc-theta-sample/gst/thetauvc.c at f8c3caa32bf996b29c741827bd552be605e3e2e2 · ricohapi/libuvc-theta-sample · GitHub

In gst_viewer.c, ‘0’ is hardcoded as the index for thetauvc_find_device(), so it always uses the first THETA found in the device list.

res = thetauvc_find_device(ctx, &dev, 0);

So, you should modify this line to change the index value accordingly, or use thetauvc_find_device_by_serial() and specify the serial number of the THETA to be used.

Please note that the maximum number of concurrent decode sessions depends on the hardware decoder, so a multiple-camera system is not available on all platforms.

Hi, I have purchased a THETA V model.
My system environment is Ubuntu 18.04.
I want to receive image_raw data from the camera through a ROS package.
But the computer cannot find the camera device.
I turned on the camera, changed to live streaming mode, and connected the USB cable.

Is there a way to get raw images from the THETA V camera via ROS?
I must work through ROS.

I think it should be possible to receive raw image data through ROS,
because there is a package for the THETA S model. (GitHub - ntrlmt/theta_s_uvc: Ricoh Theta S UVC Ros Node)

Detailed instructions are here:

https://theta360.guide/special/linuxstreaming/

Please post again if you still have questions. We are happy to help.

update Sept 22, 2020

This is in response to your DM about building on x86, Ubuntu 18.04. First, thank you for trying the build and asking your question. It is better if you ask questions in the public forum, as we can leverage the wisdom of the community. Other people are running the software on x86 with Ubuntu 18.04 and ROS. They may be able to help as well.

When you first log into the site, the top video is on building for x86, Ubuntu 20.04. I’ll update the document to make this clearer.

Although the default branch on GitHub is now theta_uvc, make sure that you are actually on the theta_uvc branch. The video shows an example of checking with git branch -a. You also need the development libraries installed for libusb and libjpeg.

Post the output of cmake .. as well as your build error.

Is it possible that you need to install:

libusb-1.0-0-dev - userspace USB programming library development files

Maybe this? libjpeg-dev

If you are building the sample app, you need the gstreamer development libraries installed.
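
If it helps, something like this sequence should cover the dependencies and the build on Ubuntu 18.04 (package names are from the standard Ubuntu repos; the repo layout is assumed from the ricohapi GitHub):

$ sudo apt install libusb-1.0-0-dev libjpeg-dev libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
$ git clone https://github.com/ricohapi/libuvc-theta.git
$ cd libuvc-theta
$ git checkout theta_uvc
$ mkdir build && cd build
$ cmake ..
$ make && sudo make install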


Feel free to respond with more questions if you have problems.

I’m trying to follow this guide to run a THETA V on an NVIDIA Xavier. The build works fine, but the sample code doesn’t detect the THETA (it just says “THETA not found!”). One thing I noticed is that in lsusb, my device shows up as 05ca:0368, which is different from the guide’s.

I tried changing the product ID in thetauvc.c to match, but that still didn’t work. Any advice?

Do you have the THETA V in live streaming mode? The word “LIVE” needs to be shown in blue on the body of the camera. If the camera is in LIVE mode, please check the USB cable.

Are you running the sample theta app?

That error message usually only shows up when there is a camera connection problem (like a slightly wonky USB cable) or the camera is not in live streaming mode.

Also, be aware that on the Xavier the gstreamer plug-in auto-selection isn’t working well, so you need to specify the decoder.
libuvc-theta-sample/gst_viewer.c at f8c3caa32bf996b29c741827bd552be605e3e2e2 · ricohapi/libuvc-theta-sample · GitHub

change
“decodebin ! autovideosink sync=false”
to
“nvv4l2decoder ! nv3dsink sync=false”

However, your problem is likely the cable or the camera mode.

Please post again.

This is with a THETA V.

Post a screenshot of lsusb and the output of your camera firmware version and info.

This is the information on my device.

$ ptpcam --info

Camera information
==================
Model: RICOH THETA V
  manufacturer: Ricoh Company, Ltd.
  serial number: '00105377'
  device version: 3.40.1
  extension ID: 0x00000006
  extension description: (null)
  extension version: 0x006e

This is the lsusb output when the camera is in still image mode:

(screenshot)

This is the lsusb output when it is in live streaming mode:

(screenshot)

The program looks for product ID 2712, so it’s a problem that yours shows 0368.

I don’t know why it would show 0368. I’m hoping a wonky cable or a firmware upgrade might help.

Craig–

Yes, the camera was not in live streaming mode, thanks for that tip! I got that working, then used the USB API to keep the camera awake and in live streaming mode. Next I will modify the libuvc-theta-sample application to send the H.264 stream to a remote computer and then use something like http://wiki.ros.org/gscam on that remote machine to bring the data into ROS.

My initial thought was to use udpsink, but I wasn’t able to get a pipeline working on my Xavier. I tried an RTSP server, which worked momentarily but was not stable. I am not a gstreamer expert, so I’m probably configuring something wrong… Have you done anything like this or know someone who has? Any advice would be much appreciated!

Thanks,

-Zac
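
For reference, a common starting point (untested here; the host address and port are placeholders) is to keep the camera’s H.264 compressed and only payload it for RTP. On the sender side, that means changing the processing part of the sample’s pipeline string in gst_viewer.c to something like:

appsrc name=ap ! queue ! h264parse ! rtph264pay ! udpsink host=192.168.1.10 port=5000 sync=false

and on the receiving machine:

$ gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false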
