Ricoh Theta V: Livestreaming with Jetson Xavier, ROS, OpenCV, NUC

Congratulations on getting the video stream working with

pipe_proc = "omxh264dec ! gldownload ! glimagesink ! videoconvert n-thread=0 ! "
            		"video/x-raw,format=I420 ! identity drop-allocation=true !"
            		"v4l2sink device=/dev/video7 qos=false sync=false";

As a test, can you view the stream in something like VLC or gst-launch-1.0 on /dev/video7?

Example

$ cvlc v4l2:///dev/video7
VLC media player 3.0.9.2 Vetinari (revision 3.0.9.2-0-gd4c1aefe4d)
[000055573aea4db0] dummy interface: using the dummy interface module...
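
If VLC isn't available, a plain gst-launch-1.0 check works too (a minimal sketch; it assumes gst_loopback is already feeding /dev/video7):

$ gst-launch-1.0 v4l2src device=/dev/video7 ! videoconvert ! autovideosink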

You should also be able to get the THETA information with v4l2-ctl. Note that I reduced the resolution in the example below to 1920x960, as I only have a Jetson Nano and it was struggling with OpenCV at 4K.

$ v4l2-ctl --list-formats-ext --device  /dev/video7
ioctl: VIDIOC_ENUM_FMT
    Type: Video Capture

    [0]: 'YU12' (Planar YUV 4:2:0)
        Size: Discrete 1920x960
            Interval: Discrete 0.033s (30.000 fps)

Yes, thank you for your help. I really appreciate it. :slight_smile:

For some reason, every time I launch VLC on the Jetson Xavier, the app crashes. I guess the app is not compatible.

Regarding the information provided by v4l2-ctl, I am only getting the following:

$ v4l2-ctl --list-formats-ext --device  /dev/video7
ioctl: VIDIOC_ENUM_FMT
$ v4l2-ctl --list-devices
Dummy video device (0x0000) (platform:v4l2loopback-000):
	/dev/video7

Failed to open /dev/video0: No such file or directory
$ v4l2-ctl -d7 -D
Driver Info (not using libv4l2):
	Driver name   : v4l2 loopback
	Card type     : Dummy video device (0x0000)
	Bus info      : platform:v4l2loopback-000
	Driver version: 4.9.201
	Capabilities  : 0x85208003
		Video Capture
		Video Output
		Video Memory-to-Memory
		Read/Write
		Streaming
		Extended Pix Format
		Device Capabilities
	Device Caps   : 0x85208003
		Video Capture
		Video Output
		Video Memory-to-Memory
		Read/Write
		Streaming
		Extended Pix Format
		Device Capabilities
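
If VIDIOC_ENUM_FMT returns nothing like this, it usually just means no writer has negotiated a format on the loopback yet; v4l2loopback only advertises a format once something is feeding the device. As a sketch, start the writer first, then enumerate:

$ ./gst_loopback &
$ v4l2-ctl --list-formats-ext --device /dev/video7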

Seems like the problem with ROS is due to OpenCV. I will try running a Python OpenCV script to fix it, and then go back to ROS.

Did you build OpenCV from source?

Live Streaming over USB on Ubuntu and Linux, NVIDIA Jetson - #78 by zdydek

gscam doesn’t seem to handle udp streams well. I also tried using OpenCV VideoCapture to get the data into ros, but that had a couple issues. There are two APIs for VideoCapture that seemed appropriate: Gstreamer and FFmpeg. It turns out that the OpenCV version packaged with ROS is not built with Gstreamer support, so you would have to build OpenCV yourself to use it. For FFmpeg, the version of OpenCV packaged with ROS melodic is 3.2, which is missing a fairly critical change here: https://github.com/opencv/opencv/pull/9292 that allows you to set FFmpeg capture options in an environment variable. I got both of these working by upgrading OpenCV to version 3.4.9 and building from source, but Gstreamer had a low framerate (~5fps) and FFmpeg had a lot of corruption and dropped frames (maybe it was stuck using UDP?). So, I decided to stick with gscam for now.

The latency value of 400 worked for me, but should be tuned depending on your network.
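
As a quick way to verify either backend before building from source: cv2.getBuildInformation() lists the Video I/O backends the running OpenCV was compiled with (and the environment variable introduced by the PR mentioned above is OPENCV_FFMPEG_CAPTURE_OPTIONS). A minimal check:

import cv2

# Look for "GStreamer: YES" and the FFMPEG entries under Video I/O.
for line in cv2.getBuildInformation().splitlines():
    if "GStreamer" in line or "FFMPEG" in line:
        print(line.strip())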

Yes, I built it directly from source. The thing is that when I tried the same versions of OpenCV and ROS on the Intel NUC, there was no problem with GStreamer. It is only now that I am setting up the camera on the Jetson Xavier that things are getting complicated.



It seems that the problem is the pipeline from GStreamer to OpenCV. For now I will just focus on making this connection using the simple code you suggested:


import numpy as np
import cv2

cap = cv2.VideoCapture(7, cv2.CAP_GSTREAMER)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

But I am getting the following error:

[ WARN:0] global /home/spacer/Downloads/opencv-4.5.3/modules/videoio/src/cap_gstreamer.cpp (2057) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /home/spacer/Downloads/opencv-4.5.3/modules/videoio/src/cap_gstreamer.cpp (1034) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/spacer/Downloads/opencv-4.5.3/modules/videoio/src/cap_gstreamer.cpp (597) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Traceback (most recent call last):
  File "test.py", line 11, in <module>
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.error: OpenCV(4.5.3) /home/spacer/Downloads/opencv-4.5.3/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
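
The cvtColor assertion is a symptom rather than the cause: when the pipeline fails to open, cap.read() returns (False, None), and the empty frame is what trips cvtColor. A small sketch that surfaces the real failure instead:

import cv2

cap = cv2.VideoCapture(7, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("VideoCapture could not open the GStreamer pipeline")
ret, frame = cap.read()
if not ret or frame is None:
    raise RuntimeError("Pipeline opened but returned no frame")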

I also tried the following and the outcome was the same:

gst = "omxh264dec ! gldownload ! glimagesink ! videoconvert n-thread=0 ! video/x-raw,format=I420 ! identity drop-allocation=true ! v4l2sink device=/dev/video7 qos=false sync=false"

and

gst = "v4l2src device=/dev/video7 ! video/x-raw,width=1920,height=1080,format=I420,framerate=30/1 ! videoconvert ! video/x-raw,format=BGR ! appsink"

in

cap = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
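
One thing worth checking: the 2K stream enumerated earlier is 1920x960, not 1920x1080, and v4l2src caps that don't match what the loopback is carrying will fail to negotiate. A sketch with matching caps (assuming the loopback really is carrying 1920x960 I420):

import cv2

# Caps pinned to the 1920x960 I420 format reported by v4l2-ctl above.
gst = ("v4l2src device=/dev/video7 ! "
       "video/x-raw,width=1920,height=960,format=I420,framerate=30/1 ! "
       "videoconvert ! video/x-raw,format=BGR ! appsink drop=true")
cap = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
print("opened:", cap.isOpened())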

I can’t get the GStreamer pipeline to work directly from the terminal… this might be the issue:

gst-launch-1.0 v4l2src device=/dev/video7 ! video/x-raw,framerate=30/1 ! xvimagesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
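
The not-negotiated error usually means the caps requested from v4l2src don't match what the loopback device is producing. Raising the GStreamer log level and pinning the caps explicitly can show which side rejects them (a debugging sketch; substitute the resolution your loopback is actually carrying):

$ GST_DEBUG=3 gst-launch-1.0 v4l2src device=/dev/video7 ! \
    "video/x-raw,format=I420,width=1920,height=960,framerate=30/1" ! \
    videoconvert ! xvimagesink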

Did ./gst_loopback work fine on a NUC?

@bot-lin Yes, I am using an Intel NUC10i7fnh, and it worked great with the following code in the gstviewer file:

if (strcmp(cmd_name, "gst_loopback") == 0)
	pipe_proc = "decodebin ! autovideoconvert ! "
		"video/x-raw,format=I420 ! identity drop-allocation=true ! "
		"v4l2sink device=/dev/video7 qos=false sync=false";
else
	pipe_proc = "decodebin ! autovideosink sync=false qos=false";

I have tried a Jetson Nano, and I ran into the same problem as yours. The JetPack version I used is 4.6, but it was 4.4 that was posted here. I am restoring the system to 4.4 to see what is going on.

I did a series of tests with a Jetson Nano yesterday streaming with a THETA V running firmware 3.70.1.

I can’t replicate the problem.

  • OpenCV 4.4.0 works with Python
  • DetectNet works
  • gst-launch-1.0 works

My configuration

  • L4T 32.4.4
  • power is from barrel connection with external fan
  • using 2K resolution for DetectNet and OpenCV as the 4K stream slows down the Nano
  • RICOH THETA V with firmware 3.70.1 connected with microUSB cable

I’ll try this with JetPack 4.6 later.

Thanks, Craig. How can I check the firmware version of my THETA V?

Connect the camera with a USB cable to either a Mac or Windows computer.

https://support.theta360.com/en/download/
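
If the camera is in wireless AP mode, the firmware version is also reported by the Open Spherical Camera API over HTTP (a sketch; 192.168.1.1 assumes the camera's default AP-mode address):

$ curl http://192.168.1.1/osc/info

The JSON response includes a firmwareVersion field.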

I confirm that my firmware was 3.40. Now I have upgraded it to 3.70.1 and am going to test it on Jetpack 4.6 again.

OK. On Jetson Nano, this is my pipeline:


if (strcmp(cmd_name, "gst_loopback") == 0)
	pipe_proc = "decodebin ! autovideoconvert ! "
		"video/x-raw,format=I420 ! identity drop-allocation=true ! "
		"v4l2sink device=/dev/video1 qos=false sync=false";
else
	pipe_proc = "decodebin ! autovideosink sync=false qos=false";

This is how I am controlling the resolution from the command line. The Nano can’t effectively process the 4K streams for object recognition in my tests. You may get better results.

if (argc > 1 && strcmp("--format", argv[1]) == 0) {
	if (argc > 2 && strcmp("4K", argv[2]) == 0) {
		printf("THETA live video is 4K\n");
		res = thetauvc_get_stream_ctrl_format_size(devh,
			THETAUVC_MODE_UHD_2997, &ctrl);
	} else if (argc > 2 && strcmp("2K", argv[2]) == 0) {
		printf("THETA live video is 2K\n");
		res = thetauvc_get_stream_ctrl_format_size(devh,
			THETAUVC_MODE_FHD_2997, &ctrl);
	} else {
		printf("specify video format: --format 4K or --format 2K\n");
		goto exit;
	}
}
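
With that change, the resolution is selected when the viewer is launched, e.g.:

$ ./gst_loopback --format 2K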

Based on Ubuntu 18.04.5 LTS

$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.5 LTS
Release:	18.04
Codename:	bionic
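
If you need to confirm which L4T release a JetPack image is actually running (JetPack 4.4.1 ships L4T 32.4.4, JetPack 4.6 ships L4T 32.6.1), the release file on the Jetson reports it:

$ head -n 1 /etc/nv_tegra_release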

Confirmed with nvv4l2decoder

Tested on a Jetson Nano. You may need to specify nvv4l2decoder for the Xavier, as I’ve heard that decodebin may not work.

if (strcmp(cmd_name, "gst_loopback") == 0)
	// for Jetson Nano
	pipe_proc = "nvv4l2decoder ! autovideoconvert ! "
	// pipe_proc = "decodebin ! autovideoconvert ! "
		"video/x-raw,format=I420 ! identity drop-allocation=true ! "
		"v4l2sink device=/dev/video1 qos=false sync=false";
else
	// pipe_proc = "decodebin ! autovideosink sync=false qos=false";
	// tested on Jetson Nano, should work on Xavier
	pipe_proc = "nvv4l2decoder ! nv3dsink sync=false qos=false";

Starting gst_loopback

Ignore the errors about pixformat (see below); the OP got it to work despite the error.

Gstreamer1.0 v4l2sink will not work at all for me. · Issue #137 · umlaeute/v4l2loopback · GitHub

./gst_loopback  --format 2K
Opening in BLOCKING MODE 
libv4l2: error getting pixformat: Invalid argument
Opening in BLOCKING MODE 
start, hit any key to stop
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 

Testing the OpenCV Python script

Note: my THETA V is on /dev/video1.
Tested on Aug 31, 2021 with firmware 3.70.1.

python canny.py --video_device 1

Test code
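
A minimal sketch of what such a Canny test can look like (a hypothetical stand-in for the canny.py above, assuming the THETA loopback on /dev/video1):

import cv2

# Hypothetical stand-in for canny.py; THETA loopback assumed on /dev/video1.
cap = cv2.VideoCapture(1)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow("canny", cv2.Canny(gray, 100, 200))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()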

Installed VS Code

To make testing easier, I installed VS Code ARM64 on the Nano. The Jetson is such a cool little device. IMO, it’s much easier to work on than a Raspberry Pi, though it does cost more than an RPi and likely has higher power requirements. The entire Jetson line is great.


I changed the code to

if (strcmp(cmd_name, "gst_loopback") == 0)
	// for Jetson Nano
	pipe_proc = "nvv4l2decoder ! autovideoconvert ! "
	// pipe_proc = "decodebin ! autovideoconvert ! "
		"video/x-raw,format=I420 ! identity drop-allocation=true ! "
		"v4l2sink device=/dev/video0 qos=false sync=false";
else
	// pipe_proc = "decodebin ! autovideosink sync=false qos=false";
	// tested on Jetson Nano, should work on Xavier
	pipe_proc = "nvv4l2decoder ! nv3dsink sync=false qos=false";

and by running ./gst_loopback I got

./gst_loopback --format 2K
Opening in BLOCKING MODE
ArgusV4L2_Open failed: No such file or directory
Opening in BLOCKING MODE 
libv4l2: error getting pixformat: Invalid argument
Opening in BLOCKING MODE 
start, hit any key to stop
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Error: Internal data stream error.
stop

I googled the ArgusV4L2_Open error and still have no idea what it is. :sweat_smile:

I just tried JetPack 4.4.1, but when I ran apt upgrade, I got

Errors were encountered while processing:
 nvidia-l4t-bootloader
E: Sub-process /usr/bin/dpkg returned an error code (1)

at the end.

Has anyone experienced this?
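
For a dpkg failure like this, the standard first steps are generic Debian troubleshooting rather than anything Jetson-specific: retry the configuration and read the package manager's log to see why the bootloader package's script failed:

$ sudo dpkg --configure -a
$ less /var/log/apt/term.log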

I’m not sure what ArgusV4L2 is myself. However, I think that libargus is the library that the JetPack Linux4Tegra release uses for some cameras.
Welcome — Jetson Linux Developer Guide 34.1 documentation

Jetson Linux API Reference: Libargus Camera API | NVIDIA Docs

You may be able to get additional help on the NVIDIA developer forum. I’m not sure why it wouldn’t be installed with JetPack. The file should be there.

I’ve never seen this before. Did you create a new microSD card?

Yes, I downloaded JetPack 4.4.1 and flashed it to a new SD card.