As a test, can you view the stream in something like VLC or gst-launch-1.0 on /dev/video7?
Example
$ cvlc v4l2:///dev/video7
VLC media player 3.0.9.2 Vetinari (revision 3.0.9.2-0-gd4c1aefe4d)
[000055573aea4db0] dummy interface: using the dummy interface module...
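If VLC isn't available, the gst-launch-1.0 version of the same test would look roughly like this (just a sketch; videoconvert is there so the sink can negotiate whatever format the loopback is producing):
$ gst-launch-1.0 v4l2src device=/dev/video7 ! videoconvert ! autovideosink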
You should also be able to get the THETA information with v4l2-ctl. Note that I reduced the resolution in the example below to 1920x960, as I only have a Jetson Nano and it was struggling with OpenCV at 4K.
$ v4l2-ctl --list-devices
Dummy video device (0x0000) (platform:v4l2loopback-000):
        /dev/video7
Failed to open /dev/video0: No such file or directory
$ v4l2-ctl -d7 -D
Driver Info (not using libv4l2):
    Driver name   : v4l2 loopback
    Card type     : Dummy video device (0x0000)
    Bus info      : platform:v4l2loopback-000
    Driver version: 4.9.201
    Capabilities  : 0x85208003
        Video Capture
        Video Output
        Video Memory-to-Memory
        Read/Write
        Streaming
        Extended Pix Format
        Device Capabilities
    Device Caps   : 0x85208003
        Video Capture
        Video Output
        Video Memory-to-Memory
        Read/Write
        Streaming
        Extended Pix Format
        Device Capabilities
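To double-check the reduced resolution, you can also ask v4l2-ctl for the current format; it should report the 1920x960 that the writer negotiated:
$ v4l2-ctl -d /dev/video7 --get-fmt-video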
gscam doesn’t seem to handle UDP streams well. I also tried using OpenCV VideoCapture to get the data into ROS, but that had a couple of issues. There are two VideoCapture backends that seemed appropriate: GStreamer and FFmpeg. It turns out that the OpenCV version packaged with ROS is not built with GStreamer support, so you would have to build OpenCV yourself to use it. For FFmpeg, the version of OpenCV packaged with ROS Melodic is 3.2, which is missing a fairly critical change (https://github.com/opencv/opencv/pull/9292) that lets you set FFmpeg capture options through an environment variable. I got both of these working by upgrading OpenCV to 3.4.9 and building from source, but GStreamer had a low framerate (~5 fps) and FFmpeg had a lot of corruption and dropped frames (maybe it was stuck using UDP?). So I decided to stick with gscam for now.
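For reference, the variable that PR adds is OPENCV_FFMPEG_CAPTURE_OPTIONS (key;value pairs separated by |). For example, to force TCP instead of UDP transport you could launch your capture script like this (the script name is a placeholder):
$ OPENCV_FFMPEG_CAPTURE_OPTIONS="rtsp_transport;tcp" python3 your_capture_script.py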
The latency value of 400 worked for me, but should be tuned depending on your network.
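For anyone tuning it: latency is a property of rtspsrc, so if you are pulling the stream over RTSP it sits in the GSCAM_CONFIG pipeline roughly like this (the address and the decode chain are only placeholders for illustration):
$ export GSCAM_CONFIG="rtspsrc location=rtsp://192.168.1.1/live latency=400 ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert"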
Yes, I installed it directly from source. The thing is that when I tried the same versions of OpenCV and ROS on the Intel NUC, there was no problem with GStreamer. It is only now that I am setting up the camera on the Jetson Xavier that things are getting complicated.
It seems that the problem is the pipeline from GStreamer to OpenCV. For now I will just focus on making this connection work using the simple code you suggested:
import numpy as np
import cv2

cap = cv2.VideoCapture(7, cv2.CAP_GSTREAMER)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
But I am getting the following error:
[ WARN:0] global /home/spacer/Downloads/opencv-4.5.3/modules/videoio/src/cap_gstreamer.cpp (2057) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /home/spacer/Downloads/opencv-4.5.3/modules/videoio/src/cap_gstreamer.cpp (1034) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/spacer/Downloads/opencv-4.5.3/modules/videoio/src/cap_gstreamer.cpp (597) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Traceback (most recent call last):
File "test.py", line 11, in <module>
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.error: OpenCV(4.5.3) /home/spacer/Downloads/opencv-4.5.3/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
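(The cvtColor assertion itself seems to be just a knock-on effect: since the pipeline never starts, cap.read() returns (False, None), and the empty frame is what trips cvtColor.)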
I also tried the following and the outcome was the same:
I can’t get the GStreamer pipeline to work directly from the terminal… this might be the issue:
gst-launch-1.0 v4l2src device=/dev/video7 ! video/x-raw,framerate=30/1 ! xvimagesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
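From what I have read, not-negotiated usually means the caps filter doesn’t match what the device is actually producing, so I will check what the loopback device is advertising (and whether anything is feeding it yet, since an unfed v4l2loopback device has no format to offer):
$ v4l2-ctl -d /dev/video7 --list-formats-ext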
I have tried a Jetson Nano, and I met the same problem as yours. The JetPack version I used is 4.6, but it was 4.4 that was posted here. I am trying to restore the system to 4.4 and see what is going on.
This is how I am controlling the resolution from the command line. The Nano can’t effectively process the 4K streams for object recognition in my tests. You may get better results.
if (argc > 1 && strcmp("--format", argv[1]) == 0) {
    if (argc > 2 && strcmp("4K", argv[2]) == 0) {
        printf("THETA live video is 4K\n");
        res = thetauvc_get_stream_ctrl_format_size(devh,
                THETAUVC_MODE_UHD_2997, &ctrl);
    } else if (argc > 2 && strcmp("2K", argv[2]) == 0) {
        printf("THETA live video is 2K\n");
        res = thetauvc_get_stream_ctrl_format_size(devh,
                THETAUVC_MODE_FHD_2997, &ctrl);
    } else {
        printf("specify the video format: --format 4K or --format 2K\n");
        goto exit;
    }
}
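With that patch the mode is picked at launch time, e.g. (assuming the patched gst_loopback build from the sample):
$ ./gst_loopback --format 2K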
Based on Ubuntu 18.04.5 LTS
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic
Confirmed with nvv4l2decoder.
Tested on Jetson Nano. You may need to specify nvv4l2decoder for the Xavier, as I’ve heard that decodebin may not work there.
if (strcmp(cmd_name, "gst_loopback") == 0)
    // for Jetson Nano
    pipe_proc = "nvv4l2decoder ! autovideoconvert ! "
        // pipe_proc = "decodebin ! autovideoconvert ! "
        "video/x-raw,format=I420 ! identity drop-allocation=true ! "
        "v4l2sink device=/dev/video1 qos=false sync=false";
else
    // pipe_proc = "decodebin ! autovideosink sync=false qos=false";
    // tested on Jetson Nano. Should work on Xavier
    pipe_proc = "nvv4l2decoder ! nv3dsink sync=false qos=false";
starting gst_loopback
Ignore the errors about pixformat; see below. The OP got it to work despite the error.
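Once gst_loopback is running and feeding /dev/video1, you can view it the same way as the test at the top of the thread, even with the pixformat warnings present:
$ cvlc v4l2:///dev/video1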
To make testing easier, I installed VS Code ARM64 on the Nano. The Jetson is such a cool little device. IMO, it’s much easier to work on than a Raspberry Pi. However, I guess it does cost more than an RPi and likely has higher power requirements. The entire Jetson line is great.
You may be able to get additional help on the NVIDIA developer forum. I’m not sure why it wouldn’t be installed with JetPack. The file should be there.