Live Streaming over USB on Ubuntu and Linux, NVIDIA Jetson

I do not have a solution, but I can replicate the problem. I will keep trying.

I have the same problem that you do.

The only thing I got working is ffplay with the THETA on /dev/video0:

$ ffplay -f v4l2 /dev/video0
ffplay version 3.4.8-0ubuntu0.2 Copyright (c) 2003-2020 the FFmpeg developers
  built with gcc 7 (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04)
  configuration: --prefix=/usr --extra-version=0ubuntu0.2 --toolchain=hardened --libdir=/usr/lib/aarch64-linux-gnu --incdir=/usr/include/aarch64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100


I put it into a loop to get the frames from the RICOH THETA Z1 to see if it was using the GPU, and I don’t think it is.


This screen grab is coming over an X forwarding session to another computer, so it may be faster on a monitor plugged into the Jetson. If I increase the window size, the latency gets worse. The interesting challenge is getting cv2.cuda.remap working, which I haven’t managed yet.


import cv2
import os
import Equirec2Perspec2 as E2P

print(f"OpenCV version {cv2.__version__}")
cap = cv2.VideoCapture(0)

# Check if the webcam is opened correctly
if not cap.isOpened():
    raise IOError("Cannot open webcam")

while True:
    ret, frame = cap.read()
    if not ret:
        break
    #print(frame.shape)
    equ = E2P.Equirectangular(frame)
    frame = equ.GetPerspective(120, 180, -15, 400, 400)
    cv2.imshow('Input', frame)

    c = cv2.waitKey(1)
    if c == 27:
        break

cap.release()
cv2.destroyAllWindows()

In Equirec2Perspec2.py:

class Equirectangular:
    def __init__(self, img):
        self._img = img
        [self._height, self._width, _] = img.shape

There’s a guy at the link below trying to use cv2.cuda.remap for the same thing we are.

In the comments, it ends with a happy “it’s working” with a plethora of exclamation points.

I’m not sure what this cv2.fisheye… is doing

map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, DIM, cv2.CV_16SC2)

But, other than that, loading the map looks doable.

        #map1 = map1.astype(np.float32)
        #map2 = map2.astype(np.float32)
        #map1 = cv2.cuda_GpuMat(map1)
        #map2 = cv2.cuda_GpuMat(map2)

then at some point, he’s able to use cv2.cuda.resize with hopefully faster processing…

        #resized = cv2.cuda.resize(img2, (new_w, new_h), interpolation=cv2.INTER_LINEAR)

Another example of using cv.cuda.remap by the same guy, with another “it’s working” at the end

Some friendly guy posted this, which appears to work, according to the original poster.

cuDst = cv.cuda_GpuMat(cuMat.size(),cuMat.type())
cv.cuda.remap(cuMat,cuMapX,cuMapY,dst=cuDst,interpolation=cv.INTER_LINEAR)

We probably just need to read up on cv.cuda and the GpuMat and maybe we’ll get it too, like the happy guy posting on the OpenCV forums. 🙂


The only sad thing I noticed is that all the test pictures in my camera are of me sitting in front of a computer.

Ha, for me it is my basement; time to clean up. No place to hide stuff in a 360 image 🙂

Have you tried it on a video stream without applying Mobilenet v2 analysis? I’m wondering how fast the Equirec2Perspec can handle the frames.

Not yet; testing this on a video stream is my next to-do. I will time that process first before applying any NN. My intuition is that many NNs run fast enough (100+ fps on the AGX Xavier) that the non-linear transformation will be the bottleneck. The good thing is that I don’t need very fast times; anything around a second would work for me.
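
To make that timing concrete, here is a tiny harness (a sketch; swap the dummy lambda for the real equ.GetPerspective(...) call to measure the E2P step itself):

```python
import time

def time_per_frame(fn, frames):
    """Average seconds per frame for fn applied to each frame."""
    start = time.perf_counter()
    for f in frames:
        fn(f)
    return (time.perf_counter() - start) / len(frames)

# Dummy transform on dummy frames; replace the lambda with e.g.
# lambda f: E2P.Equirectangular(f).GetPerspective(120, 180, -15, 400, 400)
avg = time_per_frame(lambda f: [x * 2 for x in f], [[1, 2, 3]] * 100)
print(f"{avg * 1000:.4f} ms/frame")
```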

Just to clarify the challenge you’re proposing, do you mean to test OpenCV on NVIDIA Jetson with CUDA acceleration?

No Craig, I was referring to this transformation function specifically. I am already worried that this could be a bottleneck in any application. A C++ implementation using some of the Nvidia accelerations could be very helpful.


Hello @craig, Were you able to implement this solution?

Got it to work!! It is streaming and correcting in real time. Let me play around with the code a little.


I can even create a panorama from perspective corrected images.

import cv2
import os
import Equirec2Perspec2 as E2P
import numpy as np

print(f"OpenCV version {cv2.__version__}")
cap = cv2.VideoCapture(1)

# Check if the webcam is opened correctly
if not cap.isOpened():
    raise IOError("Cannot open webcam")

while True:
    ret, frame = cap.read()
    if not ret:
        break
    #print(frame.shape)
    equ = E2P.Equirectangular(frame)

    frame1 = equ.GetPerspective(90, 0, 0, 400, 400)
    frame2 = equ.GetPerspective(90, 90, 0, 400, 400)
    frame3 = equ.GetPerspective(90, 180, 0, 400, 400)
    frame4 = equ.GetPerspective(90, 270, 0, 400, 400)

    hor_concat_top = np.concatenate((frame1, frame2), axis=1)
    hor_concat_bottom = np.concatenate((frame3, frame4), axis=1)

    # Stack the two rows vertically (axis=0) for a 2x2 panorama.
    all_four_frames = np.concatenate((hor_concat_top, hor_concat_bottom), axis=0)

    cv2.imshow('Input', all_four_frames)

    c = cv2.waitKey(1)
    if c == 27:
        break

cap.release()
cv2.destroyAllWindows()

-Jaap


Congratulations and great work. 🙂

Are you next going to experiment with some type of object detection on the frames?

The technique would also be useful for human analysis. Selecting portions for humans to inspect.

Sorry, I haven’t tried yet.

The primary algorithm looks feasible to convert to cv2.cuda.remap, but I believe we need to call cv2.cuda_GpuMat() on each frame and then upload the frame to the GPU.

persp = cv2.remap(self._img, lon.astype(np.float32), lat.astype(np.float32), cv2.INTER_CUBIC, borderMode=cv2.BORDER_WRAP)

This is quite an interesting challenge, as I think we can get some big gains on a wide range of frame processing with little effort. We just need to figure out the correct sequence of steps for a few common methods such as remap, resize, and cvtColor.

Hey, nice going, Jaap!


Dear Ricoh Theta V friends,

We get an "Internal data stream error" when using gst_loopback, even though a dummy /dev/video* is created and assigned by v4l2loopback.
By following this nice tutorial, both ptpcam and gst_viewer work well.

But we are stuck on gst_loopback; it outputs the error below:

start, hit any key to stop
Error: Internal data stream error.
stop

It seems like we are very close, just one step from getting it working. We were wondering if anyone else has hit the same issue.

Our desktop testing environment is Ubuntu 18.04 LTS with Linux 4.15.0-118-generic, x86-64.
The camera is a THETA V with firmware 3.50.1.

Best,
– Luke

When you run lsmod, can you see the v4l2loopback module?

$ sudo modprobe v4l2loopback
[sudo] password for craig: 
$ lsmod
Module                  Size  Used by
v4l2loopback           40960  0
btrfs                1253376  0
...

Do you have more than one camera on your system? If not, did you modify the device to /dev/video0 on this line?

On x86, change line 190 as follows:

if (strcmp(cmd_name, "gst_loopback") == 0)
    pipe_proc = "decodebin ! autovideoconvert ! "
        "video/x-raw,format=I420 ! identity drop-allocation=true !"
        "v4l2sink device=/dev/video0 qos=false sync=false";

Make sure qos=false

If you moved the camera around between different systems, reboot the camera: hold the power button down for about 15 seconds, then restart it while it is plugged in. I sometimes have problems if I have the camera plugged into one system and then move it to another.

Please confirm and report back with additional errors. It should work on your system.

Craig, thanks for your reply. We have double-checked lsmod and /dev/video1 (we have one existing webcam), added qos=false, rebooted, and tried exclusive_caps=1 and 0. But the internal data stream error is still there. We have two Linux systems getting the same error.

As far as we know, the error message comes from v4l2loopback; the most relevant discussion and a potential solution are linked below:


Before digging into the above, we are getting one more THETA V to double-check that the error is not caused by the specific camera.

Will keep you posted.
Best,
– Luke

What is the graphics card and driver you are using?

You can get some information with nvidia-smi

It will look something like this:

Note that the "Off" next to the GeForce GTX 950 refers to Persistence-M (the row above); it does not indicate whether the GPU is enabled. In this case, the driver is the proprietary NVIDIA driver 450.66.

The v4l2loopback does not work on all systems.

There is a small note on the README.md for libuvc-theta-sample with potentially big meaning.

If you have an NVIDIA graphics card, you may need the NVIDIA plug-in for gstreamer, which I think is in the "plugins bad" group but appears to work. I got this information from someone more knowledgeable about gstreamer than I am.

In my case, I don’t really understand gstreamer, so I installed everything.

https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c

apt-get install libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio

You may not need the step above. You can likely just install the individual plug-in categories.

There may still be a graphics card problem.

You can try running the X.org driver, nouveau.

Once you switch to nouveau, you can verify the driver with:

$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: nouveau (0x10de)
    Device: NV126 (0x1402)
    Version: 20.0.8
    Accelerated: yes
    Video memory: 2024MB

or sudo lshw -c video

Note that I normally use the nvidia driver. I switched over to nouveau after I saw your post to try and test it. I did basic testing with the nouveau driver and the THETA. It appears to run about the same.


I get a slightly different result on my NVIDIA NX and generic Ubuntu 18.04:

nvidia@nx:~/build/libuvc-theta/build$ ./example
UVC initialized
uvc_find_device: No such device (-4)
UVC exited

Also note that I do not see any /dev/video* devices, but lsusb shows:

nvidia@nx:~/build/libuvc-theta/build$ lsusb
Bus 002 Device 002: ID 0bda:0489 Realtek Semiconductor Corp.
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 13d3:3549 IMC Networks
Bus 001 Device 005: ID 0c45:7403 Microdia Foot Switch
Bus 001 Device 004: ID 1a40:0101 Terminus Technology Inc. Hub
Bus 001 Device 006: ID 05ca:0368 Ricoh Co., Ltd
Bus 001 Device 002: ID 0bda:5489 Realtek Semiconductor Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

If you’re just getting started, I suggest you use this repository for an example application.

I just posted a Linux Streaming quick start guide on this forum.
There is extensive information on this site.

I tried that as well. It calls the libuvc you modified and returns "THETA NOT FOUND". lsusb recognizes that the camera is connected; however, there is no /dev/video* device whatsoever.

There are a couple of different issues: "THETA NOT FOUND" is one issue, and the missing /dev/video* device is a separate one.

The THETA won’t appear on the /dev/video* until you configure v4l2loopback as explained in the documentation and YouTube video.

With the THETA plugged in and in LIVE streaming mode, are you running ./gst_viewer or ./gst_loopback? I suggest using ./gst_viewer first.

Is there the word “LIVE” on the body of the camera?

Can you post a screen grab of your exact commands and the output of lsusb while the camera is plugged in and in LIVE mode?

The ID you posted from the camera is 05ca:0368. Note that my ID is 05ca:2712. Either the computer is not detecting the camera in live streaming mode, or the camera is not functioning properly in live streaming mode. If the ID is not 2712, the application will not find the camera.

An ID of 0368 may indicate that either the camera is "off" or the computer is not detecting the camera accurately. If the camera is on and in live streaming mode but the ID is still not 2712, you should next consider changing the USB cable or switching the USB port. As long as the ID is detected as 0368, the sample gstreamer app will not work.

In the header file, you can see that the app needs to find 0x2712 (THETA V) or 0x2715 (Z1)
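
That check can be scripted; here is a small sketch that classifies the camera state from lsusb output (IDs taken from this thread and the header file):

```python
import subprocess

LIVE_IDS = {"05ca:2712", "05ca:2715"}  # THETA V / Z1 in live streaming mode
STILL_ID = "05ca:0368"                 # camera present but not in LIVE mode

def theta_status(lsusb_output):
    """Classify the THETA's state from the text output of lsusb."""
    if any(i in lsusb_output for i in LIVE_IDS):
        return "live"
    if STILL_ID in lsusb_output:
        return "connected-not-live"
    return "not-found"

if __name__ == "__main__":
    out = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
    print(theta_status(out))
```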

Please feel free to post again. This is new stuff for everyone. You don’t need to hold back.

Unlike a webcam, you need to manually turn the THETA on and physically press the "mode" button on the side of the camera until you see the word "LIVE". For the word "LIVE" to appear, the camera needs to be plugged into the computer with a USB cable; it will not appear otherwise. The cable itself may also not be functioning, and if you have a USB adapter on the computer, it may be incompatible. It’s also possible that security software on your computer is interfering with detection.

Although you can turn the camera on and switch it to LIVE mode with the USB cable, it does not do this by default.


If you are building something like a robot where you need to power on the camera from the robot, please post again. We did a test here to show that it is possible.

Thanks. I did not know that I had to put the camera into "LIVE" mode first. After doing that, I get a little bit further with ./gst_viewer, but all I get is a pure red display. Also, there are still no /dev/video* devices.


Also, is there a way to control the camera via the USB interface, as opposed to putting it into LIVE mode manually first?

Are you using an NVIDIA Jetson Nano or an Xavier?

Change the gstreamer pipeline line in gst_viewer.c to the line below:

"nvv4l2decoder ! nv3dsink sync=false"

If it works with gst_viewer, you will also need to change line 188 to match. It is possible that gstreamer is not auto-detecting your video system and you need to specify the video sink and decoder.

The gstreamer pipeline line above is specific to NVIDIA Jetson. If you do the same test on x86, you will need to change the line back to autodetect or specify a pipeline specific to your setup.

Getting /dev/video0 to work

If you are using a Jetson, you may have only one video device on your system. If this is the case (you do not have another webcam and the THETA is /dev/video0), change line 190 to device=/dev/video0.

Install v4l2loopback

There is a detailed video on this process at the linux streaming site.

summary:

$ git clone https://github.com/umlaeute/v4l2loopback.git
$ cd v4l2loopback
$ make
$ sudo make install
$ sudo depmod -a
$ sudo modprobe v4l2loopback

Verify that you have v4l2loopback installed:

Post a screenshot of the output of lsmod.


Now run gst_loopback to pipe the video from the sample app to the loopback device. You should now be able to use things like OpenCV on /dev/video0.
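
A quick sanity check that the loopback device is actually delivering frames (a sketch; assumes the THETA ends up on /dev/video0):

```python
def probe(cap):
    """Return (width, height) of the first frame from a capture, or None."""
    ret, frame = cap.read()
    return (frame.shape[1], frame.shape[0]) if ret else None

if __name__ == "__main__":
    import cv2
    cap = cv2.VideoCapture(0)  # /dev/video0, the v4l2loopback device
    if not cap.isOpened():
        raise IOError("Cannot open /dev/video0 -- is gst_loopback running?")
    print(probe(cap))
    cap.release()
```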

Feel free to ask additional questions. We are happy to help you.

USB API

There is detailed information on using the USB API on the Linux streaming site. The process is involved. The basic steps:

  1. Install libptp2 from source. You will likely need to install libusb-dev with apt.
  2. Run ldconfig with sudo ldconfig. Note: hopefully /usr/local/lib is in your library load path, which it should be by default on JetPack 4.4 and likely on earlier versions too.
  3. Set the camera to live streaming with the command below.
$ ptpcam --set-property=0x5013 --val=0x8005

If you want to switch the camera back to still image mode to take a detailed shot, use:

$ ptpcam --set-property=0x5013 --val=0x0001

You can also do the following:

  • put camera to sleep
  • wake camera from sleep
  • turn camera power off
  • turn camera power on
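
The mode-switch commands above can be wrapped in a tiny helper (a sketch; only the two property 0x5013 values shown in this thread are included, so sleep/wake/power would need the codes from the USB API documentation):

```python
import subprocess

# Values for PTP property 0x5013 (capture mode), from this thread.
MODES = {
    "live": "0x8005",   # live streaming
    "still": "0x0001",  # still image capture
}

def set_theta_mode(mode):
    """Shell out to ptpcam to switch the THETA capture mode."""
    cmd = ["ptpcam", "--set-property=0x5013", f"--val={MODES[mode]}"]
    return subprocess.run(cmd, capture_output=True, text=True)

# Usage: set_theta_mode("live") before starting gst_viewer or gst_loopback.
```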

Let me know if you want to schedule a 15 minute or 30 minute Google Meet video conference or Zoom call. I am in Palo Alto, California. I can answer basic questions. I am not an experienced developer, but I have some knowledge of the camera due to using it and getting information from people on this forum. Be aware that I don’t work for RICOH. It’s likely that you want to get past this stage quickly and focus on AI or object detection stuff and that you’ve just received the THETA. There may be basic RICOH THETA-specific information that could help. Happy to continue to answer questions on this forum as well.

Hello

I have my THETA V and Jetson all up and running thanks to the great work of the contributors. gst_viewer works perfectly, lsmod shows v4l2loopback, and I have a /dev/video0.

However, the following ($OUTPUT is a YouTube URL and secret key)

ffmpeg -re -f v4l2 -i /dev/video0 -re -f lavfi -i anullsrc -c:v libx264 -preset veryfast -maxrate 3000k \
-bufsize 6000k -pix_fmt yuv420p -g 50 -an -f flv $OUTPUT

gives this error

[video4linux2,v4l2 @ 0x55a3f3a860] ioctl(VIDIOC_G_FMT): Invalid argument
/dev/video0: Invalid argument

Am I missing something or is this not something that can be done?

Sorry that I don’t have a solution.

Just letting you know that we can’t get it to work with ffmpeg either.

Live Streaming over USB on Ubuntu and Linux, NVIDIA Jetson - #85 by craig

Another member posted information using gst-rtsp-server.

I see on stackoverflow that someone is using rtmpsink with gstreamer to get it to YouTube. This was back in 2015. There may be a better solution now. If you get it working with gstreamer, please post your pipeline.

I have gst-rtsp-server working on a Jetson Nano and can help with that if you have problems. I think I compiled it from source using the specific version of gstreamer that came with JetPack 4.4. I don’t think it was the newest version. I think I took notes. I hope I did…


Note that I moved your post to this topic, as there is at least one other person working on the same ffmpeg problem. The search on this forum is not that great and things are difficult to find.
