Hello sir, I am trying to stream with two of my Theta Vs on my Ubuntu 20 computer. I followed all the steps you did with the Jetson Nano, BUT directly on my computer. I am having a lot of trouble, as I only get one frame every ~20 seconds…
Did you face this problem, or do you have any idea how to fix it, please?
I tried it with just a single camera and got that result; I think that if I try it with a stereo pair my computer will crash. Please, I need your help.
If it still doesn’t work, you can try to disable or remove the NVIDIA graphics card on your system and use the integrated GPU on your Intel processor.
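On Ubuntu laptops with NVIDIA Optimus hybrid graphics, switching to the integrated GPU is usually a single command via the nvidia-prime package (a log-out or reboot is typically needed afterwards):

```shell
# Assumes the nvidia-prime package is installed (Ubuntu hybrid-graphics setups).
# Check which GPU is currently selected:
prime-select query
# Switch to the Intel integrated GPU, then log out or reboot:
sudo prime-select intel
```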
If it still doesn’t work, you can try to compile the nvdec gstreamer plug-in, but the process is involved.
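Roughly, for the GStreamer 1.16 that ships with Ubuntu 20.04, the process looks something like the sketch below. The meson option names, the environment variable, and the paths are from memory and may differ on your version; the NVIDIA Video Codec SDK headers have to be downloaded separately from NVIDIA's developer site.

```shell
# Sketch only; version numbers, option names, and paths are assumptions.
# 1. Grab the gst-plugins-bad source matching your installed GStreamer:
gst-inspect-1.0 --version          # e.g. 1.16.2 on Ubuntu 20.04
wget https://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-1.16.2.tar.xz
tar xf gst-plugins-bad-1.16.2.tar.xz
cd gst-plugins-bad-1.16.2

# 2. Point the build at the Video Codec SDK headers and enable the plugins:
export NVENCODE_CFLAGS="-I/path/to/Video_Codec_SDK/Interface"
meson build -Dnvdec=enabled -Dnvenc=enabled
ninja -C build

# 3. Copy the two plugins into the system plugin directory and verify:
sudo cp build/sys/nvdec/libgstnvdec.so \
        build/sys/nvenc/libgstnvenc.so \
        /usr/lib/x86_64-linux-gnu/gstreamer-1.0/
gst-inspect-1.0 nvdec
```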
The Jetson Nano is likely using the onboard NVIDIA hardware decoding of the H.264 stream and the x86 computer is likely getting stuck with software decoding.
If you run nvidia-smi on the x86 machine, does it show the GPU utilization increasing when you start the stream?
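One convenient way to watch this is nvidia-smi's device-monitoring mode, which samples utilization once per second; the `dec` column is the hardware decoder:

```shell
# Sample GPU utilization once per second. The `dec` column is the
# NVDEC hardware decoder; it should climb above 0% while the stream
# is playing if hardware decode is actually in use.
nvidia-smi dmon -s u
```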
Update on using nvdec
I’ve been tweaking the pipeline using nvdec and have managed to reduce latency by 100 ms on my computer. That is, 8.787 s on the foreground screen capture versus 8.567 s on the video from the Theta, which works out to a 220 ms delay to get it to the screen on my Ubuntu 20.04 system.
By adding “qos=false” it is now working, with a 550–600 ms delay.
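For anyone trying to reproduce this: qos=false is a property on the sink element, so in a standalone pipeline it would sit like this (device path, caps, and sink choice are assumptions; in gst_viewer.c it goes on the sink in the pipeline string):

```shell
# Sketch: disable QoS events on the sink, so late frames are not
# reported upstream and dropped. Device and caps are assumptions.
gst-launch-1.0 v4l2src device=/dev/video0 \
    ! video/x-raw,framerate=30/1 \
    ! xvimagesink qos=false
```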
I also installed the nvdec and nvenc plugins, but I am unsure if they are being applied right now, as I am not sure how decodebin chooses its decoders.
However, “nvidia-smi” does show 20–30% usage.
You get immediate access to the download after you put in an email address.
nvdec pushes the decoded frames to the GPU. You can then render them with OpenGL straight to the screen from the GPU, without copying back into system memory.
I do not think that decodebin automatically chooses nvdec. I think you need to specify nvdec in the pipeline.
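As a sketch, replacing decodebin with the desktop nvdec element might look like the following (untested; on newer GStreamer releases the element is called nvh264dec instead). Since nvdec outputs GL memory, pairing it with glimagesink keeps the frames on the GPU:

```shell
# Hypothetical pipeline string for gst_viewer.c, forcing hardware decode:
#   appsrc name=ap ! queue ! h264parse ! nvdec ! glimagesink qos=false
# Quick standalone check that the element decodes at all, using a raw
# H.264 elementary stream (sample.h264 is a placeholder file name):
gst-launch-1.0 filesrc location=sample.h264 ! h264parse ! nvdec ! glimagesink
```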
@Mehdi_Zayene, I saw your comment on YouTube about saving to file. I tested this on x86 and can save to file through v4l2loopback with both VLC and OBS. I haven’t figured out how to save to file directly with GStreamer. Maybe someone on this forum knows how to save it directly to file from within gst_viewer.c.
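An untested sketch for saving directly with gst-launch from the loopback device. The loopback carries raw video, so this re-encodes to H.264 before muxing; the -e flag makes gst-launch forward EOS on Ctrl-C so mp4mux can finalize the file:

```shell
# Untested sketch; device path and caps are assumptions.
gst-launch-1.0 -e v4l2src device=/dev/video2 \
    ! video/x-raw,framerate=30/1 \
    ! videoconvert \
    ! x264enc tune=zerolatency \
    ! h264parse ! mp4mux ! filesink location=theta.mp4
```

Inside gst_viewer.c itself, changing the pipeline string to something like `appsrc name=ap ! queue ! h264parse ! matroskamux ! filesink location=theta.mkv` might avoid the decode/re-encode round trip entirely, since appsrc receives the camera’s H.264 directly — but I haven’t verified this.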
$ gst-launch-1.0 v4l2src device=/dev/video2 ! video/x-raw,framerate=30/1 ! xvimagesink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Stupidity on my part: I realised that gst_loopback was set to /dev/video1 by default. Changing that to /dev/video0 meant that FFmpeg then works if you set the input device to /dev/video0. Streaming to YouTube is working. If anyone is interested, this is the FFmpeg command
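(The command itself doesn’t seem to have made it into the post. For anyone searching later, a YouTube RTMP push from the loopback device generally takes this shape; the bitrate, preset, and stream-key placeholder are all assumptions, not the poster’s actual command:)

```shell
# Sketch only. YouTube requires an audio track, so a silent one is
# generated with anullsrc. Replace YOUR-STREAM-KEY with your own key.
ffmpeg -f v4l2 -i /dev/video0 \
       -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
       -c:v libx264 -preset veryfast -b:v 6000k -g 60 -pix_fmt yuv420p \
       -c:a aac -b:a 128k \
       -f flv rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY
```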
Ahh, this is just step 1, and the Jetson is just a cheap device to get something working. However, I was intrigued by how much it could do. I need to use FFmpeg as I have some other built-in encoders that do some special processing on the video, so this is the first stage of the pipeline; YouTube is just a free test environment.
The Jetson does, however, open up the possibility of a simple portable 360 streaming device.
What many people may not realize is that UDP is lossy, while TCP does retransmissions, which cause additional latency. I haven’t looked at the characteristics of the Theta V stream, but if it is not constant bit rate, there will be network nano-bursts (I’m talking about bursts at timescales shorter than 1 ms). These bursts can overflow buffer space, causing packet losses. For TCP connections, the retry/retransmission protocol is a normal part of flow control, but it will increase packet jitter and latency.
Thanks. This is very useful to know as the 4K streams are a bit heavy to deal with. There’s some nice satisfaction in pushing that little Nano. The Xavier NX would also be nice, but a bit out of my budget at the moment.
Actually, even the first line alone gives me the following:
GStreamer-CRITICAL **: 00:03:56.424: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
WARNING: erroneous pipeline: no element "video"
NB: I am running it on my Jetson Nano, with gst_loopback running