Is it possible to reduce the framerate of the Theta?
All my tests have been at 2K; when I try it with 4K, the sink of the pipeline has an increasing delay over time.
I think it takes too much time per frame due to the increased resolution. Or is it possible to drop frames if they're too old?
gst_viewer has minimal delay for both 2K and 4K, however.
I believe the output of the camera is locked at 30fps.
How long do you use the stream before the latency starts increasing? We can try and replicate the test to isolate a way to avoid this.
It would be good to know:
- beginning latency
- end latency
- elapsed time
- your setup (such as if you’re using gst_viewer, the loopback, other software such as OpenCV)
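On the framerate question itself: the camera output appears fixed at 30 fps, but you can drop frames inside the pipeline. A minimal sketch, assuming the same pipe_proc style as gst_loopback (videorate and its drop-only property are standard GStreamer; the placement here is an assumption, not something tested on the THETA):
/* Sketch: cap the loopback feed at 15 fps. With drop-only=true,
 * videorate only discards frames and never duplicates them. */
pipe_proc = "decodebin ! autovideoconvert ! videorate drop-only=true ! "
            "video/x-raw,format=I420,framerate=15/1 ! "
            "identity drop-allocation=true ! "
            "v4l2sink device=/dev/video0 qos=false sync=false";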
I stream the THETA and use:
sudo modprobe v4l2loopback exclusive_caps=1 max_buffers=8 width=3840 height=1920 framerate=30000/1001
and then gst_loopback.
if I use:
pipe_proc = "vaapidecodebin ! videoconvert ! videoscale ! video/x-raw,format=I420,width=1920,height=960,framerate=30000/1001 ! identity drop-allocation=true ! v4l2sink device=/dev/video2 qos=false sync=false";
and THETA MODE UHD(4K) or FHD(2K).
OR
pipe_proc = "vaapidecodebin ! videoconvert ! videoscale ! video/x-raw,format=I420 ! identity drop-allocation=true ! v4l2sink device=/dev/video2 qos=false sync=false";
with THETA MODE FHD(2K)
it comes in with barely any latency (~500 ms?) at 2K.
if I use
pipe_proc = "vaapidecodebin ! videoconvert ! videoscale ! video/x-raw,format=I420 ! identity drop-allocation=true ! v4l2sink device=/dev/video2 qos=false sync=false";
OR
pipe_proc = "vaapidecodebin ! videoconvert ! videoscale ! video/x-raw,format=I420,width=3840,height=1920,framerate=30000/1001 ! identity drop-allocation=true !
, and
THETA MODE UHD(4K)
It comes in at 4K and starts with barely any latency; however, the longer it runs, the higher the latency becomes. It did not matter what I opened the v4l2loopback device with (VLC, a web page, OpenCV).
The increase in latency happens once the v4l2loopback device is accessed; it did not matter how long gst_loopback ran beforehand. The increased latency persists between sessions of accessing the v4l2loopback device and does not reset on its own. It only resets if I stop and rerun gst_loopback.
If I run THETA MODE UHD(4K) with gst_viewer it comes in fine without latency.
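A possible mitigation for the 4K buildup is a leaky queue in front of the sink, so stale frames are dropped instead of accumulating. A sketch based on the 4K pipeline above (queue with leaky=downstream and max-size-buffers=1 is standard GStreamer; whether it cures this particular buildup is untested):
/* Sketch: drop the oldest buffered frame when a new one arrives,
 * so sink-side latency cannot grow without bound. */
pipe_proc = "vaapidecodebin ! videoconvert ! videoscale ! "
            "video/x-raw,format=I420,width=3840,height=1920,framerate=30000/1001 ! "
            "queue leaky=downstream max-size-buffers=1 ! "
            "identity drop-allocation=true ! "
            "v4l2sink device=/dev/video2 qos=false sync=false";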
Hello Everyone,
Does anyone know the Gstreamer pipeline needed to record the streaming output with a Jetson AGX Xavier?
Just got mine today
I can do some tests on latency and performance of the AGX Xavier compared to the Jetson Nano, as I have both, but the GStreamer pipelines I used to display and record the video do not seem to work on the Jetson AGX Xavier.
gst_viewer.c is working though
The autovideo detection does not seem to work on the Xavier; you need to specify the decoder.
Example:
change "decodebin ! autovideosink sync=false" to "nvv4l2decoder ! nv3dsink sync=false"
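In gst_viewer.c, the change might look like this (a sketch; nvv4l2decoder and nv3dsink are the JetPack GStreamer elements, and you can confirm they exist on your image with gst-inspect-1.0):
/* Sketch for the Xavier: in gst_viewer.c, replace the default
 * viewer branch with the hardware decoder and NVIDIA sink. */
pipe_proc = "nvv4l2decoder ! nv3dsink sync=false";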
My gst_viewer is working, but with VLC I can only get one frame and then it just sits there. My webcam works fine with everything. I tried recording a video, but it only captures one frame and the video won't record. I need to get this working ASAP; I have been trying for days and I've come so far to get it working up to this point. It seems like there is something wrong with v4l2loopback and the frame rate/resolution. This is a brand new install of Ubuntu 20.04, so all libraries are new.
I replied in the other topic. Confirm that you added qos=false to the pipeline.
It should look something like this:
if (strcmp(cmd_name, "gst_loopback") == 0)
pipe_proc = "decodebin ! autovideoconvert ! "
"video/x-raw,format=I420 ! identity drop-allocation=true !"
"v4l2sink device=/dev/video0 qos=false sync=false";
Also, you may be using software rendering, not hardware accelerated rendering. Post info on your GPU setup and also whether you are using decodebin or nvdec.
You can also ask more questions, but just to let you know, there's a bunch of information in the doc available here: https://theta360.guide/special/linuxstreaming/ and there is search capability on that document.
Again, no problem if you keep asking questions. Just trying to help you out.
Hey Craig, I need your help. I am trying to get live streaming from the THETA S 360 camera on Ubuntu 18.04. I followed the steps to get gst_viewer up and running, and the camera is connected to the USB port, but I am not sure why I am getting the error 'THETA not found' when I run ./gst_viewer after installing libuvc-theta-sample. Any help would be appreciated. Do you have all the steps to follow for live streaming a THETA camera on Ubuntu 18.04?
The THETA S streams MotionJPEG, which Linux supports without the driver. You do not need gst_viewer with the THETA S.
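Because the S shows up as a regular UVC camera, a plain GStreamer pipeline is enough. A minimal sketch, assuming the camera enumerates as /dev/video0 and using the software jpegdec element (swap in a hardware JPEG decoder if your platform has one):
/* view_theta_s.c: minimal MotionJPEG viewer sketch for the THETA S.
 * Build with: gcc view_theta_s.c $(pkg-config --cflags --libs gstreamer-1.0) */
#include <gst/gst.h>

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);
    GstElement *pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 ! image/jpeg ! jpegdec ! "
        "videoconvert ! autovideosink sync=false", NULL);
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    /* Block until an error or end-of-stream. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
    if (msg != NULL)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}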
Hi Folks.
I am trying to find a 360 degree camera suited to the Nitrogen6X board (1 GHz quad-core ARM Cortex-A9 with 1 GB RAM), which we use. This is running Ubuntu 18.04. We wish to overlay measurements from another sensor on a 360 degree camera feed and display it as an equirectangular stream. Is the THETA S or THETA V better suited to this? (I understand the S offloads some of the image processing to the acquisition computer, which in our case is low spec, and that the V does this on board but might require a newer OS and/or libraries which we may be unable to upgrade to.) I could be mistaken in these understandings, which is why I am asking. Ideally we want to tax the processor on the Nitrogen6X board as little as possible; we want to crack open the camera feed and overlay our data before displaying it.
Thanks for any input you have.
Get the THETA V.
- output of the V is in equirectangular format
- output of the S is in dual-fisheye format, which you need to stitch yourself on the ARM board
- S output is MotionJPEG
- V output is H.264
- V can stream at 4K. S can stream at 2K
The Linux kernel can handle the S out of the box.
To use the V, you need the drivers documented in this thread and on this site.
We have more examples running Ubuntu 20.04. I have not tried it with 18.04. Hopefully, it will work without additional modifications.
If your application benefits from a dual-fisheye feed, then the S might be easier for you to use. It should also be lower cost, as it is an older model.
You can try to download and compile the driver on 18.04 before you get the camera. If you can compile gst_loopback and the sample code, then I suggest you buy the V from a place with a return policy and test it soon after you get it.
You won’t be able to run the sample code without the camera as it looks for the USB ID of the camera when it first runs.
Hi Craig,
Thanks so much for your info. You are reaffirming what I had already understood. The open question is whether the linux streaming works on the V with 18.04. I will compile the code as you suggested and see what I get.
Cheers,
Andrew
Actually, I think it will work: snafu66 is running Ubuntu 18.04 on an NVIDIA Jetson, and I am running JetPack, which is based on 18.04.
Here’s the process on a Jetson nano. It may be similar on your ARM board.
I didn’t encounter any problems on the nano.
Written documentation of the process, the video above, and another video covering compilation of v4l2loopback are on the site I posted above.
The video has no audio. However, the process is covered in detail in the written doc.
Hi Craig,
I was able to compile the libuvc-theta and libuvc-theta-sample code (and the v4l2loopback kernel module).
Running gst_viewer, I get "THETA not found", which is what we expected. Is there anything else I can check before I buy it?
Thanks for your help.
- Does the board you are using have H.264 hardware decoding, and can you test it with a normal webcam?
- Do you need 4K video in your application, or is 2K sufficient? If you need 4K, can the board support a normal 4K USB webcam with H.264 input?
If you don't have a 4K webcam, search Google for your specific board and whether people are using it with a normal 4K USB webcam.
I suspect it will only work at 2K if you need the board to process object detection.
Hi Craig,
The spec sheet for the board quotes "Video Encode / Decode 1080p30 H.264 / 1080p60 H.264", so it looks like hardware decode tops out at 1080p.
I am not sure we care so much about 4K. We're moving from a forward-facing instrument to a full 4-pi instrument. We need to start somewhere with overlaying our measurements on an optical camera feed. In further iterations we can perhaps use a more powerful board if need be.
We bought the camera yesterday. I should have it by the end of the week for testing.
Thanks for the update. We look forward to your test results.
I'll try to supply some info ahead of your test to help with debugging if you encounter any problems. The reason most people are using a Jetson rather than a Raspberry Pi is the ease of getting hardware acceleration working on the NVIDIA Jetson. When we first did the tests, the RPi 4 was newish and the hardware acceleration components were not easy to get running with the Linux kernel. Since most people on this forum were prototyping applications, they used x86 or the Jetson, as NVIDIA had extensive documentation for the video hardware on the Jetson.
Time has passed and it’s possible that things could just work.
If the board attempts to use software rendering on the H.264 video stream, the video will likely hang. You will see a few seconds of video and then the frame will stop.
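If you want to check which path you are on, one diagnostic sketch (assuming gst-libav is installed; avdec_h264 is GStreamer's software H.264 decoder) is to temporarily force software decoding in gst_viewer and compare. If 2K plays but 4K stutters or freezes this way, the board cannot keep up without hardware decode:
/* Diagnostic sketch: force software H.264 decode in place of
 * decodebin's automatic selection. */
pipe_proc = "avdec_h264 ! videoconvert ! autovideosink sync=false";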
It seems like you have an existing application that is already on the Boundary Devices Nitrogen6X board (1 GHz quad-core ARM Cortex-A9 with 1 GB RAM).
That board looks wonderful for mass production:
The Nitrogen6X is designed for mass production use with a guaranteed 10 year lifespan, FCC Pre-scan results, and a stable supply chain. Industrial temperature and conformal coating options are available. It can be modified by de-populating unused components or fully customized for cost reduction.
As there are fewer people on this forum using that board, you may encounter a few problems getting the driver to work. Or, it could just work.
You should use the sample gst_viewer first and display the video feed on your monitor with GStreamer, not use v4l2loopback initially; v4l2loopback adds another layer of complexity. If you can display gst_viewer output on your monitor and the video is smooth, then try to compile and run v4l2loopback (if your application needs the video on /dev/video*).
We had a meetup a while back, and during the meetup people were asking about 2K versus 4K. I did a quick test switching between the resolutions:
https://github.com/codetricity/libuvc-theta-sample
The main change is that somewhere in your code, you set this:
res = thetauvc_get_stream_ctrl_format_size(devh,
THETAUVC_MODE_FHD_2997, &ctrl);
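To switch to 4K, assuming the mode-constant naming in the sample's thetauvc.h, the call would be:
res = thetauvc_get_stream_ctrl_format_size(devh,
        THETAUVC_MODE_UHD_2997, &ctrl);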
When you do the test, if you want to use the loopback to get /dev/video* working, you should confirm the device is actually advertising 2K:
$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: 'YU12'
Name : Planar YUV 4:2:0
Size: Discrete 1920x960
Interval: Discrete 0.033s (30.000 fps)
If it were me, I would do the following:
- get the camera and update the firmware
- connect it to an x86 computer running Ubuntu 18.04 and test the camera, using this forum and the documentation on this site as a reference
- once it is working at 2K on an x86 Ubuntu 18.04 machine, test on the Nitrogen6X board
It may be easier if you don't need the video stream on /dev/video*. Does your application look for a device on /dev/video*, or have you written something directly with GStreamer, ffmpeg libraries, or something else?
Hi Craig,
Thanks for the help with this. The application which acquires the optical camera feed was written in house by my colleague (I have not read the code yet, but it's not doing anything fancy). I will likely take on the development of this (now that we're moving to 4-pi) and overlay our sensor data on top of the image. We'll have to map the coordinates between the two instruments for overlaying, etc. I'll follow the steps you suggest (x86 computer running Ubuntu 18.04) and work from there before trying the Nitrogen6X board. Thanks for your help with this. I have reached out to others for help with other 4-pi/multi-camera systems and you're miles ahead with your support to me on this.