Live Streaming over USB on Ubuntu and Linux, NVIDIA Jetson

Hello Everyone,

Does anyone know the GStreamer pipeline needed to record the streaming output with a Jetson AGX Xavier?
Just got mine today :)

I can do some latency and performance tests comparing the AGX Xavier to the Jetson Nano, as I have both, but the GStreamer pipelines I used to display and record the video do not seem to work on the Jetson AGX Xavier.

gst_viewer.c is working though

The automatic element detection does not seem to work on the Xavier; you need to specify the decoder (and sink) explicitly.

Example:

Change “decodebin ! autovideosink sync=false” to “nvv4l2decoder ! nv3dsink sync=false”.
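
For recording, here is a rough sketch (not tested on the Xavier here): once the camera is exposed on /dev/video0 through v4l2loopback, something like this should re-encode the feed with the Jetson hardware encoder and write it to a file. The device, element names (from JetPack's GStreamer plugins), and file name are placeholders to adjust:

gst-launch-1.0 -e v4l2src device=/dev/video0 ! videoconvert \
    ! nvvidconv ! 'video/x-raw(memory:NVMM)' ! nvv4l2h264enc \
    ! h264parse ! matroskamux ! filesink location=theta.mkv

The -e flag makes gst-launch send EOS when you press Ctrl-C, so the file gets finalized properly.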


My gst_viewer is working, but with VLC I can only get one frame and then it just sits there. My webcam works fine with everything. I tried recording a video, but it only captures one frame and the video won’t record. I need to get this working ASAP; I have been trying for days and I’ve come so far to get it working up to this point. It seems like there is something wrong with v4l2loopback and the frame rate/resolution. This is a brand-new install of Ubuntu 20.04, so all libraries are new.

I replied in the other topic. Confirm that you added qos=false to the pipeline.

It should look something like this:

if (strcmp(cmd_name, "gst_loopback") == 0)
    pipe_proc = "decodebin ! autovideoconvert ! "
        "video/x-raw,format=I420 ! identity drop-allocation=true !"
        "v4l2sink device=/dev/video0 qos=false sync=false";

Also, you may be using software rendering rather than hardware-accelerated rendering. Post info on your GPU setup and whether you are using decodebin or nvdec.
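
If you are not sure which decoder is in play, a quick check (just a sketch) is to list the H.264 decoders your GStreamer install actually provides; avdec_h264 is the software decoder, while elements such as nvv4l2decoder or omxh264dec are the NVIDIA hardware ones:

gst-inspect-1.0 | grep -Ei 'h264.*dec|nvv4l2decoder'
gst-inspect-1.0 nvv4l2decoder

If the second command reports no such element, decodebin will fall back to software decoding.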

You can also ask more questions, but just to let you know, there’s a bunch of information in the doc available here: https://theta360.guide/special/linuxstreaming/ and that document is searchable.

Again, no problem if you keep asking questions. Just trying to help you out.

Hey Craig, I need your help. I am trying to get live streaming working for the THETA S 360 camera on Ubuntu 18. I followed the steps to get gst_viewer up and running, and the camera is connected over USB, but I am getting the error ‘THETA not found’ when I run ./gst_viewer after installing libuvc-theta-sample, and I am not sure why. Any help would be appreciated. Do you have all the steps to follow for live streaming a THETA camera on Ubuntu 18?

The THETA S streams MotionJPEG, which Linux supports without the driver. You do not need gst_viewer with the THETA S.
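
If you just want a quick look at the S feed, here is a minimal sketch (the device node is an assumption; the S shows up as a normal UVC MotionJPEG device once it is in live mode):

gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg ! jpegdec ! videoconvert ! autovideosink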

Hi Folks.

I am trying to find a 360-degree camera suited to the Nitrogen6X board (1 GHz quad-core ARM Cortex-A9 with 1 GB RAM) which we use. This is running Ubuntu 18.04. We wish to overlay our measurements from another sensor on a 360-degree camera feed and display it as an equirectangular stream. Is the THETA S or THETA V better suited to this? (I understand the S offloads some of the image processing to the acquisition computer, which in our case is low-spec, and that the V does this on board but might require a newer OS and/or libraries which we may be unable to upgrade to.) I could be mistaken in these understandings, which is why I am asking. Ideally we want to tax the processor on the Nitrogen6X board as little as possible, and we want to crack open the camera feed and overlay our data before displaying it.

Thanks for any input you have.

Get the THETA V.

  • Output of the V is in equirectangular format
  • Output of the S is in dual-fisheye format, which you need to stitch yourself on the ARM board
  • S output is MotionJPEG
  • V output is H.264
  • The V can stream at 4K; the S can stream at 2K

The Linux kernel can handle the S out of the box.

To use the V, you need to use the drivers documented on this thread and this site.

We have more examples running Ubuntu 20.04. I have not tried it with 18.04. Hopefully, it will work without additional modifications.

If your application benefits from a dual-fisheye feed, then the S might be easier for you to use. It should also be lower cost since it is an older model.

You can try downloading and compiling the driver on 18.04 before you get the camera. If you can compile gst_loopback and the sample code, then I suggest you buy the V from a place with a return policy and test it soon after you get it.

You won’t be able to run the sample code without the camera as it looks for the USB ID of the camera when it first runs.
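
For reference, a rough outline of the build sequence is below; package names and paths are approximations, and the written doc on theta360.guide has the full walkthrough:

sudo apt install build-essential cmake libusb-1.0-0-dev libjpeg-dev \
    libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
git clone https://github.com/ricohapi/libuvc-theta.git
cd libuvc-theta && mkdir build && cd build
cmake .. && make && sudo make install && sudo ldconfig
cd ../..
git clone https://github.com/ricohapi/libuvc-theta-sample.git
cd libuvc-theta-sample/gst && make
./gst_viewer    # without the camera attached, this should just print "THETA not found"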

Hi Craig,

Thanks so much for your info. You are reaffirming what I had already understood. The open question is whether the Linux streaming works on the V with 18.04. I will compile the code as you suggested and see what I get.

Cheers,

Andrew

Actually, I think it will work, as snafu66 is running Ubuntu 18.04 on an NVIDIA Jetson, and I am running JetPack, which is based on 18.04.

Here’s the process on a Jetson Nano. It may be similar on your ARM board.

https://youtu.be/GoYi1tSIV80

I didn’t encounter any problems on the Nano.

Written documentation of the process, the video above, and another video on compiling v4l2loopback are on the site I posted above.

The video has no audio. However, the process is covered in detail in the written doc.

Hi Craig,

I was able to compile the libuvc-theta and libuvc-theta-sample code (and the v4l2loopback kernel module).

Running gst_viewer, I get “THETA not found”, which is what we expected. Is there anything else I can check before I buy it?

Thanks for your help.

  1. Does the board you are using have H.264 hardware decoding, and can you test it with a normal webcam?
  2. Do you need 4K video in your application, or is 2K sufficient? If you need 4K, can the board handle a normal 4K USB webcam with H.264 input?

If you don’t have a 4K webcam, search for your specific board and whether people are using it with a normal 4K USB webcam.
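
A rough way to check both from the board itself (the element names are an assumption about what your BSP ships, and the device node will vary):

gst-inspect-1.0 | grep -Ei 'vpu|h264.*dec'    # look for a hardware decoder, e.g. imxvpudec on i.MX6
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink    # plain webcam capture test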

I suspect it will only work at 2K if you need the board to process object detection.

Hi Craig,

The spec sheet for the board quotes “Video Encode / Decode 1080p30 H.264 / 1080p60 H.264”.

I am not sure we care so much about 4K. We’re moving from a forward-facing instrument to a full 4pi instrument. We need to start somewhere with overlaying our measurements on an optical camera feed. In further iterations we can perhaps use a more powerful board if need be.

We bought the camera yesterday. I should have it by the end of the week for testing.

Thanks for the update. We look forward to your test results.

I’ll try to supply some info ahead of your test to help with the debugging in case you encounter any problems. The reason most people are not using a Raspberry Pi and are using a Jetson is the ease of getting hardware acceleration working on the NVIDIA Jetson. When we first did the tests, the RPi 4 was newish and the hardware acceleration components were not easy to get running with the Linux kernel. Since most people on this forum were prototyping applications, most used x86 or the Jetson, since NVIDIA had extensive documentation for the video hardware on the Jetson.

Time has passed and it’s possible that things could just work.

If the board attempts to use software rendering on the H.264 video stream, the video frame will likely hang. You will see a few seconds of the video and then the frame will stop.

It seems like you have an existing application that is already on the Boundary Devices Nitrogen6X board (1 GHz quad-core ARM Cortex-A9 with 1 GB RAM).

That board looks wonderful for mass production:

The Nitrogen6X is designed for mass production use with a guaranteed 10 year lifespan, FCC Pre-scan results, and a stable supply chain. Industrial temperature and conformal coating options are available. It can be modified by de-populating unused components or fully customized for cost reduction.

As there are fewer people on this forum using that board, you may encounter a few problems getting the driver to work. Or, it could just work.

You should run the sample gst_viewer first and display the video feed on your monitor with GStreamer, rather than starting with v4l2loopback; v4l2loopback adds another layer of complexity. If gst_viewer displays to your monitor and the video is smooth, then try to compile and run v4l2loopback (if your application needs the video on /dev/video*).
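
When you get to the loopback step, a minimal sketch of the sequence looks like this (module options and device numbers vary by system):

sudo modprobe v4l2loopback
v4l2-ctl --list-devices    # note which /dev/video* the "Dummy video device" gets
./gst_loopback

If the loopback is not on /dev/video0, change the v4l2sink device=... part of the pipeline in gst_viewer.c to match.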

We had a meetup a while back and during the meetup, people were asking about 2K versus 4K. I did a quick test to switch between the resolutions:

https://github.com/codetricity/libuvc-theta-sample

The key line is that somewhere in your code, you should set this:

			res = thetauvc_get_stream_ctrl_format_size(devh,
				THETAUVC_MODE_FHD_2997, &ctrl);	
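
(As a side note, the mode constants defined in thetauvc.h should be THETAUVC_MODE_UHD_2997 for 4K and THETAUVC_MODE_FHD_2997 for 2K, 1920x960 at 29.97 fps; double-check the names in your copy of the sample code.)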

When you do the test, and if you want to use the loopback to get /dev/video* working, confirm that it is actually streaming at 2K:

$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'YU12'
	Name        : Planar YUV 4:2:0
		Size: Discrete 1920x960
			Interval: Discrete 0.033s (30.000 fps)

If it were me, I would do the following:

  1. Get the camera and update the firmware
  2. Connect it to an x86 computer running Ubuntu 18.04 and test the camera, using this forum and the documentation on this site as a reference
  3. Once it is working at 2K on the x86 Ubuntu 18.04 machine, test on the Nitrogen6X board

It may be easier if you don’t need the video stream on /dev/video*. Does your application look for a device on /dev/video*, or have you written something directly with the GStreamer or FFmpeg libraries, or something else?
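
Either way, a quick sanity check from the consuming side is just to open the loopback device with a generic player (assuming ffmpeg is installed and the loopback ended up on /dev/video0):

ffplay -f v4l2 /dev/video0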

Hi Craig,

Thanks for the help with this. The application which acquires the optical camera feed was written in house by my colleague (I have not read the code yet, but it’s not doing anything fancy). I will likely take on the development of this (now that we’re moving to 4pi) and overlay our sensor data on top of the image. We’ll have to map the coordinates between the two instruments for overlaying, etc. I’ll follow the steps you suggest (x86 computer running Ubuntu 18.04) and work from there before trying the Nitrogen6X board. Thanks for your help with this. I have reached out to others for help with other 4pi/multi-camera systems and you’re miles ahead with your support to me on this.


First, thank you for posting your progress. We are an independent site that receives sponsorship from RICOH. If you have time, please send @jcasman and me a description of your project, either by DM on this forum or email (jcasman@oppkey.com). I understand that it may be proprietary and you may not be able to disclose it. We would like to pass the information on to our contacts at RICOH to show how people use live streaming. If there is a potential for high-volume industrial use, it may impact RICOH’s product decisions in the future.

Second, I think it’s important for us to be candid when people are evaluating the camera in prototyping. It’s a great consumer camera that streams 4K or 2K 360 video. It is widely used in commercial applications and is durable.

However, most of the commercial applications have a human that views the output. For industrial use, the camera has many features that may not be used, thus you’re paying for features that are unused. This is fine for prototyping and low-volume systems, but may become a problem at higher volumes.

Here’s a summary of some shortcomings for large-scale industrial use:

  • The camera handles three colors. For object detection, you may need only one color
  • Rolling shutter. Most cameras, including the THETA, use this, but more expensive industrial cameras use a global shutter
  • The camera has a battery, increasing weight and possibly heat. You cannot easily bypass or remove the battery. On the THETA V, the battery will slowly drain while you are streaming, giving you a continuous stream time of less than 12 hours (though there are some workarounds)
  • In most cases, you will need to manually press the power button on the camera to start the stream. Though, you can programmatically power-cycle the USB port the camera is plugged into to trigger the camera to turn on remotely (see the sketch below)
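
For the last point, one way to script the power cycle is the uhubctl tool; note that it only works on hubs that support per-port power switching, and the location/port numbers below are placeholders:

sudo uhubctl                         # list hubs and ports; find the one the camera is on
sudo uhubctl -l 1-2 -p 1 -a cycle    # cycle power on that port to wake the camera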

The upside is that the camera produces a nice 2K or 4K equirectangular stream with nice stitching and good quality at a reasonable cost for prototyping. It’s also widely used, so there are more examples of people using it.

It’s theoretically possible to buy two component units of Sony IMX219 image sensors with very wide-angle lenses and process each camera sensor independently. Various manufacturers make those sensors with CSI cables, and they sell for under $50 in low volume. Though, that would be a bigger project and depends on how accurate you need the stitching to be and how fast an external board needs to stitch it. The advantage of the THETA is that the internal Snapdragon board inside the camera does very nice stitching before it outputs the stream.

Note on Sensor Overlay Data

If the sensor is a point sensor, such as a radiation detector or another invisible-spectrum sensor, the sensor data lives on a sphere. It is likely easier to work with the THETA S and its dual-fisheye stream. Otherwise, you will need to map (“de-fish”) the equirectangular frame back onto the sphere to match your sensor data.
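
For reference, with the usual equirectangular layout (horizon across the middle of a W x H frame), a pixel (u, v) corresponds to the direction with longitude λ = 2π·u/W − π and latitude φ = π/2 − π·v/H; that is the mapping you would use to project point-sensor readings onto the frame.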

Hi Craig,

Just to update you on some progress (today was the first day I had time to look at the camera).

I updated the firmware and got the gst_viewer code to run out of the box on an x86 machine with Ubuntu 20.04. The loopback didn’t work (not sure why).

Next I ran the gst_viewer sample code on the Nitrogen6X board running Ubuntu 18.04 and got no feed, with the following error:

start, hit any key to stop
[INFO] bitstreamMode 1, chromaInterleave 0, mapType 0, tiled2LinearEnable 0
Error: Internal data stream error.
stop
XIO: fatal IO error 22 (Invalid argument) on X server “:0.0”
after 8 requests (8 known processed) with 0 events remaining.

Then I set the resolution in the source code to THETAUVC_MODE_FHD_2997 as you had suggested, and I got a streamed feed. SUCCESS!!! …but the latency was about 2 seconds. After 90 seconds of feed, the latency was up to 10 seconds. Perhaps there is some way in the code to fix this. A 2-second lag would likely not kill us in our application, but it would have to be fairly static.

The loopback did not work: “Could not open device /dev/video1 for reading and writing.” I am unsure what’s going on here. The kernel module was loaded, etc. I am uncertain how to be sure the correct /dev/videoX is being selected. Not sure how to proceed here or if it’s worth it.

I tried the libptp examples and was able to set state parameters on the camera, capture images, and copy them off the camera. In some settings this might even work for us; I will have to do some more testing.

I could not try streaming over Wi-Fi, because I could not load the RTSP plugin onto the THETA V; that requires a Windows or Mac box and I don’t have one of those here at the moment.

So, some success. If there is a way to stabilize the latency in the GStreamer feed, that would be great. If there were a way to speed up the still-image capture/transfer, that would be great too. If you have ideas about the loopback, let me know.

Cheers,

Andrew


I can see some artifacts in the stitching, which are not fantastic. Is there a way to re-run some calibration for this? Or is this unavoidable? Or is this unit defective?