Linux Live Streaming Quick Start on Ubuntu x86 - How to Build libuvc for RICOH THETA V and Z1

This is based on a video available here.

Equipment in this test:

  • Ubuntu 20.04
  • Intel i7-6800K (also tested on Intel Pentium G3258 and other CPUs)
  • NVIDIA GeForce GTX 950 (also tested with GTX 650)
    • Tested with both NVIDIA driver 450.80 and X.Org driver. See site for more info on differences
  • libuvc-theta
  • libuvc-theta-sample

Note: see site for detailed instructions on using /dev/video0 or equivalent.
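The loopback examples later in this thread assume a working v4l2 device. A quick sketch to check whether one exists before going further (the device numbers here are assumptions; yours may differ):

```shell
# Check for the v4l2 devices used later in this guide; the device
# numbers are assumptions and may differ on your system.
for dev in /dev/video0 /dev/video1; do
  if [ -e "$dev" ]; then
    echo "$dev present"
  else
    echo "$dev missing - load v4l2loopback first (sudo modprobe v4l2loopback)"
  fi
done
```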

clone libuvc-theta

The master branch is now merged with theta_uvc and you should be able to compile and run the sample code from the master branch as well as the theta_uvc branch.

image

You probably no longer need this step as the master branch should be the same as theta_uvc, but just in case you have problems.
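If you want the exact commands, a sketch of the clone step (repo URL taken from the ricohapi GitHub organization):

```shell
# Clone the modified libuvc; prints a message instead of failing hard
# if the network is down.
git clone https://github.com/ricohapi/libuvc-theta.git || echo "clone failed; check your network"
# Only if master gives you problems, switch to the theta_uvc branch:
# cd libuvc-theta && git checkout theta_uvc
```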

build libuvc-theta

$ mkdir build
$ cd build
$ cmake ..
$ make

Check to make sure you have libusb-1.0 and JPEG support.

If packages are missing, search for the package name with Google and then install with apt. For example:

$ sudo apt install libusb-1.0-0-dev
$ sudo apt install libjpeg-dev
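As a sanity check before re-running cmake, you can ask pkg-config whether the two dependencies are visible (the pkg-config names here are assumptions based on the Ubuntu packages above):

```shell
# Report whether cmake should be able to find the two dependencies.
for pkg in libusb-1.0 libjpeg; do
  if pkg-config --exists "$pkg" 2>/dev/null; then
    echo "$pkg: found"
  else
    echo "$pkg: missing - install the matching -dev package"
  fi
done
```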

install libuvc

This section refers to the modified libuvc that you built above.

$ sudo make install

Run ldconfig.

$ sudo ldconfig

Hopefully, you’re good and your system picked up the libs in /usr/local/lib.
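One way to confirm that the linker actually picked it up (the library name pattern is an assumption based on the install step above):

```shell
# List the dynamic linker cache and look for the freshly installed libuvc.
ldconfig -p | grep libuvc || echo "libuvc is not in the linker cache; check /etc/ld.so.conf.d"
```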

image

On most systems, /etc/ld.so.conf.d/libc.conf already includes /usr/local/lib.

In the screenshot below, the library path was already on my system.

image

clone libuvc-theta-sample

$ git clone https://github.com/ricohapi/libuvc-theta-sample

build libuvc-theta-sample

$ cd libuvc-theta-sample/
$ ls -F
gst/  LICENSE.txt  README.md
$ cd gst/
$ make
$ ls
gst_loopback  gst_viewer.c  Makefile    thetauvc.h
gst_viewer    gst_viewer.o  thetauvc.c  thetauvc.o

Test the sample application

  1. Plug RICOH THETA V or Z1 into your computer’s USB port
  2. Turn the camera on and put it into “LIVE” mode by pressing the physical mode button on the side of the camera. The OLED or LED on the camera needs to say “LIVE”.

For the THETA V, you will see an LED like this:
image

The OLED on the Z1 will look like the left part of this:

image

$ ./gst_viewer

Test Clip with NVIDIA Driver

Test Clip with X.Org and VLC

This clip is using v4l2loopback on /dev/video1. See site for more details. It will give you a feel for the latency as I found a better online stopwatch. :slight_smile: I think the NVIDIA driver has lower latency.

Is it possible to record or save frames while using gst_viewer? How would one go about that?

Yes, it’s possible to save the video to local storage.

Type in an email on this site to access the main document:

RICOH THETA360.guide Independent developer community

From the main document, go to Examples → Save to File

lossless huffman encoded raw file

gst-launch-1.0 v4l2src device=/dev/video99 ! video/x-raw,framerate=30/1 \
! videoconvert \
! videoscale \
! avenc_huffyuv \
! avimux \
! filesink location=raw.hfyu

h.264 on Jetson

Change the nvvidconv and omxh264enc if you are not on Jetson.

gst-launch-1.0 v4l2src device=/dev/video99 ! video/x-raw,framerate=30/1 \
! nvvidconv \
! omxh264enc \
! h264parse ! matroskamux \
! filesink location=vid99.mkv

h.264 on x86

$ gst-launch-1.0 v4l2src device=/dev/video2 ! video/x-raw,framerate=30/1 ! autovideoconvert ! nvh264enc ! h264parse ! matroskamux ! filesink location=vid_test.mkv
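nvh264enc requires an NVIDIA GPU. If you don’t have one, here is a sketch that swaps in the software x264 encoder instead (the device number is an assumption; adjust it for your v4l2loopback setup):

```shell
# Software H.264 recording sketch for machines without an NVIDIA GPU.
# /dev/video2 is an assumed v4l2loopback device number.
DEVICE=/dev/video2
if [ -e "$DEVICE" ]; then
  gst-launch-1.0 -e v4l2src device="$DEVICE" ! video/x-raw,framerate=30/1 \
    ! videoconvert ! x264enc tune=zerolatency ! h264parse ! matroskamux \
    ! filesink location=vid_test.mkv
else
  echo "$DEVICE not found; set up v4l2loopback first"
fi
```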

Once you have the video file on your Linux machine, there are many ways to get video frames from a video file.

Example:

ubuntu - How to extract images from video file? - Unix & Linux Stack Exchange

Type in ‘extract images from video file’ in Google and see several examples for different software.
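As one example, ffmpeg can dump frames from a recording at a fixed rate (the filename and the one-frame-per-second rate here are assumptions; adjust them to your file):

```shell
# Extract one frame per second from a recorded file into PNG images.
# vid_test.mkv is an assumed filename from the recording examples above.
VIDEO=vid_test.mkv
if command -v ffmpeg >/dev/null 2>&1 && [ -f "$VIDEO" ]; then
  mkdir -p frames
  ffmpeg -i "$VIDEO" -vf fps=1 frames/frame_%04d.png
else
  echo "need ffmpeg and $VIDEO on disk first"
fi
```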

Let us know how it goes. If you get a working workflow, I would like to add your workflow to the community document.

Also, can you let us know what you’re using the individual frames for? Do you need the frames for object detection or analysis?

Thanks for the reply Craig! To preface, I’m on Ubuntu 20.04.2.

Does a Ricoh Theta V need to be in “Live Video” mode for this?

I modified the command a little bit for the device that v4l2 set for me, but after trying the following:

gst-launch-1.0 v4l2src device=/dev/video99 ! video/x-raw,framerate=30/1 \
! videoconvert \
! videoscale \
! avenc_huffyuv \
! avimux \
! filesink location=~/Desktop/raw.hfyu

Should I be getting a preview window? Or is it just recording to a video file at the designated location?
Also, is there a specific command/hotkey to stop recording gracefully, or how does it know when to stop?

My ./gst_viewer works perfectly, but I’m also trying to capture the frames.

I’m currently still just learning, but trying to get the individual frames working for object detection using Facebook’s Detectron2.

Thanks again.

It needs to be in Live Mode with a blue flashing “LIVE” on the front.

You will not get a preview window with the pipeline above. You can set up a tee to preview and save to local storage. The tee will likely increase latency.

From the GStreamer tee element documentation:

Split data to multiple pads. Branching the data flow is useful when e.g. capturing a video where the video is shown on the screen and also encoded and written to a file. Another example is playing music and hooking up a visualisation module.
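A sketch of what that tee could look like here, with one branch previewing on screen and the other recording to disk (the device number is an assumption, and expect the tee to add some latency):

```shell
# Preview and record at the same time via tee.
# /dev/video99 is an assumed v4l2loopback device number.
DEVICE=/dev/video99
if [ -e "$DEVICE" ]; then
  gst-launch-1.0 -e v4l2src device="$DEVICE" ! video/x-raw,framerate=30/1 \
    ! tee name=t \
    t. ! queue ! videoconvert ! autovideosink \
    t. ! queue ! videoconvert ! avenc_huffyuv ! avimux ! filesink location=raw.hfyu
else
  echo "$DEVICE not found; load v4l2loopback first"
fi
```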


Does it stop when you press a key on the console?

If that doesn’t work, try CTRL-C.

I don’t recall any problems with file corruption when I did the test. I may have stopped it with a keypress on the computer.

I think it should stop when a key is pressed, but I’m not sure.
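One thing worth trying: gst-launch-1.0 has an -e (--eos-on-shutdown) flag that turns CTRL-C into a clean end-of-stream, so the muxer can finalize the file instead of leaving it truncated. A guarded sketch (device number assumed):

```shell
# Record with -e so CTRL-C sends EOS and avimux finalizes the file.
# /dev/video99 is an assumed v4l2loopback device number.
DEVICE=/dev/video99
if [ -e "$DEVICE" ]; then
  gst-launch-1.0 -e v4l2src device="$DEVICE" ! video/x-raw,framerate=30/1 \
    ! videoconvert ! avenc_huffyuv ! avimux ! filesink location=raw.hfyu
else
  echo "$DEVICE not found; load v4l2loopback first"
fi
```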


Important: the examples provided earlier use gst_loopback, which requires that you have v4l2loopback installed and working.

Can you use something like VLC media player on your Linux box to view /dev/video0 or /dev/video1?

If you don’t have the video device working, the examples provided will need to be modified.

Although you don’t need to get the v4l2loopback working to save to file, I think you probably want to anyway to run other tests for other types of object detection with other libraries.

Sorry, what I meant was that after using

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1 ! videoconvert ! videoscale ! avenc_huffyuv ! avimux ! filesink location=Video.hfyu

It looks like it is working, this was the output on the terminal

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,framerate=30/1 ! videoconvert ! videoscale ! avenc_huffyuv ! avimux ! filesink location=~/Desktop/SampleVid.hfyu
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock

However, when I go to open the video file, I get an error: “An error occurred, No valid frames decoded before end of stream”. I was thinking it was because I did not stop it correctly, but I’m not sure.

Is there a graceful way to stop this, or is it just CTRL+C and I have a different problem?

Thanks again Craig, appreciate the feedback.

My test this morning used CTRL-C to stop the stream.

Equipment - NVIDIA Jetson Nano 4GB, Jetpack 4.4.

 gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,framerate=30/1 ! videoconvert ! videoscale ! avenc_huffyuv ! avimux ! filesink location=raw.hfyu
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
 ^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:14.015587013
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Test with VLC

Result: Successful opening of .hfyu file

Sample Video File

From Jetson Nano

https://drive.google.com/file/d/1DHDjsXRazMjycaIGwKFhCmBL9V0711mI/view?usp=sharing

I wrote a simple C++ package for decoding the H.264 stream online using libavcodec/libavformat/libswscale and visualizing images in OpenCV (cv::imshow). No need for GStreamer or v4l2loopback. It runs with minimal changes from the user in the CMakeLists.txt file to point to your libuvc-theta library location. The camera must be in live mode. For anyone looking for something similar:

https://github.com/RISC-NYUAD/CV_Ricoh

Platform: Ricoh Theta V, Ubuntu 20, OpenCV4.


Nice job! Thanks for sharing this.