The reason we are specifically interested in using RICOH cameras as a stereo setup is to use the entire 360° view provided by the camera to estimate depth. This is only achievable with omnidirectional cameras such as the RICOH THETA. The figure below shows such a 3D reconstruction, which we obtained from two RICOH THETA S cameras.
We hope to improve the quality by using the higher resolution offered by the THETA Z1 and also by properly synchronizing the images.
In the video you uploaded, the camera hardware has the facility to accept an external trigger and achieve very good synchronization (on the order of nanoseconds). For the RICOH camera, I do not see any such hardware facility, but the USB API you mentioned could help with synchronization. Since it is essentially a PTP device, we should be able to synchronize the clocks (on the order of microseconds). Right now, I have contacted another engineer who is more familiar with the protocol. If I find a solution, I will surely share it with the community.
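Since the THETA behaves as a PTP device over USB, one way to experiment with clock alignment is through gphoto2's Python bindings. A minimal sketch, assuming python-gphoto2 is installed and that the THETA exposes the conventional 'datetime' config widget (an assumption worth checking with gphoto2 --list-config):

```python
import gphoto2 as gp  # python-gphoto2 bindings

# Open the first detected PTP camera (with two THETAs attached you would
# bind each one to its USB port via gp.PortInfoList; omitted here).
camera = gp.Camera()
camera.init()

config = camera.get_config()
# 'datetime' is the conventional gphoto2 widget name for the PTP DateTime
# property; whether the THETA exposes it (and in what value format) is an
# assumption to verify with `gphoto2 --list-config`.
clock = config.get_child_by_name('datetime')
print('camera clock:', clock.get_value())

# If the widget is writable, clock.set_value(...) followed by
# camera.set_config(config) would push the host time to the camera,
# bringing both cameras' clocks close together before capture.
camera.exit()
```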
Also, do you know whether RICOH cameras use a global shutter or a rolling shutter?
I’m having some problems with the nvidia-inference demos streaming from the THETA.
A normal webcam works. I'm trying to reduce the THETA's 4K output to 2K, but can't figure out how to do this. The camera does support streaming at 2K, but I'm not sure how to force it to use the 2K stream for initial testing.
Update Aug 29 night
I can now get a 2K stream to /dev/video0
$ v4l2-ctl --list-formats-ext
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'YU12'
    Name        : Planar YUV 4:2:0
        Size: Discrete 1920x960
            Interval: Discrete 0.033s (30.000 fps)
$ v4l2-ctl --list-devices
Dummy video device (0x0000) (platform:v4l2loopback-000):
    /dev/video0
Update Aug 30 morning
I have /dev/video0 working smoothly on x86 with v4l2loopback, thanks to @Yu_You's submission on GitHub. I added this and the following info to the meetup early-access documentation:
video demo and code for DetectNet running on an NVIDIA Jetson Nano with the Z1 using a live stream. Good object detection of a moving person in real time.
video demo and code for Canny edge detection with the Python cv2 module, running in real time with minimal lag.
explanation and code snippet to change the video resolution from 4K to 2K for testing on the Nano (a rough OpenCV sketch follows this list).
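For reference, here is a rough OpenCV-side sketch covering the last two items: it reads the v4l2loopback device, downscales in OpenCV (the documentation snippet instead changes the resolution inside the gstreamer pipeline, which is cheaper), and runs Canny. It assumes /dev/video0 is the loopback device fed by gst_loopback:

```python
import cv2

# /dev/video0 is assumed to be the v4l2loopback device fed by gst_loopback.
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Downscale after capture; resizing inside the gstreamer pipeline
    # (as in the docs snippet) avoids decoding full 4K frames here.
    frame = cv2.resize(frame, (1920, 960))
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # thresholds are starting points to tune
    cv2.imshow('canny', edges)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```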
The link to sign up for the meetup is at the Linux Streaming Site. Once you register for the meetup, you'll immediately see a page with a link to the early-access documentation and code. We're not trying to sell you anything; we're gating the content so that we can show results to our sponsor, RICOH. This helps us secure sponsor budget to continue working with the community and producing informal documentation. If you miss the meetup, we'll put the documentation up on the site in a week or two.
DetectNet running at 23 fps, with accurate identification of a person and a TV.
Update Sept 4 - morning
video demo of DetectNet
x86 test using guvcview
Update Sept 4, 2020
Current plan, in order of priority:
publish meetup-specific documentation and technical Q&A to the people who registered for documentation. Likely next week.
retest gphoto2 on x86 or the Nano (Ubuntu) with the THETA USB API based on this discussion; test the gp Python module for easier access to the THETA USB API from inside Python programs (see the sketch after this list)
install a separate SSD on x86 and retest gstreamer and the graphics system with hardware acceleration at 4K and 2K resolution
if successful, install ROS on x86
install ROS on the Jetson Nano (ref) and run basic tests with the tutorial
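For the gp Python module item above, a quick smoke test is to autodetect the camera and dump its PTP summary. A minimal sketch, assuming python-gphoto2 is the module being tested:

```python
import gphoto2 as gp

# List every PTP camera gphoto2 can see; the THETA should appear here
# when it is powered on and connected over USB.
for name, port in gp.Camera.autodetect():
    print(name, 'on', port)

# Open the first camera and print its PTP device summary.
camera = gp.Camera()
camera.init()
print(str(camera.get_summary()))
camera.exit()
```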
Note from Craig: using the note from the developer below, I am going to try a solution. Other people should give it a go too.
Below is an exchange with the driver developer.
We saw pictures of your demo last year using a Jetson Xavier and livestreaming two THETAs. Is this demo available publicly? We have several contacts who are looking to stream more than one camera at a time.
Unfortunately, I don't provide a multi-camera demo, but support for multiple cameras is included in thetauvc.c, so you can easily realise it by modifying the gstreamer samples.
The following functions in thetauvc.c accept an 'index' or 'serial number' that specifies which THETA to use.
thetauvc_find_device(uvc_context_t *ctx, uvc_device_t **devh, unsigned int index)
Hi, I have purchased the THETA V model.
My system environment is Ubuntu 18.04.
I want to receive image_raw data from the camera through a ROS package.
But the computer cannot find the camera device.
I turned on the camera, changed it to live streaming mode, and connected the USB cable.
Please post again if you still have questions. We are happy to help.
Update Sept 22, 2020
This is in response to your DM about building on x86, Ubuntu 18.04. First, thank you for trying the build and asking your question. It is better if you ask questions in the public forum, as we can leverage the wisdom of the community. Other people are running the software on x86 with Ubuntu 18.04 and ROS; they may be able to help as well.
When you first log into the site, the top video is on building for x86, Ubuntu 20.04. I’ll update the document to make this clearer.
The default branch on GitHub is now theta_uvc, but make sure that you are actually on the theta_uvc branch; the video shows an example of using git branch -a. You also need the development libraries for libusb and libjpeg installed.
Post the output of cmake .. as well as your build error.
Is it possible that you need to install this?
libusb-1.0-0-dev - userspace USB programming library development files
Maybe this as well: libjpeg-dev
If you are building the sample app, you need the gstreamer development libraries installed.
Feel free to respond with more questions if you have problems.
I'm trying to follow this guide to run a THETA V on an NVIDIA Xavier. The build works fine, but the sample code doesn't detect the THETA (it just says "THETA not found!"). One thing I noticed is that in lsusb, my device shows up as 05ca:0368, which is different from the ID in the guide.
I tried changing the product id in thetauvc.c to match, but that still didn’t work. Any advice?
Yes, the camera was not in live streaming mode; thanks for that tip! I got that working, then used the USB interface to keep the camera awake and in live streaming mode. Next, I will modify the libtheta-uvc-sample application to send the h264 stream to a remote computer, and then use something like http://wiki.ros.org/gscam on that remote machine to bring the data into ROS.
My initial thought was to use udpsink, but I wasn’t able to get a pipeline working on my Xavier. I tried an RTSP server which worked momentarily, but was not stable. I am not a gstreamer expert, so I’m probably configuring something wrong there… Have you done anything like this or know someone who has? Any advice would be much appreciated!
Regarding RTSP or a way to get the stream to another server, I believe other people are working on the same problem, but I do not have an answer at the moment. For example, @Yu_You was asking about this. I believe he moved from the RTSP plug-in to the USB cable with libuvc (the technique in this thread). I do not know if he was then able to use something like gst-rtsp-server to get the stream to another computer.
@Hugues is quite far along, using the Janus Gateway to get RTP output onto IP networks. I don't know how busy he is, but if your firm is working on a big project, it might be worthwhile to consider hiring him as a consultant. He's using his FOX SEWER ROVER in production, and he's been freely contributing his knowledge to this group.
As some of the team members were in undergraduate school before COVID-19 shut down their project, it might be possible to hire some of them as interns.
I'm trying to connect people, in parallel with sharing whatever I know, because there has been a surge of activity around live streaming and USB camera control, likely due to the maturation of the technologies and the increase in demand for remote surveillance and analysis.
It seems like many people are working on foundational pieces, such as transmitting data to a remote server running gscam.
I don't have ROS installed.
Are you saying that you can't get gscam working on the same computer that the THETA is plugged into with a USB cable? The documentation examples use /dev/video*. What is the error message?
Are you using ROS Noetic (Ubuntu 20.04), ROS Melodic (Ubuntu 18.04), or something else? I'm likely going to install ROS at some point in the future and test basic camera functionality.
I tried various other methods of connecting these two gstreamer processes, including shmsink/shmsrc, but ultimately this one worked the best. At some point in the future I may combine the gst_view and test-launch functionality into one executable and do away with some of the needless complexity.
Finally, I used gscam to bring the stream into ROS, pointing it at the RTSP server with an rtspsrc pipeline.
Note the "rtspt" protocol; it is not a typo. It forces the RTSP connection to go over TCP. When I used UDP, there were too many artifacts and corrupted frames.
I actually ran this last command on a separate machine, just because of my particular network topology. It could be run on the same machine; in fact, that might be necessary, because gscam doesn't seem to handle UDP streams well.

I also tried using OpenCV VideoCapture to get the data into ROS, but that had a couple of issues. There are two VideoCapture backends that seemed appropriate: GStreamer and FFmpeg. It turns out that the OpenCV version packaged with ROS is not built with GStreamer support, so you would have to build OpenCV yourself to use it. For FFmpeg, the version of OpenCV packaged with ROS Melodic is 3.2, which is missing a fairly critical change (https://github.com/opencv/opencv/pull/9292) that allows you to set FFmpeg capture options in an environment variable. I got both of these working by upgrading OpenCV to version 3.4.9 and building from source, but GStreamer had a low framerate (~5 fps) and FFmpeg had a lot of corruption and dropped frames (maybe it was stuck using UDP?). So, I decided to stick with gscam for now.
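For anyone who still wants to try the OpenCV route after building OpenCV with GStreamer support, the pipeline-string form of VideoCapture looks roughly like this. The server address and mount point are placeholders, and protocols=tcp plays the same role as the rtspt:// scheme:

```python
import cv2

# Placeholders: point this at your own RTSP server.
# protocols=tcp forces RTP over TCP, the same effect as rtspt://.
pipeline = (
    'rtspsrc location=rtsp://192.168.1.100:8554/theta latency=400 protocols=tcp ! '
    'rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! '
    'video/x-raw,format=BGR ! appsink drop=true'
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('theta', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
```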
The latency value of 400 worked for me, but should be tuned depending on your network.
Hope this helps you or someone else wanting to use this camera in ROS. So far it looks great and should be perfect for my application. The only negative for me is that I can't control the power state and streaming state programmatically, unless I missed something in the USB API section. For now, I've disabled sleep, so I only have to turn the camera on once my robot is up and turn it off when done.
There is sample code for Jetson Nano as well in that document.
If this is what you are looking for, feel free to ask more questions.
Note that I haven't tested isolating individual USB ports on the Jetson. I don't know if it is possible to reset just the USB port that the THETA is attached to; on the Raspberry Pi, all the USB ports are reset when the camera is restarted. If you have other devices attached to the USB ports of the Xavier that you need, please adjust accordingly. This is not an approved part of the API; if this is something you need for production, we'll try to test it more.
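If someone wants to experiment with resetting only the THETA's port, one untested avenue is the kernel's sysfs unbind/bind interface, which detaches a single port rather than the whole hub. This is a sketch only; the port ID is a placeholder (find yours with lsusb -t) and it must run as root:

```python
import time

# Placeholder port ID (bus 1, port 2); find yours with `lsusb -t` or by
# watching `dmesg` when the THETA is plugged in. Run as root.
PORT = '1-2'

def reset_usb_port(port: str) -> None:
    # Unbinding detaches the device on that port only; rebinding
    # re-enumerates it, similar in effect to a physical replug.
    with open('/sys/bus/usb/drivers/usb/unbind', 'w') as f:
        f.write(port)
    time.sleep(1)
    with open('/sys/bus/usb/drivers/usb/bind', 'w') as f:
        f.write(port)

reset_usb_port(PORT)
```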
If your application can live with sleep and wake, that is better, as it is part of the official API.
The API supports switching from still image to live streaming and to video file mode.
Note that if you need to save video to file, there is a third parameter you must set. It's documented in the guide and also in the USB API thread of this forum. I think it is something like adding ,0,0,1 to the end of the hex value for video to file. If you have problems, we can check it.
This was an old question that I merged into this topic to help people using the search feature of this forum. Yes, it's possible. Please review the information earlier in this topic and ask more questions if you have any problems. I hope you're still using the camera.
Hi, I made a few small modifications to a GitHub entry to transform equirectangular images/frames into perspective images. This can really help when using existing deep learning models or when creating new models based on regular (non-360) camera footage. The GitHub link can be found here.
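For anyone who wants the gist without digging through the repository, the core of such a transform is to build a pinhole-camera ray grid, rotate it toward the desired view direction, and sample the equirectangular frame with cv2.remap. Below is my own minimal sketch of the idea, not the linked code; the field of view, angles, and output size are arbitrary defaults:

```python
import cv2
import numpy as np

def equirect_to_perspective(equi, fov_deg=90.0, yaw_deg=0.0, pitch_deg=0.0,
                            out_w=640, out_h=480):
    """Sample a pinhole-camera view out of an equirectangular frame."""
    eq_h, eq_w = equi.shape[:2]
    # Focal length in pixels from the horizontal field of view.
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)

    # Build a ray through every output pixel (z forward, x right, y down).
    xs, ys = np.meshgrid(np.arange(out_w), np.arange(out_h))
    x = (xs - out_w / 2.0) / f
    y = (ys - out_h / 2.0) / f
    z = np.ones_like(x)
    rays = np.stack([x, y, z], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays: pitch about the x axis, then yaw about the y axis.
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    rays = rays @ (ry @ rx).T

    # Ray direction -> longitude/latitude -> equirectangular pixel coords.
    lon = np.arctan2(rays[..., 0], rays[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
    map_x = np.mod((lon / (2 * np.pi) + 0.5) * eq_w, eq_w)
    map_y = (lat / np.pi + 0.5) * eq_h
    return cv2.remap(equi, map_x.astype(np.float32), map_y.astype(np.float32),
                     cv2.INTER_LINEAR)

# Example: look 90 degrees to the right of the camera's front.
# view = equirect_to_perspective(cv2.imread('equi.jpg'), yaw_deg=90)
```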