On my Jetson Xavier NX, I have been able to successfully connect the Theta V and create a /dev/video0 device that streams the full 360° video (from both lenses) in VLC as well as via gst-launch.
However, when I call the following on the Jetson, I only see the 180° video from one lens.
``` python
import jetson.utils

camera = jetson.utils.videoSource("/dev/video0")
```
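(For reference, a quick way to see what resolution videoSource actually negotiated is a check along these lines; GetWidth/GetHeight/GetFrameRate are the accessors exposed by the jetson.utils Python bindings.)

``` python
import jetson.utils

# open the Theta's V4L2 device and grab one frame
camera = jetson.utils.videoSource("/dev/video0")
img = camera.Capture()

# compare what videoSource negotiated with what the frame actually contains;
# a single-lens mode here would explain only seeing 180° of the view
print("negotiated stream: {:d}x{:d} @ {:.1f} FPS".format(
    camera.GetWidth(), camera.GetHeight(), camera.GetFrameRate()))
print("captured frame:    {:d}x{:d} ({:s})".format(img.width, img.height, img.format))
```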
Full source code is here:
# Coding Your Own Object Detection Program
In this step of the tutorial, we'll walk through the creation of the previous example for real-time object detection on a live camera feed in only 10 lines of Python code. The program will load the detection network with the [`detectNet`](https://rawgit.com/dusty-nv/jetson-inference/dev/docs/html/python/jetson.inference.html#detectNet) object, capture video frames and process them, and then render the detected objects to the display.
For your convenience and reference, the completed source is available in the [`python/examples/my-detection.py`](../python/examples/my-detection.py) file of the repo, but the guide below will act as though it resides in your home directory or in an arbitrary directory of your choosing.
Here's a quick preview of the Python code we'll be walking through:
``` python
import jetson.inference
import jetson.utils
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0") # '/dev/video0' for V4L2
display = jetson.utils.videoOutput("display://0") # 'my_video.mp4' for file
# ... (preview truncated; see the full file at the link above)
```
What do I need to do in order to see the full 360° video using `jetson.utils.videoSource`?
I found the solution. I had to explicitly resize the input image from 3840x1920 down to 1920x1080.
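For anyone following along, here is a minimal sketch of that approach, assuming the detectnet example from the tutorial and the `cudaAllocMapped()`/`cudaResize()` helpers from jetson-utils (the 3840x1920 input size is just what my Theta reports in 4K mode):

``` python
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("/dev/video0")   # Theta V exposed as V4L2
display = jetson.utils.videoOutput("display://0")

resized = None  # CUDA buffer for the downscaled frame, allocated on first use

while display.IsStreaming():
    img = camera.Capture()  # full 3840x1920 equirectangular frame

    if resized is None:
        resized = jetson.utils.cudaAllocMapped(width=1920, height=1080,
                                               format=img.format)

    # downscale before detection so the rest of the pipeline sees 1920x1080
    jetson.utils.cudaResize(img, resized)

    detections = net.Detect(resized)
    display.Render(resized)
    display.SetStatus("Object Detection | {:.0f} FPS".format(net.GetNetworkFPS()))
```

Note that going from 3840x1920 (2:1) down to 1920x1080 (16:9) stretches the equirectangular frame somewhat, but it was enough to get detectNet running on the full 360° view.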
craig (January 7, 2021, 10:16pm):
Thanks for reporting back on your success. Did you intentionally set the live stream resolution to 1920x1080 in order to get DetectNet working?
I have my NVIDIA Jetson working with a 1920x1080 stream, but I was hoping that the Jetson Xavier NX could handle 4K streaming.
I’m getting 23 FPS with the 1920x1080 stream.
https://youtu.be/ykta9Hn2ESs
Perhaps like you, I went through a few of the DetectNet training tutorials here: https://github.com/dusty-nv/jetson-inference
Those tutorials by Dusty are quite good.
As my son is studying aquatic biology, I’ve wanted to go through this tutorial on a fish detector.
https://jkjung-avt.github.io/fisheries-dataset/
However, I have not tried to build the tutorial yet.
Hi Craig,
Yes, here are the two approaches that worked:

1. Set the Ricoh Theta resolution to 4K, then rescale to 1920x1080 in dusty's tutorial code. See dusty's solution and response to me here: https://forums.developer.nvidia.com/t/question-on-jetson-utils-videosource/165342/8
2. Update `gst_viewer.c` to set the camera to FHD instead of UHD; then all of the code in dusty's tutorial works fine as-is.

Since I am using the Xavier, I am seeing between 120 and 150 FPS depending on which approach I take.