Craig–
Thanks for the information. After trying various approaches, I’ve settled on a setup that I’m happy with. I’ll document it here for posterity.
First, I modified the pipe_proc line in the libuvc-theta-sample program to send the stream to a udpsink:
pipe_proc = " rtph264pay name=pay0 pt=96 ! udpsink host=127.0.0.1 port=5000 sync=false ";
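For reference, pipe_proc is set in gst_viewer.c in my checkout of the sample. After editing, rebuild and rerun from the gst directory (paths may differ on your setup):
cd libuvc-theta-sample/gst
make
./gst_viewer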
I then run the test-launch program from the gst-rtsp-server project with the following pipeline:
./test-launch "( udpsrc port=5000 ! application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264 ! rtph264depay ! h264parse ! rtph264pay name=pay0 pt=96 )"
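In case it saves someone a step: test-launch lives in the examples directory of gst-rtsp-server, and if memory serves it can be built standalone with something like:
gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)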
I tried various other methods of connecting these two GStreamer processes, including shmsink/shmsrc, but ultimately this one worked best. At some point in the future I may combine the gst_viewer and test-launch functionality into one executable and do away with some of the needless complexity.
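Before involving ROS at all, you can sanity-check the RTSP server with any client; assuming test-launch’s default port 8554 and /test mount point, something like this should show video:
gst-launch-1.0 playbin uri=rtsp://127.0.0.1:8554/test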
Finally, I used gscam to bring the stream into ROS with the following command:
GSCAM_CONFIG="rtspsrc location=rtspt://10.0.16.1:8554/test latency=400 drop-on-latency=true ! application/x-rtp, encoding-name=H264 ! rtph264depay ! decodebin ! queue ! videoconvert" roslaunch gscam_nodelet.launch
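As far as I can tell, gscam just appends its own appsink to that config, so a convenient way to debug the pipeline outside of ROS is to run the same description through gst-launch-1.0 with a video sink tacked on the end:
gst-launch-1.0 rtspsrc location=rtspt://10.0.16.1:8554/test latency=400 drop-on-latency=true ! "application/x-rtp, encoding-name=H264" ! rtph264depay ! decodebin ! queue ! videoconvert ! autovideosink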
Note the “rtspt” protocol; it is not a typo. It forces the RTSP connection to go over TCP. When I used UDP there were too many artifacts and corrupted frames.
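If you’d rather keep a plain rtsp:// URI, I believe setting the protocols property on rtspsrc accomplishes the same thing, e.g.:
GSCAM_CONFIG="rtspsrc location=rtsp://10.0.16.1:8554/test protocols=tcp latency=400 drop-on-latency=true ! application/x-rtp, encoding-name=H264 ! rtph264depay ! decodebin ! queue ! videoconvert" roslaunch gscam_nodelet.launch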
I actually run this last command on a separate machine because of my particular network topology, but it could be run on the same machine. In fact, that might be necessary, since gscam doesn’t seem to handle UDP streams well.
I also tried using OpenCV’s VideoCapture to get the data into ROS, but that had a couple of issues. Two VideoCapture backends seemed appropriate: GStreamer and FFmpeg. It turns out that the OpenCV version packaged with ROS is not built with GStreamer support, so you would have to build OpenCV yourself to use it. As for FFmpeg, the OpenCV version packaged with ROS Melodic is 3.2, which is missing a fairly critical change (https://github.com/opencv/opencv/pull/9292) that lets you set FFmpeg capture options via an environment variable. I got both backends working by building OpenCV 3.4.9 from source, but GStreamer had a low framerate (~5 fps) and FFmpeg had a lot of corruption and dropped frames (maybe it was stuck using UDP?). So, I decided to stick with gscam for now.
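For anyone retrying the FFmpeg route on OpenCV 3.4+, that PR reads capture options from an environment variable of key;value pairs, so forcing TCP (which may have been the source of the corruption I saw) would look something like the sketch below; the URI is the one from my setup:
export OPENCV_FFMPEG_CAPTURE_OPTIONS="rtsp_transport;tcp"
python -c "import cv2; cap = cv2.VideoCapture('rtsp://10.0.16.1:8554/test', cv2.CAP_FFMPEG); print(cap.isOpened())"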
The latency value of 400 ms worked for me, but it should be tuned for your network.
Hope this helps you or someone else wanting to use this camera in ROS. So far it looks great and should be perfect for my application. The only negative for me is that I can’t control the power state and streaming state programmatically, unless I missed something in the USB API section. For now I’ve disabled sleep, so I only have to turn the camera on once my robot is up and turn it off when I’m done.