I am trying to use the Ricoh Theta X for streaming in robotics applications, and I would like to explore the best way to reduce streaming latency. Here are the options I have tried:
(A) Wi-Fi streaming via `getLivePreview` at 1024x512 resolution + JPEG decoding on the CPU
(B) USB streaming at 2K + GPU H.264 decoding
I have measured ~400 ms latency for (A) and ~250 ms for (B).
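For context on how (A) is wired up, here is a minimal sketch of consuming the `camera.getLivePreview` motion-JPEG stream over Wi-Fi and splitting it into frames for CPU decoding. The endpoint and JSON body follow the RICOH THETA Web API in access-point mode; `extract_jpegs()` is my own simplified frame splitter, and client mode would additionally need digest auth.

```python
# Sketch of option (A): pull camera.getLivePreview (motion JPEG over
# HTTP) from the camera and split the byte stream into JPEG frames.
# extract_jpegs() is a simplified splitter for illustration only.
import json
import urllib.request

SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def extract_jpegs(buf: bytes):
    """Return (complete JPEG frames, leftover bytes) from an MJPEG buffer."""
    frames = []
    while True:
        s = buf.find(SOI)
        if s < 0:
            break
        e = buf.find(EOI, s + 2)
        if e < 0:
            break
        frames.append(buf[s:e + 2])
        buf = buf[e + 2:]
    return frames, buf

def stream_preview(camera="192.168.1.1"):
    """Read the preview stream and hand complete frames to a decoder."""
    req = urllib.request.Request(
        f"http://{camera}/osc/commands/execute",
        data=json.dumps({"name": "camera.getLivePreview"}).encode(),
        headers={"Content-Type": "application/json"})
    leftover = b""
    with urllib.request.urlopen(req) as resp:
        while True:
            chunk = resp.read(16384)
            if not chunk:
                break
            frames, leftover = extract_jpegs(leftover + chunk)
            for jpeg in frames:
                pass  # hand `jpeg` off to the CPU JPEG decoder here
```

Timestamping each frame as it comes out of `extract_jpegs` is also a convenient place to instrument the receive side of the latency measurements below.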
I have been trying to figure out what exactly is contributing to these latencies.
By measuring latency using the screen on the Theta X itself, I believe we can infer a lower bound on achievable latency. I have done this both with and without stitching.
In both cases I measure ~150 ms latency. I speculate that the latency beyond ~150 ms comes from on-device encoding, data transfer, and decoding.
I also measure similar latency with and without stitching for (A). This suggests either that on-device stitching latency is negligible, or that the stitching overhead is still incurred even when stitching is switched off. (150 ms of sensor-to-screen latency seems surprisingly long.)
I am unable to perform this experiment for (B), since the `_imagestitching` config does not seem to apply to live streaming via USB.
Unfortunately I can't do a true apples-to-apples comparison between (A) and (B), since I am unable to stream at the same resolution in both cases on the Theta X, so I can't isolate the latency contributed by data transfer. I would expect data transfer over USB to be much faster than over Wi-Fi. I also observe much higher latency variance when streaming over Wi-Fi.
Therefore, the next thing I would like to try is streaming 2K raw over USB. At 30 fps I estimate ~1.3 Gbps for raw RGB, and half that for I420, which is well within USB 3.0 bandwidth (5-10 Gbps). I would also expect the device to heat up more slowly without having to encode the stream. I believe I can get the raw stream within a plugin, but would not be able to stream it to another machine via USB. Is there a way to extend the USB API to allow raw UVC streaming? Motion JPEG encoding is probably a good option as well.
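For the record, here is the back-of-envelope arithmetic behind those bitrate numbers, assuming "2K" means a 1920x960 equirectangular frame (my assumption for the Theta X USB stream):

```python
# Back-of-envelope bitrates for uncompressed 2K @ 30 fps.
# Assumes 2K = 1920x960 equirectangular; raw RGB = 24 bpp, I420 = 12 bpp.
W, H, FPS = 1920, 960, 30

def gbps(bits_per_pixel):
    return W * H * bits_per_pixel * FPS / 1e9

raw_rgb = gbps(24)  # ~1.33 Gbps
i420 = gbps(12)     # ~0.66 Gbps, exactly half of raw RGB
print(f"raw RGB: {raw_rgb:.2f} Gbps, I420: {i420:.2f} Gbps")
```

Even the raw-RGB figure leaves comfortable headroom against the 5 Gbps USB 3.0 (SuperSpeed) signaling rate, before accounting for protocol overhead.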
I have seen latency for (B) hit 300 ms quite often, but I'm not sure what triggers it (perhaps the device running low on battery).
I test with two different pipelines:

```
appsrc name=ap ! queue ! h264parse ! queue ! nvdec ! gldownload ! autovideoconvert ! autovideosink sync=false
```

and

```
appsrc name=ap ! queue ! h264parse ! queue ! nvdec ! glimagesink
```

and do not observe much difference between them, likely due to the compute headroom on my machine (it's a gaming PC).
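One tweak I still want to try on both pipelines: make each queue hold a single buffer and drop stale frames instead of queueing them. `max-size-buffers` and `leaky` are standard GStreamer `queue` properties; whether this actually moves the numbers here is untested on my side. A sketch of the rewrite:

```python
# Sketch: rewrite the pipeline strings so each queue holds at most one
# buffer and leaks old frames (leaky=downstream) rather than adding
# queueing delay. These are standard GStreamer `queue` properties.
def low_latency(pipeline: str) -> str:
    return pipeline.replace(
        "queue !", "queue max-size-buffers=1 leaky=downstream !")

p = low_latency("appsrc name=ap ! queue ! h264parse ! queue ! nvdec ! glimagesink")
print(p)
```

Combined with `sync=false` on the sink (already in the first pipeline), this should ensure the display side never waits on or accumulates frames.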