I have a live MJPEG preview running over a USB-C ethernet adapter to the Ricoh Theta X.
Here is a screenshot of the preview running from Visual Studio Code:
Written in Python using OpenCV to display the MJPEG stream.
#Begin Python Code
import requests
from requests.auth import HTTPDigestAuth
import cv2
import numpy as np

url = "http://[Camera IP Address]/osc/commands/execute"
username = "CameraSerialNumber"
password = "DigitsOnlyofCameraSerialNumber"

payload = {
    "name": "camera.getLivePreview"
}
headers = {
    "Content-Type": "application/json;charset=utf-8"
}

response = requests.post(url, auth=HTTPDigestAuth(username, password),
                         json=payload, headers=headers, stream=True)

if response.status_code == 200:
    bytes_ = bytes()
    for chunk in response.iter_content(chunk_size=1024):
        if chunk:
            bytes_ += chunk
            # JPEG frames in the MJPEG stream are delimited by SOI (FFD8) and EOI (FFD9)
            a = bytes_.find(b'\xff\xd8')
            b = bytes_.find(b'\xff\xd9')
            if a != -1 and b != -1:
                jpg = bytes_[a:b + 2]
                bytes_ = bytes_[b + 2:]
                # np.frombuffer replaces the deprecated np.fromstring
                img = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
                cv2.imshow("Preview", img)
                if cv2.waitKey(1) == 27:  # Esc key exits the preview
                    break
else:
    print("Error: ", response.status_code)

cv2.destroyAllWindows()
#End Python Code
Looking good! I’m working on converting this to .NET as well. Having a bit of difficulty with the OpenCV (cv2) side in .NET. Happy to share the code I have so far if you want to take a crack at it.
The quality of the stream from OpenCV is better than most client implementations that I’ve seen. Your technique produces a nice result. I made a recording of the stream so that other people can easily see the quality of the camera → computer stream.
I’ve only used C# with Unity, so I’m not sure I’d be of any help. However, I could test it on my Windows 11 laptop. It’s possible other people may be able to provide assistance.
The Python OpenCV technique is pretty cool. I’m going to try some transformations with OpenCV and build a simple GUI for the Python script to make it more fun for people in my office to play around with it.
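For anyone who wants to try the same thing, here is a minimal sketch of what a transformation plus a simple GUI could look like, using OpenCV’s built-in trackbars rather than a separate GUI toolkit. The window name and the rotation transform are just placeholders I picked for illustration, not anything from the script above beyond the "Preview" window.
#Begin Python Code
import cv2

WINDOW = "Preview"  # assumed window name, matching the preview script above
cv2.namedWindow(WINDOW)
# Trackbar from 0-359 degrees; OpenCV requires a callback, a no-op lambda is fine here
cv2.createTrackbar("angle", WINDOW, 0, 359, lambda v: None)

def apply_rotation(img):
    """Rotate the decoded frame by the angle currently set on the trackbar."""
    angle = cv2.getTrackbarPos("angle", WINDOW)
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, m, (w, h))

# Inside the preview loop, replace cv2.imshow("Preview", img) with:
#     cv2.imshow(WINDOW, apply_rotation(img))
#End Python Code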
That is cool. I’ll have to play around with that. I’m also looking into using OpenCV to stitch our images together, as I’ve been looking for a better method than the one we currently use, which does work well. Here is an example of what we use the 360 cameras for.
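For reference, OpenCV does ship a high-level stitcher; a minimal sketch is below. The file names are placeholders, and note that the generic Stitcher is feature-based, so it may not be the right tool for dual-fisheye 360 frames, which usually need a calibrated equirectangular remap instead. Treat this as a starting point only.
#Begin Python Code
import cv2

# Load the overlapping source images (file names here are just placeholders)
images = [cv2.imread(p) for p in ("left.jpg", "right.jpg")]

# PANORAMA mode is the default; SCANS mode assumes an affine camera model
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched.jpg", pano)
else:
    print("Stitching failed with status", status)
#End Python Code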
The live stream over the USB cable has a higher framerate and resolution. If your application allows, connect the camera to a computer with a USB cable and either connect the headset to the same computer or retransmit the stream to the headset.
Hi @craig and @katterr, thank you for the code and the explanation of how it works. I have a question: I want to save an image instead of a live video. Can I use cv2.imwrite() to get the image, and will it be a 360-degree image or a normal 2D image? I will try to get the video working first and have a look at it.
I am not that familiar with OpenCV. However, after this line, you will have the frame stored in the img variable as a decoded JPEG. You may have to write it out as bytes; I don’t know the equivalent in OpenCV.
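To make that concrete, here is a minimal sketch of both ways to save a frame, assuming the img and jpg variables from the preview loop near the top of the thread; the filename counter is just an illustration of incrementing the names. cv2.imwrite() re-encodes the decoded frame, while writing the jpg bytes keeps the stream’s original encoding.
#Begin Python Code
import cv2

frame_count = 0  # increment so each saved frame gets a unique name

# Option 1: let OpenCV re-encode the decoded frame
cv2.imwrite(f"frame_{frame_count:05d}.jpg", img)

# Option 2: write the raw JPEG bytes sliced out of the stream (the jpg variable),
# which avoids a second encode step
with open(f"frame_{frame_count:05d}.jpg", "wb") as f:
    f.write(jpg)

frame_count += 1
#End Python Code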
There are issues with this approach:
- The frames will be coming in at around 30 fps, so you’ll get a lot of frames and will need to increment the filenames.
- Each frame will be at a lower resolution than if you took a still image.
- Each frame will not have spherical metadata, and some viewers may not be able to display the image without that metadata.
For a test, you can use this tool to inject the spatial media metadata back into the image frame.
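The tool above isn’t reproduced here, but as an alternative sketch: if you have ExifTool installed, you can write the XMP-GPano tags yourself. The values below assume the frame is a full equirectangular image whose pano size equals the frame size, and the file name is a placeholder.
#Begin Python Code
import subprocess

def inject_gpano(path, width, height):
    """Write minimal XMP-GPano metadata so 360 viewers treat the JPEG as spherical.
    Assumes ExifTool is on the PATH and the frame is a full equirectangular image."""
    subprocess.run([
        "exiftool",
        "-overwrite_original",
        "-XMP-GPano:UsePanoramaViewer=True",
        "-XMP-GPano:ProjectionType=equirectangular",
        f"-XMP-GPano:FullPanoWidthPixels={width}",
        f"-XMP-GPano:FullPanoHeightPixels={height}",
        f"-XMP-GPano:CroppedAreaImageWidthPixels={width}",
        f"-XMP-GPano:CroppedAreaImageHeightPixels={height}",
        "-XMP-GPano:CroppedAreaLeftPixels=0",
        "-XMP-GPano:CroppedAreaTopPixels=0",
        path,
    ], check=True)

inject_gpano("frame_00000.jpg", 1024, 512)
#End Python Code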
This is an example in Dart of saving the frame.
Multiple frames
Instead of extracting the frame, does your application allow you to take an image of the scene that you want?
If not, can you take an 8K 2fps video and then extract the frames from the video file?
If you can describe your use case, we may be able to provide more ideas.
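If the 8K low-fps video route works for you, extracting frames from the resulting file is straightforward with OpenCV; a minimal sketch is below (the video file name is a placeholder). Frames extracted this way will also need the spherical metadata injected, as described above.
#Begin Python Code
import cv2

cap = cv2.VideoCapture("R0010001.MP4")  # placeholder file name
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of file
    cv2.imwrite(f"frame_{frame_idx:05d}.jpg", frame)
    frame_idx += 1
cap.release()
#End Python Code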
For the live preview I use Datastead’s TVideoGrabber SDK (TVideoGrabber SDK – Video SDK) for capturing images and video from the Ricoh. However, for image and video capture I use the API to ensure I get the highest quality output possible. The live preview is lower resolution than the native capture the API can produce.
Thank you for the inputs, but I am facing issues connecting with my Ricoh Theta: http://192.168.1.1/osc/commands/execute does not seem to be accessible, and neither does /osc/info.
So how can I connect to the Ricoh camera? Update:
I can access the URL now, but only /osc/info returns any information.
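For anyone else hitting the same connection problem, a quick way to confirm the camera is reachable is to check /osc/info and /osc/state first. The sketch below assumes the camera is at 192.168.1.1 and that digest authentication with the serial number is required, as in the preview script above.
#Begin Python Code
import requests
from requests.auth import HTTPDigestAuth

base = "http://192.168.1.1"
auth = HTTPDigestAuth("CameraSerialNumber", "DigitsOnlyofCameraSerialNumber")

# /osc/info is a GET and returns basic model/firmware details
info = requests.get(f"{base}/osc/info", auth=auth, timeout=5)
print(info.status_code, info.json())

# /osc/state is a POST and reports battery level, capture status, etc.
state = requests.post(f"{base}/osc/state", auth=auth, timeout=5)
print(state.status_code, state.json())
#End Python Code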
Thank you for the help @craig. Finally I can get the image and save it. Here is the Python code in case anyone needs it.
Note: I am having issues with connecting. I have to disconnect and reconnect if any error happens or after running the code once. I am looking into how to reset the camera so I don’t need to disconnect and reconnect each time. If anyone knows how to do that, it would be helpful.
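I’m not sure about a reset command, but one thing that often causes the “have to disconnect and reconnect” symptom is the streaming HTTP connection never being closed. Below is a minimal sketch of wrapping the preview request so the connection is always released, reusing the same placeholder URL and credentials as the preview script above.
#Begin Python Code
import cv2
import requests
from requests.auth import HTTPDigestAuth

url = "http://[Camera IP Address]/osc/commands/execute"
payload = {"name": "camera.getLivePreview"}
headers = {"Content-Type": "application/json;charset=utf-8"}
auth = HTTPDigestAuth("CameraSerialNumber", "DigitsOnlyofCameraSerialNumber")

response = requests.post(url, auth=auth, json=payload, headers=headers,
                         stream=True, timeout=10)
try:
    for chunk in response.iter_content(chunk_size=1024):
        ...  # same frame-extraction / display loop as the preview script above
finally:
    # Always release the streaming connection so the camera stops sending the
    # preview and the next request can start cleanly without reconnecting
    response.close()
    cv2.destroyAllWindows()
#End Python Code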