I have a live MJPEG preview running over a USB-C ethernet adapter to the Ricoh Theta X.
Here is a screenshot of the preview running from Visual Studio Code:
Written in Python using OpenCV to display the MJPEG stream.
#Begin Python Code
import requests
from requests.auth import HTTPDigestAuth
import cv2
import numpy as np

url = "http://[Camera IP Address]/osc/commands/execute"
username = "CameraSerialNumber"
password = "DigitsOnlyOfCameraSerialNumber"

payload = {
    "name": "camera.getLivePreview"
}
headers = {
    "Content-Type": "application/json;charset=utf-8"
}

response = requests.post(url, auth=HTTPDigestAuth(username, password),
                         json=payload, headers=headers, stream=True)

if response.status_code == 200:
    bytes_ = bytes()
    for chunk in response.iter_content(chunk_size=1024):
        if chunk:
            bytes_ += chunk
            # A JPEG frame starts with 0xFFD8 (SOI) and ends with 0xFFD9 (EOI)
            a = bytes_.find(b'\xff\xd8')
            b = bytes_.find(b'\xff\xd9')
            if a != -1 and b != -1:
                jpg = bytes_[a:b + 2]
                bytes_ = bytes_[b + 2:]
                # np.frombuffer replaces the deprecated np.fromstring
                img = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
                if img is not None:
                    cv2.imshow("Preview", img)
                # Press Esc to exit
                if cv2.waitKey(1) == 27:
                    break
else:
    print("Error:", response.status_code)

cv2.destroyAllWindows()
#End Python Code
Looking good! I’m working on converting this to .NET as well. I’m having a bit of difficulty with OpenCV in .NET. Happy to share the code I have so far if you want to take a crack at it.
The quality of the stream from OpenCV is better than most client implementations that I’ve seen. Your technique produces a nice result. I recorded a video of the live stream so that other people can easily see the quality of the camera → computer stream.
I’ve only used C# with Unity, so I’m not sure I’d be of any help. However, I could test it on my Windows 11 laptop. It’s possible other people may be able to provide assistance.
The Python OpenCV technique is pretty cool. I’m going to try some transformations with OpenCV and build a simple GUI for the Python script to make it more fun for people in my office to play around with it.
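As a starting point for those transformations, a decoded preview frame is just a NumPy array, so any OpenCV operation can be dropped in right before cv2.imshow. Here is a minimal sketch of one such transformation (a rotation about the frame center); the function name and the synthetic test frame are my own, not from the script above.

#Begin Python Code
import cv2
import numpy as np

def transform_frame(img, angle_deg=30):
    """Rotate a preview frame about its center (illustrative transformation)."""
    h, w = img.shape[:2]
    # 2x3 affine matrix: rotate angle_deg around the image center, scale 1.0
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, m, (w, h))

# A synthetic black frame stands in for a decoded preview frame
frame = np.zeros((480, 640, 3), dtype=np.uint8)
out = transform_frame(frame)
print(out.shape)  # (480, 640, 3)
#End Python Code

In the preview loop you would call transform_frame(img) on each decoded frame before displaying it.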
That is cool. I’ll have to play around with that. I’m also looking into using OpenCV to stitch our images together, as I’ve been looking for a better method than the one we currently use, which does work well. Here is an example of what we use the 360 cameras for.
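For stitching, OpenCV ships a high-level Stitcher pipeline that may be worth trying before writing anything custom. A minimal sketch, assuming a list of overlapping perspective images on disk (the function name and paths are placeholders; the Theta’s dual-fisheye frames may need a dedicated fisheye-to-equirectangular pipeline instead):

#Begin Python Code
import cv2

def stitch_images(paths):
    """Stitch overlapping images into one panorama using cv2.Stitcher."""
    images = [cv2.imread(p) for p in paths]
    # PANORAMA mode assumes perspective images with overlap;
    # SCANS mode is for flat, affine-related captures
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitch failed with status {status}")
    return pano
#End Python Code

Stitcher handles feature matching, homography estimation, and blending internally, so it is a quick way to test whether the overlap in your captures is good enough before investing in a custom method.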