Wireless Streaming to VR Headset With On-Premise Equipment

So my original plan was to use Electron, and I thought that A-Frame / WebVR would be able to handle the connection with the Oculus Rift. From what I have been reading, the fork of Chromium that Electron uses doesn’t have complete WebVR support. Not to mention that playing streaming video (i.e., something like the live preview from the Theta V) doesn’t appear to be a feature of any of the existing WebVR libraries.

So I have decided that I may have better luck using React360 with a custom player. I can write a custom player that parses the livestream from the camera, and that should give me the functionality I am looking for.

Once I get something going I will be sure to make a thread and share. Probably gonna get it done over the next two weeks while I am on break.


The React360 player might be more optimized for video played from a file.

This project works with the THETA V and can stream into an Oculus Browser.

https://mganeko.github.io/aframe/

There is more information on using the project here:

Great rec! I had originally written this sample off because we are streaming our video over WiFi, while this sample sends video over USB. But the code it uses for displaying the 360 video in A-Frame is great!

I am curious, though, what the “datatype” of the stream variable referenced here is:

This function is used to set the source of an AFrame video shown here:

We intend to have our system work offline, so we won’t be able to use Skyway’s API or services. That means that we will have to replicate this stream variable properly.
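
My guess, and it is only a guess, is that it is a WebRTC MediaStream, since SkyWay is a WebRTC service. If that turns out to be right, replicating it would roughly mean feeding our own MediaStream to the video element that A-Frame textures from, something like:

```js
// Assuming (not confirmed) that `stream` is a WebRTC MediaStream: attach it to
// a <video> element and use that element as the videosphere's texture source.
// The element id is hypothetical.
function attachStream(stream) {
  const video = document.createElement('video');
  video.id = 'camera-stream';
  video.autoplay = true;
  video.muted = true;
  video.playsInline = true;
  video.srcObject = stream;
  document.querySelector('a-assets').appendChild(video);
  document.querySelector('a-videosphere').setAttribute('src', '#camera-stream');
}
```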


Unfortunately, I won’t be of much help. I’m not familiar with WebRTC and may steer you in the wrong direction.

I believe that WebRTC signalling is typically done over WebSockets, so you may be able to build your own server.
https://www.webrtc-experiment.com/docs/WebRTC-Signaling-Concepts.html
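
I haven’t tried this myself, but a bare-bones signalling relay over WebSockets might look something like the sketch below (using the ws npm package; this is an assumption, not a tested recipe). It just forwards each peer’s offers, answers, and ICE candidates to the other connected peers:

```js
// Hypothetical minimal signalling relay: every message a client sends
// (SDP offer/answer or ICE candidate, typically as JSON) is forwarded to all
// other connected clients. No rooms, no auth; just enough for two peers on a LAN.
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 9000 });

wss.on('connection', (socket) => {
  socket.on('message', (message) => {
    wss.clients.forEach((client) => {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(message);
      }
    });
  });
});
```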

There have been several other implementations using cloud-based servers. For example, this one:

There was additional discussion here:


As far as I know, no one has implemented an on-premise private signalling server. I’m not sure how difficult it would be.

You’ve been a ton of help, no worries =)

I am going to make a development log thread so that I don’t hijack this one.


I moved the thread to a new topic.

There are three possible solutions:

| Method | Status | Note |
| --- | --- | --- |
| RTMP | works with YouTube | high latency; video goes to YouTube |
| WebRTC | works with external signalling server | lower latency than RTMP |
| MotionJPEG | works at 2K and low FPS | easiest to implement in browser |

You could break your project into two phases. The first phase could be to show a proof of concept with MotionJPEG and 2K, low-FPS video. You could then use A-Frame and three.js to display it in a browser in a headset.
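
For the A-Frame side of that proof of concept, the scene can be tiny. A rough sketch (assumes aframe.min.js is already loaded on the page; the image path is a placeholder):

```js
// Minimal proof-of-concept scene: a sky entity textured with one
// equirectangular frame grabbed from the camera (placeholder file name).
const scene = document.createElement('a-scene');
const sky = document.createElement('a-sky');
sky.setAttribute('src', 'preview-frame.jpg'); // swap in a real 2K frame or MJPEG URL
scene.appendChild(sky);
document.body.appendChild(scene);
```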

Once you have it working, you could then try another solution with WebRTC and potentially an on-premise signalling server (not sure if that is possible).

Alright, so I have made some headway, sort of.

Currently I am able to authenticate with the Theta via JS using an Electron app. I hit some snags, though.

First, if I want to be able to use A-Frame to connect to my Oculus Rift, I need to actually host the page on a live webserver, not just load the file into the browser directly (i.e., using file://my_web_page.html). To do this I just used an npm package called live-server. But this led me to another issue that didn’t present itself until I was actually hosting the web page: CORS issues.
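
live-server can also be started from a Node script instead of the CLI. A rough sketch (the port and root folder here are placeholders, not necessarily what I use):

```js
// Serve the A-Frame page over HTTP using live-server's Node API.
// Port and root are placeholder values.
const liveServer = require('live-server');

liveServer.start({
  host: '0.0.0.0',
  port: 8181,
  root: './public',   // folder containing the page and its JS
  open: false         // don't pop a browser window automatically
});
```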

CORS errors occur when you try to access an API on a domain different from the one hosting the code that is making the request. If I were loading the JS directly in the browser using file://, hosting the webpage on the Theta V’s own webserver, or making the requests from a Node app, this wouldn’t happen. But again, I needed to use A-Frame, so I needed to actually host the web page.

So I browsed the forums and found an old post that mentioned using a reverse proxy so that all the requests and responses appear to come from the same domain. To accomplish this, I used another npm package called cors-anywhere, which sets up a simple reverse proxy in the Node app. This way, any time I want to access the Theta’s API, I simply access http://0.0.0.0:8080/api_endpoint/goes_here. In the end this turns into a pretty gnarly URL: http://0.0.0.0:8080/192.168.50.227/osc/info, where 192.168.50.227 is the IP of the Theta on my network.
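
The proxy setup is only a few lines, roughly the sketch below (the empty whitelist and the 0.0.0.0:8080 binding are illustrative values, not necessarily my exact config):

```js
// Reverse proxy via cors-anywhere so that browser requests to the Theta's API
// appear to come from the same origin as the hosted page.
const corsAnywhere = require('cors-anywhere');

corsAnywhere.createServer({
  originWhitelist: []   // empty list = allow requests from any origin
}).listen(8080, '0.0.0.0', () => {
  console.log('CORS proxy running on 0.0.0.0:8080');
});

// The camera is then reached through the proxy, e.g.:
//   fetch('http://0.0.0.0:8080/192.168.50.227/osc/info')
//     .then((res) => res.json())
//     .then((info) => console.log(info));
```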

Finally, I had to implement my own digest authentication code so that I could authenticate with the Theta. Without it, I would just get constant 401 Unauthorized errors when trying to access the API. This is a work in progress, and I am looking to see if I can use some other in-browser digest authentication code, since mine doesn’t currently have any actual session handling and JS is new to me.
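
For reference, the core of the digest computation boils down to a few MD5 hashes. This is a generic RFC 2617 sketch assuming qop="auth", not the code from my repo:

```js
// Generic RFC 2617 digest "response" computation (assuming MD5 and qop="auth").
// The challenge fields (realm, nonce, qop) come from the camera's
// WWW-Authenticate header on the initial 401 response.
const crypto = require('crypto');

const md5 = (s) => crypto.createHash('md5').update(s).digest('hex');

function digestResponse({ username, password, realm, nonce, qop, method, uri, nc, cnonce }) {
  const ha1 = md5(`${username}:${realm}:${password}`);
  const ha2 = md5(`${method}:${uri}`);
  return md5(`${ha1}:${nonce}:${nc}:${cnonce}:${qop}:${ha2}`);
}

// The result goes back in an Authorization header along the lines of:
// Digest username="...", realm="...", nonce="...", uri="/osc/info",
//        qop=auth, nc=00000001, cnonce="...", response="<digestResponse(...)>"
```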

Next step for me is to figure out the whole process of fetching the MJPEG stream, splitting it into individual frames, and using each frame to update the A-Frame scene. There are some issues with this, most notably that the MJPEG stream is initiated immediately upon requesting the preview via camera.getLivePreview. I don’t know how to do a multithreaded application in JS, or if it is even possible. So I am looking at using the MJPEG endpoints generated by the Device WebAPI plugin, which can initialize an MJPEG stream and gives you a URL endpoint at which you can access it. I have seen code where you can just set the src attribute of an A-Frame element to the MJPEG URL and it will stream the video. Unfortunately that means I have to reverse engineer the API used by the plugin, because it isn’t very well documented for the Theta (or at least for the live preview / stream functionality) and I haven’t managed to find the source code for the plugin anywhere.

But for now I will manually initiate the stream by going to the Device WebAPI web page and starting it, and then manually set the A-Frame element’s src to the generated MJPEG URL. By doing this I can at least verify that A-Frame is working in Electron. From there I can figure out how to automate this process, or perform my own decoding of the preview stream from the Ricoh API. One option, if I can’t do all the parsing in the browser, is to do the processing in the Electron app and use Electron’s interprocess communication to send the frames of the video to the A-Frame window.
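
The manual test itself is basically one line once the stream is running (the MJPEG URL below is a placeholder for whatever the Device WebAPI page reports):

```js
// Point the sky at the externally started MJPEG stream to sanity-check that
// A-Frame renders inside Electron. The URL is a placeholder.
const mjpegUrl = 'http://192.168.50.227:PORT/path/to/stream.mjpeg'; // placeholder
document.querySelector('a-sky').setAttribute('src', mjpegUrl);
```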

Lots of work to go. My source code is available on GitHub.


Update on using the Device WebAPI plugin: it seems like no matter what settings I use, the live preview from the WebAPI only displays 180 degrees of the full video, so that route is no good. Going to double down on the custom MJPEG code. I might have to write a small plugin for the Theta that creates an MJPEG stream similar to the WebAPI’s, but with the entire 360 FOV.


Great news!

I have managed to get the Theta streaming to A-Frame running in Electron using camera.getLivePreview, all over WiFi! There are some weird stutters every once in a while, but other than that it’s great! Things are still really unpolished, so I plan on cleaning things up and making it more user-friendly tonight and tomorrow, but the functionality is there, so I am pretty happy.

The current approach is to use camera.getLivePreview to open an MJPEG stream. The stream keeps running asynchronously, and every time a new frame finishes arriving, an event is fired whose payload is an Object URL for that frame as a Blob. The tiny A-Frame component I made listens for this event and sets the source of the skybox image to the newly received frame. The whole process works pretty well, with a delay of ~500 ms for 1920x960 video.
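
For anyone curious, the shape of the approach is roughly the sketch below. It is simplified, not the exact code in the repo: previewUrl stands in for the (proxied) /osc/commands/execute endpoint, the 'preview-frame' event name is arbitrary, the JPEG marker scan is naive, and the digest-auth plumbing from earlier is omitted.

```js
// Open the live preview, cut the MJPEG byte stream into JPEG frames at the
// SOI (0xFFD8) / EOI (0xFFD9) markers, and broadcast each frame as an Object URL.
async function streamLivePreview(previewUrl) {
  const res = await fetch(previewUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'camera.getLivePreview' })
  });
  const reader = res.body.getReader();
  let buf = new Uint8Array(0);

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;

    // Append the new chunk to the working buffer.
    const merged = new Uint8Array(buf.length + value.length);
    merged.set(buf);
    merged.set(value, buf.length);
    buf = merged;

    // Extract every complete JPEG currently in the buffer.
    while (true) {
      const start = findMarker(buf, 0xffd8);
      if (start === -1) break;
      const end = findMarker(buf, 0xffd9, start + 2);
      if (end === -1) break;
      const jpeg = buf.slice(start, end + 2);
      buf = buf.slice(end + 2);
      const url = URL.createObjectURL(new Blob([jpeg], { type: 'image/jpeg' }));
      window.dispatchEvent(new CustomEvent('preview-frame', { detail: url }));
    }
  }
}

// Naive scan for a two-byte marker in the buffer.
function findMarker(bytes, marker, from = 0) {
  const hi = (marker >> 8) & 0xff;
  const lo = marker & 0xff;
  for (let i = from; i < bytes.length - 1; i++) {
    if (bytes[i] === hi && bytes[i + 1] === lo) return i;
  }
  return -1;
}

// Tiny A-Frame component: whenever a new frame arrives, swap the sky texture.
// Register it before the <a-scene> is parsed, then use <a-sky live-preview>.
AFRAME.registerComponent('live-preview', {
  init: function () {
    window.addEventListener('preview-frame', (event) => {
      this.el.setAttribute('src', event.detail);
      // Old Object URLs should eventually be revoked to avoid leaking memory.
    });
  }
});
```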

So next steps for now are adding some actual UI. After the basics of the UI are finished, I will start hacking on my own Theta plugin. The current API limits the preview resolution to 1920x960 at 8 FPS, so I will have to write my own plugin that runs on the Theta and sends out 4K video at 30 FPS. While I am doing this I might switch to transmitting in a different format (since MJPEG isn’t actually a standard), but I am hesitant.

As always, my currently sloppy code can be found on GitHub here.


Congratulations! Thanks for the update. This is awesome.

The API for the camera Live View is here:

https://api.ricoh/docs/theta-plugin-reference/camera-api/

Full example with RTMP.

Really appreciate you sharing it with the community, Jake. Sounds like cool progress. Have fun building the UI; sometimes that’s the fun part.