Thank you for the article, that’s the kind of project I was looking for.
What I need to do is press a button and start a live stream on my computer from two THETA Vs for a real-time computer vision application. So the two video streams have to be synchronized to correctly reconstruct the 3D image.
I have no constraints on the use of USB or Wi-Fi for streaming. I will probably test both and see which one works best. Do you have any advice on that?
One of the big issues with this type of application is latency. You will likely experience a delay of at least 200 ms even over USB.
Most people pursuing this type of application are using Unity on a PC and connecting the THETA to the PC with a USB cable.
This project is also using Unity and a USB cable for the stream. It is only using a single camera.
If you use a USB cable, you must physically press the mode button on the THETA V to put the camera into live streaming mode. As far as I know, there is no way to have the camera automatically boot into live streaming mode, and no way to send an API command to the camera to switch it into live streaming mode. The UVC 1.5 stream won’t work with Linux without driver modification (no known solution). This means that if your application runs on Linux, you will need a Windows or Mac machine to receive the stream and send it to your application.
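To make the Linux workaround concrete: the Windows/Mac box that can read the UVC stream needs some way to forward frames to the Linux application. Here is a minimal Python sketch of one possible wire framing (a 4-byte length prefix per encoded frame). The framing is purely an illustrative assumption, not anything RICOH or existing projects actually use:

```python
import socket
import struct

# Hypothetical relay framing: the Windows/Mac box reads frames from the
# THETA V's UVC stream, encodes each one (e.g. as JPEG), and sends it to
# the Linux box with a 4-byte big-endian length prefix.

def pack_frame(payload: bytes) -> bytes:
    """Length-prefix one encoded frame for the wire."""
    return struct.pack(">I", len(payload)) + payload

def read_frame(sock: socket.socket) -> bytes:
    """Read one length-prefixed frame from a connected socket."""
    header = _read_exact(sock, 4)
    (length,) = struct.unpack(">I", header)
    return _read_exact(sock, length)

def _read_exact(sock: socket.socket, n: int) -> bytes:
    """Keep calling recv() until exactly n bytes have arrived."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf
```

In a real setup the sender side would loop over captured frames and `sendall(pack_frame(...))`, while the Linux application loops on `read_frame()`.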
The advantages of the USB cable:
more projects have tried this and thus there are more examples with Unity
seems to be more reliable
If you use the Wi-Fi plug-in (list of open source sample plug-ins available here), there is more heat, and the camera is more likely to overheat during 4K streaming over a 24 hour period. You may also have issues with Wi-Fi interference and dropped packets.
The advantages of the Wi-Fi plug-in approach:
HoloBuilder has tested streaming for 24 hours using Wi-Fi. So, the camera appears to be able to run for a while with an external USB power supply. Note that some people have reported heat problems.
you can send the stream using WebRTC or RTMP untethered
you can select stitched or unstitched
You may want to try and contact the researchers at TwinCamGo before making your purchase and get first-hand feedback on their design choices. I believe they are using the USB cable and Unity.
This is a new concept and there are limited examples of success. If you pursue this path, you would be pioneering new concepts.
Note that in the example below, TwinCam Go applied an unknown technique to reduce latency. If you just plug the THETA V into your PC, you will get 200 ms of latency on the stream. They’re doing a number of more advanced things.
Thank you for all the information about the USB and Wi-Fi connection modes. I will probably start with USB if it is more reliable. Is there no charging issue in USB mode, since it is plugged into a PC and not a wall outlet?
When will @jcasman do the interview? I am really curious to know how they manage to synchronize the two THETA V cameras and reduce the latency. I would be happy to discuss with them directly, but I could not find their email address. Do you know where I could get it?
There may be a problem with the camera losing charge when powered by the USB cable connected to the computer and streaming at the same time.
At trade shows, @jcasman will stream 4K video for 8 hours. By the end of the show, the camera is losing charge, but still operational. He’s using the laptop in this video, plugged into the USB 3.0 port.
He’s talking to a professor in Tokyo about the TwinCamGo project next week.
These are the questions Jesse plans to ask professor Ikei. Please feel free to add additional ideas.
Prof Ikei - TwinCam Go - Interview Questions
What are the advantages of using THETA Vs as a part of the TwinCam Go system?
Have you developed a THETA V plug-in to handle the live streaming? If not, how do you trigger live streaming?
People in the community have experimented with HEVC video stream encoding for the THETA V plug-ins. Is this type of compression relevant to your project? What video compression are you using? Do you know what the bitrate encoding is for the stream you are using? What framerate and resolution do you use for the video stream?
What are you using to transmit audio? Are you using mono, stereo, or 4 channel? If you are using 4 channel, what are some considerations for spatial audio standards? I hear the standards are still not fully settled. If you’re not using audio now, do you plan to incorporate audio in the future?
Do you have any charging issues with the THETA Vs? Have you added an extra battery or anything? Can the THETAs stay charged long enough using just USB?
You use Unity and WebRTC to display images in the Oculus headset. Is this code available to others?
WebRTC can be transmitted over the Internet. Have you seen issues with latency? Do you do any data compression, or is that handled by WebRTC? What have you done to handle latency issues?
Why did you choose WebRTC over RTMP or other protocols? Does WebRTC have lower latency compared to RTMP? What library are you using for WebRTC?
What about latency between the 2 THETA Vs? How is synchronizing the 2 cameras handled?
The Segway can spin in a full circle. Will the viewer in the chair also spin? In the demo (YouTube video) it appears there is a limited range of movement for the chair. Have you built in some ratio between the movement of the Segway and the movement of the chair?
How important is adding the kinesthetic sensation (9-axis motor) to the chair? Does this help limit nausea? Or does it just help increase the illusion of “immersion”?
Is this system available commercially? Can people in the US purchase it?
Like @codetricity says, if anyone has questions they’d like to ask, please add them here. Ideally, it will be by the end of today (Mon, Oct 22), since the interview will be tomorrow end of day (Tues, Oct 23). Thank you!
We heard from the TwinCamGo team that they have not resolved the problem of the THETA V losing power during long periods of live streaming. We are in the process of collecting more information. Wanted to give you a quick update on this piece of the puzzle.
Yes, we completed the interview and will post the article today. As the bulk of their work focuses on reducing latency to reduce VR sickness (motion sickness), the answer is more complex. However, they don’t synchronize the two cameras, as the latency difference between the two images is small enough not to affect their application right now.
There’s a longer set of information about what they’re doing to reduce latency and why they chose VP9 for video compression of the stream. He selected WebRTC instead of RTMP because of its lower latency. They used SkyWay by NTT Communications to implement WebRTC. Each THETA transmits single channel audio; together, they make stereo audio.
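They found the offset between the two cameras small enough to ignore, but if a 3D reconstruction pipeline did need tighter alignment, one generic way to measure the offset between two free-running streams is to cross-correlate a per-frame scalar signal (e.g. mean frame brightness) from each camera. A minimal sketch on synthetic data, assuming such signals are already extracted; this is not the TwinCam Go method:

```python
import numpy as np

def estimate_offset(sig_a: np.ndarray, sig_b: np.ndarray) -> int:
    """Return the lag d (in frames) such that sig_b[n] ~ sig_a[n + d].

    A negative d means sig_b trails (is delayed relative to) sig_a.
    """
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(a, b, mode="full")
    # re-center the peak index so that 0 means "already aligned"
    return int(np.argmax(corr)) - (len(b) - 1)

# Synthetic demo: sig_b is sig_a delayed by 3 frames.
rng = np.random.default_rng(0)
sig_a = rng.standard_normal(200)
sig_b = np.roll(sig_a, 3)
offset = estimate_offset(sig_a, sig_b)  # → -3 (sig_b trails by 3 frames)
```

In practice you would feed in per-frame brightness (or audio envelopes) sampled at the stream frame rate; sub-frame accuracy would require interpolating around the correlation peak.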
I have a question for you regarding audio. Do you need spatial audio (4 channel) for your application? Or is stereo audio good enough right now?
@dbraun We did the interview, and it was really cool talking with Professor Ikei. He’s really into continuing to improve the project; he says it’s not completed yet, and he wants to work on stability and other issues. He is planning to build a THETA V plug-in to improve control of the camera angles and more.