THETA S WiFi Streaming with Unity


This is a demo of live video streaming from the THETA S using WiFi and Unity.

If you’re interested in using the THETA for more projects with Unity or live streaming, take a look at global illumination with THETA and Unity, this demo about THETA live streaming on YouTube, this post about using THETA 360 video from a drone, and this tutorial for using THETA live view with Unity.

Tutorial: Live Ricoh Theta S Dual Fish Eye for SteamVR in Unity
Live preview in JavaScript

Are there any English translations for this? Google translate leaves many questions.




I think @jcasman can translate it in 10 minutes if you pressure him just a little. I think he likes the pressure because it helps us understand the information people want. BTW, thanks for coming to the meetup last night. :slight_smile: :theta:


@jcasman: Poke. Prod.


I translated the blog and readme.


THETA S Wi-Fi Streaming Demo with Unity

Things to Prepare

  1. THETA S
  2. A PC or Mac that is connected to Wi-Fi. iOS or Android will probably work as well.


  1. Turn the THETA S on with Wi-Fi mode enabled.
  2. Establish the Wi-Fi connection between your PC/Mac and the THETA. The password is printed on the THETA: the 8 characters after XS (not including XS).
  3. When you open the project in Unity, the streaming image will appear on the material of Sphere100.
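Before opening the Unity project, it can help to confirm that the Wi-Fi link to the camera actually works. Here is a minimal Python sketch, assuming the THETA's default access-point address of 192.168.1.1 and the standard Open Spherical Camera (OSC) `/osc/info` endpoint (the function names are my own):

```python
import json
import urllib.request

# Default IP address when your PC joins the THETA's own Wi-Fi network
THETA_AP_IP = "192.168.1.1"

def osc_url(path):
    """Build a URL for an Open Spherical Camera (OSC) endpoint on the THETA."""
    return "http://{}/osc/{}".format(THETA_AP_IP, path)

def check_connection():
    """GET /osc/info returns basic camera information if the link is up."""
    with urllib.request.urlopen(osc_url("info"), timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (requires the camera to actually be connected):
# print(check_connection()["model"])
```

If the request times out, the Wi-Fi connection to the camera is not established and the Unity project will not stream either.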


Translation of Noshipu’s blog

This article was originally written by @noshipu,
CEO of ViRD, Inc.


In order to use Wi-Fi live streaming, you must use the _getLivePreview API.
Official Reference

NOTE from Craig: This was replaced by getLivePreview in version 2.1 of the API. This blog by Noshipu-san refers to the 2.0 API, which is still supported by
the THETA S. Be aware of the differences in your code.
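To illustrate the difference between the two API versions, here is a sketch of the two request payloads. In v2.0 the command is camera._getLivePreview and requires a sessionId obtained from a prior camera.startSession call (the "SID_0001" value below is a hypothetical placeholder); in v2.1 it is camera.getLivePreview with no parameters:

```python
import json

# API v2.0 (THETA S): underscore-prefixed name, requires a sessionId
# from a prior camera.startSession call ("SID_0001" is a placeholder)
payload_v20 = {
    "name": "camera._getLivePreview",
    "parameters": {"sessionId": "SID_0001"},
}

# API v2.1: no session management needed
payload_v21 = {"name": "camera.getLivePreview"}

# Either payload is POSTed as JSON to /osc/commands/execute
body_v21 = json.dumps(payload_v21).encode("utf-8")
```

The encoded body is what goes into the POST request shown in the C# walkthrough below.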

Unlike the other APIs, _getLivePreview is different because the data is delivered as a continuous stream that keeps going. You will (probably) not be able to use a WWW class that waits until the request is complete.

NOTE from Craig: This is the major problem developers have when working with getLivePreview. As the data
is a stream, you can’t wait for the data to end before running your next command. For example, it’s
different from downloading and displaying an image or video file, where you know when the transfer is complete.

Processing Flow

Create an HttpWebRequest and set it up for the POST request

string url = "Enter the HTTP path of the THETA here";
var request = HttpWebRequest.Create(url);
HttpWebResponse response = null;
request.Method = "POST";
request.Timeout = (int)(30 * 10000f); // 300,000 ms so the stream does not time out
request.ContentType = "application/json; charset=utf-8";

byte[] postBytes = Encoding.Default.GetBytes("Put the JSON data here");
request.ContentLength = postBytes.Length;

Generate a BinaryReader to get the byte data (you read the bytes one by one)

// Send the POST data
Stream reqStream = request.GetRequestStream();
reqStream.Write(postBytes, 0, postBytes.Length);
reqStream.Close();
stream = request.GetResponse().GetResponseStream();

BinaryReader reader = new BinaryReader(new BufferedStream(stream), new System.Text.ASCIIEncoding());

Get the start and stop bytes of one frame of the Motion JPEG and cut out one frame

For each incoming byte, check for the Motion JPEG frame delimiter values.

0xFF 0xD8      --|
[jpeg data]      |--1 frame of MotionJPEG
0xFF 0xD9      --|
0xFF 0xD8      --|
[jpeg data]      |--1 frame of MotionJPEG
0xFF 0xD9      --|

Please refer to this answer on StackOverflow:
How to Parse MJPEG HTTP stream from IP camera?

The starting 2 bytes are 0xFF, 0xD8. The ending 2 bytes are 0xFF, 0xD9.

The code is shown below.

List<byte> imageBytes = new List<byte>();
bool isLoadStart = false; // flag: has the start-of-image marker been read?
byte oldByte = 0; // stores the previous byte
while (true) {
    byte byteData = reader.ReadByte();

    if (!isLoadStart) {
        if (oldByte == 0xFF && byteData == 0xD8) {
            // First and second bytes of the image (0xFF 0xD8):
            // we have the head of the image, so start collecting
            imageBytes.Add(0xFF);
            imageBytes.Add(0xD8);
            isLoadStart = true;
        }
    } else {
        // Put the image bytes into the array
        imageBytes.Add(byteData);

        // 0xFF -> 0xD9 is the end-of-image marker
        if (oldByte == 0xFF && byteData == 0xD9) {
            // As this is the end byte, generate the image from the data
            // and create the texture (see the next section), then clear
            // imageBytes and loop back to look for the next image head
            imageBytes.Clear();
            isLoadStart = false;
        }
    }
    oldByte = byteData;
}

Generating the Texture from the Frame Bytes

Use the collected frame bytes to update the texture:

mainTexture.LoadImage(imageBytes.ToArray());


If you prefer Python, here’s the relevant portion of the StackOverflow answer.

import cv2
import urllib
import numpy as np

# Python 2 code from the StackOverflow answer;
# replace the URL with your camera's MJPEG stream
stream = urllib.urlopen('http://your-camera-ip/video.mjpg')
bytes = ''
while True:
    bytes += stream.read(1024)
    a = bytes.find('\xff\xd8')  # JPEG start-of-image marker
    b = bytes.find('\xff\xd9')  # JPEG end-of-image marker
    if a != -1 and b != -1:
        jpg = bytes[a:b+2]
        bytes = bytes[b+2:]
        i = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.CV_LOAD_IMAGE_COLOR)
        cv2.imshow('i', i)
        if cv2.waitKey(1) == 27:
            break

MJPEG over HTTP is multipart/x-mixed-replace with boundary frame info, and the JPEG data is just sent in binary. So you don’t really need to care about the HTTP protocol headers. All JPEG frames start with the marker 0xff 0xd8 and end with 0xff 0xd9. The code above extracts such frames from the HTTP stream and decodes them one by one, like below.

0xff 0xd8      --|
[jpeg data]      |--this part is extracted and decoded
0xff 0xd9      --|
0xff 0xd8      --|
[jpeg data]      |--this part is extracted and decoded
0xff 0xd9      --|
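The same marker-scanning idea can be written as a small self-contained Python 3 helper, independent of any camera. This is a sketch; the function name extract_frames is my own:

```python
def extract_frames(buffer):
    """Extract complete JPEG frames from an MJPEG byte buffer.

    Returns (frames, remainder): every complete 0xFFD8 ... 0xFFD9 frame
    found, plus the leftover bytes to prepend to the next chunk read
    from the stream.
    """
    frames = []
    while True:
        start = buffer.find(b"\xff\xd8")  # JPEG start-of-image marker
        end = buffer.find(b"\xff\xd9")    # JPEG end-of-image marker
        if start == -1 or end == -1:
            break
        frames.append(buffer[start:end + 2])
        buffer = buffer[end + 2:]
    return frames, buffer

# Example with synthetic data: two complete frames and one partial frame
chunk = b"\xff\xd8AAA\xff\xd9\xff\xd8BBB\xff\xd9\xff\xd8CC"
frames, rest = extract_frames(chunk)
```

Because the stream never ends, you call this repeatedly on each chunk you read, carrying the remainder forward; each returned frame can then be decoded or, in Unity, passed to LoadImage.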


Spent some time playing with this. Simply open the project in Unity and it works.



It seems the THETA V cannot directly use this demo?
I ran it in Unity and an error shows a failure, starting from here:

if (myWww.error == null) {
		outputJson = JsonMapper.ToObject( myWww.text );
		print( myWww.text );
}


Well, I solved this problem.
This Wi-Fi stream now works on the THETA V too.

Then I tried to package it for HoloLens like other apps I have developed before,
but it failed. It seems some Unity APIs need to be updated, and their usage has changed.
I will continue to work on it.

Continuing the discussion from HoloLens?:


Well, I updated my Unity to 2018.2.14f1 and made a few modifications, then it worked on HoloLens.
However, there are some problems:
The delay is very serious. The THETA V camera stream showed only about 0.5 FPS.
I think the Wi-Fi signal and a code compatibility issue caused this problem together.
When I wear the HoloLens and walk, the HoloLens camera view moves with a big lag, as if there were not enough CPU resources.
When the camera and the HoloLens are separated by a wall, the signal is lost and the HoloLens may freeze.

Any good ideas?


Thanks for sharing your progress. Great news about the update.

The code base you are looking at is a bit old and uses Motion JPEG.

Another strategy could be to use the plug-in API.


With the Hololens, is the only way to get the video to the headset through Wi-Fi?

This technique of using an external signalling server might be a fast way for you to get something going.

As you’re in Japan, you could try and contact the team at TwinCam Go, Tokyo Metropolitan University. They may be able to give you some tips with their project.


Thank you so much!
Sorry for the late reply.
I will check those researches and try to find out some solutions.



I recently got a loaner Oculus Go and will use it to test Wi-Fi streaming with WebRTC and the NTT Enterprise Cloud WebRTC platform. I have not started the test yet. Note that the NTT Enterprise Cloud is free for community use with a sizable monthly quota.

There’s also plug-in code examples here:

The theta-plugin-live-streaming is a complete example using WebRTC SFU and the RICOH Cloud.



Sorry, I forgot to answer. The HoloLens is basically a wireless device with 3 connection methods: USB (small, like a cell phone), Bluetooth, and Wi-Fi. To get the video from a different room, Wi-Fi seems the most suitable one.
The new API you gave me is the Android standard; I don’t know how to use those APIs in Unity, and I don’t know whether it supports HoloLens. I am looking for information in the existing demos, but most of those demos are for Android… Well, I’m a little upset. Now I am learning WebRTC and Unity to try to figure something out.

By the way, I want to know the relationship between the RICOH THETA API v2.1 and the Plug-in API version 1. Can the latter be used on various platforms (Unity to HoloLens)?


Can you use another PC to run the Unity app and then stream it to HoloLens?

Connect the THETA V to the PC with a USB cable. Make sure the Unity app detects the THETA V as a webcam. Run the Unity app on your PC. Try the technique above to get the display onto the headset.

It looks like you should be able to use SteamVR with the Mixed Reality Headset.

Do you have the THETA V running inside of your Unity app on your PC using this technique?

Alternate technique

Have you tried this technique to use Wi-Fi streaming to another room into HoloLens?


If you’re building your app with Unity, you may not need to use either API. Most people are plugging the THETA V into a USB cable and connecting it to their PC. They run the Unity app on their PC and display it to the headset.