Connecting RICOH THETA Z1 with Unity

Hi @HunterGeitz,

This is what I have done in my code: in Start() you check whether any webcams are connected, and if none are, you end the program.

// Gets the list of devices and prints them to the console.
    private void Start()
    {
        WebCamDevice[] devices = WebCamTexture.devices;
        for (int i = 0; i < devices.Length; i++)
            Debug.Log(devices[i].name);

        defaultBackground = pcImage.texture;

        if (devices.Length == 0)
        {
            Debug.Log("No devices connected!");

            pcCamAvailable = false;
            phoneCamAvailable = false;

            return;
        }
        
        pcCam = new WebCamTexture(devices[0].name, Screen.width, Screen.height);
        pcCam.Play();
        pcImage.texture = pcCam;

        pcCamAvailable = true;

        if (devices.Length > 1)
        {
            phoneCam = new WebCamTexture(devices[1].name, Screen.width, Screen.height);
            phoneCam.Play();
            phoneImage.texture = phoneCam;

            phoneCamAvailable = true;
        }
    }

P.S.: If you need to specify the camera, you can compare the device name to a string. If it doesn't match, you either end the program or wait for a connection (be careful with infinite loops…).

if (devices[i].name != "My webcam name")
{
    camAvailable = false;
    return;
}
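Here's a rough, untested sketch of that waiting approach using a coroutine, so you don't block the main thread with an infinite loop. It assumes the pcCam / pcImage / pcCamAvailable fields from the snippet above, and you'll also need using System.Collections; for IEnumerator.

// Sketch: poll the device list once per second until the named camera appears,
// then start it. Call with StartCoroutine(WaitForCamera("My webcam name")).
private IEnumerator WaitForCamera(string cameraName)
{
    while (true)
    {
        foreach (WebCamDevice device in WebCamTexture.devices)
        {
            if (device.name == cameraName)
            {
                pcCam = new WebCamTexture(device.name, Screen.width, Screen.height);
                pcCam.Play();
                pcImage.texture = pcCam;
                pcCamAvailable = true;
                yield break;
            }
        }

        // Wait a second before checking again so we don't spin every frame.
        yield return new WaitForSeconds(1f);
    }
}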
1 Like

Regarding the 2K or 4K resolution, I’m uncertain. How can I verify if the camera is set to 2K or 4K?

All I did was adjust the capture resolution in OBS, but I believe this doesn’t affect the camera settings.

As for the Mac, I’m unsure. While Unity allows you to compile apps for almost every platform, I’m uncertain if you can run the Unity Editor on a Mac. Furthermore, I lack a Mac to conduct any tests :face_with_diagonal_mouth: (I’m a Google Fan :sweat_smile:).

This document has some information. It is not confidential as it is 2.5 years old.

private_beta_test_guide_2021_12_16.pdf - Google Drive

Note that although the document shows Unity running directly with a USB cable, something has changed in either Windows or Unity. When I tested Unity last year, I couldn’t get the stream to work unless I used the OBS virtual cam.

I seem to recall that the color format might impact Unity. Back when the driver was first written, I got some information from the developer of the driver regarding Unity.

With YUY2 color format output, you can use the driver with Unity and JavaScript Media API.

In addition to NV12 (native color format of the decoder), YUY2 color format is supported for Unity or JavaScript Media API.

A single driver provides all available image sizes and color formats. For compatibility with the non-standard Unity webcam interface, the default output is set to 3840x1920/YUY2. The default output is not the driver’s preferred format.
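As a rough, untested sketch (assuming the driver shows up as a regular webcam device whose name contains "RICOH THETA"; check the names Unity prints for WebCamTexture.devices on your machine), selecting it and requesting that default 3840x1920 output would look something like this:

using UnityEngine;

// Sketch only: find the THETA UVC device by name and request the driver's
// default 3840x1920 output. Matching on "RICOH THETA" is an assumption;
// print WebCamTexture.devices to see the exact name on your system.
public class ThetaUvcFeed : MonoBehaviour
{
    private WebCamTexture thetaTexture;

    void Start()
    {
        foreach (WebCamDevice device in WebCamTexture.devices)
        {
            if (device.name.Contains("RICOH THETA"))
            {
                thetaTexture = new WebCamTexture(device.name, 3840, 1920);
                thetaTexture.Play();

                // Show the feed on whatever Renderer this script sits on.
                GetComponent<Renderer>().material.mainTexture = thetaTexture;
                return;
            }
        }

        Debug.Log("No RICOH THETA UVC device found.");
    }
}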

2 Likes

I have some great news!

Following @craig's post:

I recently conducted tests using the older Unity version (2020.3.5f1) to determine whether the issue lay with Unity or Windows 11.

It turns out the problem stemmed from Unity itself. Upon further investigation, I stumbled upon this bug report: CRASH ON BASEVIDEOTEXTURE::INITVIDEOMEMORY WHEN CAPTURING THETA V’S OUTPUT TEXTURE

The good news? Unity has already developed a solution for it.

I proceeded to test it with the beta version (as I was unaware of other options at the time! :sweat_smile: ):
Unity 6000.0.0 Beta 11


Now we no longer require OBS; we can use the UVC driver directly within Unity.

Here’s an update on my latency testing:

The screenshot indicates a notable enhancement, with a reduction in latency of 163 ms, equating to a 38% improvement.

I came across an interesting observation when comparing the 2020 version with the Beta release: the time Unity takes to display the camera feed. I’ve termed this the “Average Unity Display Time.” Since I don’t have a direct means to measure this time, I estimated it by subtracting the average Z1-to-OBS latency from the average latency of the current method.

My assumption was that OBS instantly displays the feed, so the measured time was the duration it took for the Z1 to capture the image (265 ms).

With this in mind, here are my findings:

  • In the 2020 version, Unity introduced 48 ms of latency.
  • In the Beta release, we get -3 ms. This is within measurement noise, suggesting that Unity didn’t introduce any additional latency compared to OBS.

P.S. @jcasman, as I was browsing through Unity’s Forum, I stumbled upon one of your posts (Solution: Connecting RICOH THETA Z1 with Unity). I also left a comment there, sharing this update.

2 Likes

Wow, that’s fantastic. Nice digging and finding the error, the bug report, and the Unity solution! :tada: :medal_military:

I see your post in the Unity forum, too. That’s great to let Unity developers know about the solution.

Really impressive!

1 Like

I’m planning to delve deeper into the issue of resolution next, as it seems that the capturing time is the primary contributor to latency (around 260 ms).

I’m curious if there’s a method to set the output resolution of the Z1 to either 1920 or 3840 directly. While it was suggested that this could be achieved through OBS, I’m exploring alternative approaches since I’m no longer utilizing OBS.

Any insights or suggestions on this matter?

2 Likes

@Manuel_Couto I believe that you cannot set the output resolution of the Z1 for live streaming through the API.

However, my experience here is limited. @biviel, @craig or others may have some more insight.

From the RICOH API documentation:

1 Like

This is not that relevant since the example is with Linux.

The camera supports 2K streaming. It’s defined in the driver.

This provides a hint:

https://forum.unity.com/threads/is-there-any-possible-to-change-camera-quality-used-by-webcamtexture.461106/

mCamera = new WebCamTexture(1920, 1080, 30);

I have not tried it yet.

Maybe it would work?

It does seem that the resolutions are for the texture.
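If you want to check whether the requested size actually takes effect, an untested sketch like this logs what the driver negotiated. The width/height/FPS passed to WebCamTexture are only hints, and the reported size is only reliable once frames start arriving.

using UnityEngine;

// Sketch: request 1920x960 at 30 FPS from the first device and log what we
// actually got. Device index 0 and the 1920x960 request are assumptions.
public class ResolutionProbe : MonoBehaviour
{
    private WebCamTexture cam;

    void Start()
    {
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length == 0)
        {
            Debug.Log("No devices connected!");
            return;
        }

        cam = new WebCamTexture(devices[0].name, 1920, 960, 30);
        cam.Play();
    }

    void Update()
    {
        // Width/height are only meaningful once frames start arriving.
        if (cam != null && cam.didUpdateThisFrame)
        {
            Debug.Log($"Requested 1920x960, got {cam.width}x{cam.height}");
            enabled = false; // log once
        }
    }
}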

1 Like

Is there any way you could share your Unity file containing the scripts for putting the output of the Z1 directly onto the sphere in Unity without OBS? I'm assuming you still use the RTSP plug-in to connect the Z1 wirelessly, but I don't know how you would connect it directly to Unity without OBS.

I just ran a test with Unity and a THETA X without using OBS. This should also work with the Z1.

2 Likes

Greetings,

Because I’m aiming to achieve the lowest possible latency, I have opted out of the RTSP approach. Instead, I am now using the Z1 connected to the server PC and utilizing WebRTC to livestream to Unity.

The objective is to utilize a small PC, such as a Raspberry Pi or Jetson, to act as a server to capture the stream from the camera and transmit it via WebRTC.

As for RTSP, Unity does not directly support it; you would need a plugin or OBS for that. There is also another method for wireless transmission; however, this approach will limit you to approximately 10 FPS. (I can’t find the original post for this :face_with_diagonal_mouth:)

It’s possible to send motion JPEG directly to the headset over Wi-Fi if you use browser technology.

1 Like

Since I’ve been talking to ChatGPT, I asked it if I can use RTSP inside of Unity.

I got this response from ChatGPT, but did not try it.


Answer below from ChatGPT. Human did not test

Viewing an RTSP stream within Unity requires a bit more effort since Unity doesn’t natively support RTSP streams. You typically need to use a third-party plugin or library that can handle the RTSP protocol and video decoding, then display the video within your Unity scene. One of the more common approaches is to use the VLC Media Player library, which is capable of playing a wide variety of video formats, including RTSP streams. The LibVLCSharp wrapper allows you to use VLC functionalities within Unity projects.

Here’s a high-level overview of how you can achieve RTSP streaming within Unity using LibVLCSharp:

Step 1: Install LibVLCSharp in Unity

  1. Add the LibVLCSharp Unity Package: You will likely need to import the LibVLCSharp package for Unity. This can usually be done via the NuGet package manager or by manually adding the necessary DLLs to your Unity project. Check the LibVLCSharp GitHub repository for the latest instructions on how to do this in Unity.

Step 2: Set Up the Video Player

  1. Create a Script to Play the RTSP Stream: You will write a script that uses LibVLCSharp to play the RTSP stream. This script will be responsible for initializing VLC, setting up the media player, and playing the stream.

Example Script

Here is a simplified example of what this script might look like:


using System;
using UnityEngine;
using LibVLCSharp.Shared;

public class RTSPStreamPlayer : MonoBehaviour
{
    [SerializeField]
    private string rtspUrl = "your_rtsp_stream_url_here";

    private LibVLC _libVLC;
    private MediaPlayer _mediaPlayer;

    void Start()
    {
        // Initialize LibVLC
        Core.Initialize();
        _libVLC = new LibVLC();

        // Create a new MediaPlayer
        _mediaPlayer = new MediaPlayer(_libVLC);

        // Create a new media for the RTSP URL
        using (var media = new Media(_libVLC, rtspUrl, FromType.FromLocation))
        {
            _mediaPlayer.Play(media);
        }
    }

    void OnDestroy()
    {
        _mediaPlayer.Stop();
        _mediaPlayer.Dispose();
        _libVLC.Dispose();
    }
}

Step 3: Displaying the Stream

To display the video in your scene, you can use a VideoView object, which is part of LibVLCSharp. Attach the VideoView component to a GameObject in your scene, and reference it in your script to render the video output. You may need to adapt and extend the script to properly integrate with your Unity project, handling aspect ratios, and ensuring performance is optimized.

Additional Considerations

  • Performance: Streaming video, especially over RTSP, can be resource-intensive. Test performance and adjust your stream’s resolution and bitrate as needed.
  • Compatibility: Ensure that your target platforms (Windows, macOS, Android, iOS, etc.) support the version of LibVLCSharp you are using.
  • Security: If your RTSP stream is over the internet, consider the security implications and ensure your stream is properly secured.

This approach requires integrating external libraries and possibly dealing with platform-specific quirks, so be prepared for a bit of trial and error to get everything working smoothly.

1 Like

That’s the one I was looking for (post).

I tried it out, but I was unable to get more than 10 fps :frowning:

1 Like

I believe that your solution with a server is much better. The motionjpeg from live preview is going to have a lower fps and lower resolution.

How do you get Unity to consume the WebRTC stream? Are you using a separate server for signaling or STUN/TURN?

There were a number of projects in the past using NTT SkyWay.

Many years ago, RICOH had a web conferencing service that used WebRTC.

Currently, I’m utilizing the Bidirectional sample from Unity Render Streaming.

While I could provide guidance on setting it up, Unity does an excellent job of that already. However, if anyone requires assistance, I’m available to offer support.

Looking ahead, my goal is to enhance the sample. As I’m focusing on teleoperation, one of the webRTC peers will be a robot, necessitating a headless setup without human intervention. This setup could also prove useful, for instance, when traveling—you can simply connect your Theta to a Raspberry Pi and let it handle the rest. Additionally, I aim to enable streaming from multiple cameras.

For everyone else just trying to play with webRTC and Unity, the sample is a great tool.

(There’s one line in the code that needs to be added so you can retrieve the texture and output it to a sphere or skybox texture. However, since I don’t have access to my work PC at the moment, I can’t say for sure which line it is, but I will update this topic on Monday.)

Regarding the basic workflow and PC requirements for this:

  • Unity provides a web app server for signaling, which runs on one of the peers. You use the browser to select your camera, resolution, microphone, encoding method, and to see the other person’s live feedback. The second person is connected via the Unity sample.
  • The PC requirements for this setup aren’t too demanding. As I mentioned, I’m pretty sure a simple Raspberry Pi can handle the browser side (not tested yet), while a regular PC can run the Unity app. The advantage of having a better PC is for encoding and decoding, as it can help reduce latency.
1 Like

Would you be willing to make a step-by-step guide for this setup? I'm also aiming for the lowest possible latency and have no experience with WebRTC, so I would greatly appreciate it.

Sure, I can make something, although I will only be able to do it next week.

That would be very helpful; please reply when you do.

Greetings,
Sorry for taking so long, but these last couple of weeks have been chaotic.

OK, so here's how to use WebRTC for livestreaming from a RICOH THETA to Unity.


Requirements – (Unity Page)

This version of Render Streaming is compatible with the following versions of the Unity Editor

  • Unity 2020.3
  • Unity 2021.3
  • Unity 2022.3
  • Unity 2023.1

Platform

  • Windows (x64 only)
  • Linux
  • macOS (Intel and Apple Silicon)
  • iOS
  • Android (ARM64 only. ARMv7 is not supported)
NOTE

This package depends on the WebRTC package. If you build for mobile platform (iOS/Android), please see the package documentation to know the requirements for building.

Browser support

Unity Render Streaming supports almost all browsers that can use WebRTC.

| Browser | Windows | Mac | iOS | Android |
|---|---|---|---|---|
| Google Chrome | :white_check_mark: | :white_check_mark: | | :white_check_mark: |
| Safari | | :white_check_mark: | :white_check_mark: | |
| Firefox | :white_check_mark: | | | |
| Microsoft Edge (Chromium based) | :white_check_mark: | | | |
NOTE

It may not work properly on some browsers, depending on their level of WebRTC support.

NOTE

In Safari and iOS Safari, WebRTC features cannot be used with http. Instead, https must be used.


SETUP – (Unity Page)

1º- Create a Unity project. We used Unity 2022.3.17f1 LTS.

2º- Go to the Package Manager and Add by Name: com.unity.renderstreaming.
You will probably get a warning about the input system, press Yes and wait for Unity to restart.




3º- After installing the package, a new window will pop up: the “Render Stream Wizard”. This is used to correct all configurations needed for this project. Every time you open the project, this window will appear and check if everything is OK. If not, just press Fix All and Unity will handle the work for you. (Don’t close the Render Stream Wizard; you will need it in the next step.)

4º- The next thing to do is to download the WebApp. This is a JS application that you will run on one of the peers. In this case, because we are looking to implement this on a robot, we prefer running the JS on the robot side and the Unity app on the user side. This is also convenient if you are aiming to use a Raspberry Pi or a Jetson that you put in your backpack. That being said, the THETA camera will be connected by USB-C to the PC running the JS (PC_JS).
To download the WebApp, scroll down to the end of the Render Stream Wizard window. There you will find the option “Download latest version web app”. Save the file; we suggest creating a “WebApp” folder in Assets, the main reason being that in the future you may want to customize the app, and this way you already have everything organized.
(Now you can close the Render Stream Wizard window.)

5º- In the Package Manager import the Unity Render Streaming Samples.


Implementing the Theta

At this point, by using any of the samples provided by Unity you will have a fully working WebRTC communication channel.
So, how can we use this to livestream the 360° live feed from the THETA?
The best way I found was to build upon the Bidirectional sample. By doing this we can get a stream going in only two steps:

1º- Edit the Bidirectional script:

  • Force the script to always connect to the same channel ID (useful for automation in the future). In this case, the default channel is going to be 00000.
void Awake()
        {
            (...)
            setUpButton.onClick.AddListener(SetUp);
            hangUpButton.onClick.AddListener(HangUp);
            connectionIdInput.onValueChanged.AddListener(input => connectionId = input);

            // Replace this line with the following
            //connectionIdInput.text = $"{Random.Range(0, 99999):D5}";
            connectionIdInput.text = $"{00000:D5}";

            webcamSelectDropdown.onValueChanged.AddListener(index => webCamStreamer.sourceDeviceIndex = index);
            (...)
        }
private void HangUp()
        {
           (...)
            // Comment this line, so the ID channel doesn't change when the call is over;
            //connectionIdInput.text = $"{Random.Range(0, 99999):D5}";

            localVideoImage.texture = null;
        }
  • Retrieve the camera texture. In this case, we just need to create a variable for the material and add a line of code to assign the texture to the material.

Note: The material must be Unlit, or the image will be darkened.

        // Create a new variable for the material
        public Material cam360;

        void Awake()
        {
            (...)

            receiveVideoViewer.OnUpdateReceiveTexture += texture => remoteVideoImage.texture = texture;
            // Copy the line above and edit it so that the texture is saved in the new material
            receiveVideoViewer.OnUpdateReceiveTexture += texture => cam360.mainTexture = texture;


            (...)
        }

2º- Now that we have a material with the camera livestream, we can apply it to anything we need. In our case, we apply it to a sphere. For that, we must create a sphere and then apply the material to it. We also need to invert the sphere (flip its normals) so the texture is visible from the inside!

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class FlipNormals : MonoBehaviour
{
    [SerializeField]
    private Mesh mesh;

    // Start is called before the first frame update
    void Start()
    {
        flipNormals();
    }

    private void flipNormals()
    {
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] *= -1;

        mesh.normals = normals;

        for (int i = 0; i < mesh.subMeshCount; i++)
        {
            int[] tris = mesh.GetTriangles(i);
            for (int j = 0; j < tris.Length; j += 3)
            {
                //swap order of tri vertices
                int temp = tris[j];
                tris[j] = tris[j + 1];
                tris[j + 1] = temp;
            }

            mesh.SetTriangles(tris, i);
        }
    }
}
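One note on the script above (just a suggestion, untested with this exact setup): if FlipNormals is attached to the sphere itself, you can fetch the sphere's own mesh instance at runtime instead of assigning a mesh asset in the Inspector, which avoids permanently modifying the shared sphere asset.

    // Alternative to assigning the mesh in the Inspector: grab this object's own
    // mesh instance so the shared asset isn't touched.
    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh; // .mesh returns a per-object copy
        flipNormals();
    }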

Final Steps

Alright, by now you should have a fully operational livestream set up between PC_JS and Unity. However, before diving into testing, there are a couple of key checks to ensure everything runs smoothly.

  1. Firstly, confirm that you have the RICOH THETA UVC driver installed. You can download it from here.
  2. Secondly, you need to configure the signaling server IP. If you’re running the JS script on the same PC as Unity, you’re all set. However, if not, you’ll need to specify it. Here’s how you can do it:
  • In Unity, navigate to Edit > Project Settings > Render Streaming. You will find a URL resembling “ws://127.0.0.1”. This represents the internal IP of your machine and will work only if the JS script is executed on the same PC as the Unity project.
  • Now, run the JS script on a separate machine from the one running Unity. Upon starting, the script will display the server’s associated IP addresses. Here’s an example:
  • If both PCs are on the same network, use the Local IP (it’s safer and should work out of the box). If they’re on different networks, opt for the Global IP. In both cases, you may need to configure your firewall to open port 80. For the Global IP, additional configuration like port forwarding is required.
  • After choosing your IP, go back to Unity and replace the “127.0.0.1” with the IP you picked, like this:

Important Note: Manipulating IPs and configuring firewalls can pose security risks for those unfamiliar with the process. Opening ports is always a significant security concern. I'm not an expert in this field, so I can't provide further guidance here, but using a VPN is preferable to port forwarding.


Testing

Now it’s time for testing!

PC_JS

  • Connect the Theta to the PC (PC_JS);

  • Execute the JS script in PC_JS;

  • Open the browser in PC_JS and connect to: http://127.0.0.1. This will connect you to the signaling server.

  • This is the default page (you can change it, we will go over that in a bit). Select the Bidirectional Sample.

  • If no cameras and/or microphones are detected, the list will only show “camera 1” and/or “mic 1”. If that happens, don’t worry; click “Start Video”. You will then be asked to give permission to access the camera and the microphone. Allow it and reload the page.

  • Now the cameras and microphones will display properly.

  • Set the correct resolution (you can pick custom): 1024x512, 1920x960 or 3840x1920.

  • Press “Start Video”, the Local video should start:

  • Now input your Connection ID; if you changed the code in Unity, it should be 00000.

  • Press Setup (the connection will not start since we still have to run Unity).

Unity

  • Press Play;
  • Because this is bidirectional, you can pick a camera and a microphone on the Unity side;
  • Press Start Video/Mic;
  • Now, if you changed the code to force connection channel 00000, you just need to press SetUp; if not, input your channel and press SetUp.

  • Now you can disable the Canvas in the inspector (or add a button to do that) so you can clearly see the sphere.


Final Notes

  • Customizing the web application involves JavaScript scripting and HTML. While I’m not an expert in this area, you can find detailed guidance on the Unity page: Customize web application.
  • I’m exploring the implementation of this in VR. If anyone needs assistance, feel free to reach out, and I can create a separate post to address specific VR-related queries.
  • Once I make progress on customizing the WebApp and adapting the solution to run on a Raspberry Pi or other portable devices, I’ll provide updates in a subsequent post.

I believe I addressed everything in this post, but if someone finds any problems or notices something that I didn’t cover well enough, please let me know so I can update the post.

3 Likes