Successful Theta V stream from drone to VR headset 0.25 miles away

Thank you @Jake_Kenin !! :slight_smile:
It's too bad I didn't connect with you on here sooner and maybe coulda met up with ya somewhere in Japan! I'm Canadian but my wife is Japanese and I've lived here for about 3 years now… :slight_smile:
I brought my THETA into teamLab Planets, which was pretty cool, but my wife and I had a pretty negative experience with Borderless… oddly enough on "accessibility" issues (I use a wheelchair) and poor practices/policies for people with mobility challenges.

Are you able to confirm how you were able to effectively use the Osmo without the phone? Or… does the version 2 not "need" one to get it functioning?
I tried taping the THETA V to a phone, which worked, but the gimbal was stressed by the extra payload.
It won't function properly without a phone, though.

1 Like

If you have a blog or set of pictures of your bicycle trip, please share the link. :biking_man::rice_ball:

Also, thanks for mentioning teamLab Borderless. I had not heard of this before.

@Jake_Kenin or anyone else, I'm trying to get the MJPEG stream to appear in a browser window and am looking for a barebones example showing the simplest possible way to do that.

I am getting this error:

showPreview:1 Access to fetch at 'http://192.168.1.1/' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.

I notice that Jake used cors-anywhere

I'm not quite sure how to use this or how to get the live preview to appear in a web page.

Jake's comments referenced this code:

I've been trying to work with the aruntj code as it is smaller and more manageable for my limited knowledge. The amelia_viewer code has many more features. I first want to get any frame to appear and then add features.

I am using node, express, and request.
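
Here is the kind of proxy I'm experimenting with to get around the CORS error (a minimal sketch, assuming the THETA is in access point mode at 192.168.1.1 and that the express and request packages are installed; the /preview route name is just an example). The browser fetches the preview from the Express server's own origin, and the server forwards camera.getLivePreview from the camera.

```javascript
// proxy.js — minimal sketch, not production code.
const express = require('express');
const request = require('request');

const app = express();

app.get('/preview', (req, res) => {
  // Ask the camera to start the MotionJPEG preview (THETA Web API v2.1).
  const cameraReq = request({
    method: 'POST',
    url: 'http://192.168.1.1/osc/commands/execute',
    headers: { 'Content-Type': 'application/json;charset=utf-8' },
    body: JSON.stringify({ name: 'camera.getLivePreview' }),
  });

  // Forward the multipart/x-mixed-replace stream to the browser.
  // Because the browser fetches from our own origin, there is no CORS issue.
  cameraReq.on('response', (cameraRes) => {
    res.set('Content-Type', cameraRes.headers['content-type']);
  });
  cameraReq.on('error', () => res.end());
  cameraReq.pipe(res);
});

app.listen(3000, () => console.log('Open http://localhost:3000/preview'));
```

On the page itself, a plain `<img src="/preview">` is usually enough to render the multipart MJPEG stream, so no browser-side JavaScript is strictly required just to see frames.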

I've been testing Jake's code more and it is working nicely.

In addition, I've created executable binaries for Ubuntu (.deb) and Windows.

Release 0.1 Alpha · codetricity/amelia_viewer · GitHub

Update 2/26/2020

I have it working on Windows now from binary files.

I'm not sure how to create a Windows installer. At this stage, I have a large 89MB zip file that people can extract and then run amelia_viewer.exe by double-clicking it. At the moment, all the libraries are in the same folder, but it works on Windows 10.

The binary releases for Windows and Linux are at the link above.


2 Likes

I am a member of the team tasked with continuing this project at RIT. We have made improvements in creating a more elegant transmission hardware system and have added new functionality to the drone. The aspect that is still holding us back from meeting our requirements is the software side of the project.

We have made minor improvements to the software, and our performance is currently 1920x960 resolution at 15fps. This is being bottlenecked because the software achieves a max bit rate of 22 Mbps. When the data reaches the computer from the receive antenna, it is coming in at 120 Mbps (as reported by the Ricoh). However, after it runs through the Ricoh_api, the data rate drops down to 22 Mbps, which can only produce video quality of about 1920x960 at 15fps.

Things we have tried to fix this include using the software on a computer with more processing power, disabling the video rendering function, and using the software in Chrome instead of Firefox. Unfortunately, none of these changes has improved the data rate.

Our theory is that the read function of the Ricoh API is what is taking the most time to execute, and if we could speed that up, our video quality would increase. One thing we are trying right now is calling a C++ function from the JavaScript, since this tends to run faster (about 10x in our prototyping). We have yet to integrate this into the system, though.

Is it possible to modify the Ricoh_api to boost the bit rate? Any suggestions outside of this scope are also welcome.

-Sam Murray
RIT MSD Amelia Team
(more info located here: https://wiki.rit.edu/display/P203190/Amelia+Drone)

1 Like

Are you using this API?
https://api.ricoh/docs/theta-web-api-v2.1/commands/camera.get_live_preview/

to get MotionJPEG?

I believe that Jake tried or at least evaluated the API here:

https://api.ricoh/docs/theta-plugin-reference/camera-api/

As the camera runs Android OS internally, you may want to test with the RicMoviePreview API.

You can also use the Android NDK for some functions to speed up things internal to the camera.

The THETA V runs a Snapdragon 625.

To use this technique, you would need to write your own API server inside the camera and bypass the liveView webAPI.

No one has done this and achieved performance gains yet.

People have used adaptive streaming or techniques to show higher resolution only on a portion of the frame.

Reference for plug-in (running app directly in camera)

It would be awesome if you could share the code improvements.

Also, I want to share the binary executables for the Amelia Viewer on this site:

We have the Lockheed Amelia Drone listed, but I want to add a listing for just the viewer because other people can easily use the viewer without the cool drone hardware.

Is there a logo I can use for the software? I would also like to provide better attribution.

Hi Craig,

Thank you for your response!

We are not using the RICOH API. The previous team found a function online that reads the data stream in JPEG format (getThetaLivePreview() in ricoh_api.js), and then the web URL is updated for every new frame. We have used the Firefox browser's performance analysis tool to narrow down which section of the code was bottlenecking performance, and we've found that it occurs when the image buffer is filled with the frame data bytes inside the read() function. When the image resolution is lower, the buffer size is smaller, so the image buffer fills up much faster, and we were able to reach 1024x512 @ 30fps. On the other hand, the current process does not fill up the bigger buffer fast enough for the higher resolution at 1920x960, so we could only achieve 15fps.

In response to your suggestion on modifying/creating the camera API: we don't need to improve that portion of the system, as we've verified that the camera is capable of outputting the bit rate for 4K @ 30fps. We are currently trying to improve the rate at which we fill up the image buffer by creating a C++ function, called from the JavaScript, that writes the frame bytes to the buffer.
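
For anyone following along, the read() loop is doing roughly this kind of work (a simplified sketch, not the actual ricoh_api.js code): accumulate bytes from the stream, find the JPEG start-of-image and end-of-image markers, and hand off one complete frame at a time.

```javascript
// Simplified sketch of MJPEG frame extraction (not the actual ricoh_api.js
// implementation): buffer incoming chunks and cut out complete JPEG frames
// by scanning for the SOI (0xFFD8) and EOI (0xFFD9) markers.
const SOI = Buffer.from([0xff, 0xd8]);
const EOI = Buffer.from([0xff, 0xd9]);

function makeFrameExtractor(onFrame) {
  let pending = Buffer.alloc(0);
  return (chunk) => {
    pending = Buffer.concat([pending, chunk]);
    for (;;) {
      const start = pending.indexOf(SOI);
      if (start === -1) return;
      const end = pending.indexOf(EOI, start + 2);
      if (end === -1) return;             // wait for the rest of the frame
      onFrame(pending.slice(start, end + 2)); // one complete JPEG frame
      pending = pending.slice(end + 2);       // keep any partial next frame
    }
  };
}
```

Each extracted frame can then be shown by wrapping it in a Blob and updating an img element's src via URL.createObjectURL(), which matches the "web URL is updated for every new frame" behavior described above. In a sketch like this, the repeated Buffer.concat copy is the first thing to profile at higher resolutions; whether the real read() has a similar copy is worth checking.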

Thank you again for your time and suggestions.

Best Regards,
Team Amelia

1 Like

It's cool to see that this was continued! Last year was sort of a shit show because it was a whole lot of work and bootstrapping a bunch of different elements all on an incredibly tight budget, but now you guys can really focus in on improving certain elements and making it less janky.

We are not using the RICOH API

Just a heads up, you are in fact using the API @codetricity linked. ricoh_api.js is just a file I made that interfaces with that API lol.

as we've verified that the camera is capable of outputting the bit rate for 4K @ 30fps

Whoa, what? You got receipts for that lol? Is this a reference to the 120 Mbps number you mention earlier? Is that derived from the "live streaming" stat in the Theta V tech specs on their website? That number is regarding streaming over USB. Now, if the Theta V can encode 4K@30 and send it out over USB, then it may be possible to get the video out over the wireless controller as well. It depends on the wireless controller. If you wanted to do this, you are going to have to write your own plugin for the Theta V. And even if you do, it will probably be laggy as hell.

You should probably verify you can actually send 120 Mbps from the Theta V to the base station by writing some test software or using Linux commands from the Android OS. But even then, I think the real issue is that you are bumping into the limits of the live preview function of the Ricoh web API. The buffer isn't filling up slowly because the base station can't process the data fast enough, but because the Theta V isn't sending the data fast enough.

I think the number you're seeing, 15 FPS @ 22 Mbps, is in fact an accurate reflection of the system's max throughput given the Ricoh web API (22 Mbps at 15 FPS works out to roughly 180 KB per frame, a plausible size for a 1920x960 JPEG). I wouldn't be surprised if capturing, compressing, and sending 1920x1080 JPEGs @ 15 FPS is actually limited by the Theta V's own processing power and the wireless controller. The Theta V may simply not be able to generate and send you video frames faster than 15 FPS. So before embarking on optimizing the performance on the receiver side (base station), I would look at what is going on with the Theta V itself. If you want to increase the framerate, I think you are going to need to write your own plugin on the Theta V. This is all a hunch though, and you may want to verify this.

Personally though I think 15 FPS should be fine for most applications and I would focus on reducing latency, since it may be INCREDIBLY difficult to try and get a higher framerate. And even if you do get a higher framerate out of the camera, your data connection layer may not be able to pump it out fast enough without introducing some unacceptable latency. That said, it would be cool to see a 4K stream, even at just 8 FPS.

The current software you are using is sending MJPEG (motion-JPEG), so it's just a series of JPEG images, which are compressed images from the video being captured by the cameras. Note that if you reimplement this API, you can probably configure the quality of your JPEG compression, and that may give you more frames/less lag. So currently you are sending the video as a series of images. The alternative is to send an encoded video stream, which uses something like H264 or H265 to compress the video with both temporal AND spatial analysis. You are no longer sending/receiving individual frames, but a series of encoded data that software on the receiver knows how to decode into individual frames.

I would recommend you write your own Theta V plugin that establishes a WebRTC connection with the base station. WebRTC is a P2P media streaming standard that was designed for real time communication, and uses H.264 (which the Snapdragon 625 has hardware encoding for) for video encoding. It is well documented and is your best bet for low latency video streaming at high resolution and framerate. It is sort of complex and will take some time, but it would be well worth it. Here are some examples of WebRTC. Alternatively, you could try writing a plugin that still streams MJPEG, but converts the images to black and white to allow for higher resolution video at high framerates with less lag. Remember, as each frame requires less data, you will see a decrease in latency. This approach has the added bonus that it won't require you to completely rewrite the base station software.
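
To give you an idea of the base-station half of that (the camera plug-in side is the real work), a receive-only WebRTC viewer in the browser looks roughly like this sketch. The signaling step, sendOfferToCamera(), is a placeholder for whatever you implement over HTTP or a WebSocket; it is not an existing API, and the video#preview element is just an assumed element on the page.

```javascript
// Sketch of a receive-only WebRTC viewer (browser side / base station only).
// sendOfferToCamera() is a hypothetical signaling function you would write
// yourself; the camera plug-in would need to answer the offer and stream H.264.
const pc = new RTCPeerConnection();

// We only want to receive a single video track.
pc.addTransceiver('video', { direction: 'recvonly' });

// When the remote track arrives, attach it to a <video autoplay> element.
pc.ontrack = (event) => {
  document.querySelector('video#preview').srcObject = event.streams[0];
};

async function connect(sendOfferToCamera) {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const answer = await sendOfferToCamera(pc.localDescription); // placeholder
  await pc.setRemoteDescription(answer);
}
```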

I will note that you may be able to simply write a plugin that streams H.264/H.265 and see some improvements. The processor on the Theta V has hardware encoding for both of those.

General notes after skimming your project site:

  • I noticed you no longer have color video as a requirement. That's a good idea; you may be able to get better compression and streaming if it's just black and white! But this would require writing your own plugin, and even then the Theta V Android camera API may not support that.
  • In your math for required bandwidth you calculate throughput requirements for RAW video, and don't mention that there is no way to even stream RAW. You will be getting JPEG or H.264 compressed images/streams, so those numbers aren't valuable for anything except recognizing how magical compression is :wink:
  • This is probably way too late to be mentioning, but they do sell MUCH smaller 5.8/2.4 GHz WiFi receivers that are made for embedded systems. Like <500g systems. But integrating it with the Theta would be a major undertaking, and you need to build/buy your own antenna and base station system.
  • Bummed you didn't go with the auto-tracking system, but I totally get why you opted not to. Auto tracking is something that can always be tacked on later once throughput at range is verified with a given sensor configuration.
  • Good stuff setting up the retractable landing gear.
  • Really impressive transmission setup. Really good numbers too. Have you actually tested it outdoors yet, at range, and while pushing data? If that setup works that well, well hot damn. Good shit.
  • The Electron app (at least where I left it) did not handle the DHCP work. That was all done by the TFTPD application and done manually. Did you update the Electron app to use a DHCP Node module? I believe I mentioned that could be done somewhere. If so, good stuff.
  • I am unclear how you are getting 15 FPS at 1920x1080. I remember seeing this number, but figuring out the inner workings of the API wasn't my top priority at the time. The default Ricoh web API v2.1 claims to only support 8 FPS at 1920x1080. I think it is actually sending frames as fast as possible, and 8 is the guaranteed minimum FPS? Weird that the API documentation would be inaccurate/not mention this.
  • IDK why the Cloud Key needed to be purchased. This kind of configuration should have been possible through the web interface for the switch. Sucks that this ended up being a blocking item.
  • Use QGC's waypoint-based mission planner, so you don't have to worry about a pilot crashing the drone (like I did lol). Obviously keep someone on the sticks in case something goes wrong, but the PX4 on the drone will be able to fly fast and accurately to and from waypoints along a planned mission. This will also make testing a lot easier.
  • There is a bug in the way the Ricoh web API fetches and sends the live preview that causes weird stuttering. You can see it in one of our old videos. It is IMMEDIATELY nauseating in VR. As in, it will bring you to your knees, almost instantly. This bug alone is reason enough IMO to write your own plugin.

I am excited to see how things go! Keep posting and asking questions, I am always happy to contribute.

1 Like

@sjm7783
We have another camera model, the THETA Z1, that uses a faster MCU, something in the Snapdragon 800 series; I can't remember the exact model. The camera is heavier than the V and thus may not be appropriate for the drone. We may be able to loan the team a Z1 for a month of testing if you want to use it to isolate an MCU or GPU bottleneck with the THETA V. Let us know if you're interested.

I think Jake has some great ideas about converting the frames to black and white or writing a plug-in.

As you're doing a research project, I recommend you take a look at the plug-in technology. It's free to unlock developer mode on the camera and it will give you more freedom to experiment with different techniques. You can also grab more information from the camera logs.

With the plug-in technology, you can run your own server on the camera and replace the existing API server. You can then do one of the following as Jake suggested:

  1. reduce the color or quality of the JPEG frames using lossy compression prior to transmission (see the sketch after this list)
  2. switch to another protocol like WebRTC
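
To make option 1 concrete, the trade-off is simply JPEG quality (and optionally color) against bytes per frame. The Node sketch below uses the sharp package purely as an illustration of that trade-off; inside the camera, a plug-in would do the equivalent with the Android media APIs before transmitting each frame.

```javascript
// Illustration only: re-encode a JPEG frame at lower quality and in
// grayscale to see how much the per-frame byte count drops.
const sharp = require('sharp');

async function recompress(jpegFrame, quality = 60, grayscale = true) {
  let img = sharp(jpegFrame);
  if (grayscale) img = img.grayscale();          // drop color information
  const smaller = await img.jpeg({ quality }).toBuffer();
  console.log(`frame: ${jpegFrame.length} -> ${smaller.length} bytes`);
  return smaller;
}
```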

If you want to borrow the Z1 for testing, contact @jcasman

@sjm7783
This article published today may be useful for you as well.

1 Like

@craig @Jake_Kenin

Just a quick update on the project…

With COVID-19 causing college campuses around the country to close, MSD at RIT has been severely affected. Our team last met in person on March 5th, right before spring break, when we were working on writing our own plug-in and also trying to convert the current software to C++. After regrouping in mid-to-late March, our team was forced to halt any additional prototyping since we no longer have access to the equipment and cannot test or work on it together as we had been doing.

Our focus for the remainder of the semester has shifted to documenting what has been done, what our next steps would have been, and how we think the next team should approach the project. There is no guarantee that RIT will decide to continue this project next year, but we hope they do, as there is still much room for improvement.

Thank you for the help you provided!

I focused on the transmission hardware side of this project, so let me try to answer a few questions from previous posts that dealt with that.

Yeah, we were way under budget and tried to buy plug-and-play type devices to save time. This way we could focus more on other parts of the project.

Same thing really, just to save us time. We did suggest to a potential future team that this would be cool to implement to make the design sleeker.

The retractable landing gear was a good implementation for us, but it broke easily whenever we flew the drone. In the future, the landing gear may have to be redesigned altogether.

Yes, the results you are referring to were tested outdoors, pushing data, at max range, but with a single receiver. When we incorporated the full receiver setup with the switch, this data rate dropped a bit. We didn't have time to troubleshoot this issue, but from our preliminary research it was either due to improper mounting of the receivers by us, or the receivers' channel frequency needing to be selected so as to cut down interference from other receivers/noise in the area. Either way, this should not have been too difficult to correct.

From what we saw, yeah, it was sending frames as fast as possible, which is how we saw the 15 FPS number.

This was extremely frustrating and something that Ubiquiti should really address. It took us forever to figure out the setup, and we had to get some help from a professor at RIT who has extensive experience with Ubiquiti equipment. After the switch was set up, though, we never touched the Cloud Key again.

I really don't have the expertise to answer the other questions, but we are still updating our Confluence page (Amelia Drone - P20319 - RIT Wiki) with more information.

2 Likes

@sjm7783, @Jake_Kenin,

Thanks very much for the update. Hard to hear about the project being put on hold. How are the documentation updates going? Also, how many of you are graduating? Meaning, there's lots up in the air certainly, but assuming school life gets back to semi-normal at some point, how many of you will be back at school in the fall? Or, put it this way, are any of you seniors?

Hi, thanks for sharing all this work! It closely resembles the use case that I am thinking of, and I am hoping that someone in this community will be able to help me clarify some aspects.

I am working on a tele-operation system, where I plan to place a 360 camera with a remote robot arm and live stream the images to a VR headset. The VR headset will track the user's hands, and this hand movement will be used to control the robotic arm. Since I am still in the research phase, it is OK for me if the remote robot arm is actually still physically close to the user (i.e. at a distance where I can run a cable to the VR PC). For this specific setup, I have 2 questions:

  1. Obviously, latency is a huge question here, since I hope to test object manipulation tasks (like the user picking up a fragile block in this tele-operated setting), so pushing it down to 50 ms would be ideal. It seems like Jake's solution is the lowest-latency option so far, but my setup does not necessarily require wireless communication. Can I expect a decrease in latency if I livestream the 360 images via cable to my VR PC?

  2. Has anyone been successful in getting Jake's solution to run on a Theta Z1, and did that do anything to improve latency/fps/resolution?

Thank you very much for your generous open-sourcing of this project!
Best,
Femke

The Z1 has a limitation when streaming individual frames, as the frames need to be stitched. I can run Jake's solution with the Z1.

In this post, he indicated 250ms latency. Is there another post where he got it lower?

I do not think there is a way to get below 250ms. Over a USB cable, I could only achieve 250ms with the Z1. This is likely a limitation of the camera as it needs to stitch the frames.

Optimization - RICOH THETA Development on Linux

To stream unstitched frames, you would need to build a plug-in, and the transmission would need to be over Wi-Fi. There are no tests showing whether this would reduce the latency.

Test on September 22, 2021

Test environment

  • RICOH THETA Z1 firmware 2.00.1
  • Firefox 90.0.2
  • 1920x960 resolution @ 8fps
  • access point mode (theta is at 192.168.1.1 and functioning as the hotspot)
  • Windows 10 21H1

Results

Appears to work as expected. Only tested for a few minutes.

latency of around 600ms at 1920x960, 8fps

Left is the Amelia Viewer display; right is the source.

1024 x 512 @ 30fps

under 300ms latency

Left is the Amelia Viewer display.
Right is the source.

Hi Craig,

Thank you so much for your immediate action on my question! This is so much more than I could ever have hoped for. I think that I'll just have to wait for the tech to develop a little more before turning this into a crazy tele-operation application, but it's extremely valuable for me to know that without actually having to buy the camera and finding out that the latency is too high.

I got the 100 ms reference from the same link that you referred to, but I was referring to Jake's May '19 update, where he said that he got ~100 ms at 1024x512 @ 30fps at 0.25 miles. I could very well be misinterpreting something, since I've only been reading threads while you actually have the system running :wink:

Thanks again for your help!
Best,
Femke

Oh, I see. He wrote, "I can prioritize the most recent frame…"

I haven't looked at the details of his implementation, but he may have taken additional steps to improve latency. There may be another version of Jake's amelia_viewer or possibly there is something in the code I need to adjust.
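
For reference, "prioritize the most recent frame" usually means something like the pattern below (a general sketch, not Jake's actual implementation): the viewer keeps only the newest frame and drops anything older instead of letting frames queue up and add latency.

```javascript
// General "latest frame wins" pattern for a browser viewer (sketch only).
let latestFrame = null;

function onFrame(jpegBuffer) {
  latestFrame = jpegBuffer;            // overwrite; never queue old frames
}

function renderLoop(drawJpeg) {
  if (latestFrame) {
    const frame = latestFrame;
    latestFrame = null;
    drawJpeg(frame);                   // e.g. blit to a canvas or texture
  }
  requestAnimationFrame(() => renderLoop(drawJpeg));
}
```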

As a significant portion of the latency is the time taken to stitch the dual-fisheye image into equirectangular, a smaller frame may show lower latency.

When I did my test with the USB cable, I used 4K video. I may be able to get under 250ms if I used 2K video.

For telepresence, I was assuming that most people would want the 4K. However, is 2K good enough?

With the current technology, people are using two cameras for telepresence, one for non-360 (like a normal webcam) with lower latency, and one for 360 views.

If you incorporate two cameras into your robot, you can use the non-360 camera for manual navigation and the 360 camera for analysis of the surroundings as well as archival visual documentation.

The Amelia Drone Project was a research team at RIT that was experimenting with telepresence. They may have advanced the technology and achieved lower latency. Unfortunately, the project got shut down due to COVID-19, as the team couldn't meet. I hope they have started the project up again. They were pushing the limits of the camera technology and we were all benefiting from their research.

I just sent an email to the professor who used to oversee the project to see if they have started up the research group again.

Hi Craig,

It's a good question about 4K vs 2K. Obviously I'd prefer higher resolution, but I'd say that latency is more important for me than resolution. However, even if 2K proved sufficient, our latency would really have to be below 100 ms to say with confidence that the perceptual experiments that we plan to run with this setup are sound.

We plan to map hand movements of the user directly to tele-operated robotic arms+hands, which will also be in view of the camera (and thus the user). We'll ask the user to perform tasks that require dexterous manipulation, like picking up a slippery object. So, if it's obvious to the user that there is a delay between them moving their hands and the image displayed back to them, we'd be in trouble.

Nonetheless, thank you very much again for your efforts, and for contacting the group! If they continue their great work and this tech makes it to a latency of ~70 ms, we'd certainly be delighted to try it in our setup.

1 Like

Thank you for this great information.

We will pass the feature requirements on to management at RICOH.

Have a great day.

6 posts were split to a new topic: THETA X Antenna Tower Inspection System Live Stream to Quest 2 Headset