Help! Web API (and or BLE) to control ThetaV

I have a web-app that currently allows users to view and upload media through the normal html/js input-file setup. It works great: mobile phones, tablets, and PCs all operate as expected (on a PC you choose files to upload; on mobile/tablet you’re asked to pick a file or take a picture…perfect).

All my backend is node js, with a reverse proxy serving a few different apps through the one secure (https) port on our domain.

I am having trouble understanding my path to add theta control in the web-app…

How would I make POSTs from my https site to the http server running on the Theta V? If I actually connect the wifi to the Theta V, I will obviously not be connected to my server anymore (this is why I’m thinking I need a BLE solution, possibly).

I see lots of topics on here that get close to my situation, but not close enough. Is my best option to run another (non-secure) http server and issue POSTs from there? If so, I suppose I’ll then have to communicate between those two servers to get the data into the main https app.

Also, I am severely limited by my web-dev knowledge, as I only got into it maybe three months ago.

Any ideas? Thanks!


Web interface example to control camera.

If you want to build desktop apps with web technologies, you can use Electron.

A different approach is here

craig, thanks for the info dump. I’ve been ending up on a LOT of your resources, and I know the answer is in there somewhere, just have to keep digging.

One more question before I read all this, how am I to handle the need to connect to the wifi of the Theta whilst not losing connection to my node js server?

The theta is going to be used in the field, not connected to a wifi router with internet (i.e. I do not believe the client-mode solution would work), rather just to the phone/tablet of the user in the field. The same user also has to be connected to my server (via browser). The user wants to trigger the theta camera, and I need my backend to eventually get the image data, which I’ll then serve back to the browser as a thumbnail (and store the full-res image on the server, and write its path to a database).

Oh, if you’re using node, then run node on a computer and use client-mode authentication.

If you post more info on what you are doing, I can narrow down the recommendation.

Here’s the setup:

  1. You have a router in your office or home that is assigning the IP addresses. This is the normal router that everyone has in their home or office. In my office, the router is
  2. The router assigns the computer running node an IP address. It is
  3. The router assigns the THETA V (or multiple THETA Vs) an address (ideally a static IP)

Now, from your node computer, send the request to

Get Familiar with Client Mode

First, familiarize yourself with client mode as an end-user.

Once you understand the concept of client mode, then review the requirements for developers.

Once you understand the concepts, use this as a simple test of node with the THETA.

Thanks again Craig!

My situation is exactly this:

  • I serve multiple apps from a secure https-domain
  • My dev environment is at my house, and my clients/users are on mobile devices NOT on my network (mostly cellular data)
  • On the node js app, first a reverse proxy figures out (based on url path) which app to deliver the req to, then each app (also node js apps) operates securely through the proxy
  • In one of these apps, I need the client (on their mobile phone browser) to trigger the camera of a Theta that they have ON THEM, in the field. I need to then get that picture, upload it to my server (I’m already doing this for files/pictures they take with their cell phones) so I can serve it back out in another end-client-facing app (that uses these files along with other inspection data to help utilities manage their infrastructure).

So the Theta and my server will never be on the same network. The user will be on a mobile browser, connected most likely to cell-network only.

Right now I’m looking at the web API only, as I see in the docs that you can’t GET the image off the camera via Bluetooth alone…so I might as well keep developing with web control in mind.

I did find an npm module for theta BLE control, but again, it’s for a server connected to the camera. I need a solution for the browser client connected to the theta.

This is difficult as Android phones can’t use Wi-Fi and mobile data at the same time. At least, it’s not easy. They need to first take the picture, then disconnect from the camera and then send you the image with mobile data.

In this usage, I suspect that most people are building a mobile app that takes a picture over Wi-Fi. They then disconnect from Wi-Fi, connect to mobile data, and push the picture up to the cloud.
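The "take the picture over Wi-Fi" half of that workflow can be sketched for a browser (or Node 18+) connected to the camera’s Wi-Fi. The endpoints and fields below follow the THETA Web API; the camera address `192.168.1.1` (AP mode) and the 500 ms poll interval are assumptions. `camera.takePicture` returns an `id`; POSTing that `id` to `/osc/commands/status` reports `state: "inProgress"` until the shot is done, then a `fileUrl` for the image.

```javascript
const CAMERA = 'http://192.168.1.1'; // assumed: camera in AP mode

// Decide, from a /osc/commands/status response, whether the shot is
// finished and where the image lives.
function parseStatus(status) {
  return status.state === 'done'
    ? { done: true, fileUrl: status.results.fileUrl }
    : { done: false };
}

// Take a picture, poll until done, return the image URL.
async function captureAndGetUrl() {
  const exec = await fetch(`${CAMERA}/osc/commands/execute`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json;charset=utf-8' },
    body: JSON.stringify({ name: 'camera.takePicture' }),
  }).then((r) => r.json());

  while (true) {
    const status = await fetch(`${CAMERA}/osc/commands/status`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json;charset=utf-8' },
      body: JSON.stringify({ id: exec.id }),
    }).then((r) => r.json());

    const parsed = parseStatus(status);
    if (parsed.done) return parsed.fileUrl;
    await new Promise((resolve) => setTimeout(resolve, 500)); // wait before re-polling
  }
}
```

The app would then fetch that `fileUrl` into local storage before the phone drops the camera’s Wi-Fi and switches back to mobile data for the upload.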

Are some of your users on Android? Or, is everyone using an iPhone?

Yes, you can’t transfer the image from the camera to the mobile device with BLE. You need to use the Wi-Fi API (also known as the WebAPI).

I can’t predict, but I’d say most would be on some iOS device.

That workflow sounds annoying for the user, but I can try it to get something up and running. I’ll have to add checks in my existing app to properly handle their session ending and restarting on reconnection…but I can figure that out, I’m sure.

I can have the user do as you say…connect to the camera via wifi, take the picture, then upload it. How will they get the image onto their mobile device? Is that something the camera does now via the Ricoh app (allowing the user to download the image to their mobile device)? I just got my client’s new Theta V yesterday and have not actually used it extensively yet.

If that’s the workflow I end up on so be it, but it essentially would work more like any other files upload. Kind of a bummer, but a solution is a solution for now.

I wonder if utilizing a phone’s mobile hotspot would work…I could get the camera on the users “wifi” that way…

I agree, the solution with the Wi-Fi and mobile data is not convenient for the user. However, with time and some training, it is doable.

You can either use the official RICOH THETA mobile app (which is free) to take the image and transfer it to the phone, or you can write your own app. The image on the camera is exposed as a URL endpoint. Unfortunately, you can’t pull the image directly from the camera to your server. The mobile phone will need to pull the image to local storage, then push it up to your server once it re-establishes mobile data. This is likely not as convenient as you want, but it’s a limitation of mixing Wi-Fi and mobile-data traffic on Android phones, for security reasons.
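If you write your own app, the "pull the image to local storage" step starts with `camera.listFiles`, which returns entries whose `fileUrl` you can fetch directly from the camera. A small sketch of the request and response handling, with the assumption (worth verifying against the API docs for your firmware) that `entryCount: 1` returns the newest capture first:

```javascript
// Body for listing the most recent image on the camera.
// Parameter names follow the THETA Web API's camera.listFiles command;
// maxThumbSize: 0 skips thumbnail data to keep the response small.
function latestImageRequest() {
  return {
    name: 'camera.listFiles',
    parameters: { fileType: 'image', entryCount: 1, maxThumbSize: 0 },
  };
}

// Pull the fileUrl(s) out of a listFiles response so the app can
// fetch each image into local storage.
function fileUrls(listFilesResponse) {
  return listFilesResponse.results.entries.map((e) => e.fileUrl);
}
```

The app POSTs `latestImageRequest()` to `/osc/commands/execute`, fetches each returned `fileUrl` while still on the camera’s Wi-Fi, and only then switches back to mobile data to push the file to your server.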

More information about this problem is below

Again, really appreciate all the help. Wish I would’ve reached out yesterday!

I will now focus my efforts on maintaining the data I need between sessions on my server, and account for the user dropping out, taking their picture, then reconnecting for upload. Maybe I can help them out by presetting the input file’s directory to the one where the mobile app saves it (depending on agent/OS).

Also, I keep running into Electron (like you mentioned) and other software that lets you write native apps in languages like Java/Python…that sounds like my next experiment, as the native experience seems more natural for the things I’m getting into recently. I know C++, but only from my experience with Unreal Engine, and not outside of its dev environment (which deviates from vanilla C++).

Anyway, thanks a ton.