Photogrammetry, Object Tracking, Performance Optimization

I’m interested in writing software (plugins) that runs onboard the camera, performs some image processing in real time, and logs statistics that can later be retrieved over the web or USB interface.

Depending on how much processing power is available, I’d ideally like to have all the processing onboard the Theta V, but I’d likely end up offloading some processing to a Raspberry Pi and/or an Android device.

What I’d like to do:

  • Capture time-lapse of entire sky (1 frame every 10 to 30 seconds)
  • Track local markers (i.e. some orange dots … maybe at 30 Hz)
  • Compute depth maps (if processor can do it)
  • Compute motion flow vectors
  • Assemble small low-res video clips of objects in the scene
  • Calculate distance to objects in scene using GPS and sensor data
  • lots more stuff :slight_smile:
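
On the distance-to-objects item: with GPS giving the camera’s position at two points along the flight path and the orientation sensors giving a bearing to the same object from each point, flat-earth triangulation is one simple starting point. A minimal sketch (the function name and the flat-earth/known-bearing setup are my assumptions, not from any Theta library):

```python
import math

def triangulate_distance(baseline_m, angle1_deg, angle2_deg):
    """Distance from the second observation point to an object sighted
    from two points a known baseline apart (flat-earth approximation).

    angle1/angle2 are the interior angles between the baseline and the
    line of sight at each observation point, in degrees.
    """
    a1 = math.radians(angle1_deg)
    a2 = math.radians(angle2_deg)
    # Law of sines: the side opposite angle1 is the range from point 2,
    # and the angle at the object is pi - a1 - a2.
    return baseline_m * math.sin(a1) / math.sin(a1 + a2)

# Example: 100 m baseline, 45 degree sight angles at both ends
print(round(triangulate_distance(100.0, 45.0, 45.0), 2))  # prints 70.71
```

Accuracy will be dominated by GPS/attitude error over short baselines, so longer separations between the two frames help.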

The tricky bit is that the camera will always be in motion and oriented off-axis, so one of my initial challenges is to use the orientation data to create normalized (level, north-facing) spherical images so that features can be correlated across images.
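
For what it’s worth, the normalization itself is just a per-pixel rotation of the viewing direction before the equirectangular lookup. A rough CPU sketch in Python/NumPy follows (nearest-neighbor sampling, and the yaw/pitch/roll convention is an assumption — the Theta’s actual attitude-sensor axes would need to be mapped onto it). The same math ports directly to a GL fragment shader:

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Z-Y-X (yaw, pitch, roll) rotation; angles in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def normalize_equirect(img, yaw, pitch, roll):
    """Re-project an equirectangular image so it is level and north-facing.

    img: H x W (x C) array; angles: camera attitude in radians.
    Nearest-neighbor for brevity; use cv2.remap or a GL shader with
    bilinear filtering in practice.
    """
    h, w = img.shape[:2]
    # Output pixel centers -> longitude/latitude on the sphere
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Unit view vectors in the level, north-facing (world) frame
    v = np.stack([np.cos(lat) * np.cos(lon),
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)], axis=-1)
    # Rotate world directions into the camera frame
    # (v @ R applies R transposed, i.e. the inverse rotation, per pixel)
    vc = v @ rotation_matrix(yaw, pitch, roll)
    # Back to source pixel coordinates in the captured image
    src_lon = np.arctan2(vc[..., 1], vc[..., 0])
    src_lat = np.arcsin(np.clip(vc[..., 2], -1.0, 1.0))
    sx = np.rint((src_lon + np.pi) / (2 * np.pi) * w - 0.5).astype(int) % w
    sy = np.rint((np.pi / 2 - src_lat) / np.pi * h - 0.5).astype(int).clip(0, h - 1)
    return img[sy, sx]
```

With zero attitude it’s a no-op, and a pure yaw of one column width just rolls the image horizontally, which makes it easy to sanity-check before worrying about the sensor-axis mapping.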

The camera will be running on an aircraft. To get an idea of the sort of environment it’s running in, check out this video …

I’m sure the camera will struggle to achieve some of the computationally expensive tasks but I’m hoping I can get it working mostly on the camera without having to resort to using a second computer.

I started doing development using an Insta360 and doing all the processing on an Android tablet but the idea of having a self-contained camera like the Theta V that does it all onboard is very appealing.

If you have any suggestions on how to tackle this in terms of which camera resolution, libraries, FastCV vs. OpenCV vs. OpenGL shaders, etc. are likely to give the best overall performance, I’d be very interested. I’ve skimmed most of the posts here and didn’t spot much info about performance benchmarks.

It would be good to know how much time per frame is needed for capture at various camera settings, and any tricks there are for speeding things up. Also (I know this is probably a stretch), some estimates of the time to apply a simple GL shader, FastCV, or OpenCV operation at various image sizes. If anyone has ballpark estimates, it would save a ton of time and make design decisions easier.
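
In case it helps anyone collect those numbers: no camera-side figures to offer, but a generic harness like this (plain Python/NumPy on a desktop; a grayscale conversion stands in for a real CV op, and on the camera the same warm-up-then-median pattern works in Java) is how I’d measure per-operation cost:

```python
import time
import numpy as np

def time_op(op, frame, warmup=3, runs=20):
    """Median wall-clock time of op(frame) in milliseconds."""
    for _ in range(warmup):          # let caches and allocators settle
        op(frame)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        op(frame)
        samples.append((time.perf_counter() - t0) * 1e3)
    return float(np.median(samples))

def to_gray(f):
    """BGR -> grayscale, standing in for a typical per-frame CV operation."""
    return (0.114 * f[..., 0] + 0.587 * f[..., 1] + 0.299 * f[..., 2]).astype(np.uint8)

frame = np.random.randint(0, 256, (960, 1920, 3), dtype=np.uint8)
print(f"grayscale: {time_op(to_gray, frame):.2f} ms/frame")
```

Using the median rather than the mean keeps one-off scheduler hiccups (likely on a busy camera OS) from skewing the numbers.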

I have seen a lot of very interesting posts about running TensorFlow, applying CV operations, etc., but often there wasn’t much mention of how quickly these could be performed, and I didn’t notice any posts about optimization. (I’m leaning toward using GL shaders for everything … not sure how much overhead there is in the CV libraries.) Can camera features be disabled to speed things up? (i.e. I saw posts about saving dual-fisheye vs. rectilinear … is this faster?)

I have a ton of other questions, but for now, if anyone has suggestions on how to produce normalized (level, north-facing) dual-fisheye images that could run on the camera, that would be a great help. In my earlier Insta360 app I did it with GL shaders, but it was messy. Hopefully someone has come across a clever approach to this for moving time-lapse videos.

Thanks in advance.


Do you have a THETA V or a THETA Z1?

The stitching internal to the camera is computationally expensive. You can disable the stitching with either the WebAPI or the internal CameraAPI running on the internal Android OS. Although I don’t have benchmarks, I suspect that the fastest way will be to use the CameraAPI.
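
For reference, over the WebAPI the stitching mode is exposed as an option — on recent firmware it’s `_imageStitching`, with `none` keeping dual-fisheye output, though the option name and supported values should be double-checked against the API reference for your model. A sketch of the request, assuming the usual 192.168.1.1 access-point address:

```python
import json
import urllib.request

# Option name/value taken from the THETA Web API v2.1 docs as I recall
# them (_imageStitching: "none" => dual-fisheye, no onboard stitch);
# verify against the reference for your camera model and firmware.
PAYLOAD = {
    "name": "camera.setOptions",
    "parameters": {"options": {"_imageStitching": "none"}},
}

def disable_stitching(camera_ip="192.168.1.1"):
    """POST camera.setOptions to the camera (requires a live connection)."""
    req = urllib.request.Request(
        "http://%s/osc/commands/execute" % camera_ip,
        data=json.dumps(PAYLOAD).encode("utf-8"),
        headers={"Content-Type": "application/json;charset=utf-8"},
    )
    return urllib.request.urlopen(req)
```

From a plugin, the CameraAPI route sets the equivalent stitching parameter directly on the Android side, which avoids the HTTP round trip entirely.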

Another performance hit with the WebAPI is the switch back to the Android OS, which you’ll need to do to process the image with either OpenCV or OpenGL.

https://api.ricoh/docs/theta-plugin-reference/camera-api/

OpenGL information is here:

Code is here:

Note that I was not able to get the surface to a Bitmap to save to the SDcard. I’ll put a note in the comments of the original Japanese article to the author Meronpan to ask about performance of OpenGL versus OpenCV.


Regarding FastCV versus OpenCV: although FastCV uses hardware acceleration, most people use OpenCV because there are more examples.

I don’t think anyone has timed the processing, but you could try running some tests with this code base:

and swap out the FastCV library calls for OpenCV.

This presentation has some interesting ideas.


Hi Craig,

Thanks for getting back to me so quickly.

I am working with the Theta V right now … I might look into getting a Z1 depending on how things go.

I really appreciate all the links. I am really impressed with the Theta360 community and how much work you’ve put into supporting it. It’s great to see!! (Ricoh needs to up their sponsorship :P)

I’ll dig into some of the samples you’ve included and see what sort of performance I can get out of the camera. As a beginner there are probably a lot of simple optimizations I’m not aware of but I’ll try to document anything I discover.

I ran the 3DMark benchmarking app on the Theta V and the results were somewhat promising. The most demanding test, Sling Shot Extreme, which exercises OpenGL ES 3.1 and Vulkan, didn’t do that great: an overall score of 465 (roughly on par with an iPhone 4, it looks like) and frame rates around 3 fps on the graphics tests.

The older “Ice Storm Extreme” test, which exercises OpenGL ES 2.0, ran very well though, with an overall score of 13693 and frame rates from 45 fps to 78 fps. Watching the animations through Vysor, they were very smooth. The shaders I’m planning should be fairly lightweight, so hopefully I can get some decent results.

I’m going to try out some of the projects you linked to. Hopefully I can get past that issue you were having with saving the surface to a Bitmap so that I can save the result. Thank you very much for asking the author Meronpan about this issue for me.


Thanks for these tests. The Snapdragon 625 with the Adreno 506 GPU does support OpenGL ES 3.1:
https://www.qualcomm.com/products/snapdragon-625-mobile-platform

However, maybe that’s a tough benchmark. I can try running the test on a Z1 if you think it would be useful.

It seems like you’re leaning more toward OpenGL. I was going to ask the engineer who wrote an article on FastCV whether they did any benchmark tests versus OpenCV, but maybe it’s better to focus on OpenGL tests for now.

If someone else wants to try the test on a Z1, I’m planning to use this:

https://3dmark.en.uptodown.com/android/download

I can’t figure out how to start the benchmark above. Will try this one next.

https://apkpure.com/gfxbench-benchmark/com.glbenchmark.glbenchmark27/download?from=details

Currently downloading 660MB of files to my Z1. I hope that I’m following the correct process.

These look like different tests. I’m going back to the 3DMark one to give it another shot. I think I need to download files per test.


Currently installing 102MB of Sling Shot test files. Will then install 135MB of Ice Storm test files. The Sling Shot Extreme benchmark seems to be running. Looks a bit jumpy on the screen. There are some nice sounds coming out of the internal camera speaker. Kind of wild.

Performing the lighter-weight Ice Storm Unlimited now and it looks super smooth and fast.

The lower score may be due to background processes.