Z1 very dark video outdoors

Hi, @craig , yes, I know it’s not possible, but wouldn’t it be possible to ask Ricoh for it? The image quality of my plugin is superior, but for recording video it’s useless in a lot of scenarios because Ricoh’s desktop app does not support videos recorded with the same camera via a plugin. What I suggest is a solution that could work through multiple options:

  1. Ricoh could expose, through a compiled API (like CameraX.jar for the Theta X), a method that applies the required metadata to plugin-recorded video files. This way they could still hide the secret key generation and make sure that only videos created with Ricoh devices can use the desktop app, while also covering videos recorded by plugins.

  2. Ricoh could slightly modify their desktop app so that it reads the IMU data (or only the data the desktop app actually uses; it may not need all three of gyroscope, accelerometer and magnetic field) not from the video file itself, but from an external resource file that is also generated by a plugin. Again, they wouldn’t have to share their custom solution and would keep all processes under their brand, BUT new options would open up for developers, providing much better quality videos, at least including stabilization.

Street View Studio, made by Google, is able to apply .gpx data files to videos, so it shouldn’t be an issue to slightly modify the desktop app to read a similar file and use it for stabilization; clearly there would be differences, since the file would contain IMU data instead of GPS.
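To make the idea concrete, here is a minimal sketch of how a plugin could log the three sensors into a sidecar file next to the video, using only the standard Android SensorManager API. The file name, the CSV layout and the column order are just my assumptions for illustration; this is not a format the desktop app understands today.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

import java.io.FileWriter;
import java.io.IOException;

/**
 * Hypothetical IMU sidecar logger: writes timestamped gyro/accelerometer/
 * magnetometer samples to a CSV file recorded alongside the video.
 * The CSV layout is an assumption for illustration only.
 */
public class ImuSidecarLogger implements SensorEventListener {
    private final SensorManager sensorManager;
    private final FileWriter writer;

    public ImuSidecarLogger(Context context, String sidecarPath) throws IOException {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        writer = new FileWriter(sidecarPath);
        // One line per sample: sensor type, timestamp (ns), x, y, z
        writer.write("sensor,timestamp_ns,x,y,z\n");
    }

    public void start() {
        register(Sensor.TYPE_GYROSCOPE);
        register(Sensor.TYPE_ACCELEROMETER);
        register(Sensor.TYPE_MAGNETIC_FIELD);
    }

    private void register(int type) {
        Sensor sensor = sensorManager.getDefaultSensor(type);
        if (sensor != null) {
            sensorManager.registerListener(this, sensor, SensorManager.SENSOR_DELAY_GAME);
        }
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        try {
            writer.write(String.format("%d,%d,%.6f,%.6f,%.6f%n",
                    event.sensor.getType(), event.timestamp,
                    event.values[0], event.values[1], event.values[2]));
        } catch (IOException ignored) {
            // A real plugin should report/handle this properly.
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this sketch.
    }

    public void stop() throws IOException {
        sensorManager.unregisterListener(this);
        writer.close();
    }
}
```

The desktop app would of course have to agree on the exact format; the point is only that a plugin can already capture this data with standard Android APIs.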

I have this in an agenda where I will advocate for this use case. However, my limited role is to summarize information from the community.

The main issue is that if the desktop app cannot already provide the feature, I can only submit a community feedback summary.


@biviel I discussed your idea with people at RICOH. Unfortunately, there are no feasible ideas on how people can use the desktop app for video stabilization on the video file generated by your plug-in.


Hi, @craig , thanks for sharing! The answer you shared leads to some other questions about how stabilization currently works on the Z1. Maybe I misunderstood. Let me think this through.

If you really want to pursue an improved video with the plug-in and need the video stabilization, I can put these two ideas into a slide to make it easier to understand.

In general, these types of ideas would take a long time to go through an approval process and get implemented.

  1. Ricoh could expose, through a compiled API (like CameraX.jar for the Theta X), a method that applies the required metadata to plugin-recorded video files. This way they could still hide the secret key generation and make sure that only videos created with Ricoh devices can use the desktop app, while also covering videos recorded by plugins.
  2. Ricoh could slightly modify their desktop app so that it reads the IMU data (or only the data the desktop app actually uses; it may not need all three of gyroscope, accelerometer and magnetic field) not from the video file itself, but from an external resource file that is also generated by a plugin. Again, they wouldn’t have to share their custom solution and would keep all processes under their brand, BUT new options would open up for developers, providing much better quality videos, at least including stabilization.

Hi @craig,

Ricoh’s stabilization (top-bottom correction process):
If a video is recorded on the Theta Z1 with top/bottom correction turned on but stitching off, then after dropping the recorded video file into the Ricoh Theta desktop app, the app renders a new video file that is stabilized and stitched. This clearly works because Ricoh’s built-in recording process embeds gyroscope, accelerometer and magnetic field data directly in the video. The desktop app uses this data to “correct” the file; top/bottom correction is nothing else than stabilization, using the changes on the three axes to stabilize movement.

A third-party/plugin stabilization process could look like this:
If a video is recorded using my plugin in HDR quality mode with stitching off, the video file will not include gyroscope/accelerometer/magnetic field data, and its metadata will probably not be compatible with the Theta desktop app. So even though these videos are superior to the original recordings, they cannot be dropped into the desktop app for stitching, nor will top/bottom correction be done. For this I asked for two things:

  1. Provide an option in the desktop app to read an external file that contains the same data needed for top/bottom correction, so that videos recorded with my plugin could also be top/bottom corrected by the Theta desktop app.
  2. Some way would still be required to apply the correct metadata to video files that are not recorded directly with the built-in Ricoh recorder, but with a plugin. I understand that Ricoh doesn’t want to expose exactly how they detect whether a video was recorded by a plugin, another tool, or the built-in video recorder; that’s why I suggest providing an API method so that after a file is recorded, we could run that method over the file and the necessary metadata would be added (a rough sketch of what such a call could look like is below). We are using the same hardware, so we should be able to generate video files in the same format, encoding, etc.
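To illustrate the kind of API method I mean, here is a sketch. Everything in it — the class name, the method and its behaviour — is hypothetical and does not exist today; the point is only that Ricoh could keep the signing/metadata logic inside a compiled library and the plugin would just hand over the finished file.

```java
import java.io.File;
import java.io.IOException;

/**
 * Hypothetical example of the kind of API Ricoh could ship in a compiled .jar
 * (similar in spirit to CameraX.jar on the Theta X). Nothing here exists today;
 * the class name, method and behaviour are assumptions used only to illustrate
 * the request.
 */
public final class ThetaVideoMetadata {

    private ThetaVideoMetadata() { }

    /**
     * Would add the device-specific metadata (and whatever internally generated
     * signature Ricoh uses) to a video file recorded by a plugin on the same
     * camera, so that the Ricoh Theta desktop app accepts it for stitching and
     * top/bottom correction.
     */
    public static void applyDeviceMetadata(File pluginRecordedVideo) throws IOException {
        // The implementation would live inside Ricoh's compiled library,
        // keeping the secret key generation hidden from plugin developers.
        throw new UnsupportedOperationException("Illustration only - no such API exists");
    }
}
```

A plugin would then simply call something like `ThetaVideoMetadata.applyDeviceMetadata(new File(recordedPath))` once the recording stops.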

So maybe they misunderstood “stabilization”. Could you please ask about the above again, but highlighting top/bottom correction instead? It would be a huge improvement if plugins could become compatible with the desktop app. Changing the desktop app’s code to read the top/bottom metadata from the video file, or from another external file, shouldn’t be a big change…

Does it make sense? Or is it too much to ask for? :frowning:

Hi,
were you able to run it through the SLAM program? How was the outcome without stabilization?

Thanks!

That is the use case, @craig. I just want to get out of the shot before recording and have a beep to let me know it started.

I tried a few different videos run through SLAM. Sadly none of the ones from your plugin, @biviel, worked very well. They were all off in the Z axis by a lot (need that IMU data). The SLAM process was able to pick out more points due to the HDR on some of the tests. I still have one more test to do with your plugin, and that’s the world lock non-HDR. But I do not see much point, since I could just use the default inbuilt settings, other than being able to set my FPS in your plugin.
One thing to note: using the default Ricoh video recording and running it through their desktop app / route creation, it appears to produce a good alignment in X and Y looking at the little window, but when run through SLAM it’s not so good in X and Y. Z seems to be pretty good. I tested with 10-meter elevation changes to see how it would do.

Hi,
Thanks. The problem is that I could in theory implement IMU-based stabilization similar to Ricoh’s, but I’m not sure about the effort at the moment. I will check… Did you try to use a stabilizer like the DJI Osmo 4 SE? I’m using one quite successfully; it’s visible when looking down, but that depends on the use case.

The world lock did not make it any better. I have an Osmo 1 or 2 somewhere around here. I will look into printing an adapter for it and can give that a try. I miss the HDR. It’s so much better for SLAM, other than the lack of stabilization.
I tried an EV of 0.7 this time. It’s not as dark, but still pretty dark when you get into the trees. I guess this is probably more of an issue when snow is on the ground. The camera does not have enough stops of dynamic range, I guess.

Hi,
Yes, I was also happy with HDR. Live streaming is the same quality as recording now, and for me live streaming in HDR was the most important part.

Stitching is worse for now in HDR vs non-HDR; is it possible that’s causing you additional issues in SLAM?

This isn’t an issue in HDR mode, right? Or does it also affect HDR, since it doesn’t cover the use case where there is too much difference in the scene at different locations during the same recording? I can look into whether I could do a more aggressive auto adjustment of exposure compensation during capture. Do I understand correctly?

I will try to cover the adjustments you suggest and see whether they are feasible or not in upcoming upgrades of the plugin, but I really do not want to spend time on something that is not going to be used by people.

  1. Recording without internet access for some period of time at least.

  2. Start recording after a predefined delay, like 10 seconds, 30 seconds, etc.? And preset how long to record before stopping automatically?

  3. Exposure adjustments to cover too-dark/too-bright scenarios during the same recording (a rough sketch is below).

Please review and modify the above if you feel it’s needed.
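For point 3, here is a rough sketch of what a more aggressive auto adjustment could look like: measure the average brightness (mean luma) of the latest preview frame and nudge the exposure compensation when the scene drifts too dark or too bright. The discrete EV steps mirror the 1/3-EV values the camera exposes; the thresholds and the way frames are obtained are my own assumptions, and applying the new value (e.g. through the exposureCompensation option) is left out.

```java
/**
 * Sketch of an auto exposure-compensation loop for the plugin.
 * Assumes the plugin already has access to the luma (Y) plane of each
 * preview frame; how the returned EV is applied to the camera is omitted.
 */
public class AutoEvController {
    // Discrete EV steps (1/3 EV, -2.0 .. +2.0) as exposed by the camera.
    private static final double[] EV_STEPS = {
            -2.0, -1.7, -1.3, -1.0, -0.7, -0.3, 0.0, 0.3, 0.7, 1.0, 1.3, 1.7, 2.0
    };
    private int evIndex = 6; // start at 0.0 EV

    /** Returns the EV to apply for the next frames, based on mean luma (0..255). */
    public double update(byte[] lumaPlane) {
        long sum = 0;
        for (byte b : lumaPlane) {
            sum += b & 0xFF;
        }
        double mean = (double) sum / lumaPlane.length;

        // Thresholds are arbitrary assumptions; tune for real scenes.
        if (mean < 60 && evIndex < EV_STEPS.length - 1) {
            evIndex++;   // scene too dark -> brighten
        } else if (mean > 190 && evIndex > 0) {
            evIndex--;   // scene too bright -> darken
        }
        return EV_STEPS[evIndex];
    }
}
```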

The material will be “plastic”, right? Is there a chance to print metal? :slight_smile:

Don’t the DJI Osmo 2 and 3 use that magnetic adapter for phones? If you design one and it’s non-metal, make sure the back of the Z1 isn’t covered; minimize the areas where plastic, which absorbs heat poorly, touches the Z1’s magnesium alloy body… If it’s metal, do the opposite, especially at the back. Look at my experiment: it’s an aluminium piece at the back for heat, and it works like a heatsink too.

Sorry for typos



*Stitching is worse for now in HDR vs non-HDR; is it possible that’s causing you additional issues in SLAM?*
I tried normal and HDR. Both had about the same issues.

*This isn’t an issue in HDR mode, right? Or does it also affect HDR, since it doesn’t cover the use case where there is too much difference in the scene at different locations during the same recording? I can look into whether I could do a more aggressive auto adjustment of exposure compensation during capture. Do I understand correctly?*
Not an issue in HDR. Without HDR the range is pretty small.

*Recording without internet access for some period of time at least.*

This would be very nice. I can’t use the plugin when I am out in the mountains currently.

*Start recording after a predefined delay, like 10 seconds, 30 seconds, etc.? And preset how long to record before stopping automatically?*

I like how the delay is set in the factory video mode on the Z1. You can set a delay in seconds; it counts down and then dings when it starts recording. I don’t think a preset recording time is needed for me, but it could be a nice feature sometimes. I have my lidar set to stop after so many minutes.
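For reference, a delayed start with a beep could be sketched with plain Android APIs like this (purely illustrative; this is not how the factory mode is implemented):

```java
import android.media.AudioManager;
import android.media.ToneGenerator;
import android.os.CountDownTimer;

/**
 * Illustrative sketch of a delayed recording start: count down for a fixed
 * number of seconds, beep on each tick, then beep again and start recording.
 * The Runnable that actually starts the recording is supplied by the caller.
 */
public class DelayedStart {

    public static void startAfter(int delaySeconds, final Runnable startRecording) {
        final ToneGenerator tone = new ToneGenerator(AudioManager.STREAM_MUSIC, 80);

        new CountDownTimer(delaySeconds * 1000L, 1000L) {
            @Override
            public void onTick(long millisUntilFinished) {
                tone.startTone(ToneGenerator.TONE_PROP_BEEP, 100); // short tick beep
            }

            @Override
            public void onFinish() {
                tone.startTone(ToneGenerator.TONE_PROP_BEEP2, 300); // "recording started" beep
                startRecording.run();
                // A real plugin should release the ToneGenerator when done.
            }
        }.start();
    }
}
```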

*Exposure adjustments to cover too-dark/too-bright scenarios during the same recording.*

This could be nice. But it does seem like HDR does OK without it. Standard mode really needs this.

*The material will be “plastic”, right? Is there a chance to print metal? :slight_smile:*
Yes, I can only print plastic. I see what you mean about the plastic being an insulator instead of a heatsink.
I can’t find the Osmo. I think I loaned it to someone. It really doesn’t matter since the Z1 will be mounted on top of the lidar, and the Osmo would be too big for that. When I walk with the lidar I swing it up and down pretty aggressively to get more coverage. With the camera rigidly attached to the lidar, it would be swung as well instead of staying pretty much vertical.

Hi,
Thanks for your feedback! I will for sure include an option to record and make some minimal adjustments without internet. It may require some more work, and the display of the Z1 is a bit limited. So without internet, what do you think would be enough to adjust? I imagine a workflow where the user sets the state to record-only and sets the desired FPS and exposure compensation via the web UI, and once there is no internet, can still adjust exposure compensation and FPS directly by pushing the buttons on the Z1 (a rough sketch of how the buttons could be handled is below). I see you tried 10 FPS.
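A minimal sketch of what the offline button handling could look like, assuming the THETA plug-in library’s key callback (the class and key-code names are from memory of that library and may differ slightly); what the plugin then does with the new FPS/EV values is omitted.

```java
import android.view.KeyEvent;

import com.theta360.pluginlibrary.activity.PluginActivity;
import com.theta360.pluginlibrary.callback.KeyCallback;
import com.theta360.pluginlibrary.receiver.KeyReceiver;

/**
 * Sketch: cycle FPS with the shutter button and EV with the record button
 * while the camera has no internet connection. The value lists are examples.
 */
public class OfflineControlActivity extends PluginActivity {
    private static final int[] FPS_OPTIONS = {10, 20, 30};
    private static final double[] EV_OPTIONS = {-1.0, -0.7, -0.3, 0.0, 0.3, 0.7, 1.0};
    private int fpsIndex = 0;
    private int evIndex = 3;

    @Override
    protected void onResume() {
        super.onResume();
        setKeyCallback(new KeyCallback() {
            @Override
            public void onKeyDown(int keyCode, KeyEvent event) {
                if (keyCode == KeyReceiver.KEYCODE_CAMERA) {
                    fpsIndex = (fpsIndex + 1) % FPS_OPTIONS.length;
                    // TODO: apply FPS_OPTIONS[fpsIndex] to the recording pipeline
                } else if (keyCode == KeyReceiver.KEYCODE_MEDIA_RECORD) {
                    evIndex = (evIndex + 1) % EV_OPTIONS.length;
                    // TODO: apply EV_OPTIONS[evIndex] as exposure compensation
                }
            }

            @Override
            public void onKeyUp(int keyCode, KeyEvent event) { }

            @Override
            public void onKeyLongPress(int keyCode, KeyEvent event) { }
        });
    }
}
```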

I’m not sure whether it’s important for your use case or not, but with my plugin the Z1 can record much, much longer, for hours. Clearly the ambient temperature, if it’s hot out there, can limit recording time, but if FPS is set to 20 FPS or less, it should be able to capture for ~2-3 hours using H.265 at a ~25 Mbps bitrate.
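(Rough arithmetic, assuming a constant bitrate: 25 Mbps is about 3 MB/s, or roughly 11 GB per hour, so a 2-3 hour recording needs on the order of 22-34 GB of free space.)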

I really think you should try using my plugin in HDR but with a stabilizer like a DJI Osmo or similar. I noticed that video quality, especially sharpness (and dynamic range), decreases a lot after post-processing and top/bottom correction are applied by the Ricoh desktop app.

*So without internet, what do you think would be enough to adjust? I imagine a workflow where the user sets the state to record-only and sets the desired FPS and exposure compensation via the web UI, and once there is no internet, can still adjust exposure compensation and FPS directly by pushing the buttons on the Z1? I see you tried 10 FPS.*

Yes, I agree FPS and exposure would be good offline. Maybe also the option to switch between HDR and non-HDR. I would only have the HDR switch in one format, say equirectangular; too many options gets cluttered. Maybe the EV adjustment is not needed in HDR mode. Simpler is better.
Mode: HDR, non-HDR
FPS: 10, 20, 30
EV: +/-

For what I will be doing, a 20-minute video is the maximum that would ever be done. SLAM fails more the longer the data lasts.

*I really think you should try using my plugin in HDR but with a stabilizer like a DJI Osmo or similar. I noticed that video quality, especially sharpness (and dynamic range), decreases a lot after post-processing and top/bottom correction are applied by the Ricoh desktop app.*
I may need to do that. The last few videos processed through the Ricoh app were low quality and blurry compared to the raw.


Hope you don’t mind me jumping into this thread?
I’ve had a brief discussion via YouTube regarding issues with extending the max recording time on the Theta V, and I was directed to you since you’ve achieved this for the Z1. I don’t know if this is even possible, but it was suggested I reach out to you and see whether you are able to change the code to target the V for this plug-in as well as, or in addition to, the Z1.
The video is quite old, and it seems the beta code originally written by Craig has now become obsolete.


As there are many THETA Vs in use, it might open up the market for Flow Tours. The code might work with only minimal modification.

@biviel do you have a THETA V?

About 5 years ago, we were managing a plug-in contest. In order to test the submission workflow, I built a test plug-in to take long video. I haven’t looked at the code in 5 years and feel that it is unlikely to still work.


Sorry, I don’t think it will work easily through my plugin… I didn’t use that approach, Craig. :frowning: I’m directly recording the live preview by processing it in the plugin, not like in that older long-video sample. I don’t have any V models; could you try that plugin and see if it records?
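Roughly, this is what I mean by processing the live preview. The sketch below is not my actual plugin code; it only shows one possible route, reading the motion-JPEG stream returned by the Web API’s camera.getLivePreview command and splitting it into JPEG frames. The localhost address is the one plugins typically use for the internal Web API and may need adjusting, and the encoder side is left out entirely.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

/**
 * Simplified sketch: start the THETA Web API live preview (motion JPEG)
 * and hand each JPEG frame to a callback. Frame extraction is naive
 * (scans for JPEG start/end markers); encoding to a video file is omitted.
 */
public class LivePreviewReader {

    public interface FrameCallback {
        void onJpegFrame(byte[] jpeg);
    }

    public static void stream(FrameCallback callback) throws IOException {
        // Assumed internal Web API endpoint reachable from a plugin.
        URL url = new URL("http://127.0.0.1:8080/osc/commands/execute");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json;charset=utf-8");
        try (OutputStream os = conn.getOutputStream()) {
            os.write("{\"name\":\"camera.getLivePreview\"}".getBytes("UTF-8"));
        }

        try (InputStream in = conn.getInputStream()) {
            ByteArrayOutputStream frame = new ByteArrayOutputStream();
            int prev = -1, cur;
            boolean inJpeg = false;
            while ((cur = in.read()) != -1) {
                if (!inJpeg && prev == 0xFF && cur == 0xD8) {   // JPEG start marker
                    inJpeg = true;
                    frame.reset();
                    frame.write(0xFF);
                }
                if (inJpeg) {
                    frame.write(cur);
                    if (prev == 0xFF && cur == 0xD9) {          // JPEG end marker
                        callback.onJpegFrame(frame.toByteArray());
                        inJpeg = false;
                    }
                }
                prev = cur;
            }
        }
    }
}
```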