HDR plugin to automatically create EXR files for VFX use

Hi, I’m talking with my VFX supervisor about the time constraints on a TV commercial set. He likes the simple workflow of the plugin, but he’s always after less time spent on the actual capture of the different exposures. Even a minute can be too long. Yes, I know, crazy, but that’s the reality on some commercial shoots. Nobody on the crew wants to stand still and wait for a supervisor to do his job :slight_smile:

How much of the time is the actual exposures, and how much is spent on the stitching of the 2 lenses inside the Theta?
Is it possible to get only the 2 raw lens images, without stitching and also without the EXR merge?

I know that’s the whole point of the plugin :slight_smile: but I’m asking, theoretically, how fast can it be if you’re after the fastest possible capture time with 11 exposures?

Thanks a lot!

The default settings, if you just set up and press the device button, are 11 brackets and 5 denoise photos, so your wait time is for 55 photos (Bracket_Count * Denoise_Count). Obviously there will be wide variations in total time depending on your lighting; for instance, if you have to wait on several exposures over 1 second, things will take longer. From generic tests outdoors, my average is 2 minutes of photos, after which you can move the camera around while it finishes processing (~1.5 minutes).

If you connect to the web interface you can control the number of photos and the number of denoise photos. So depending on your needs you could snap 5 * 1 or 7 * 1 if time on set is the limiting factor.

I see, but even so, I’m trying to shave as much time as possible off the capture.
That’s why I’m asking about the stitching. How much processing time does the in-camera stitch take?
Is it insignificant compared to the exposure times? Or can it be avoided, so we get only the 2 lens images with no stitch and no merge?

I saw that it is actually possible to get the 2 lenses, but on the new Z1 as raw DNG. Not sure if it’s possible with the V.

It would be interesting to measure this. Stitching is not parameterized at the moment, so you’re currently stuck with the in-camera stitching. But you can see where you could change the parameter here if you wanted to start hacking around.
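For anyone who does want to poke at it: in the Theta plugin camera API, stitching is controlled through a Camera.Parameters key. A minimal sketch of toggling it is below; the RIC_PROC_STITCHING key and its value strings are quoted from Ricoh’s plugin documentation as I recall it, while the surrounding code is illustrative rather than the plugin’s actual source, so double-check against the API reference.

android.hardware.Camera.Parameters params = camera.getParameters();

// "RicNonStitching" should deliver the raw dual-fisheye frame,
// "RicStaticStitching" the stitched equirectangular frame.
params.set("RIC_PROC_STITCHING", "RicNonStitching");

camera.setParameters(params);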

Apparently the stitching option is supported on the V in v3.00.1 and later. There’s no raw on the V.

Yeah I know, only on the Z1.

ok, is it possible then to have all the great features of the plugin, like the number of stop jumps, number of brackets and denoise (if there’s enough time), but then not have the EXR merge at the end?

or have the option to choose whether to do the EXR merge or not?

This option does not exist right now. Is your use case shooting consecutive HDRIs at different hotspots on the same set?

Not only that, but the most time-constrained moments are when the supervisor has to take 2 HDRs from the exact same spot: one of the scene with a Macbeth chart for normalization, and one without it for the actual HDR to normalize.

So the time constraint is the difference between getting or not getting the additional HDR with the color chart. And we would like to have it :slight_smile:

So then you are dealing not with 2-3 minutes for capture + EXR merge, but with 5-6 minutes, and that is too much time in some cases.

We are now thinking about just doing the HDR with the color chart and then retouching it.


Yep, this all makes sense now. These features don’t sound difficult to implement. I’m not the dev, but since the project is open source and I’ve been hacking around in it, I might take a stab at throwing these in. I guess in your case you’d most likely have a stitching/merging post-process workflow that happens off the device?


Yes, exactly.
In some cases, when there’s more time, the merge feature is absolutely great: very nice results, and it saves time at the back end.
And in other cases, the time saved during capture is more precious, because we can stitch and merge the data after the shoot.

If you could take a look at it, that would be fantastic: having the option to choose between capture + merge and capture only.


@Martin_Smekal I came here to request the exact same things for the exact same reasons.


I added a flag to allow skipping the HDRI merge step. It does exactly that: once all pictures have been captured, you’re done. One note: if you are capturing with denoise > 1, the denoising occurs in between captures, which adds some processing time. So the fastest possible capture time would happen with:

  • Number of Denoise Pictures = 1
  • MergeHDRI = off

Is denoising something you guys are using?
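For anyone curious, here is a very rough sketch of how a toggle like this can sit in the capture flow. The preference key and the captureBrackets / mergeToHdr / writeExr helpers are hypothetical names used for illustration, not the plugin’s actual code.

boolean mergeHdri = prefs.getBoolean("merge_hdri", true);   // the new flag (hypothetical key name)

List<Mat> exposures = captureBrackets(bracketCount, denoiseCount);   // capture always happens

if (mergeHdri) {
    Mat hdr = mergeToHdr(exposures);          // OpenCV HDR merge
    writeExr(hdr, "/sdcard/DCIM/hdri.exr");   // EXR write (path is illustrative)
}
// with the flag off, the plugin is finished as soon as the last bracket is on disk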

I actually made these changes last week but didn’t have a dev account at the time to test them on my Theta. I’ve been testing for the last day and things are working as expected on my end. For anyone with a dev background, you can look at my merge-toggle branch on GitHub if you want to try the bits out for yourself:

I threw a pull request over to @Kasper, so hopefully the official plugin gets updated and there is an easy upgrade path for those who want to test this out. Otherwise, if you want it now, you would need to sign up for a dev account so you can side-load your own build of the plugin.

That’s great! I’ll need to wait until the plugin gets updated. Denoise is useful when I have a lot of time, or when I’m not on set and just grabbing HDRIs in the wild. I don’t have time to use it on set.

Hi error454,

thanx for the code, I’ll add it to the main branch! Looking nice.

I was thinking it might be nice to have the option to save the settings from the web interface as the defaults, also linked to the big button, so that on set you only have to turn the camera on and it will always use your favorite settings. I will look into adding this.

Also, in my experience I have never seen any time advantage between stitched pictures and non-stitched ones. But maybe there is one; I will try to test it, and if it does have an edge I will add it as a feature.

Right now I’m looking into supporting the Z1; I finally got mine. Getting the raw files running is a bit harder than expected, and there is hardly any support for DNG under Java, so that sucks.

I also noticed that the Z1 has the same amount of memory as the V, so with the bigger resolution it isn’t able to process as many pictures before running out of memory. This happens during the HDR merge function of OpenCV, so not in my own code. Once you run out of memory, the Android OS just kills the app. That is what is causing all the crashes.
One possible workaround could be to split the images into two parts and process them one at a time; this will double the processing time but greatly increase the number of pictures a bracket can hold. Will look into it.
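For what it’s worth, here is a rough Java/OpenCV sketch of that split-in-two idea. It is illustrative only, not taken from the plugin: it assumes a Debevec merge and that the exposure-times Mat has already been built.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.photo.MergeDebevec;
import org.opencv.photo.Photo;

public class TiledMerge {
    // Merge the top and bottom halves of the bracket separately so each
    // OpenCV call only allocates intermediates for half the frame, then
    // concatenate the two half-size HDR results back together.
    public static Mat mergeInTwoParts(List<Mat> exposures, Mat times) {
        MergeDebevec merge = Photo.createMergeDebevec();
        int rows = exposures.get(0).rows();

        List<Mat> topHalves = new ArrayList<>();
        List<Mat> bottomHalves = new ArrayList<>();
        for (Mat exposure : exposures) {
            // rowRange returns a view, so the input brackets are not copied
            topHalves.add(exposure.rowRange(0, rows / 2));
            bottomHalves.add(exposure.rowRange(rows / 2, rows));
        }

        Mat topHdr = new Mat();
        Mat bottomHdr = new Mat();
        merge.process(topHalves, topHdr, times);
        merge.process(bottomHalves, bottomHdr, times);

        Mat fullHdr = new Mat();
        Core.vconcat(Arrays.asList(topHdr, bottomHdr), fullHdr);
        return fullHdr;
    }
}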

I’ll keep you posted. Once I have added enough stuff and have a stable version for the Z1, I will resubmit to the store.

Greetings!


I had the same thought! This should be pretty straightforward with SharedPreferences.
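Something along these lines, for example; the preference file and key names here are placeholders, not whatever the plugin will actually use:

import android.content.Context;
import android.content.SharedPreferences;

public class CaptureSettings {
    private static final String PREFS = "hdr_capture_settings";   // placeholder file name

    // Persist the values chosen in the web interface so the shutter
    // button can reuse them on the next boot.
    public static void save(Context ctx, int brackets, int denoise, boolean mergeHdri) {
        SharedPreferences.Editor editor =
                ctx.getSharedPreferences(PREFS, Context.MODE_PRIVATE).edit();
        editor.putInt("bracket_count", brackets);
        editor.putInt("denoise_count", denoise);
        editor.putBoolean("merge_hdri", mergeHdri);
        editor.apply();
    }

    public static int brackets(Context ctx) {
        return ctx.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
                .getInt("bracket_count", 11);   // falls back to the plugin default
    }
}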

I had the same initial thought. The time to write the file to storage is probably orders of magnitude slower than any stitching time. But I’ve been wrong before :wink:

If you need a hand getting a better raw library in place, I’m more than happy to help. Would something like https://www.libraw.org/ have the functionality needed? It would need some custom JNI written, or perhaps an entire custom C++ layer where the raw processing workflow could happen. The memory usage would at least be very controlled and predictable.
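To picture the shape of it, the Java side of the JNI boundary could stay as small as this; the library name and method signature are purely hypothetical, and all the LibRaw work would live in the C++ layer behind it:

public class RawDecoder {
    static {
        // the native library would be built against LibRaw via the NDK
        System.loadLibrary("rawbridge");   // hypothetical library name
    }

    // Decode a DNG from disk into demosaiced 16-bit RGB samples.
    // widthOut[0] and heightOut[0] receive the image dimensions.
    public static native short[] decodeDng(String path, int[] widthOut, int[] heightOut);
}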

Hello again,

In addition to the apparent memory/crashing problems on the Z1 with the default settings (mentioned previously), I have several shots where white-clipped parts of the resulting EXR image come out as a flat gray color. See the attached image crop.

Thanks,
Bret

This is great! Thanks a lot @error454. Will wait for @Kasper and the official update to test it out. The denoise is good to have, if there’s time. Thanks again!

Hi error454,

the SharedPrefs was a good idea! I got it up and running, it’s nice!

yeah, I’m getting the feeling that the raw lib is the way to go. And help would be great, especially with the JNI part. It took me long enough to get OpenCV running under Android; I guess this is a bit of the same stuff.

But first let’s get the DNG files working. So far I haven’t gotten far: setting the API parameter params.set("RIC_DNG_OUTPUT_ENABLED", 1); isn’t enough, you also have to somehow save the file away, but how to do that is still a bit of a mystery to me. Do you have any idea?

My secret hope is that we can easily read and debayer the DNG and then move on to OpenCV, without having to write a whole library for working with DNG files.
But on the other hand, one of the things on my to-do list is the option to save to both DNG and EXR… so that would indeed need a bit more DNG work…
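As a sketch of that hope: assuming the Bayer samples can already be pulled out of the DNG as a 16-bit buffer (which is exactly the part that still needs solving), OpenCV can handle the debayering itself. The CFA constant below is a guess and would need to match the Z1’s sensor.

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class Debayer {
    // Wrap the raw Bayer samples in a single-channel 16-bit Mat and demosaic.
    public static Mat demosaic(short[] bayer, int width, int height) {
        Mat raw = new Mat(height, width, CvType.CV_16UC1);
        raw.put(0, 0, bayer);

        Mat rgb = new Mat();
        // Pick the COLOR_Bayer* constant that matches the sensor's CFA layout.
        Imgproc.cvtColor(raw, rgb, Imgproc.COLOR_BayerRG2BGR);
        return rgb;
    }
}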

Thanx!!

Hi Bret,

I have the feeling that this is what I call the black hole sun error. It is an error in the OpenCV library with very bright parts of the image: parts that are still fully white even in the darkest exposure (usually the sun, but in your case I think also the clouds).
I have a working solution in Python but haven’t gotten around to implementing it in Java. (Basically, I use the fully white parts of that darkest exposure as a mask to fill in the dark hole.)
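For reference, here is a Java/OpenCV sketch of that masking idea; the names and the fill value are illustrative, and this is just the approach described above, not tested code and not the Python version.

import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class HighlightFix {
    // Wherever the darkest exposure is still clipped to white, overwrite the
    // merged HDR with a chosen highlight radiance instead of the flat gray
    // that the merge produces there.
    public static void fillClippedHighlights(Mat mergedHdr, Mat darkestExposure, double fillValue) {
        Mat gray = new Mat();
        Imgproc.cvtColor(darkestExposure, gray, Imgproc.COLOR_BGR2GRAY);

        Mat clippedMask = new Mat();
        Imgproc.threshold(gray, clippedMask, 254, 255, Imgproc.THRESH_BINARY);

        Mat fill = new Mat(mergedHdr.size(), mergedHdr.type(),
                new Scalar(fillValue, fillValue, fillValue));
        fill.copyTo(mergedHdr, clippedMask);   // only the masked (blown-out) pixels change
    }
}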

thanx for the info!

I went through all the API calls and don’t see any further instructions on the Ricoh side. In the original Android Camera 1 framework, there are 2 parameters you typically set to capture raw. It’s possible these are also required on the Z1 in addition to the Ricoh parameter. So the Z1 logic might be:

params.set("RIC_DNG_OUTPUT_ENABLED", 1);     // the Ricoh-specific switch you already found
params.set("rawsave-mode", 1);               // legacy Camera 1 raw parameters that some devices expect
params.set("rawfname", "/path/to/file.dng"); // may have to set this for every frame captured

And of course increase your cols and rows counts to accommodate the larger image. If this doesn’t work, I would capture adb logcat in dev mode while the plugin is running and see if any informative log messages fly by when you either set camera parameters or capture a frame.

I did a search through all of Ricoh’s GitHub repos and also don’t see any further tips or example code. I’ll keep looking around for an official support channel to ask them.

Hi Kasper and the HDR-for-VFX community. This is possibly the best thing ever! I have been looking for ages for a way to create good-quality HDRIs for VFX without using a DSLR and a Nodal Ninja.

I had a Theta SC for a while and have been using the hdr360 app to capture bracketed 360s. I just bought the Z1 and discovered this amazing plugin. The problem is I can’t figure out how it works. I am a Mac user and was wondering if this just isn’t developed for macOS?

There is a lot of talk in this thread about the web interface, but I can’t seem to find it or figure out how to get onto it. I have tried connecting to the Theta from my MacBook via Wi-Fi, but it doesn’t seem to recognise it, and I have tried going to 192.168.1.1:8888 on my phone and my MacBook, but it just times out!

I really, really want to get this working, but with my limited knowledge of PC software and no Android experience I’m getting a bit stuck!

I have got the plugin onto the Theta and can set it off manually. I can then see all the files on the Theta using Image Capture, but I don’t know what to do with them after that; I tried opening them in Photoshop, Flame and Lightroom, but I’m having no joy!

Please could someone give me clear step-by-step instructions? I am sorry if this is a simple thing, but I’m a slow learner and could really do with some help.

Thanks.