@gerlos, the current dual-fisheye plug-in from the store can’t take time lapse. It can only take multiple shots for bracketing (to create your own HDRI file or for layers). Once the camera boots into plug-in mode, press the Wi-Fi button (the middle button on the side) to put it into bracketing mode.
The Wi-Fi LED will change color.
The normal LED color is aqua.
It’s fairly easy to modify the plug-in to take time lapse. Maybe we can convince a developer to volunteer and build a special plug-in for you, so the developer can learn and gain experience.
Can you answer these questions:
What interval do you want between images?
How many images do you want in the time lapse?
Assuming there are 3 configurations for time lapse (number of images and delay), what would you want them to be?
Do you want a special filename prefix for the time lapse (such as tl-2018-11-21-04-35)?
Also, I’m assuming you know that the standard mobile app from Ricoh can take timelapse images without the plug-in. The main advantage of the plug-in is that you can reduce the time between images down to 1 second.
You’ll need to use something like PTGui to stitch the images together after the shoot.
Thanks for the quick answer - my project is both video and photo.
I plan to go to an astronomical observatory next summer and create a day-to-night time lapse video, a star trail time lapse (like this https://flic.kr/p/eabmnG - each frame is just a composition of the last few frames, so trails are visible) and then compose a set of night shots to create a star trail image like I did with this:
My main problem with that picture is that the camera seems to “move” features across the stitch line from one shot to the next, and when I compose several equirectangular images I get distortions along the stitch line.
My workaround so far has been to shoot so that the seam line runs along the horizon (i.e. I put the camera horizontal), but this way the tripod is too visible, like in this pic:
Another problem I ran into in my previous experiments is that I need a shorter interval between shots, so that the trails are more regular.
I think I could perhaps mitigate these problems by taking a set of dual-fisheye shots, composing them into a star trail, and stitching them at the end of the process (in the past I’ve used Hugin to stitch images taken with a DSLR and a fisheye lens).
About the details:
For this project I need to shoot every 30 seconds for the day-to-night sequence (with auto exposure) and every 62 seconds (with 60s long exposures) for the night sequences.
I need to take as many images as possible - passing clouds are not predictable and can affect the final result, so I need a lot of shots to be sure I have all the data I need. I’d prefer to stop the camera manually, turning it off when I have enough. I’ll power it with an external power bank, so battery life will not be a problem.
I’d like these configurations:
Shot every 30s, auto exposure, unlimited shots.
Shot every 30s, 25s exposure, 400 ISO, unlimited shots.
Shot every 63s, 60s exposure, 400 ISO, unlimited shots.
For my workflow I’d prefer a filename suffix - pattern like 2018-11-21_04.35.02-theta-tl
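For developers picking this up, the three requested configurations map naturally onto the THETA Web API’s option names (`captureInterval`, `captureNumber`, `shutterSpeed`, `iso`, `exposureProgram` are from the RICOH THETA API v2.1; `captureNumber` 0 means shoot until stopped). A minimal sketch - the config labels and helper names are my own, and the suffix follows the pattern requested above:

```python
from datetime import datetime

# Three requested configurations, expressed as Web API option sets.
# exposureProgram 2 = auto exposure, 1 = manual (fixed shutter/ISO).
CONFIGS = {
    "day-to-night": {"captureInterval": 30, "captureNumber": 0,
                     "exposureProgram": 2},
    "night-25s":    {"captureInterval": 30, "captureNumber": 0,
                     "exposureProgram": 1, "shutterSpeed": 25, "iso": 400},
    "night-60s":    {"captureInterval": 63, "captureNumber": 0,
                     "exposureProgram": 1, "shutterSpeed": 60, "iso": 400},
}

def set_options_payload(config_name):
    """Build the JSON body for POST /osc/commands/execute."""
    return {"name": "camera.setOptions",
            "parameters": {"options": CONFIGS[config_name]}}

def timelapse_suffix(when=None):
    """Filename suffix in the requested pattern, e.g. 2018-11-21_04.35.02-theta-tl."""
    when = when or datetime.now()
    return when.strftime("%Y-%m-%d_%H.%M.%S") + "-theta-tl"
```

After `camera.setOptions`, the plug-in would start interval shooting with `camera.startCapture` and stop with `camera.stopCapture` when the photographer turns the session off.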
Just to confirm: are you okay with having to stitch the images from the plug-in yourself, using PTGui, Hugin or other third-party software? I have not tried using Hugin for the stitching myself, but this video on YouTube makes it seem feasible.
I am thinking of running a fun challenge for this developer community to complete the plugin. You can see the challenge concept in the mock-up below.
The idea is for developers to learn more about plug-in development by completing an easy challenge. If the developer extends the functionality to make it more user friendly, they get a chance to win another prize.
The banner would link to more information that contains:
information on the photographer/artist (Gerlando Lo Savio)
information on the project (astronomical observatory day-to-night time lapse video and trail time lapse)
plug-in requirements as specified in your previous post
If we go forward, @jcasman and I will take care of everything. You do not need to do anything other than provide feedback on the completed plug-in or plug-ins.
Other components that Jesse and I would handle:
challenge rules
securing and distributing prizes
promotion of the challenge
promotion of the products offered as prizes
creating a challenge starter kit with a template dual-fisheye plug-in for people to modify
Let me know your initial feelings on the challenge. The reason I am proposing to show your picture and your art is to make the challenge feel more “real world”. There’s a real purpose for the plug-in, not just a theoretical use case. We can apply a watermark to your art if you prefer.
After I get your feedback, I will need to discuss it with @jcasman.
By the way, @gerlos, a while ago I posted this info from Nightflight, a conservation site “dedicated to the beauty of the night sky.” It’s about using a THETA for nighttime photography. It’s a little old, focused on the previous model, THETA S, and can’t handle fixing the problems you’re referring to, but maybe still interesting.
The stitch line appears to be perfect. This version does not have a watermark. The app is fully functional, and you can try it for free.
Original calibration Image
Hold the camera sideways and face the lens toward the sky. One lens points to the ground. Take the image outside where there are distant objects like trees.
Calibration Points
Select distant objects to calibrate the lenses. This is a one-time calibration and the lens calibration is saved to your mobile phone. The next time, you just stitch it.
After the first calibration and a few successful stitches, I noticed that edit360 wasn’t able to automatically level the horizon using the camera orientation data.
Later on, analyzing the images on the desktop with exiftool, I found that the pitch and roll metadata were missing from the files generated by the dual-fisheye plug-in (the PosePitchDegrees, PoseRollDegrees, RicohPitch and RicohRoll tags were all absent).
Is there any reason for this?
This information is really useful for bulk-leveling batches of pictures, both on mobile (using the edit360 app) and on desktop (using Pano2VR or this script).
Oh, good point! The developer needs to add the metadata; it’s not written by default. It wasn’t intentionally taken out - the base SDK probably just doesn’t include it as a feature. I think it can be added. I’ll look into it.
For other developers looking at this challenge, I’m going to look at this:
I’m not sure. I inspected a few pictures straight out of the camera with exiftool, from both a THETA S and a THETA V, and it seems that PosePitchDegrees and RicohPitch always contain (almost) the same values (the same happens for PoseRollDegrees and RicohRoll).
Data in the proprietary Ricoh tags seems to be rounded to the second decimal (e.g. 5.67), while the PhotoSphere XMP tags (such as PosePitchDegrees) are rounded to the first decimal (e.g. 5.7).
When I rectify shots with the Ricoh desktop app, the proprietary Ricoh tags are dropped and the PhotoSphere XMP tags are set to zero.
Without knowing more, I’d set both pitch tags (and both roll tags) to the same values, perhaps rounding them to the first decimal.
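That harmonization is straightforward in code. A small sketch of the rounding rule suggested above (the function name is mine; it assumes the plug-in reads one pitch/roll pair from the sensor and writes both tag families):

```python
def harmonized_tags(pitch, roll):
    """Return one value per orientation tag so both tag families agree.

    Ricoh proprietary tags (RicohPitch/RicohRoll) appear rounded to two
    decimals and the PhotoSphere XMP tags (PosePitchDegrees/
    PoseRollDegrees) to one, so round everything to the first decimal
    and write the same value into all four tags.
    """
    p, r = round(pitch, 1), round(roll, 1)
    return {"RicohPitch": p, "RicohRoll": r,
            "PosePitchDegrees": p, "PoseRollDegrees": r}
```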
Pictures taken with the plug-in through the WebAPI contain Exif data. These images are in equirectangular format.
Pictures taken with the CameraAPI can be in dual-fisheye mode, but do not contain Exif data. It is possible to take an image with the WebAPI as a reference for the camera orientation, then switch to the CameraAPI and copy the orientation metadata from the WebAPI image to the dual-fisheye image taken with the CameraAPI. The following example from Ricoh may be useful.
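One way to do that copy on the desktop is with exiftool’s `-tagsFromFile`. A sketch that just assembles the command line (an assumption on my part: it writes only the XMP GPano tags, since the Ricoh MakerNote tags may not be writable with exiftool):

```python
def copy_orientation_cmd(reference_jpg, dual_fisheye_jpg):
    """Build an exiftool command that copies the pitch/roll XMP tags
    from the WebAPI reference image to the dual-fisheye image."""
    return ["exiftool", "-tagsFromFile", reference_jpg,
            "-XMP-GPano:PosePitchDegrees", "-XMP-GPano:PoseRollDegrees",
            dual_fisheye_jpg]
```

You could run it with `subprocess.run(copy_orientation_cmd("ref.jpg", "fisheye.jpg"), check=True)` once both files are on disk.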
Hi, I went through this topic and it looks really useful, as I’m taking a lot of images with the new THETA Z1. The processing time is just a tad too slow for the purpose I’m using the camera for. Does anybody know if the plug-in will be developed for the new THETA Z1 as well? I know there is already a setting where it generates RAW .DNG images, but the image is still being processed!
This plug-in by Leaning Len can easily be adapted for use with the Z1. It may work as is. However, it doesn’t reduce the time taken per shot significantly below the time taken for a stitched image.
Using the API with equirectangular images, you can probably get it down to 3 or 4 seconds on the Z1.
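As a rough sketch of what “using the API” means here: issue `camera.takePicture`, then poll the command status until it reports done before firing the next shot. The `/osc/commands/execute` and `/osc/commands/status` endpoints and the command name are from the THETA Web API v2.1; the helper names, the default access-point address, and the poll interval are my assumptions.

```python
import json
import time
import urllib.request

BASE = "http://192.168.1.1/osc"  # default address in access-point mode (assumption)

def execute_body(name, **parameters):
    """JSON body for POST /osc/commands/execute."""
    body = {"name": name}
    if parameters:
        body["parameters"] = parameters
    return body

def post(path, body):
    """POST a JSON body to the camera and decode the JSON reply."""
    req = urllib.request.Request(BASE + path,
                                 data=json.dumps(body).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def take_picture_blocking(poll_interval=0.5):
    """Fire camera.takePicture, then poll /commands/status until done."""
    res = post("/commands/execute", execute_body("camera.takePicture"))
    while res.get("state") == "inProgress":
        time.sleep(poll_interval)
        res = post("/commands/status", {"id": res["id"]})
    return res
```

Timing a loop of `take_picture_blocking()` calls with the camera set to equirectangular JPEG would show the real per-shot interval you can reach on the Z1.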