We manufacture gamma-ray imaging radiation detectors (www.h3dgamma.com). In order to account for parallax between the radiation image and the optical image, I have to be able to transform between fisheye space, spherical/Cartesian space, and finally equirectangular space for the final image we display on screen.
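For anyone following along, the spherical-to-equirectangular leg of that chain is just a longitude/latitude lookup. Here is a minimal sketch, assuming +z up and longitude measured from +x toward +y (our actual axis conventions may differ):

```python
import numpy as np

def dir_to_equirect(v, width, height):
    """Map a unit direction vector (x, y, z) to equirectangular pixel coords.
    Assumed convention: +z up, longitude from +x toward +y."""
    x, y, z = v
    lon = np.arctan2(y, x)               # [-pi, pi]
    lat = np.arcsin(np.clip(z, -1, 1))   # [-pi/2, pi/2]
    u = (lon + np.pi) / (2 * np.pi) * width
    vpx = (np.pi / 2 - lat) / np.pi * height
    return u, vpx

def equirect_to_dir(u, vpx, width, height):
    """Inverse mapping: equirectangular pixel coords back to a unit vector."""
    lon = u / width * 2 * np.pi - np.pi
    lat = np.pi / 2 - vpx / height * np.pi
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])
```

The fisheye leg is the part that needs the calibrated lens parameters; this pair only covers the spherical/equirectangular end.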
Starting in fisheye space, I can calibrate for the lens parameters, which lets me transform into spherical space and then Cartesian space, correct for parallax, and then sample the parallax-corrected image for directions in the radiation-origin space. We have our users enter the distance to the source, and after correcting for parallax the optical image matches the radiation image to <1 degree. This is what I would normally do for our detectors.
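The parallax correction itself is a small-geometry step: with the user-entered distance, place the source point along the detector-frame direction, then re-normalize from the camera's position. A sketch, with placeholder names and an assumed shared frame between detector and camera:

```python
import numpy as np

def parallax_correct(d_rad, distance_m, cam_offset_m):
    """Re-project a direction seen from the radiation detector into the
    camera's frame, given the user-entered distance to the source.
    d_rad: unit direction in the detector frame.
    cam_offset_m: camera position relative to the detector origin (same frame).
    Names and frame convention are placeholders, not our actual code."""
    p = distance_m * np.asarray(d_rad, dtype=float)   # point on the source
    v = p - np.asarray(cam_offset_m, dtype=float)     # camera-to-source vector
    return v / np.linalg.norm(v)
```

As expected, the correction shrinks as the entered distance grows relative to the detector-to-camera baseline, which is why the distance entry matters most for nearby sources.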
The impetus to use the Ricoh comes from a customer who wants to use the 4pi image for pre-job brief work in radiation environments, to minimize worker exposure to radiation. We want to mount the Ricoh to our detector, using only the USB cable for comms, and have the arm in the detector grab the picture from the Ricoh (without an individual having to press a button). In this case, instead of correcting the optical image for parallax to match the radiation image, we will correct the radiation image for parallax to match the optical image. Without knowing the camera's lens parameters and how the image is stitched, I can't correctly do the reverse transformation. I can do a poor job if I just use the equirectangular image, but that is more empirical than we want.
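For reference, the "poor job" fallback amounts to treating the stitched equirectangular image as if it were an ideal spherical camera: parallax-shift each radiation direction toward an assumed camera position, then look up the equirectangular pixel directly. A hedged sketch (all names and the axis convention are assumptions; it ignores the real lens model and stitching seam, which is exactly the limitation described above):

```python
import numpy as np

def rad_dir_to_equirect_px(d_rad, distance_m, cam_offset_m, width, height):
    """Map a radiation-image direction into the optical equirectangular image,
    assuming an ideal spherical camera (+z up, longitude from +x toward +y).
    Ignores the true lens parameters and stitching, so it is only approximate."""
    # Parallax shift: place the source along d_rad at the entered distance,
    # then take the direction from the camera's (offset) position.
    p = distance_m * np.asarray(d_rad, dtype=float)
    v = p - np.asarray(cam_offset_m, dtype=float)
    v /= np.linalg.norm(v)
    # Ideal equirectangular lookup.
    lon = np.arctan2(v[1], v[0])
    lat = np.arcsin(np.clip(v[2], -1.0, 1.0))
    u = (lon + np.pi) / (2 * np.pi) * width
    vpx = (np.pi / 2 - lat) / np.pi * height
    return u, vpx
```

Doing this properly would need the per-lens intrinsics and the stitcher's blending map from the camera vendor, which is what I was hoping to find.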
Thanks for the information, and if I end up having some success, I will post an update.