Sensor Fusion (not necessarily in real time) - Sync 2 Ricoh Theta V with a LiDAR

Hello Everyone,

I am working on a depth-detection project involving a stereo 360° camera rig (two Theta V) and a LiDAR.
As you may know, one crucial step for sensor fusion is making the sensors time-synchronized. I did everything possible using ROS; the output is now synchronized according to ROS, but it is still not enough.
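For reference, here is roughly what my ROS-level synchronization looks like; this is a minimal sketch, and the topic names and slop value are placeholders for my setup:

```python
# Minimal sketch of ROS-level synchronization (ROS 1, rospy).
# Topic names, message types and the slop value are placeholders.
import rospy
import message_filters
from sensor_msgs.msg import Image, PointCloud2

def callback(img_left, img_right, cloud):
    # All three messages have header stamps within the allowed slop.
    rospy.loginfo("triplet: %.6f %.6f %.6f",
                  img_left.header.stamp.to_sec(),
                  img_right.header.stamp.to_sec(),
                  cloud.header.stamp.to_sec())

rospy.init_node("triplet_sync")
sub_left = message_filters.Subscriber("/theta_left/image_raw", Image)
sub_right = message_filters.Subscriber("/theta_right/image_raw", Image)
sub_lidar = message_filters.Subscriber("/os1_cloud_node/points", PointCloud2)

# Match messages whose header stamps differ by at most 50 ms.
sync = message_filters.ApproximateTimeSynchronizer(
    [sub_left, sub_right, sub_lidar], queue_size=20, slop=0.05)
sync.registerCallback(callback)
rospy.spin()
```

The catch is that the header stamps are whatever the drivers set, which is not necessarily the moment of capture.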
Some research led me to think that synchronizing all the hardware with the same external clock would give a better result. What do you think?
Also, and most importantly: is it even possible to sync a Ricoh Theta V's clock with an external one (a GPS clock, for instance)?

Thank you in advance,
Mehdi

Please, does anyone have any ideas? I can’t even find related projects that have been done…

I do not believe that synchronization is possible.

See this thread.

You will likely need to use three cameras: two that are not THETAs and a third that is a THETA.

For example, on a Jetson Nano, get two cameras like this:

https://www.robotshop.com/en/arducam-ov9281-1mp-mono-global-shutter-mipi-camera-raspberry-pi.html

and then synchronize them like this:

https://youtu.be/MbLOcaAJ7Ug

Then, connect a RICOH THETA Z1 or V, not in stereo, for the 360° view and archival documentation.

There have been no success reports that I know of for synchronizing two THETAs at the level needed for robotics vision research, and no success synchronizing with an external clock. See the thread above.

The internal OS of the V is Android 7.

Thank you very much for the response.
Let me add that I do not necessarily need real-time synchronization for proper sensor fusion. It is just as good if I can manage to synchronize the streams after recording. Does this make it any easier?

Thank you,
Mehdi

Oh, are you saving the video to a file and then wanting to parse the clock data embedded in the video file's metadata?

Please post more explanation of your use case.

For example, are you live streaming to a ROS device (either x86 or Jetson) and saving the stream to local storage, then analyzing the video file?

Or, are you using the video file that the V saves internally, which has some metadata that you may be able to extract with exiftool?
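If it is the internal file, a first check might be to dump every time-related tag exiftool can find. This is just a sketch; I have not verified which tags the V actually writes, or how precise they are:

```python
# Sketch: dump all time-related tags exiftool finds in a video file.
# Which tags the THETA V writes, and at what precision, is unverified.
import json
import subprocess

def time_tags(path):
    out = subprocess.check_output(
        ["exiftool", "-json", "-time:all", "-G1", path])
    return json.loads(out)[0]  # exiftool returns one JSON object per file

# "R0010123.MP4" is a placeholder file name.
for tag, value in time_tags("R0010123.MP4").items():
    print(tag, "=", value)
```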

The video file saved from the stream will not have any time information.

Hello again @craig, sorry for not being precise.
Here is what I want as a final output: I want to build a DL model where each sample is two 360° camera images (I am using two Ricoh Theta V) plus a LiDAR output as ground truth.
So each sample (the two images) and the ground-truth output should be time-synchronized.
Thus, the most important thing is to have the triplet synchronized for my model; it does not matter whether that is done in real time or in post-processing.
At the moment I use the ROS synchronizer (and OpenCV while the cameras are in streaming mode) to sync them in real time, by taking the bottleneck frame rate, i.e. the LiDAR at 10 fps: whenever a LiDAR frame arrives, I take the two closest camera images.
For now my synchronized timestamps look like this:

[screenshot of the synchronized timestamps]

Even though they seem synchronized, there is a problem: a time difference between the two camera frames (which are approximately well synchronized with each other) and the LiDAR, because I am synchronizing on the time the data arrives at the disk rather than on the actual capture time:

[screenshot of the timestamp differences]

So now I am stuck at this point, and I was wondering whether there is a simpler way to sync them in post-processing (after recording), since in my case there is no need to sync them in real time.
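For example, what I imagine for post-processing is something like the sketch below. It assumes I can recover one capture timestamp per frame for each device, already on a common clock (any constant offset between the device clocks would still have to be estimated and subtracted first):

```python
# Sketch: offline triplet matching by nearest capture timestamp.
# Assumes each device yields a sorted array of per-frame timestamps
# (in seconds) on a common clock.
import numpy as np

def nearest(stamps, t):
    """Index of the frame in `stamps` whose timestamp is closest to t."""
    i = np.clip(np.searchsorted(stamps, t), 1, len(stamps) - 1)
    return i if abs(stamps[i] - t) < abs(stamps[i - 1] - t) else i - 1

def match_triplets(lidar_ts, cam1_ts, cam2_ts, max_diff=0.05):
    """One (lidar, cam1, cam2) index triplet per LiDAR frame; triplets
    where a camera frame is further than max_diff seconds from the
    LiDAR stamp are dropped."""
    triplets = []
    for k, t in enumerate(lidar_ts):
        i, j = nearest(cam1_ts, t), nearest(cam2_ts, t)
        if abs(cam1_ts[i] - t) <= max_diff and abs(cam2_ts[j] - t) <= max_diff:
            triplets.append((k, i, j))
    return triplets
```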

PS: I am using a Jetson AGX Xavier, an Ouster OS1 LiDAR, and two Ricoh Theta V for this project.

Thank you very much for the help

And so a question would be: does the metadata of the Ricoh files correspond to the actual capture date of the scene? How precise is it? If the date is only precise to the second, it is not very useful, as there would be a lot of frames sharing the same date…

Thank you for your information. I need to check with people on this.

The time data for 360 videos is not standardized. I’m not sure how to extract the timestamp for different clock cycle samples and match it with the frame. I’ll ask around.

I’m just guessing that the data exists because there is a getSampleTime in Android. The internal OS of the camera is Android.


Thank you very much @craig !!!
You’re helping a whole team of researchers (though with a background in Data Science only, unfortunately).


I have done similar work, coloring the LiDAR point cloud of a VLP-16 with a Ricoh Theta V. I had to stop moving to capture the pictures with the Ricoh Theta V.

If you want to synchronize a LiDAR with a panoramic camera, one example of this kind of panoramic camera is the Ladybug5+. If you have enough money, you can try it.

@manifold, can you be more precise about the two implementations you mentioned, please?
Why a Ladybug5+ and not a Ricoh Theta V?
And what do you mean that you had to stop to capture the pictures?

The reasons are as follows:

  1. The Ladybug5+ has a global shutter, but the Ricoh Theta V has a rolling shutter. Maybe you have heard of the jello effect of rolling-shutter cameras: when a rolling-shutter camera captures moving objects, you get distorted pictures.
  2. If you stop moving and then capture the pictures with the Ricoh Theta V, you can avoid the jello effect.
  3. This is the popular way to color a LiDAR point cloud; NavVis and GeoSLAM, for example, all stop moving to capture the pictures!
  4. Some companies use the Ladybug5+ and do not stop moving to capture the pictures!

Very insightful! Thank you for the contribution.
But let's assume we can stop to take pictures: how would you sync the devices, the two Ricohs and the LiDAR (even in post-processing)?


@manifold, can you share which method you used for colouring the LiDAR points? Did you use LiDAR-to-image projection like the one mentioned in this? If so, how did you calibrate the Ricoh Theta V camera to get its parameters?
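For context, by LiDAR-to-image projection I mean something like the sketch below, for an ideal equirectangular image; the extrinsics R and t would come from exactly the calibration I am asking about, and the axis convention depends on the stitcher:

```python
# Sketch: project LiDAR points onto an ideal equirectangular image.
# R (3x3) and t (3,) map LiDAR coordinates into the camera frame and
# must come from an extrinsic calibration; an ideal equirectangular
# image needs no intrinsic matrix beyond its pixel size.
import numpy as np

def project_equirect(points_lidar, R, t, width, height):
    """Return (u, v) pixel coordinates for each 3D LiDAR point."""
    p = points_lidar @ R.T + t                      # camera frame
    r = np.linalg.norm(p, axis=1)                   # range, assumed > 0
    lon = np.arctan2(p[:, 0], p[:, 2])              # azimuth in [-pi, pi]
    lat = np.arcsin(p[:, 1] / r)                    # elevation
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return u, v
```

The colour of each point would then just be the image value at (u, v); the hard part is getting R and t.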


Hi Mehdi, were you able to get it working? I am looking to do something similar with still images from the Ricoh Z1, synchronised with LiDAR.

Hello, can you please share with us how you got the Ricoh V camera's intrinsic parameters?

What does “intrinsic parameters” mean? I am not technical in this area and want help understanding what information you are looking for. Thanks.

Hello Craig,

Thank you for your reply. The intrinsic parameters represent the optical center and focal length of the camera. Usually they are expressed as a 3x3 matrix obtained by calibration. Did anyone manage to calibrate it?
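For reference, in the usual pinhole convention it looks like this (the values are placeholders, not measured Theta parameters; note that the Theta's equirectangular output does not follow the pinhole model, which is part of the difficulty):

```python
# The standard pinhole intrinsic matrix, for reference.
# fx, fy: focal lengths in pixels; cx, cy: optical center in pixels.
# The values below are placeholders, not measured Theta V parameters.
import numpy as np

fx, fy, cx, cy = 800.0, 800.0, 960.0, 540.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
```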

Thank you