I am working on a depth detection project involving a stereo pair of 360° cameras (two THETA V) and a LiDAR.
As you may know, one crucial step for sensor fusion is making the sensors time-synchronized. I did everything I could using ROS; the output is now synchronized according to ROS, but it is still not enough.
Some reading leads me to think that synchronizing all the hardware to the same external clock would give a better result. What do you think?
Also, and most importantly, is it even possible on a RICOH THETA V to sync its clock with an external one (a GPS clock, for instance)?
Then, connect a single RICOH THETA Z1 or V (not as a stereo pair) for the 360° view and archival documentation.
There are no success reports that I know of for synchronizing two THETAs at the level needed for robotics vision research, and no reports of success synchronizing with an external clock. See the thread above.
Thank you very much for the response.
Let me add that I do not necessarily need real-time synchronization for true sensor fusion. It would be just as good if I could synchronize them after recording. Does that make it any easier?
Hello again @craig, sorry for not being precise.
Here is what I want as a final output: I want to build a DL model where each sample consists of two 360° camera images (I am using two RICOH THETA V) with a LiDAR scan as the ground truth.
So each sample (the two images) and its ground-truth output should be time-synchronized.
Thus, the most important thing is to have the triplet synchronized for my model; it does not matter whether that happens in real time or in post-processing.
At the moment I use the ROS synchronizer (and OpenCV, with the cameras in streaming mode) to sync them in real time, using the bottleneck frame rate: the LiDAR runs at 10 fps, and whenever a LiDAR frame arrives I take the two closest camera images.
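For reference, a minimal sketch of this kind of setup with message_filters.ApproximateTimeSynchronizer (the topic names, message types, and slop value below are placeholders, not necessarily my exact configuration):

```python
import rospy
import message_filters
from sensor_msgs.msg import Image, PointCloud2

def triplet_callback(img_left, img_right, cloud):
    # One training sample: two 360° images plus the LiDAR scan closest in time.
    rospy.loginfo("triplet stamps: %.3f / %.3f / %.3f",
                  img_left.header.stamp.to_sec(),
                  img_right.header.stamp.to_sec(),
                  cloud.header.stamp.to_sec())

rospy.init_node("triplet_sync")

# Placeholder topic names -- adjust to the actual THETA and Ouster drivers.
left = message_filters.Subscriber("/theta_left/image_raw", Image)
right = message_filters.Subscriber("/theta_right/image_raw", Image)
lidar = message_filters.Subscriber("/os_cloud_node/points", PointCloud2)

# slop = maximum allowed stamp difference in seconds; the 10 fps LiDAR is the bottleneck.
sync = message_filters.ApproximateTimeSynchronizer([left, right, lidar],
                                                   queue_size=10, slop=0.05)
sync.registerCallback(triplet_callback)
rospy.spin()
```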
For now my synchronized timestamps look like this:
Even though they look synchronized, there is a problem: there is a time difference between the two camera frames (which are approximately well synchronized with each other) and the LiDAR, because I am synchronizing on the time the data arrives on disk rather than the actual capture time:
So now I am stuck at this point, and I was wondering whether there is a simpler way to sync them in post-processing (after recording), since in my case there is no need to sync them in real time.
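What I have in mind for the post-processing matching is roughly the following (a sketch, assuming I can obtain per-frame capture timestamps for both cameras expressed in the same clock as the LiDAR):

```python
import numpy as np

def match_nearest(lidar_stamps, cam_stamps, max_diff=0.05):
    """For each LiDAR timestamp, return the index of the closest camera frame,
    or -1 if the nearest frame is farther away than max_diff seconds.
    Assumes both arrays are sorted and expressed in the same time base."""
    idx = np.searchsorted(cam_stamps, lidar_stamps)
    idx = np.clip(idx, 1, len(cam_stamps) - 1)
    left, right = cam_stamps[idx - 1], cam_stamps[idx]
    idx -= (lidar_stamps - left) < (right - lidar_stamps)   # pick the nearer neighbour
    diff = np.abs(cam_stamps[idx] - lidar_stamps)
    return np.where(diff <= max_diff, idx, -1)

# Example: 10 fps LiDAR matched against ~30 fps camera frames with a small offset.
lidar_t = np.arange(0.0, 2.0, 0.1)
cam_t = np.arange(0.0, 2.0, 1.0 / 30) + 0.004
print(match_nearest(lidar_t, cam_t))
```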
PS: I am using a Jetson AGX Xavier, an Ouster OS1 LiDAR, and two RICOH THETA V for this project.
And so a question would be: does the metadata of the RICOH files correspond to the actual capture time of the scene, and how precise is it? If the date is only precise to the second, it is not very useful, since many frames would share the same date…
Thank you for your information. I need to check with people on this.
The time data for 360 videos is not standardized. I’m not sure how to extract the timestamp for different clock cycle samples and match it with the frame. I’ll ask around.
I’m just guessing that the data exists because there is a getSampleTime in Android. The internal OS of the camera is Android.
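On the desktop side, one rough way to see what timing information a recorded file actually carries is to combine the container's creation_time tag with the per-frame presentation timestamps, for example with ffprobe (an untested sketch; whether the THETA writes creation_time with sub-second precision is exactly the open question):

```python
import json
import subprocess
from datetime import datetime, timedelta

def approximate_frame_times(path):
    """Approximate absolute frame times as creation_time + per-frame PTS.
    Only useful if the camera writes a sufficiently precise creation_time."""
    fmt = json.loads(subprocess.check_output([
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-show_entries", "format_tags=creation_time", path]))
    # Typical tag format: 2021-01-01T12:00:00.000000Z (some files omit the fraction).
    t0 = datetime.strptime(fmt["format"]["tags"]["creation_time"],
                           "%Y-%m-%dT%H:%M:%S.%fZ")
    frames = json.loads(subprocess.check_output([
        "ffprobe", "-v", "quiet", "-print_format", "json",
        "-select_streams", "v:0",
        "-show_entries", "frame=best_effort_timestamp_time", path]))
    return [t0 + timedelta(seconds=float(f["best_effort_timestamp_time"]))
            for f in frames["frames"]]
```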
If you want to synchronize a LiDAR with a panoramic camera, one example of this kind of panoramic camera is the Ladybug5+. If you have enough money, you can try it.
@manifold can you please be more precise about the two implementations you mentioned?
Why a Ladybug5+ and not a RICOH THETA V?
And what do you mean by having to stop to capture the pictures?
The Ladybug5+ has a global shutter, but the RICOH THETA V has a rolling shutter. Maybe you have heard of the jelly effect of rolling-shutter cameras when they capture moving objects.
That is, you can get distorted (skewed) pictures when you use rolling-shutter cameras.
If you stop moving while capturing pictures with the RICOH THETA V, you can avoid the jelly effect.
This is the popular way to colorize a LiDAR point cloud in systems such as NavVis and GeoSLAM: they all stop moving to capture the pictures!
Some companies use the Ladybug5+ and do not stop moving to capture the pictures!
Very insightful! Thank you for the contribution.
But let's assume we can stop to take the pictures; how would you then sync the two RICOHs and the LiDAR (even in post-processing)?
Can you share which method you used for colouring the LiDAR points? Did you use LiDAR-to-image projection like the one mentioned in this? If so, how did you calibrate the RICOH THETA V camera to get its parameters?
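To make the question concrete, by LiDAR-to-image projection I mean roughly this kind of mapping into the equirectangular frame (a sketch; the axis convention and the extrinsic transform are assumptions and depend on your own calibration):

```python
import numpy as np

def project_to_equirect(points_xyz, width, height):
    """Project 3D points (already in the camera frame) onto an equirectangular image.
    Convention assumed here: x right, y down, z forward; adjust to your calibration."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    r = np.linalg.norm(points_xyz, axis=1)
    lon = np.arctan2(x, z)            # azimuth in [-pi, pi]
    lat = np.arcsin(y / r)            # elevation in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return u, v

# Usage: first transform the LiDAR points into the camera frame with the extrinsics,
# e.g. points_cam = (R @ points_lidar.T).T + t, then sample the image colours at (u, v).
```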
Thank you for your reply. The intrinsic parameters represent the optical center and focal length of the camera; usually they form a 3x3 matrix obtained by calibration. Did anyone manage to calibrate it?
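For reference, the standard pinhole calibration recipe with OpenCV looks roughly like this (a sketch; it assumes perspective images of a checkerboard, so as far as I understand it only applies to undistorted/perspective crops of the THETA output, not to the raw equirectangular frames):

```python
import glob
import cv2
import numpy as np

# Checkerboard with 9x6 inner corners (an assumption -- use your own pattern size).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib/*.jpg"):          # hypothetical folder of calibration shots
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(K)
```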