Modeling from 360 Degree Camera Images - Using Blender


Originally posted by @yukimituki11 on Qiita (in Japanese). I have used Google Translate and my own Japanese language skills to translate and make the information available here, since I believe modeling and measurement are important in construction and real estate 360 degree applications. I have not tested this type of modeling personally. Yet! The article uses Blender, a free and open-source 3D computer graphics tool set used for creating animated films, visual effects, art, 3D-printed models, motion graphics, interactive 3D applications, virtual reality, and, formerly, video games; it is available for Linux, macOS, Windows, BSD, and Haiku.

Spherical cameras like the RICOH THETA allow you to shoot 360 degrees (up, down, left, and right) all at once.

I would like to explain how to roughly reproduce the shooting space using Blender’s shader.


Object-based coordinates and coordinate transformations

The shooting data from a 360 degree camera can be obtained as image data in an equirectangular projection.

Also, Blender's shader nodes have a way to obtain coordinates relative to a specific object (the Object output of the Texture Coordinate node).

In other words, if you can successfully calculate the shooting coordinates and object positions, you can recreate the shooting space.

The formula to convert an arbitrary point (x, y, z) in space to equirectangular coordinates [u, v] is:

u = atan(x/y)
v = atan(z/sqrt(x^2+y^2))
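
As a quick check outside Blender, the same conversion can be written in plain Python. This is only a sketch; `atan2` is used instead of the bare `atan(x/y)` form above so that the quadrant of the horizontal angle comes out correctly.

```python
import math

def xyz_to_equirect(x, y, z):
    """Angles of a point (x, y, z) relative to the camera, in radians:
    u is the horizontal (azimuth) angle, v the vertical (elevation) angle."""
    u = math.atan2(x, y)                 # equivalent to atan(x / y), with quadrant handling
    v = math.atan2(z, math.hypot(x, y))  # atan(z / sqrt(x^2 + y^2))
    return u, v
```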

If you reproduce this with Blender's shader nodes, it looks like this:

u and v are divided by 360 degrees and 180 degrees respectively, converted to radians (2π and π). This is because the shader works with angles in radians, while texture coordinates are expressed as values between 0.0 and 1.0.

The 0.5 subtracted from the y coordinate is an offset so that the camera position corresponds to the center of the image.
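
For readers who prefer to build the node graph by script, here is a minimal bpy sketch of the setup described above. The Empty name "CameraEmpty" and the image name "theta_shot.jpg" are placeholders, and the 0.5 offset is applied here to the v coordinate; since the Image Texture node's default Repeat extension wraps values, subtracting or adding 0.5 samples the same texels.

```python
import math
import bpy

# Placeholder names -- replace with your own reference Empty and equirectangular image.
camera_empty = bpy.data.objects["CameraEmpty"]
image = bpy.data.images["theta_shot.jpg"]

mat = bpy.data.materials.new("EquirectProjection")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

def math_node(op, a, b=None, const=None):
    """Add a Math node with operation `op`; `a`/`b` are sockets, `const` a fixed second value."""
    n = nodes.new("ShaderNodeMath")
    n.operation = op
    links.new(a, n.inputs[0])
    if b is not None:
        links.new(b, n.inputs[1])
    elif const is not None:
        n.inputs[1].default_value = const
    return n.outputs["Value"]

# Coordinates of the shaded point relative to the Empty at the camera position.
coord = nodes.new("ShaderNodeTexCoord")
coord.object = camera_empty
sep = nodes.new("ShaderNodeSeparateXYZ")
links.new(coord.outputs["Object"], sep.inputs["Vector"])
x, y, z = sep.outputs["X"], sep.outputs["Y"], sep.outputs["Z"]

# u = atan2(x, y) / (2*pi): horizontal angle, wrapped onto 0..1 by the Repeat extension.
u = math_node('DIVIDE', math_node('ARCTAN2', x, y), const=2 * math.pi)

# v = atan2(z, sqrt(x^2 + y^2)) / pi, then the 0.5 offset described in the article.
r = math_node('SQRT', math_node('ADD',
                                math_node('MULTIPLY', x, x),
                                math_node('MULTIPLY', y, y)))
v = math_node('SUBTRACT',
              math_node('DIVIDE', math_node('ARCTAN2', z, r), const=math.pi),
              const=0.5)

# Feed (u, v) into the equirectangular image and emit it unshaded.
uv = nodes.new("ShaderNodeCombineXYZ")
links.new(u, uv.inputs["X"])
links.new(v, uv.inputs["Y"])
tex = nodes.new("ShaderNodeTexImage")
tex.image = image
links.new(uv.outputs["Vector"], tex.inputs["Vector"])
emit = nodes.new("ShaderNodeEmission")
links.new(tex.outputs["Color"], emit.inputs["Color"])
out = nodes.new("ShaderNodeOutputMaterial")
links.new(emit.outputs["Emission"], out.inputs["Surface"])
```

After running a sketch like this, assign the resulting material to whatever mesh you project onto (cylinder, sphere, floor, walls) with `obj.data.materials.append(mat)`.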

In the capture, the cylinder's faces are flipped (normals reversed) and the back faces are hidden.
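
If you want to script that part of the setup as well, a rough bpy sketch is below. The cylinder size is an arbitrary example, and the "EquirectProjection" material name is carried over from the previous snippet: it adds a cylinder, flips its normals so the projected image is seen from the inside, and enables backface culling so the back side is hidden.

```python
import bpy

# Example-sized cylinder around the shooting position; adjust radius/depth to the room.
bpy.ops.mesh.primitive_cylinder_add(radius=3.0, depth=3.0, location=(0.0, 0.0, 1.5))
cyl = bpy.context.active_object
cyl.data.materials.append(bpy.data.materials["EquirectProjection"])

# Flip the normals so the projection faces inward...
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.flip_normals()
bpy.ops.object.mode_set(mode='OBJECT')

# ...and hide the back faces, as in the capture.
bpy.data.materials["EquirectProjection"].use_backface_culling = True
```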

Modeling

When roughly reproducing the shooting conditions, it may be sufficient to simply project onto a cylinder or sphere.

I took advantage of one of Blender's strengths: you can edit the model without changing the shading setup.

I would like to give an example of how to create a more detailed model.

First, adjust the Empty object that we set earlier as a reference so that it matches the height of the camera at the time of shooting.

I tried creating a surface that matches the ground position.
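
A rough script version of those two adjustments might look like this; the Empty name, camera height, and plane size are placeholder values.

```python
import bpy

# Raise the reference Empty to the camera height measured at the shooting location.
camera_empty = bpy.data.objects["CameraEmpty"]   # placeholder name
camera_empty.location.z = 1.6                    # example height in metres

# Add a plane at floor level (z = 0) and project the photo onto it
# with the same material as before.
bpy.ops.mesh.primitive_plane_add(size=10.0, location=(0.0, 0.0, 0.0))
floor = bpy.context.active_object
floor.name = "Floor"
floor.data.materials.append(bpy.data.materials["EquirectProjection"])
```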


The general shape of the floor seems to match.

You can create a wall by adding a vertical, flat polygon at its boundary with the floor. In this way, it is possible to recreate the space, including areas that were not measured, based on known data about the shooting location.


When creating the video at the beginning, the wall positions were known from the floor plan, so I was able to place them. Finer details are created by adding edges to the walls and extruding them.

Usage Example

This example is from a previous location I was considering moving to, and I used it to see what would happen if I placed furniture there.

By combining the results from multiple cameras, I was able to model a wider area than in the examples shown so far in about two hours.

Blender also has an add-on that displays the view in a VR headset, so I also checked how the furniture would fit in VR.

I also used this method to photograph a built-up area and recreate how it would look from the ground.

Although this is a simple method, I think it can be applied effectively.

Furthermore, due to their optical characteristics, images captured by a 360 degree camera are not a perfectly accurate equirectangular (equidistant cylindrical) projection.

I created calibration data and used it for work. Please refer to the previous article for an overview of how to create and use calibration data: Blenderでレンズの歪み補正データ(STmap)を作成してみる ("Creating lens distortion correction data (STmap) in Blender") #Blender - Qiita

I hope it will be of some help!


That's a fascinating example from yukimituki11.
Consider asking him to join this community and then giving him a community thanks award. It looks like he has posted 67 times on Qiita.

Good idea! I invited them to theta360.guide; we'll see if they join up!

I did some experiments with photogrammetry in January 2022. I’d like to learn more about this technique with blender.

Back in 2017, I was experimenting with putting 3D assets into 360 live streams.

I don't think Mothra was ever fully appreciated.

Do you remember this scene?

It’s still up here:

Unity WebGL Player | testImport

Do you remember our online hackathon using furniture in an SF office?



Is that Mothra in that second image?! :butterfly:

Yes, it's supposed to be Mothra. However, I was not able to find a 3D asset for Mothra, so I ended up using a moth.

The idea at the time was that 360 streaming video could be used in entertainment as well as industrial telepresence. Although the current market for streaming focuses on industrial control in dangerous conditions or some type of cost-savings for inspection or monitoring, at the time, we were hoping for the market for entertainment apps to expand.

The two likely scenarios for a live stream were: 1) game; 2) entertainment event (like a museum or a concert)

At the time, there were things like Pokemon, and we were hoping to use live streaming from a 360 camera in a location-based game.

If you recall, there is also a 360 hunting game:

theta-plugins/plugins/com.theta360.hunting360 at main · ricohapi/theta-plugins · GitHub
