I have camera positions and rotations from a camera alignment (4x4 transformation matrices). Visualizing them with Open3D works fine. The following code produces the scene below, with the object at the center of the cameras; the RGB axes mark the origin of the scene.
import open3d as o3d
import numpy as np
import copy
import os
CAMERAS_DATA = os.path.abspath('cameras.npy')
MESH_FILE = os.path.abspath('mesh.obj')
cameras_data = np.load(CAMERAS_DATA, allow_pickle=True)
camera_previews = []
for camera in cameras_data:
    preview = o3d.geometry.TriangleMesh.create_cone(radius=2, height=4)
    preview = copy.deepcopy(preview).transform(camera)
    camera_previews.append(preview)
mesh = o3d.io.read_triangle_mesh(MESH_FILE)
axis = o3d.geometry.TriangleMesh.create_coordinate_frame()
axis.scale(10.0, center=(0, 0, 0))
o3d.visualization.draw_geometries([mesh, axis] + camera_previews)
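For debugging, the camera center and viewing direction can be read directly off each 4x4 matrix; a minimal sketch, assuming the matrices are camera-to-world and that the camera looks along its local +Z axis (both are assumptions, not confirmed by the data):

```python
import numpy as np

# Hypothetical camera-to-world matrix: identity rotation, moved to z = 10.
camera = np.eye(4)
camera[:3, 3] = [0.0, 0.0, 10.0]

# The camera center is the translation column of a camera-to-world matrix.
center = camera[:3, 3]

# Viewing direction: the local +Z axis rotated into world space
# (assumes an OpenCV/Open3D-style "look along +Z" convention).
view_dir = camera[:3, :3] @ np.array([0.0, 0.0, 1.0])
```

Printing `center` and `view_dir` for every camera should show all view directions pointing at the object if the convention assumption holds.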
Now I want to import those cameras into MeshLab for further processing. For that purpose I've written a script that creates a MeshLab project file (.mlp). You can find the code in the repository linked to this question, but it isn't important for the issue.
Opening this generated project.mlp file misplaces the cameras, as you can see in the image below:
It seems as if the cameras are mirrored on the Z-axis and rotated by 180 degrees. Why does that happen?
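One common source of such a mismatch is a differing camera-space axis convention between tools; a minimal sketch of converting between them, on a hypothetical identity camera matrix (the diag(1, -1, -1, 1) flip is an assumption, not a confirmed cause):

```python
import numpy as np

# Hypothetical convention fix: if one tool expects the camera to look along
# -Z while the other uses +Z, the matrices differ by a 180-degree rotation
# about the camera's X axis, i.e. flipping the local Y and Z axes.
flip_yz = np.diag([1.0, -1.0, -1.0, 1.0])

camera = np.eye(4)            # hypothetical camera-to-world matrix
converted = camera @ flip_yz  # right-multiply: the flip acts in camera space

# The camera position is unchanged; only the orientation is flipped.
```

Right-multiplying keeps the translation column intact, which matches the symptom that the orientations (not the positions) look mirrored.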
You can try it yourself by cloning this repository:
git clone https://github.com/flolu/meshlab-camera-transformation
conda env create -n meshlab-camera-transformation -f conda.yml
conda activate meshlab-camera-transformation
python visualize_cameras.py (visualize the correct Open3D scene)
python main.py (generate the MeshLab project file)
You can look at the cameras by following these instructions:
- Open MeshLab
- File ➝ Open project... ➝ project.mlp
- Render ➝ Show Camera
- Scale the cameras by opening the "Show Camera" dialog at the bottom right of the screen, setting "Camera Scale Method" to "Fixed Factor", and entering 0.005 as "Scale Factor"
- Zoom out
- Render ➝ Show Axis
I've tried to transform the cameras back to their initial positions and rotations like this:
import math
import numpy as np

for transformation_matrix in camera_transformation_matrices:
    # flip z value of the camera position
    transformation_matrix[2, 3] *= -1
    # swap the x and z rows of the rotation
    swap_x_and_z = np.array([[0, 0, 1],
                             [0, 1, 0],
                             [1, 0, 0]])
    transformation_matrix[:3, :3] = np.matmul(
        swap_x_and_z, transformation_matrix[:3, :3])
    # rotate -90 degrees about the y axis through the camera position
    rotate_90_around_y_axis = np.array(
        [[math.cos(-math.pi / 2), 0, math.sin(-math.pi / 2)],
         [0, 1, 0],
         [-math.sin(-math.pi / 2), 0, math.cos(-math.pi / 2)]])
    T = np.eye(4)
    T[:3, :3] = rotate_90_around_y_axis
    T[:3, 3] = transformation_matrix[:3, 3] - \
        np.matmul(rotate_90_around_y_axis, transformation_matrix[:3, 3])
    transformation_matrix[:4, :4] = np.matmul(T, transformation_matrix)
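For comparison: the code above edits the translation and the rotation in separate steps. A single rigid world-space transform applied to the full 4x4 matrix (sketched here on a hypothetical camera, not the project's data) moves position and orientation together, so all cameras keep their alignment relative to each other:

```python
import math
import numpy as np

# One world-space rigid transform, left-multiplied onto the whole matrix.
angle = -math.pi / 2
T = np.eye(4)
T[:3, :3] = [[math.cos(angle), 0, math.sin(angle)],
             [0, 1, 0],
             [-math.sin(angle), 0, math.cos(angle)]]

camera = np.eye(4)
camera[:3, 3] = [1.0, 0.0, 0.0]  # hypothetical camera position
rotated = T @ camera             # position AND orientation move together

# The result is still rigid: the rotation block stays orthonormal.
```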
The result in MeshLab looks really promising:
But when I look through the cameras by clicking the "Show Current Raster Mode" button and switching between the images on the right, the pictures are not aligned with the mesh. In fact, the mesh is not visible at all in most of the pictures. That doesn't make sense, since the cameras all point towards the mesh.
You can try it yourself by running python failed_try.py and opening the generated project_failed_try.mlp file in MeshLab.