
realsense 2 aligned depth and point cloud in color camera coordinate #277

Closed
lz89 opened this issue Nov 17, 2017 · 19 comments

Comments

@lz89

lz89 commented Nov 17, 2017

System Configuration

Version Your Configuration
Operating System Ubuntu 16.04.3
Kernel 4.10.0-37
ROS kinetic
ROS RealSense v2.0.1
librealsense v2.8.0
Camera Type-Firmware SR300-3.17.0.0

I am using librealsense-2 and realsense_ros_camera development branch (with RGB point cloud).
I wonder if there is a way to enable the aligned depth to be published to ROS.
Currently, the RGB point cloud (/camera/points) is an organised point cloud stored (arranged) in the IR image order. This is because, in realsense_camera_node.cpp, for each pixel in the IR image, the node reprojects the pixel to 3D, transforms the point into the color camera coordinate frame, and then finds its color by projecting the point onto the color image. More details here.

I would like to have an organised point cloud arranged in the color image, i.e. for each pixel in the color image, I know its depth and the corresponding 3D point.

@icarpis
Contributor

icarpis commented Nov 21, 2017

Hello @lz89, we will publish aligned_depth_to_color topic in the next RealSense ROS versions.

@akashjinandra

Is there an update to this?

@icarpis
Contributor

icarpis commented Jan 18, 2018

This feature is still in-progress on my repository.
I would be glad to get your feedback on it.
In order to get the aligned topic please run the frames_aligned_to_depth launch file.

@icarpis
Contributor

icarpis commented Jan 25, 2018

I merged the alignment feature to the development branch.
See here how to enable this feature.

@icarpis icarpis closed this as completed Jan 31, 2018
@icarpis
Contributor

icarpis commented Jan 31, 2018

Provided by ROS wrapper 2.0.2

@lz89
Author

lz89 commented Feb 2, 2018

@icarpis The new version (2.0.2) only provide the depth aligned to color but the point cloud is still in depth camera coordinate. I would like to have the point cloud in the color camera coordinate. That means for each pixel in the color image, I have the corresponding 3D position (in camera coordinate).

@lz89 lz89 changed the title realsense 2 aligned depth and point cloud based on color image realsense 2 aligned depth and point cloud in color camera coordinate Feb 2, 2018
@icarpis
Contributor

icarpis commented Feb 5, 2018

@lz89, the point cloud provided by the rs_rgbd launch file is created from the aligned-depth-to-color topic and the color topic.
The aligned-depth-to-color topic matches the color topic exactly, so they share the same coordinate system, and therefore the point cloud topic is in the color coordinate system.

In order to retrieve the 3D position of each color pixel, I would suggest going over the aligned-depth-to-color pixels and, for each pixel whose depth is nonzero, taking its corresponding pixel in the color image. See the code snippet below:

const int size_of_color_pixel = 3;      // RGB8: 3 bytes per color pixel
const uint16_t* aligned_depth_to_color; // Z16: one 16-bit depth value per pixel
const uint8_t* color;
for (int y = 0; y < depth_frame_height; ++y)
{
	for (int x = 0; x < depth_frame_width; ++x)
	{
		// Read the full 16-bit depth value; 0 means no depth data for this pixel
		uint16_t depth_value = aligned_depth_to_color[y * depth_frame_width + x];
		if (depth_value != 0)
		{
			// The aligned frames share dimensions, so the same (x, y) indexes both images
			int color_pixel_position = (y * depth_frame_width + x) * size_of_color_pixel;
			const uint8_t* corresponding_color_pixel = &color[color_pixel_position];
		}
	}
}

@lz89
Author

lz89 commented Feb 13, 2018

@icarpis Thank you for the clarification of the rs_rgbd launch file.
However, when I subscribe to the topic /camera/depth_registered/points, I find that the x, y, z values of the points in the cloud are strange. They look like neither meters nor millimeters. I wonder if the cloud published on this topic takes the depth scale into account. In RViz, I can visualise the cloud correctly, but the scale is wrong: roughly measured in RViz, a value of 1.15 corresponds to ~145 mm.

Following is how I get the point cloud in the ROS callback:

// Store the latest cloud under a lock so other threads can read it safely
void Listener::pointCloudCallback(const sensor_msgs::PointCloud2ConstPtr &msg) {
    std::lock_guard<std::mutex> lock(m_mx_cloud);
    pcl::fromROSMsg(*msg, *m_cloud);  // convert the ROS message into a PCL cloud
}

@icarpis
Contributor

icarpis commented Feb 15, 2018

The point cloud is published by the depth_image_proc/point_cloud_xyzrgb nodelet.
According to the RGBD nodelet implementation, it looks like the depth values (z) are converted from mm into meters.
Could you please elaborate on how you measure the distance in RViz?

@lz89
Author

lz89 commented Feb 15, 2018

@icarpis I put a plane in the view and measured the distance from the camera to the plane (the plane is placed almost parallel to the camera's X-Y plane). Then I added an axis (coordinate frame) in RViz whose length I can adjust, so that the Z axis intersects the plane.

I checked the RGBD nodelet implementation, and I see that it simply converts to meters by multiplying by 0.001 (link). But if the depth in the depth image hasn't been multiplied by the depth_scale of the specific camera, the RGBD nodelet would not know this, right?

@icarpis
Contributor

icarpis commented Feb 22, 2018

@lz89, the depth_scale of D400 devices is 0.001.
The RGBD nodelet assumes that the depth data hasn't been multiplied by the depth_scale and therefore converts the values from millimeters to meters by multiplying the raw data by 0.001.

@lz89
Author

lz89 commented Feb 22, 2018

@icarpis Thanks for the reply. So in the case of the SR300, what shall I do to correct this?

@icarpis
Contributor

icarpis commented Feb 22, 2018

In the case of the SR300, I think you need to multiply the depth data by 0.1, because its depth scale is 0.0001.

@lz89
Author

lz89 commented Feb 22, 2018

@icarpis I once read it from the librealsense API, which gave me a value like 0.0025 (or 0.00025). If the RGBD nodelet assumes the scaling is 0.001 (while the actual one is not), wouldn't the X and Y values be scaled as well?

@icarpis
Contributor

icarpis commented Feb 22, 2018

No, the depth_scale doesn't affect the X and Y axis values, only the Z axis values.

@tolgasaglik

@icarpis Hello, I downloaded your repo to get the z coordinate values from the camera. Which topic do I need to subscribe to in order to get the coordinate values of pixels?

@chongyi-zheng

@lz89 Any suggestion on how to modify the depth scale in realsense-ros?

@doronhi
Contributor

doronhi commented Aug 28, 2019

You can't change the depth scale in realsense-ros. According to the ROS standard, the depth is in mm.
Suppose we allowed modifying the depth scale; we would have ended up scaling it back to mm anyway.
As a side note, I think that given the mechanical constraints, i.e. the resolution of the sensors, the distance between them, and the scale of the problem in general, there is no gain in changing the scale of the depth.

@chongyi-zheng

chongyi-zheng commented Aug 28, 2019

You can't change the depth scale in realsense-ros. According to the ROS standard, the depth is in mm.
Suppose we allowed modifying the depth scale; we would have ended up scaling it back to mm anyway.
As a side note, I think that given the mechanical constraints, i.e. the resolution of the sensors, the distance between them, and the scale of the problem in general, there is no gain in changing the scale of the depth.

Oh, thanks for the notification! Actually, I'm working with a D435 mounted on the wrist of a UR5. It turns out that launching rs_rgbd.launch gives me an aligned point cloud. I also find that there are some NaN points in the aligned point cloud, which I have to remove manually.
