realsense 2 aligned depth and point cloud in color camera coordinate #277
Comments
Hello @lz89, we will publish an aligned_depth_to_color topic in the next RealSense ROS versions.
Is there an update on this?
This feature is still in progress on my repository.
I merged the alignment feature into the development branch.
Provided by ROS wrapper 2.0.2.
@icarpis The new version (2.0.2) only provides the depth aligned to color, but the point cloud is still in the depth camera coordinate frame. I would like to have the point cloud in the color camera coordinate frame, i.e. for each pixel in the color image I have the corresponding 3D position (in camera coordinates).
@lz89, the point cloud provided by the rs_rgbd launch file is created by the depth_image_proc/point_cloud_xyzrgb nodelet. In order to retrieve the 3D position of each color pixel, I would suggest you go over the aligned_depth_to_color image, for example:

```cpp
const int size_of_depth_pixel = 2; // depth pixel size in bytes (16-bit depth)
const int size_of_color_pixel = 3; // color pixel size in bytes (RGB8)
uint8_t* aligned_depth_to_color;   // buffer of the depth image aligned to the color frame
uint8_t* color;                    // buffer of the color image

for (int y = 0; y < depth_frame_height; ++y)
{
    for (int x = 0; x < depth_frame_width; ++x)
    {
        int depth_pixel_position = (y * depth_frame_width + x) * size_of_depth_pixel;
        // Read the full 16-bit depth value and skip pixels with no depth data.
        uint16_t depth_value =
            *reinterpret_cast<const uint16_t*>(&aligned_depth_to_color[depth_pixel_position]);
        if (depth_value != 0)
        {
            int color_pixel_position = (y * depth_frame_width + x) * size_of_color_pixel;
            const uint8_t* corresponding_color_pixel = &color[color_pixel_position];
        }
    }
}
```
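To turn such an aligned depth value into an actual 3D point in the color camera frame, librealsense2's deprojection helper can be used. The sketch below is not from the thread; it assumes `color_intrinsics` and `depth_scale` have already been queried from the color stream profile and the depth sensor.

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h> // rs2_deproject_pixel_to_point
#include <array>
#include <cstdint>

// Deproject color pixel (x, y) with its aligned raw depth value into a 3D
// point expressed in the color camera coordinate frame.
std::array<float, 3> color_pixel_to_point(const rs2_intrinsics& color_intrinsics,
                                          float depth_scale,
                                          int x, int y, uint16_t raw_depth)
{
    float pixel[2] = { static_cast<float>(x), static_cast<float>(y) };
    float point[3];
    // Convert the raw depth units to meters before deprojecting.
    rs2_deproject_pixel_to_point(point, &color_intrinsics, pixel,
                                 raw_depth * depth_scale);
    return { point[0], point[1], point[2] };
}
```

Because the depth image is already aligned to the color frame, the color stream intrinsics are the right ones to deproject with.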
@icarpis Thank you for the clarification of the rs_rgbd launch file. Following is how I get the point cloud in the ROS callback:
The point cloud is published by the depth_image_proc/point_cloud_xyzrgb nodelet.
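For reference, a minimal subscriber sketch for that point cloud (not from the thread; it assumes the rs_rgbd default output topic /camera/depth_registered/points and uses PCL conversions) could look like this:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Callback receiving the organized, color-registered point cloud.
void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
    pcl::PointCloud<pcl::PointXYZRGB> cloud;
    pcl::fromROSMsg(*msg, cloud);

    // The cloud is organized: cloud.at(u, v) is the 3D point (in the color
    // optical frame) that corresponds to color pixel (u, v).
    const pcl::PointXYZRGB& p = cloud.at(cloud.width / 2, cloud.height / 2);
    ROS_INFO("Center pixel: x=%f y=%f z=%f", p.x, p.y, p.z);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "cloud_listener");
    ros::NodeHandle nh;
    // Topic name assumed from the rs_rgbd defaults; adjust to your launch configuration.
    ros::Subscriber sub =
        nh.subscribe("/camera/depth_registered/points", 1, cloudCallback);
    ros::spin();
    return 0;
}
```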
@icarpis I put a plane in the view and measured the distance from the camera to the plane (the plane is placed almost parallel to the camera X-Y plane). Then I added an axis (coordinate frame) in RViz and adjusted its length so that the Z axis intersects the plane. I checked the RGBD nodelet implementation and see that it simply converts to meters by multiplying by 0.001 (link). But if the depth values in the depth image haven't been multiplied by the depth_scale of the specific camera, the RGBD nodelet would not know this, right?
@lz89, the
@icarpis Thanks for the reply. So in the case of the SR300, what shall I do to correct this?
In the case of the SR300, I think that you need to multiply the depth data by 0.1 because the
@icarpis I once read it from the librealsense API, which gave me a value like 0.0025 (or 0.00025). If the RGBD nodelet assumes the scaling is 0.001 (while the actual one is not), wouldn't that make the X and Y values scaled as well?
No, the depth_scale doesn't affect the X and Y axis values, only the Z axis values.
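As a side note, the per-device depth scale can be read directly from librealsense2 rather than assumed; a minimal sketch, assuming a single connected camera and the default pipeline configuration:

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    // Start a pipeline with the default configuration and query the depth sensor.
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();
    rs2::depth_sensor sensor = profile.get_device().first<rs2::depth_sensor>();

    // Meters per raw depth unit: 0.001 for D400-series devices,
    // a different (smaller) value for the SR300.
    float depth_scale = sensor.get_depth_scale();
    std::cout << "depth scale: " << depth_scale << " m/unit" << std::endl;
    return 0;
}
```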
@icarpis Hello, I downloaded your repo to get the z coordinate value from the camera. Which topic do I need to subscribe to in order to get the coordinate values of pixels?
@lz89 Any suggestions on how to modify the
You can't change the depth scale in realsense-ros. According to the ROS standard, the depth is in mm.
Oh, thanks for the notification! Actually, I'm working with a D435 dynamically mounted on the wrist of a UR5. It turns out launching
System Configuration
I am using librealsense 2 and the realsense_ros_camera development branch (with RGB point cloud).
I wonder if there is a way to enable the aligned depth to be published to ROS.
Currently, the RGB point cloud (/camera/points) is an organised point cloud stored (arranged) in the IR image order. This is because in realsense_camera_node.cpp, for each pixel in the IR image, it reprojects the pixel to 3D, transforms the point into the color camera coordinate frame, and then finds its color by projecting the point onto the color image. More details here. I would like to have an organised point cloud arranged in the color image order, i.e. for each pixel in the color image, I know its depth and the corresponding 3D point.
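For reference, the per-pixel mapping described above corresponds roughly to the following librealsense2 helper calls (a sketch, not the wrapper's actual code; depth_intrin, color_intrin, depth_to_color_extrin and depth_scale are assumed to have been queried from the stream profiles and the depth sensor):

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>
#include <cstdint>

// For one depth/IR pixel: reproject to 3D, transform into the color camera
// frame, then project onto the color image to look up its color.
void depth_pixel_to_color(const rs2_intrinsics& depth_intrin,
                          const rs2_intrinsics& color_intrin,
                          const rs2_extrinsics& depth_to_color_extrin,
                          float depth_scale,
                          int x, int y, uint16_t raw_depth)
{
    float depth_pixel[2] = { static_cast<float>(x), static_cast<float>(y) };
    float depth_point[3];
    float color_point[3];
    float color_pixel[2];

    // 1. Depth pixel -> 3D point in the depth camera frame.
    rs2_deproject_pixel_to_point(depth_point, &depth_intrin, depth_pixel,
                                 raw_depth * depth_scale);
    // 2. 3D point -> color camera coordinate frame.
    rs2_transform_point_to_point(color_point, &depth_to_color_extrin, depth_point);
    // 3. 3D point -> pixel coordinates in the color image (to fetch its color).
    rs2_project_point_to_pixel(color_pixel, &color_intrin, color_point);
}
```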