RGB Alignment with Depth #284

Closed
Luxonis-Brandon opened this issue Dec 11, 2020 · 8 comments
Labels: enhancement (New feature or request)

@Luxonis-Brandon (Contributor) commented Dec 11, 2020:

Start with the why:

In cases where color is required for neural inference to work properly -and- the neural results need to be perfectly aligned with the depth data, alignment between the RGB and depth streams must exist.

A canonical example is doing semantic segmentation of color defects and needing to know their physical locations. In this case, color-based neural inference is needed (per-pixel, since the network is a semantic segmentor), and the depth information needs to be aligned to it per-pixel.

Move to the how:

The Myriad X already has the capability to perform the transform that aligns one camera's output to another's. What is needed is a system for doing the calibration that determines this transform matrix, including handling the differing resolutions of the color camera and the grayscale cameras (which are the source of the depth map).
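
For context, a minimal sketch of inspecting the stored RGB-right transform with the depthai-python API. Note these `readCalibration`/`CalibrationHandler` calls are from later 2.x releases, so treating them as available here is an assumption relative to this Gen1-era issue:

```python
# Hedged sketch: read the on-device calibration and print the RGB->right
# transform; API names follow depthai-python 2.x and may vary by version.
import depthai as dai

with dai.Device() as device:
    calib = device.readCalibration()
    # 4x4 homogeneous transform (rotation + translation, in cm) from the
    # RGB camera to the right grayscale camera, as stored on the EEPROM.
    extrinsics = calib.getCameraExtrinsics(dai.CameraBoardSocket.RGB,
                                           dai.CameraBoardSocket.RIGHT)
    # 3x3 intrinsic matrix of the RGB camera at a given output resolution.
    intrinsics = calib.getCameraIntrinsics(dai.CameraBoardSocket.RGB, 1920, 1080)
    print("RGB->right extrinsics:", extrinsics)
    print("RGB intrinsics @ 1080p:", intrinsics)
```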

Move to the what:

Provide a system for DepthAI users to calibrate the RGB camera against the right grayscale camera, and a mechanism to perform this alignment.

And for units with onboard cameras, improve our calibration system to perform this RGB-right calibration in the factory for all future production.

Luxonis-Brandon added the enhancement (New feature or request) and Gen1 labels on Dec 11, 2020
saching13 self-assigned this on Dec 11, 2020
@Luxonis-Brandon (Contributor, Author) commented Dec 11, 2020:

As an update, we have the calibration stage working now, and DepthAI units built after this writing have RGB-right calibration performed at the factory. An example with semantic segmentation is shown below:
[image: semantic segmentation overlaid on the RGB (left) and right grayscale (right) views]

The right grayscale camera is shown on the right and the RGB on the left. You can see the cameras have slightly different aspect ratios and fields of view, but the semantic segmentation is still properly applied.

@Luxonis-Brandon (Contributor, Author) commented:

We now have an initial prototype of this running on-device. See below for depth remapped to the views of the right, RGB, and left cameras:

View from the right: [image]

View from the RGB (center): [image]

View from the left: [image]

In each of these cases, the depth is mapped (or re-mapped) to be centered on the respective camera. The remaining step for RGB is then a warp of the depth to fit the intrinsics (and distortion, etc.) of the RGB camera module.
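
For reference, the per-pixel reprojection such a warp performs can be written as follows (notation is ours, not from the thread): each depth pixel is back-projected with the right camera's intrinsics, transformed through the right-to-RGB extrinsics, and re-projected with the RGB intrinsics.

```latex
% Back-project a right-camera pixel (u, v) with depth Z, transform it into
% the RGB camera's frame, and re-project it into the RGB image.
X_{right} = Z \, K_{right}^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}, \qquad
X_{rgb} = R \, X_{right} + t, \qquad
\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix} \sim K_{rgb} \, X_{rgb}
```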

This is working, but a crash is currently preventing it from being mainlined; we've been debugging it.

@Luxonis-Brandon (Contributor, Author) commented:

This was initially released in https://github.com/luxonis/depthai-python and https://github.com/luxonis/depthai-core (2.4+). A minimal usage sketch follows.
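
A minimal sketch of using the released feature, assuming the later depthai-python 2.x API surface (`pipeline.create`, `StereoDepth.setDepthAlign`); exact names may differ slightly in 2.4 itself:

```python
# Hedged usage sketch of depth-to-RGB alignment; node and call names follow
# later depthai-python 2.x releases and may differ slightly in 2.4 itself.
import cv2
import depthai as dai

pipeline = dai.Pipeline()

camRgb = pipeline.create(dai.node.ColorCamera)
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)
xoutRgb = pipeline.create(dai.node.XLinkOut)
xoutDepth = pipeline.create(dai.node.XLinkOut)
xoutRgb.setStreamName("rgb")
xoutDepth.setStreamName("depth")

camRgb.setBoardSocket(dai.CameraBoardSocket.RGB)
monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)

# The key call for this issue: warp depth into the RGB camera's viewpoint,
# so depth pixels line up 1:1 with the color frame.
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)

monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)
camRgb.video.link(xoutRgb.input)
stereo.depth.link(xoutDepth.input)

with dai.Device(pipeline) as device:
    qRgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    qDepth = device.getOutputQueue("depth", maxSize=4, blocking=False)
    while True:
        rgb = qRgb.get().getCvFrame()    # BGR color frame
        depth = qDepth.get().getFrame()  # uint16 depth, millimetres
        cv2.imshow("rgb", rgb)
        cv2.imshow("depth aligned to rgb", depth)
        if cv2.waitKey(1) == ord('q'):
            break
```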

@Luxonis-Brandon (Contributor, Author) commented Jul 6, 2021:

@TheBricktop commented:

Sorry to inform you, but the link is dead :(

@Luxonis-Brandon (Contributor, Author) commented:

Thanks @TheBricktop, the link above is now fixed.

@TheBricktop commented:

Thank you ☺️
