Why assume the patch has the same depth? #5
@vvvsice Good question. It's true that the depths around the points are not the same. The key idea is to compare patch similarity between different frames, and you can implement this idea in different ways. For example, you could first project the 3D points into the other frames, crop a patch around each projected coordinate, and then calculate the loss. However, that implementation ignores camera rotation, whereas assuming the same depth across the patch accounts for camera rotation.
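A minimal numpy sketch of the alternative implementation mentioned above (project a 3D point into the other frame, then crop an axis-aligned patch around it). Pinhole intrinsics `K`, relative pose `(R, t)`, and the function names are assumptions for illustration, not the repository's actual API:

```python
import numpy as np

def project_point(K, R, t, uv, depth):
    """Back-project a pixel with its depth, transform it into the
    other frame, and project it back to pixel coordinates."""
    p = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0]) * depth  # 3D point in frame 1
    q = R @ p + t                                                 # 3D point in frame 2
    q = K @ q
    return q[:2] / q[2]                                           # pixel in frame 2

def crop_patch(img, center, half=3):
    """Crop a regular (2*half+1)^2 grid around the projected pixel.
    The grid axes stay aligned with the image, so this crop is
    independent of how the camera rotated between the frames."""
    u, v = np.round(center).astype(int)
    return img[v - half:v + half + 1, u - half:u + half + 1]
```

Because the cropped grid is always axis-aligned in the target image, any in-plane rotation between the two frames is simply ignored by this variant, which is the drawback the answer points out.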
I see, thanks a lot.
Hi, thanks for sharing this excellent work.
@belkahorry For the first one, you always crop a local patch around the projected point. That patch is a regular grid and does not depend on the camera rotation. For the second one, we project a whole patch (instead of a single point) with the same depth into the other image; the projected patch is no longer a regular grid and thus depends on the camera rotation.
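The second variant can be sketched as follows: assign the center pixel's depth to every pixel of the patch, then warp the whole grid through the relative pose. Under a non-identity rotation the warped coordinates stop being an axis-aligned grid, which is what makes this variant rotation-aware. As before, `K`, `(R, t)`, and the function name are illustrative assumptions:

```python
import numpy as np

def warp_patch_same_depth(K, R, t, uv, depth, half=3):
    """Warp a (2*half+1)^2 pixel grid centered at uv into the other
    frame, assuming every pixel in the patch has the same depth.
    Returns an (H, W, 2) array of warped pixel coordinates."""
    us = np.arange(uv[0] - half, uv[0] + half + 1)
    vs = np.arange(uv[1] - half, uv[1] + half + 1)
    uu, vv = np.meshgrid(us, vs)
    ones = np.ones_like(uu, dtype=float)
    pix = np.stack([uu, vv, ones], axis=-1)           # (H, W, 3) homogeneous pixels
    pts = (pix @ np.linalg.inv(K).T) * depth          # back-project, same depth for all
    qts = pts @ R.T + t                               # transform into the other frame
    proj = qts @ K.T
    return proj[..., :2] / proj[..., 2:3]             # (H, W, 2) warped coordinates
```

With an identity pose the warped coordinates reproduce the original grid; with an in-plane rotation the grid comes out rotated, so sampling the target image at these coordinates compares rotation-consistent patches.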
It's clear, thanks!
Hi, I'm still a bit confused as to why the regular grid would have a connection to the camera rotation.
It seems that the keypoints extracted by DSO are mainly distributed around the edges of objects, so the depth variance may be large. I'm wondering whether the same-depth assumption is plausible; could you please share the idea behind this implementation?