Dear author,
This is really great work, thank you for sharing it.
With one camera, it maps really well. Now I need to merge data from multiple cameras into one map, but the algorithm seems to depend on each camera's intrinsic parameters. How can multiple cameras be used to build a single map?
I am looking forward to your reply.
Thank you very much.
This is a really interesting question, and one I can't say I've considered before.
When I consider how this software works, it does seem like you could simply set and re-set the camera parameters before every new point cloud you feed in (width, height, fx, fy, cx, cy, min and max sensor distance). Nothing about the algorithm assumes that the frames are consecutive, and (unless I'm missing something crucial) nothing about the camera intrinsics will change the shape of the TSDF in a way that would require resetting state. Would be very curious to hear what happens if you try!
(Technically speaking, this probably breaks some of the statistical assumptions behind the TSDF algorithm...but I don't see a reason it should fail.)
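To make the per-frame swap concrete, here is a minimal sketch of what I have in mind. The names below (`set_intrinsics`, `set_depth_range`, `integrate`) are illustrative placeholders rather than this repository's actual API, and the intrinsics values are made up; the only point is that each depth image gets unprojected with its own camera model, and that all poses are expressed in one shared world frame so the clouds land in a common map.

```python
import numpy as np

# Hypothetical per-camera intrinsics; values are placeholders.
CAMERAS = {
    "cam_front": dict(width=640, height=480, fx=525.0, fy=525.0,
                      cx=319.5, cy=239.5, min_depth=0.3, max_depth=5.0),
    "cam_rear":  dict(width=1280, height=720, fx=910.0, fy=910.0,
                      cx=639.5, cy=359.5, min_depth=0.3, max_depth=8.0),
}

def integrate_all(tsdf, frames):
    """Fuse frames from several cameras into one shared TSDF volume.

    `frames` is an iterable of (camera_id, depth_image, world_T_camera)
    tuples. All poses must be expressed in the same world frame so the
    fused clouds form a single consistent map.
    """
    for cam_id, depth, world_T_cam in frames:
        intr = CAMERAS[cam_id]
        # Re-set the projection model before every integration so this
        # depth image is unprojected with the correct camera's intrinsics.
        # These method names are placeholders for whatever the TSDF
        # implementation actually exposes.
        tsdf.set_intrinsics(intr["width"], intr["height"],
                            intr["fx"], intr["fy"], intr["cx"], intr["cy"])
        tsdf.set_depth_range(intr["min_depth"], intr["max_depth"])
        tsdf.integrate(np.asarray(depth), world_T_cam)
```

The frames do not need to be interleaved in any particular order; as noted above, nothing in the integration assumes consecutive frames from a single sensor.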