Thoughts on getting position from SteamVR tracking? #529
I believe that devising additional means of generating metadata or EXIF information to embed in images is outside the scope of this project.
That makes sense. Would it be within scope to support reading position hints from EXIF data?
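As a point of reference for what "reading position hints from EXIF" involves: EXIF stores GPS coordinates as (degrees, minutes, seconds) rationals plus a hemisphere reference, which a pipeline would first convert to signed decimal degrees. A minimal sketch of that conversion (the function name is illustrative, not part of any existing API):

```python
def dms_to_decimal(dms, ref):
    """Convert an EXIF-style GPS tuple of (degrees, minutes, seconds)
    rationals and a hemisphere reference ('N'/'S'/'E'/'W') into signed
    decimal degrees, as an SfM pipeline would consume a position hint."""
    degrees, minutes, seconds = (float(v) for v in dms)
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative by convention.
    return -decimal if ref in ("S", "W") else decimal
```

In practice the raw tags would come from an EXIF reader (e.g. Pillow or exifread); only the conversion step is shown here.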
A few related projects:
- OpenVR-Tracking-Example
- vive-diy-position-sensor: code & schematics for a position tracking sensor using HTC Vive's Lighthouse system and a Teensy board: https://github.com/ashtuchkin/vive-diy-position-sensor
- HTC Vive Tracker Node for ROS
There is no problem in the pipeline with providing initial camera poses if you have the information. The only part that would be nice to add is using this geometric knowledge in the feature matching as well, to improve its quality. It will not speed up the process, but it should improve robustness in challenging conditions such as indoor scenes.
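Tracking APIs such as OpenVR report a device-to-world transform, while SfM pipelines typically expect world-to-camera extrinsics as an initial pose. A sketch of that conversion, assuming the input is a 3x4 row-major matrix [R | c] with rotation R and camera center c (the function name and matrix layout are assumptions for illustration):

```python
def world_to_camera(m34):
    """Convert a 3x4 device-to-world matrix [R | c] (three rows of four
    floats) into world-to-camera extrinsics: R_wc = R^T, t_wc = -R^T c."""
    # Transpose the rotation block.
    R = [[m34[r][c] for r in range(3)] for c in range(3)]
    # Extract the camera center (last column).
    center = [m34[r][3] for r in range(3)]
    # t = -R^T c
    t = [-sum(R[i][j] * center[j] for j in range(3)) for i in range(3)]
    return R, t
```

Note that pose conventions (row- vs column-major, handedness, units) differ between tracking systems and reconstruction pipelines, so a real integration would need to verify these against both APIs.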
Okay, it looks like this thread contains a feature idea for a supporting application, plus a feature idea about integrating position data into feature matching. Given that neither matches the title of this issue, my inclination is to close it. @fabiencastan I don't see an issue filed for adding this capability to feature matching; would it make sense to open one?
I don't know how useful this would be if implemented, or how difficult it would be to implement, but:
Would it make sense to get or supplement camera position and orientation information from a SteamVR tracker attached to the camera?
This would require work to associate a position with each photo. But if you're moving the camera and not the subject, a Vive Tracker has a 1/4" UNC threaded mount, just like cameras do, and in many situations it's straightforward enough to point base stations at the camera from opposing sides.
My hope is that if this is viable, it'd reduce the processing required to determine camera position, and speed up the path to the final mesh.
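One simple way to associate tracker poses with photos would be to match each photo's capture timestamp against the tracker's logged pose samples and take the nearest one within a tolerance. A sketch of that matching step (the function, log format, and tolerance are assumptions, not an existing interface):

```python
import bisect

def nearest_pose(tracker_log, photo_time, max_gap=0.1):
    """Find the tracker pose recorded closest in time to a photo.

    tracker_log: list of (timestamp, pose) tuples sorted by timestamp.
    photo_time: capture time of the photo, same clock as the log.
    max_gap: reject matches further than this many seconds away.
    Returns the matching pose, or None if no sample is close enough.
    """
    if not tracker_log:
        return None
    times = [t for t, _ in tracker_log]
    i = bisect.bisect_left(times, photo_time)
    # The nearest sample is either just before or just after photo_time.
    candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
    best = min(candidates, key=lambda j: abs(times[j] - photo_time))
    if abs(times[best] - photo_time) > max_gap:
        return None
    return tracker_log[best][1]
```

A real setup would also need to synchronize the camera's clock with the tracker's clock (or use a shared trigger), which is usually the harder part of this association.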