
Thoughts on getting position from SteamVR tracking? #529

Closed
Thynix opened this issue Jun 30, 2019 · 5 comments

Comments

@Thynix

Thynix commented Jun 30, 2019

I don't know how useful this would be if implemented, or how difficult it would be to implement, but:

Would it make sense to get or supplement camera position and orientation information from a SteamVR tracker attached to the camera?

This would require work to associate a position with each photo, but if you're moving the camera rather than the subject it seems workable: a Vive Tracker has a 1/4" UNC threaded mount, just like cameras do, and in many situations it's straightforward to point base stations at the camera from opposing sides.

My hope is that if this is viable, it'd reduce the processing required to determine camera position, and speed up the path to the final mesh.

@tycone

tycone commented Jun 30, 2019

I believe that devising additional means of generating metadata or EXIF information to be embedded in images is outside the scope of this project.

@Thynix (Author)

Thynix commented Jun 30, 2019

That makes sense. Would it be within scope to support reading position hints from EXIF data?
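
For context, reading position hints out of EXIF is straightforward with a metadata library. Below is a minimal sketch; the choice of Exiv2 and the tag handling are assumptions for illustration, not something the project prescribes.

```cpp
// Sketch only: dumping position-related EXIF tags with the Exiv2 library.
// GPS tags are just one example of a position hint; error handling is minimal.
#include <exiv2/exiv2.hpp>
#include <iostream>

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        std::cerr << "usage: exif_pose <image>\n";
        return 1;
    }

    auto image = Exiv2::ImageFactory::open(argv[1]);
    image->readMetadata();
    const Exiv2::ExifData& exifData = image->exifData();

    // Print any GPS-related tags that are present; a pipeline could parse
    // these into an initial camera position instead of just printing them.
    for (const auto& datum : exifData)
    {
        if (datum.groupName() == "GPSInfo")
            std::cout << datum.key() << " = " << datum.toString() << "\n";
    }
    return 0;
}
```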

@natowi (Member)

natowi commented Jun 30, 2019

OpenVR-Tracking-Example: a small C++ example of how to access OpenVR tracking data and controller states using the IVRInput system.
https://github.com/Omnifinity/OpenVR-Tracking-Example

vive-diy-position-sensor: code & schematics for a position tracking sensor using HTC Vive's Lighthouse system and a Teensy board.
https://github.com/ashtuchkin/vive-diy-position-sensor

vive_tracker: HTC Vive Tracker node for ROS.
https://github.com/moon-wreckers/vive_tracker
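
For illustration, here is a minimal sketch of what polling a tracker pose through the OpenVR API looks like, in the spirit of the first example above; the device selection and output details are simplified assumptions, not a complete capture tool.

```cpp
// Sketch only: polling a Vive Tracker pose through the OpenVR C++ API.
// Assumes openvr.h from the official SDK; a real capture tool would also
// timestamp and store the pose alongside each photo.
#include <openvr.h>
#include <cstdio>

int main()
{
    vr::EVRInitError initError = vr::VRInitError_None;
    vr::IVRSystem* system = vr::VR_Init(&initError, vr::VRApplication_Other);
    if (initError != vr::VRInitError_None)
    {
        std::fprintf(stderr, "VR_Init failed: %d\n", static_cast<int>(initError));
        return 1;
    }

    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    system->GetDeviceToAbsoluteTrackingPose(vr::TrackingUniverseStanding, 0.0f,
                                            poses, vr::k_unMaxTrackedDeviceCount);

    for (uint32_t i = 0; i < vr::k_unMaxTrackedDeviceCount; ++i)
    {
        // Only look at generic trackers (the puck screwed onto the camera).
        if (system->GetTrackedDeviceClass(i) != vr::TrackedDeviceClass_GenericTracker)
            continue;
        if (!poses[i].bPoseIsValid)
            continue;

        // mDeviceToAbsoluteTracking is a 3x4 matrix; the last column is the
        // tracker position in the tracking-space coordinate frame (metres).
        const vr::HmdMatrix34_t& m = poses[i].mDeviceToAbsoluteTracking;
        std::printf("tracker %u: x=%.3f y=%.3f z=%.3f\n",
                    i, m.m[0][3], m.m[1][3], m.m[2][3]);
    }

    vr::VR_Shutdown();
    return 0;
}
```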

@fabiencastan (Member)

There is no problem in the pipeline with providing initial camera poses if you have the information, as is done for SfM augmentation. These poses will be refined in the bundle adjustment like all the other cameras.

The only part that would be nice to add is using this geometric knowledge in the feature matching as well, to improve quality.

It will not speed up the process, but it should improve robustness in challenging conditions such as indoor scenes.
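
As an illustration of using this geometric knowledge in feature matching, one simple (hypothetical) option is to use the externally measured camera positions to restrict which image pairs are matched at all. The following is a conceptual sketch under that assumption, not AliceVision's actual implementation; another option would be constraining matches with the predicted epipolar geometry.

```cpp
// Conceptual sketch (not AliceVision code): if each image already has an
// approximate camera position from an external tracker, the set of image
// pairs sent to feature matching can be restricted to cameras that are
// physically close, instead of matching every image against every other.
#include <cmath>
#include <string>
#include <utility>
#include <vector>

struct ImageWithPose
{
    std::string path;
    double x, y, z;  // approximate camera position from the tracker (metres)
};

// Keep only pairs whose cameras are within maxBaseline of each other.
std::vector<std::pair<std::size_t, std::size_t>>
selectPairsByDistance(const std::vector<ImageWithPose>& images, double maxBaseline)
{
    std::vector<std::pair<std::size_t, std::size_t>> pairs;
    for (std::size_t i = 0; i < images.size(); ++i)
    {
        for (std::size_t j = i + 1; j < images.size(); ++j)
        {
            const double dx = images[i].x - images[j].x;
            const double dy = images[i].y - images[j].y;
            const double dz = images[i].z - images[j].z;
            if (std::sqrt(dx * dx + dy * dy + dz * dz) <= maxBaseline)
                pairs.emplace_back(i, j);
        }
    }
    return pairs;
}
```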

@Thynix (Author)

Thynix commented Jun 30, 2019

Okay, so this has turned into a feature idea for a supporting application and a feature idea about integrating position data into feature matching. Since neither matches the title of this issue, my inclination is to close it.

@fabiencastan I don't see an issue filed for adding this capacity to feature matching; would it make sense to open one?

Thynix closed this as completed on Jun 30, 2019