the cube is not where it should be #17
Comments
You could try to calibrate your HoloLens camera with this repository: https://github.com/qian256/HoloLensCamCalib
Hi @pocce90
Hi @qian256, I finally found some nice parameters for the magic functions for my HoloLens, which work very well for distances between the HoloLens and the object of up to 60-70 cm. No need for rotation adjustments, only a translation of [0.002, 0.04, 0.08]. However, beyond 1 meter the error between the virtual and real object increases quite a lot. I guess this may be due to tracking errors from ARToolKit, IPD error, among other things? Did you also experience that when doing the experiments for your paper?
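For illustration only, here is a minimal Unity C# sketch of how such a hand-tuned translation offset could be applied to a marker pose. The component and method names are assumptions for this example, not part of HoloLensARToolKit itself:

```csharp
using UnityEngine;

// Illustrative only: applies a fixed, hand-tuned translation offset
// (expressed in the camera's local frame) to a pose estimated from a marker.
public class MagicTranslationOffset : MonoBehaviour
{
    // Offset reported to work well up to ~60-70 cm in this thread.
    public Vector3 magicTranslation = new Vector3(0.002f, 0.04f, 0.08f);

    // Call with the marker pose expressed relative to the camera transform.
    public void ApplyPose(Transform cameraTransform, Vector3 markerPosInCamera, Quaternion markerRotInCamera)
    {
        Vector3 corrected = markerPosInCamera + magicTranslation;
        transform.position = cameraTransform.TransformPoint(corrected);
        transform.rotation = cameraTransform.rotation * markerRotInCamera;
    }
}
```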
@araujokth, good to hear that you found working parameters.
@qian256 Makes sense! I guess that will be good material for your next paper? :)
Hi @qian256, we do not understand what the magicMatrix in the magic functions means, or how to adjust it.
Hi @qian256, same as @pocketmagic, I don't understand what magicMatrix1 means, or how we can adjust it.
The magicMatrix is a rotation matrix (https://en.wikipedia.org/wiki/Rotation_matrix) that is added to the transformation to reduce the error manually. Look up which parameter affects which transformation parameter and edit the matrix according to the error you see.
Hi @tsteffelbauer, both magic matrices are 3D transformation matrices (https://en.wikipedia.org/wiki/Transformation_matrix), since they perform both translation (magicMatrix1) and rotation (magicMatrix2), but of course one could do the operation in the MagicFunction in a single step using a single transformation magicMatrix instead. From my experience I would not recommend doing this manually by looking at the error you see, since that takes quite a lot of tedious time and it is quicker to implement a calibration method to do it properly, especially if a rotation has to be performed. I would suggest implementing the method described in @qian256's paper https://arxiv.org/abs/1703.05834, since it's quite quick to implement.
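As a rough illustration of what the two matrices do, here is a hedged Unity C# sketch that composes a translation-only matrix and a rotation-only matrix into a single 4x4 correction and applies it to a camera-space marker pose. The names magicMatrix1/magicMatrix2 follow the thread; the surrounding code is an assumption, not the repository's actual MagicFunction:

```csharp
using UnityEngine;

// Illustrative sketch of a combined "magic" correction:
// magicMatrix1 carries a translation, magicMatrix2 a rotation,
// and both can be folded into a single 4x4 transformation.
public static class MagicCorrection
{
    public static Matrix4x4 Build(Vector3 translation, Vector3 eulerDegrees)
    {
        Matrix4x4 magicMatrix1 = Matrix4x4.Translate(translation);
        Matrix4x4 magicMatrix2 = Matrix4x4.Rotate(Quaternion.Euler(eulerDegrees));
        // Order matters: here the rotation is applied first, then the translation.
        return magicMatrix1 * magicMatrix2;
    }

    // Applies the correction to a marker pose given as a camera-space 4x4 matrix.
    public static Matrix4x4 Apply(Matrix4x4 markerInCamera, Matrix4x4 magicMatrix)
    {
        return magicMatrix * markerInCamera;
    }
}
```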
Hello everyone, solving this problem might be a little bit involved, as one would have to rewrite [...]. Also, I don't believe that per-user calibration is needed unless you are trying to fine-tune for a specific application and viewpoint (e.g., surgical procedures). I've been fairly successful with Vuforia for various marker sizes. All in all, if anyone is willing to improve that part of things, this repository has some sample code capturing frames and applying transforms to the Locatable Camera.
@danilogr Thank you for linking the sample code. I am currently making some commits in a fork of HoloLensARToolKit (https://github.com/DanAndersen/HoloLensARToolKit) along the lines you are talking about. By using the approach shown in https://github.com/camnewnham/HoloLensCameraStream I found I didn't have to edit any C++ code and could do it all in C#.

I still need to look further into the question of precise alignment of AR and real markers (they are still not exactly aligned), but at the very least I've been able to use the per-frame pose of the locatable camera as the reference point from which the AR markers are placed, rather than doing it based on the coordinate system of the "Main Camera" (which represents the HoloLens IMU and not the locatable camera).

One benefit of this approach is that markers remain world-anchored even if the user's head moves quickly. Currently with HoloLensARToolKit, if you have a world-anchored marker at the left side of your camera view and then rotate your head quickly (holding the marker fixed) so that the marker ends up on the right side of the camera view, the marker will appear head-anchored rather than world-anchored during the transition. This is because the marker isn't detected due to motion blur, so it remains at the same pose relative to the Main Camera pose, which is changing. Instead, I create a reference GameObject for the locatable camera pose, which is in world space and off of which the markers update their relative position, and this resolves the issue.
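A hedged sketch of the idea described above, assuming a callback that supplies the locatable camera's camera-to-world matrix each frame (as HoloLensCameraStream can). The class, field, and method names are placeholders, not the exact API of that fork:

```csharp
using UnityEngine;

// Illustrative: keep a world-space reference object at the locatable
// camera's pose and place detected markers relative to it, instead of
// relative to the Unity "Main Camera" (which tracks the IMU/head pose).
public class LocatableCameraAnchor : MonoBehaviour
{
    public Transform markerObject;           // virtual object for one marker
    private Matrix4x4 latestCameraToWorld;   // updated whenever a frame arrives

    // Call from the video frame callback with that frame's camera-to-world matrix.
    public void OnFrame(Matrix4x4 cameraToWorld)
    {
        latestCameraToWorld = cameraToWorld;
        transform.position = cameraToWorld.GetColumn(3);
        // Note: depending on the source, the camera may look down -Z,
        // so the forward column may need to be negated.
        transform.rotation = Quaternion.LookRotation(
            cameraToWorld.GetColumn(2), cameraToWorld.GetColumn(1));
    }

    // Call when ARToolKit reports a marker pose in the camera's frame.
    public void OnMarkerPose(Matrix4x4 markerInCamera)
    {
        Matrix4x4 markerInWorld = latestCameraToWorld * markerInCamera;
        markerObject.position = markerInWorld.GetColumn(3);
        markerObject.rotation = Quaternion.LookRotation(
            markerInWorld.GetColumn(2), markerInWorld.GetColumn(1));
    }
}
```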
Hey @DanAndersen, I just went through your code, and I am mind-blown! I could not imagine that it would be so simple to get the extension data from the frame reference from Unity. Perhaps we should merge this back into this repository so other people can benefit from it? @qian256? As for the proper alignment, there are a couple of things: first, the locatable camera is not perfect and has some distortion.
Second, the interpupillary distance should be configured per user in order to minimize alignment errors. If you want to calibrate with high accuracy (<= 4 mm error), you might want to take a look at @qian256's updated paper here: https://arxiv.org/abs/1703.05834. The downside is that you have to calibrate per user, and they might not be able to move around much. Once again, amazing job tackling this problem. Thanks for sharing!
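On the distortion point, here is a minimal sketch of the standard Brown-Conrady (plumb-bob) model that camera calibration tools such as HoloLensCamCalib typically produce coefficients for. The coefficient names (k1, k2, p1, p2) are the usual ones and the values would come from calibration; this is not code from the repository:

```csharp
using UnityEngine;

// Illustrative: distort a normalized image point (x, y) with the usual
// radial (k1, k2) and tangential (p1, p2) coefficients produced by
// most camera calibration tools.
public static class LensDistortion
{
    public static Vector2 Distort(Vector2 p, float k1, float k2, float p1, float p2)
    {
        float x = p.x, y = p.y;
        float r2 = x * x + y * y;
        float radial = 1f + k1 * r2 + k2 * r2 * r2;
        float xd = x * radial + 2f * p1 * x * y + p2 * (r2 + 2f * x * x);
        float yd = y * radial + p1 * (r2 + 2f * y * y) + 2f * p2 * x * y;
        return new Vector2(xd, yd);
    }
}
```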
@DanAndersen Thank you for doing this!
The code to perform calibration is still not available in this repo, due to some coordination issues within our team. My apologies again. I hope the hold will be lifted in 2 months. At that time, I will upload the code to do the calibration and apply the calibration results to the alignment (closing the loop to update magicMatrix).
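Once such calibration results exist, "closing the loop" could look roughly like the sketch below: decomposing a calibrated 4x4 correction into the translation and rotation that the hand-tuned magic values currently provide. This is only an assumption of how the result might be consumed, not the repository's eventual implementation:

```csharp
using UnityEngine;

// Illustrative: decompose a calibrated 4x4 correction (no scale assumed)
// into the translation and rotation used by the hand-tuned "magic" values,
// so a calibration result can replace manual tuning.
public static class MagicMatrixUpdate
{
    public static void Decompose(Matrix4x4 calibrated,
                                 out Vector3 magicTranslation,
                                 out Quaternion magicRotation)
    {
        magicTranslation = calibrated.GetColumn(3);
        magicRotation = Quaternion.LookRotation(
            calibrated.GetColumn(2), calibrated.GetColumn(1));
    }
}
```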
Hello, I've tried the single scene, and when I start it on my HoloLens the cube is positioned 5-6 cm above and 5-6 cm closer to me than the marker. I've checked the marker size and the settings in the app, and they are correct: 80 mm.