
the cube is not where it should be #17

Open · pocce90 opened this issue Jun 8, 2017 · 13 comments
@pocce90

pocce90 commented Jun 8, 2017

Hello, I've tried the single-marker sample scene, and when I start it on my HoloLens the cube is positioned 5-6 cm above and 5-6 cm closer to me than the marker. I've checked the marker size and the settings in the app, and they match: 80 mm.

@tsteffelbauer

You could try calibrating your HoloLens camera with this repository: https://github.com/qian256/HoloLensCamCalib

@qian256
Owner

qian256 commented Jun 23, 2017

Hi @pocce90
It is very common for the virtual object not to align well with the marker, because there is no calibration between the coordinate system of the HoloLens display and the coordinate system of the tracking camera.
The magic functions in the ARUWPMarker.cs script are where you can tune the mapping from the tracking transformation to the display transformation (for alignment). Currently, you have to supply the numbers in the magic functions manually.
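For readers who want a concrete picture: below is a minimal sketch, in Unity C#, of the kind of correction a magic function applies. The class name, signature, and numbers here are placeholders for illustration, not the actual code in ARUWPMarker.cs.

```csharp
using UnityEngine;

public static class MagicCorrectionSketch
{
    // Hand-tuned placeholder values: a translation offset in meters and a
    // small rotation fix in degrees. These are the "numbers" to supply.
    static readonly Vector3 magicTranslation = new Vector3(0.0f, 0.05f, -0.06f);
    static readonly Quaternion magicRotation = Quaternion.Euler(0f, 0f, 0f);

    // Apply the correction on top of the pose reported by the tracker,
    // so the rendered pose lines up better with what the user sees.
    public static Matrix4x4 Apply(Matrix4x4 trackedPose)
    {
        Matrix4x4 magicMatrix = Matrix4x4.TRS(magicTranslation, magicRotation, Vector3.one);
        return magicMatrix * trackedPose;
    }
}
```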

@araujokth

Hi @qian256, I finally found some nice parameters for the magic functions for my HoloLens, which work very well for distances between the HoloLens and the object of up to 60-70 cm. No rotation adjustment was needed, only a translation of [0.002, 0.04, 0.08]. However, beyond 1 meter the error between the virtual and real object increases quite a lot. I guess this may be due to tracking errors from ARToolKit, IPD error, among other things? Did you also experience that when doing the experiments for your paper?

@qian256
Owner

qian256 commented Jun 27, 2017

@araujokth, good to hear that you found working parameters.
The calibration in the paper is also restricted to a volume: the result favors that volume and degrades progressively outside it. The mapping between the 3D tracking space and the 3D display space may not be purely affine or perspective; there is distortion as well, including distortion in the camera tracking parameters.

@araujokth

@qian256 makes sense! I guess that will be good material for your next paper? :)

@pocketmagic

Hi @qian256, we don't understand what the magicMatrix in the magic functions means, or how to adjust it.

@RaymondKen

Hi @qian256, same as @pocketmagic: I don't understand what magicMatrix1 means, or how we can adjust it.

@tsteffelbauer

The magicMatrix is a rotation matrix (https://en.wikipedia.org/wiki/Rotation_matrix) that is applied on top of the transformation to reduce the error manually. Look up which entry affects which transformation parameter and edit the matrix according to the error you see.

@araujokth

araujokth commented Jul 5, 2017

Hi @tsteffelbauer, both magic matrices are 3D transformation matrices (https://en.wikipedia.org/wiki/Transformation_matrix), since together they perform both a translation (magicMatrix1) and a rotation (magicMatrix2); of course, one could do the operation in the MagicFunction in a single step using a single transformation matrix instead.
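To make that concrete, here is a minimal Unity C# sketch of collapsing a separate translation and rotation into one matrix, using the translation values I reported above and an identity rotation; the names follow this thread, not necessarily the actual code:

```csharp
using UnityEngine;

public static class CombinedMagicSketch
{
    public static Matrix4x4 Correct(Matrix4x4 trackedPose)
    {
        // Translation-only correction (values reported above)...
        Matrix4x4 magicMatrix1 = Matrix4x4.TRS(
            new Vector3(0.002f, 0.04f, 0.08f), Quaternion.identity, Vector3.one);
        // ...and a rotation-only correction (identity in my case).
        Matrix4x4 magicMatrix2 = Matrix4x4.TRS(
            Vector3.zero, Quaternion.identity, Vector3.one);

        // Compose once, apply once: a single combined transformation.
        Matrix4x4 magicMatrix = magicMatrix2 * magicMatrix1;
        return magicMatrix * trackedPose;
    }
}
```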

From my experience, I would not recommend tuning this manually by eyeballing the error: it takes quite some tedious time, and it is quicker to implement a calibration method and do it properly, especially if a rotation is needed. I would suggest implementing the method described in @qian256's paper https://arxiv.org/abs/1703.05834, since it's quite quick to implement.

@danilogr

Hello everyone,
I am dealing with the same issue of misaligned holograms here. The issue stems from ARUWPVideo.cs using Windows.Media.Capture (C#) to receive video frames. As far as I can tell from the Microsoft documentation, this API does not provide a CameraViewTransform per frame. As a result, and in accordance with what @qian256 said before, the ARToolKit coordinates are in the locatable camera space and not in the app space.

Solving this problem might be a little involved, as one would have to rewrite ARUWPVideo.cs on the WinRT/C++ side, export the CameraViewTransform of each frame, and apply it to the marker coordinates.
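In matrix terms, the missing per-frame chain is (my notation, not the repo's):

markerToWorld = cameraToWorld(t_capture) × markerToCamera

where cameraToWorld comes from the frame's CameraViewTransform (inverted, since the view transform maps world to camera) and markerToCamera is the ARToolKit detection result.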

Also, I don't believe that per-user calibration is needed unless you are trying to fine-tune for a specific application and viewpoint (e.g., surgical procedures). I've been fairly successful with Vuforia for various marker sizes. All in all, if anyone is willing to improve that part of things, this repository has some sample code capturing frames and applying transforms to the Locatable Camera.

@DanAndersen
Contributor

@danilogr Thank you for linking the sample code. I am currently making some commits in a fork of HoloLensARToolKit (https://github.com/DanAndersen/HoloLensARToolKit) along the lines you are talking about. By using the approach shown in https://github.com/camnewnham/HoloLensCameraStream, I found I didn't have to edit any C++ code and could do it all in C#.

I need to look further into the question of precise alignment of AR and real markers (still not exactly aligned), but at the very least I've been able to use the per-frame pose for the locatable camera as the reference point from which the AR markers are placed, rather than doing it based on the coordinate system of the "Main Camera" (which represents the HoloLens IMU and not the locatable camera).

One benefit of this approach is that markers remain world-anchored even if the user's head moves quickly. Currently with HoloLensARToolKit, if you have a world-anchored marker at the left side of your camera view and then rotate your head quickly (holding the marker fixed) so that the marker ends up on the right side of the view, the marker will appear head-anchored rather than world-anchored during the transition: motion blur prevents the marker from being detected, so it stays at the same pose relative to the Main Camera pose, which is changing. Instead, I create a reference GameObject for the locatable camera pose, which lives in world space and off of which the markers update their relative position; that resolves the issue.
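For anyone who wants to try this, here is a rough sketch of the pattern, assuming the HoloLensCameraStream API (VideoCaptureSample.TryGetCameraToWorldMatrix) as used in that repo's examples; the conversion details are illustrative and this is not the exact code in my fork. Note that the callback arrives off Unity's main thread, so in practice the transform updates must be marshaled to the main thread.

```csharp
using UnityEngine;
using HoloLensCameraStream;

public class LocatableCameraAnchor : MonoBehaviour
{
    // World-space proxy for the locatable camera pose at capture time.
    GameObject cameraReference;

    void Start()
    {
        cameraReference = new GameObject("LocatableCameraPose");
    }

    // Subscribe this to VideoCapture.FrameSampleAcquired so it runs
    // for each video frame delivered by HoloLensCameraStream.
    void OnFrameSampleAcquired(VideoCaptureSample sample)
    {
        float[] cameraToWorld;
        if (!sample.TryGetCameraToWorldMatrix(out cameraToWorld))
            return; // No pose available for this frame.

        // Assuming a row-major float[16]; convert to a Unity Matrix4x4.
        Matrix4x4 m = new Matrix4x4();
        for (int i = 0; i < 16; i++)
            m[i / 4, i % 4] = cameraToWorld[i];

        // Anchor the reference object at the camera pose at capture time.
        // The camera looks down -Z in this convention, hence the negation.
        cameraReference.transform.position = m.GetColumn(3);
        cameraReference.transform.rotation =
            Quaternion.LookRotation(-m.GetColumn(2), m.GetColumn(1));

        // Detected markers are then placed relative to cameraReference
        // (the capture-time pose), not relative to the Main Camera,
        // whose pose is "now" rather than at capture time.
    }
}
```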

@danilogr

danilogr commented Dec 5, 2018

Hey @DanAndersen, I just went through your code, and I am mind-blown! I could not imagine it would be so simple to get the extension data from the frame reference in Unity.

Perhaps we should merge it back into this repository so other people can benefit from it? @qian256?

As for the proper alignment, there are a couple of things:

First, the locatable camera is not perfect and has some distortion.

On HoloLens, the video and still image streams are undistorted in the system's image processing pipeline before the frames are made available to the application (the preview stream contains the original distorted frames). Because only the projection matrix is made available, applications must assume image frames represent a perfect pinhole camera, however the undistortion function in the image processor may still leave an error of up to 10 pixels when using the projection matrix in the frame metadata. In many use cases, this error will not matter, but if you are aligning holograms to real world posters/markers, for example, and you notice a <10px offset (roughly 11mm for holograms positioned 2 meters away) this distortion error could be the cause.
(Source: Microsoft's Locatable camera documentation.)
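As a rough sanity check on that figure (my approximations, not numbers from the docs): assuming the locatable camera covers roughly a 45° horizontal field of view over ~1400 px, one pixel subtends about 0.785 rad / 1400 ≈ 5.6e-4 rad, so a 10 px offset is ≈ 5.6e-3 rad, which at 2 m comes to 2000 mm × 5.6e-3 ≈ 11 mm, matching the quoted value.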

Second, the interpupillary distance should be configured per user in order to minimize alignment errors.

If you want to calibrate with high accuracy (<= 4 mm error), you might want to take a look at @qian256's updated paper here: https://arxiv.org/abs/1703.05834. The downside is that you have to calibrate per user, and the user might not be able to move around much.

Once again, amazing job tackling this problem. Thanks for sharing!

@qian256
Owner

qian256 commented Dec 5, 2018

@DanAndersen Thank you for doing this!
I will take a look at the code soon and merge it back. Apologies for the slow updates on this repo.
In my local version, I also switched to HoloLensCameraStream as a replacement for ARUWPVideo.cs, and I use the pose at the time of capture instead of the pose of the Unity virtual camera, which lags the capture time by tens of milliseconds. Two observations:

  1. When you rotate your head to the left, the virtual cube no longer drifts left with the head, but it does drift very slightly to the right.
  2. Even when I use the camera pose from UWP, the alignment is still not perfect, although it is much better than before. For perfect alignment, a user-specific calibration is still needed. Thank you @danilogr for referring to my updated arXiv paper.

The code to perform the calibration is still not available in this repo, due to some coordination issues within our team. My apologies again. I hope the "hold" will be released in 2 months; at that time, I will upload the code to perform the calibration and apply the calibration results to the alignment (closing the loop to update magicMatrix).
