Fix/clamp normals #326
Conversation
Approximation errors sometimes cause the components of the normal vector to fall outside the range [-1, 1]. Normal components are now clamped to prevent computation errors downstream.
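(For context, a rough, illustrative sketch of the clamping idea, assuming an Eigen-style 3-vector; the names here are not the project's actual code.)

```cpp
#include <Eigen/Core>
#include <algorithm>

// Illustrative sketch only: clamp each component of a computed normal to
// [-1, 1] so that tiny approximation errors (e.g. a component of
// 1.00000012f) cannot propagate into downstream computations.
Eigen::Vector3f clampNormal(Eigen::Vector3f normal)
{
    for (int i = 0; i < 3; ++i)
        normal(i) = std::max(-1.0f, std::min(1.0f, normal(i)));
    return normal;
}
```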
Thanks for the effort. It's really appreciated! We'll look at this PR in more detail at the end of this week.
@jsburklund Is there a reason for clamping normals only when sortEigen is false?
@jsburklund Also, what is your motivation for clamping normals? Wouldn't it be better to normalize them? That way, the vector directions would remain the same.
Lack of understanding of the difference between the two options, and lack of a test scenario to verify this. Looking deeper into the code, this clamping should be applied to both cases. |
The eigenvalues that I have seen during testing for some corner cases result in an approximation error where the maximum vector component is 1.00000012 using single-precision floating-point values. This differs from the single-precision representation of 1.0 by only a single least-significant bit, and normalizing the vector changes all of the other components by only one bit as well. Clamping was close enough to make the corner cases disappear, but perhaps the vector should instead be normalized using:
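(The original snippet referred to here is not preserved in this transcript; the following is only a minimal sketch of what such a normalization might look like, assuming an Eigen vector and an illustrative function name.)

```cpp
#include <Eigen/Core>

// Hypothetical alternative to clamping: renormalize the vector so its norm
// is exactly 1 (up to rounding) while preserving its direction.
Eigen::Vector3f renormalize(const Eigen::Vector3f& normal)
{
    return normal.normalized();  // divide by the Euclidean norm
}
```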
The question then becomes: which is the correct approximation for the other components? Since the normalized values differ from the original values by only one bit, which value is the best approximation of the remaining vector components? Or should those components be set to zero, e.g. (0, 0, 1), since one component already has a value of 1 and the vector norm would then truly be 1 instead of an approximation? When computing the norm of the vector, these almost-zero components have no effect due to floating-point rounding, and the norm of the vector evaluates to 1.0 even though it should be ever so slightly greater. Some example eigenvalues that trigger this case are below. Note that this only occurs when two components are close to 0 and the other component is close to 1.
Examples of eigenvalues before and after normalization. Format is (eig[0], eig[1], eig[2], normal(eig)).
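(The example values themselves are not preserved above. The self-contained snippet below, using made-up components, only illustrates the rounding behavior described: near-zero components vanish when the single-precision norm is computed.)

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Made-up components: two near zero, one exactly 1 (the PR's actual
    // example values are not reproduced here).
    const float x = 1e-8f, y = 2e-8f, z = 1.0f;

    // x*x + y*y is far below one ULP of 1.0f (~1.19e-7), so the sum rounds
    // to exactly 1.0f and the computed norm is exactly 1.0f, even though
    // the true norm is slightly greater than 1.
    const float norm = std::sqrt(x * x + y * y + z * z);
    std::printf("norm == 1.0f ? %d\n", norm == 1.0f ? 1 : 0);  // prints 1
    return 0;
}
```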
@jsburklund OK, I understand your motivation now. I pushed a fix to also apply normal clamping when sortEigen is true. It will be merged soon. Thanks again for your contribution!
Clamp normal values to [-1, 1] to compensate for small approximation errors when computing the normal