LightmapGI: Reduce warnings and increase probe accuracy #90702
Conversation
Maybe I was too modest in naming this PR. To be clear, I expect that this together with #90701 will take care of all warnings under all normal circumstances (at least until someone finds a new way to break Delaunay3D).
Tested locally (with #90702 applied on top), it works as expected.
This testing project previously resulted in 200+ warnings being printed on every bake; now it doesn't result in any warning: test_lightmap_preview_bake_4.x.zip [1]
Code looks good to me at a glance.
Footnotes
[1] I intentionally placed multiple probes at the same location in the testing project to see if it breaks, but it appears to bake just fine and dynamic object lighting still behaves correctly.
Looks good to me! Let's go ahead with this.
Thanks!
Fixes #82642
The issue
It's very common to get warnings when baking LightmapGI and it's easy to get tens of thousands of them in a single scene.
If the scene is smaller than about 100x100 meters, these warnings are often harmless and don't have a noticeable effect on the indirect lighting of dynamic objects. But for larger scenes, the effect is very much noticeable.
The warnings and the poor accuracy have three causes, addressed in the sections below.
I made #90701 a separate PR for the last issue, since Delaunay3D can be accessed directly from GDScript, so that change isn't strictly isolated to LightmapGI.
Increased tolerance for Plane::has_point()
When creating a Plane from a triangle that spans hundreds of meters, Plane::has_point() can give false negatives when applied to the second and third points of the triangle with the default epsilon, and is_point_over() can return either true or false in these cases. This wreaks havoc on BSP partitioning.
I've done tests on MRPs and scenes with randomly placed probes over 1000x1000x100 m. The precision errors are always a power of two, and the largest one I've been able to produce is 1/(1<<13), so I'm using that as the tolerance when calling has_point(). It is about 12x the current tolerance of 1e-5, but still small enough that it shouldn't matter; false positives of this magnitude should not have any effect.
A better partitioning fallback
Probe tetrahedra are recursively sorted into a BSP tree. All the faces of the tetrahedra are candidate planes for partitioning. As long as at least two of the remaining tetrahedra in a subtree share an edge or a face, this method will always find a plane that cleanly separates at least one tetrahedron into each new subtree.
But the final two (or, very rarely, three) tetrahedra in a branch are often either disjoint or share only a single vertex. In this scenario, it is no longer guaranteed that one of the faces is also a separating plane. This is a common cause of warnings.
With this knowledge, we can handle this specific case explicitly in a way that always succeeds (as long as the triangulation from Delaunay3D is valid).
Bonus: Smaller BSP tree
When the chosen BSP separating plane intersects tetrahedra, they are not split; instead, they are moved into both subtrees.
The current scoring system for finding the best plane only rewards balance. If a plane has one tetrahedron over it, one under, and 20 intersecting, it gets a perfect score, because 1/1 is perfect balance. This makes the BSP tree much larger than it needs to be.
Reduz actually had a better solution, but he commented it out. I took it and made it more extreme: the scoring now penalizes planes for the number of tetrahedra they intersect.
"Worst" case, the tree is now 30% smaller. Best case, 97% smaller.
I can go into detail about why I'm not worried that this will create unbalanced trees, if anyone is interested. I've also verified that the max tree depth on all my test scenes is lower than before, so I expect runtime lookup times to be improved across the board. And when there are many probe nodes, bake time is also noticeably reduced.