Replace OIDN denoiser in Lightmapper with a JNLM denoiser compute shader. #81659
Conversation
It would be interesting to modify these two values to see if they help out with the shadows:
I agree adjusting the strength might be easily doable by using the quality preset setting we already have for the ray count. That'd allow us to adjust the filtering to be stronger or weaker based on the expected noise.
Looking at Calinou's screenshots makes me feel like JNLM breaks and just does a giant blur in a lot of situations where it probably shouldn't; the part under the stairs in the first screenshot set stands out a lot in a bad way.
I agree the feature loss is a bit much, but I think it's fixable by tweaking one of the constants I mentioned before, so I'd be interested in how much lower the color range would need to be so the shadows are not lost while a reasonable amount of noise is still reduced.
Force-pushed from 63f0752 to 4ca59f7.
Force-pushed from 38f53ff to c4b5ef4.
Testing project: https://0x0.st/HO8p.zip
I've tested the latest revision of this PR on the same testing project as above.
Note that I disabled the texture before baking lightmaps here, so the result appears naturally brighter than in the screenshots I posted yesterday. Other shadows in the scene fare better at lower denoiser levels, keeping a good level of detail while not exhibiting too much visible noise when textures are added to the mix:
The Denoise Strength property in LightmapGI should be hidden when Use Denoiser is disabled, as it has no effect in that case. This can be done by modifying LightmapGI::_validate_property() (see the existing examples there) and changing LightmapGI::set_use_denoiser() to call notify_property_list_changed() at the end of the method.
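A minimal sketch of what that could look like, following the existing _validate_property() pattern; the property string and member names below are assumptions rather than the PR's actual code:

```cpp
// Sketch only: hide Denoise Strength in the inspector while Use Denoiser is off.
// The property path "denoiser_strength" and the member names are assumed.
void LightmapGI::_validate_property(PropertyInfo &p_property) const {
	if (p_property.name == "denoiser_strength" && !use_denoiser) {
		p_property.usage = PROPERTY_USAGE_NONE;
	}
}

void LightmapGI::set_use_denoiser(bool p_enable) {
	use_denoiser = p_enable;
	// Refresh the inspector so the dependent property is hidden or shown immediately.
	notify_property_list_changed();
}
```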
A suggestion with the lightmapper in general: wouldn't it make sense to store a non-denoised lightmap, and have the denoiser make a copy of it and save a second denoised result that gets overwritten every time denoiser settings change? It'd speed up iteration times a ton when it comes to changing the denoising settings, and might make it more viable to add more settings to play around with.
@DarioSamo If we proceed with removing OIDN as discussed on RC, here's a commit that does it, removing all references in the license details and docs as well: -115k LOC, nice! Feel free to grab ownership and those sweet sweet negative LOC stats :D
That'd mean storing two copies of every lightmap, so it comes with the tradeoff of doubling the stored textures, as well as coming up with a new naming scheme for them and possibly file format changes. I think the ideal goal here would be to avoid having to change these settings as much as possible or just have very few of them, as we don't want to impact people's workflow significantly.
Ah sweet, I had a similar commit of my own but I missed the COPYRIGHT.TXT file. I'll just grab yours instead. I will proceed to make some changes and see if we can restore those shadows in Calinou's scene somehow.
it could be an editor setting, and you'd just call the OG one something like |
Force-pushed from c4b5ef4 to 6810d67.
Did @Calinou's change and also changed which variable of the denoiser the strength was linked to. While it can't exactly match OIDN's result, it is significantly improved by adjusting the strength according to the level of contrast in the scene. Taking it out of drafts so we can start testing it out more in detail. I don't think the design will change significantly at this point, but we might need to evaluate whether an additional setting is needed or not to have better control over it. I'd rather keep one single slider if we can.
Force-pushed from 540d8e8 to ab65eff.
This scene has really visible seams even without the denoiser. Honestly, it just looks to me like the lightmap size is way too low to expect any coherence between boxes that are joined together, and the denoiser is working over far too few pixels. An increase in the texture size also clearly helps with the coherence. But really, unless you can somehow get a better result with OIDN, which I doubt because the problem here is the coherence of the texture across the surface, you should just make a better model. You won't find this behaves much better in other engines when you give the lightmapper a hard time matching the UV2 across meshes like this. It is far easier to solve this by just making one surface instead of a bunch of boxes (that, and it's a waste of texture space considering the sides can't be seen at all).

For context for anyone else, this is the structure of the scene.
So, in conclusion, it's caused by bad UV2?
Implementation looks good to me.
In the Production team meeting we also discussed removing OIDN and agreed that we were okay removing it for now because the quality is only slightly better. We can evaluate bringing it back as an extension if there is user demand.
I have not tested locally, but I trust the testing done by other contributors.
The UV2 itself isn't the problem (although you could manually alter it to mitigate it further). The construction of the scene itself just works against how a lightmapper works in general. Each of those cubes has its own lightmap. A denoiser will only work in the context of one texture at a time. It's not gonna be possible for any denoiser to fix that unless it starts working with multiple lightmaps at once and figuring out how things align in world space, which is a really tough problem to solve. Think of how tough it can usually be to get a texture to tile properly across seams like this: now imagine how tough it's gonna be for a lightmapper to guarantee consistency across pixels, especially at low resolution, when one texel basically covers far more than one pixel on screen depending on the camera distance.
I think this is good to go now 🙂
Great work!
Thanks! This is a hefty reduction in the size of the codebase! Here's to faster build times 🎉
JNLM stands for Joint NL-Means.
Two years ago Blender dropped NLM in favor of the superior OIDN, and here, instead of properly implementing the latest version of OIDN, Godot drops it in favor of an inferior denoising algorithm? It doesn't even affect runtime binaries, just the editor, so I don't get it. NLM could be an external alternative denoiser, or something you're able to select among other options, but not the only core tech, since it gives worse results. Quality should take priority for an offline denoising tool, or it should at least be switchable (to get better results in the final bake). NLM as a faster (I doubt it against a multithreaded implementation of OIDN), selectable alternative sounds OK. Open Image Denoise is used by Blender and Unity.
It's already been explained in the original proposal: you'll never have an embedded OIDN integration that works properly due to how Godot's build system works, unless you want to port their custom compiler, CUDA, HIP, etc. Nobody's really up for maintaining that level of integration with how complex it can be. The purpose of this is to have an embedded alternative that works, and we can have a command line option you can set to use OIDN properly. The version that was embedded until now was slow to the point of being unusable (single threaded). I don't see any realistic scenario in which that was viable for production the way it was.
It's not an "alternative" if it's the only core option available. Any functionality you remove from the core or move outside will lead to fewer Godot users using and caring about it, and in turn fewer people interested in using an engine without industry-standard functionality (because nobody, or hardly anybody, maintains something outside the core that people don't care about; it's a vicious cycle). Why wouldn't you make it a selectable denoiser and try to get funding for a better implementation of OIDN, or simply spread awareness that it needs more love? (I know it's not that easy, but with OIDN removed from the core it's simply impossible.) It could also encourage implementing other selectable denoisers.
The plan for how to go about it is already laid out in the original proposal. Adding OIDN back is not off the table, and it'd definitely be as much a part of the core as this is; it's just that the way OIDN's build system works makes it prohibitive for Godot to embed it by itself (TBB, CUDA, HIP and ISPC). It could be as simple as automatically downloading the executable from Intel when you choose the option in the lightmapper, so people wouldn't have to go hunt down an unmaintained extension. I'm very much in favor of that being implemented. It was perhaps also missing from the blog post, but it's worth mentioning this denoiser can actually fix stuff OIDN was getting wrong or straight up breaking, as the OIDN integration lacked the information from surface normals and albedo, which helps guide the result a lot better than your average NLM implementation (hence the J part of the name).
@Listwon What it comes down to is that Godot can't use OIDN out of the box. If we want to use it, we have to cobble together a series of hacks to make it work with Godot. That's what we did previously, and it made OIDN borderline unusable (no multithreading support, no GPU support, and we were stuck on an old version). The effort to get OIDN working well in Godot would be monumental: it would increase the size of the editor by tens of MB and would require a significant amount of resources to maintain. Ultimately what we found was that, with the time savings from not using OIDN, quality can be increased to the point that denoising is less necessary, and the functional difference between the two approaches was minimized. Given that, it made little sense for us to keep OIDN around. We considered making the denoiser switchable between JNLM and OIDN, but the quality difference was too small to justify keeping OIDN around at this point. More funding is certainly welcome if you have some leads who would like to pay for bringing OIDN back. Given the substantial build system changes necessary and the amount of work to connect things like CUDA and HIP, we would likely need around $50,000 or so for the initial work and then slightly less, maybe $10,000 per year, to keep it up to date.
Is there any reason not to use |
That's pretty much what the proposal lays out we should do, and I agree. @Calinou pointed out there might be some other things to solve when it comes to the image format at least. Admittedly this PR may have gone out before that option was added back, but it wouldn't take as much effort and it'd yield a far better result than making OIDN part of Godot itself.
That's the current plan!
I think the only reason could be that it's a sample implementation and the commands can change between binary releases. But it's a good enough approach: it allows passing extra data to OIDN (albedo, normals) and running it on multiple CPU threads or on the GPU.
I guess loading the dynamic lib should not have these issues (at least across the same major version), and can be done in the same way as executable (the only downside, there's no ARM64 macOS prebuild). Here's a quickly made test for dynamic library loading - https://github.com/bruvzg/godot/tree/oidn_external_lib (seems to work, at least on macOS), library location can be set in |
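For illustration, a minimal sketch of what runtime dynamic loading of the OIDN library could look like. This is not taken from the linked branch; the library file name, the error handling and the hard-coded path are assumptions, and only the oidnNewDevice entry point from OIDN's public C API is resolved here:

```cpp
// Illustrative sketch: load the OIDN shared library at runtime instead of
// shelling out to an executable. Not the code from the oidn_external_lib branch.
#include <dlfcn.h>
#include <cstdio>

// Matches the shape of oidnNewDevice() from OIDN's C API (OIDNDevice is an opaque pointer).
typedef void *(*OIDNNewDeviceFn)(int device_type);

int main() {
	// Library name is platform dependent (.so/.dylib/.dll); assumed to be on the search path.
	void *handle = dlopen("libOpenImageDenoise.so", RTLD_NOW);
	if (!handle) {
		std::fprintf(stderr, "OIDN library not found: %s\n", dlerror());
		return 1;
	}
	OIDNNewDeviceFn oidn_new_device = (OIDNNewDeviceFn)dlsym(handle, "oidnNewDevice");
	if (!oidn_new_device) {
		std::fprintf(stderr, "oidnNewDevice symbol missing\n");
		return 1;
	}
	void *device = oidn_new_device(0); // 0 == OIDN_DEVICE_TYPE_DEFAULT
	std::printf("OIDN device created: %p\n", device);
	// Real code would commit the device, create an "RT" filter, and release everything afterwards.
	dlclose(handle);
	return 0;
}
```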
That's pretty sweet, I think you'll have an easy chance for this to get accepted now that the JNLM denoiser is in.
Original proposal: godotengine/godot-proposals#7640
Background
"Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings" (https://benedikt-bitterli.me/nfor/nfor.pdf) mentions in its opening chapter the possibility of using Non-Local Means with Joint Filtering, which has a fairly simple implementation over at https://github.com/ManuelPrandini/YoctoImageDenoiser. However, upon reviewing the implementation I found a few minor corrections that needed to be made and decided having the algorithm run in a compute shader would definitely yield far better results when it comes to runtime.
As mentioned in the proposal, the current implementation of OIDN is fairly outdated and, most importantly, extremely slow due to being forced to use a single CPU thread. On the scene I used for testing this out, the overall rendering time went down from 3 minutes to just over 1 minute on an RTX 3090 Ti with an 8K lightmap.
While the visual quality is not expected to compete due to the nature of the algorithms (one is just a compute shader, the other uses an entire neural network and definitely understands the topic far more than I do), I think the result is overall fairly competitive, and this might prove to be a decent fallback if we choose to go the route of only offering OIDN via the CLI.
One interesting advantage JNLM has over OIDN here is that it can use both the albedo and normal inputs to reduce artifacts like seams and improve the denoising and filtering process significantly. This could potentially be improved further if we can think of more auxiliary inputs to help the algorithm out.
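To make the "joint" part concrete, here is a rough CPU-side sketch of the weighting idea: a candidate pixel contributes more the closer its color patch, albedo and normal are to the pixel being denoised. The actual implementation is the compute shader in lm_compute.glsl; the names and sigma parameters below are placeholders, not the shader's real constants.

```cpp
// Rough illustration of joint NLM weighting, not the shader's actual code.
// The albedo and normal terms are what distinguish JNLM from plain NLM:
// they keep the filter from blurring across texture or geometry edges.
#include <cmath>

struct GuideSample {
	float albedo[3];
	float normal[3];
};

static float squared_distance3(const float *a, const float *b) {
	float d = 0.0f;
	for (int i = 0; i < 3; i++) {
		float diff = a[i] - b[i];
		d += diff * diff;
	}
	return d;
}

// patch_color_dist is the precomputed patch-wise color distance between the
// center pixel and the candidate pixel; the sigmas control how aggressively
// each term blurs (placeholder parameters, tuned empirically in practice).
float jnlm_weight(const GuideSample &center, const GuideSample &candidate,
		float patch_color_dist, float sigma_light, float sigma_albedo, float sigma_normal) {
	float w_light = std::exp(-patch_color_dist / (sigma_light * sigma_light));
	float w_albedo = std::exp(-squared_distance3(center.albedo, candidate.albedo) / (sigma_albedo * sigma_albedo));
	float w_normal = std::exp(-squared_distance3(center.normal, candidate.normal) / (sigma_normal * sigma_normal));
	// The denoised value is then the weight-normalized sum of candidate colors.
	return w_light * w_albedo * w_normal;
}
```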
Lighting Only: Raw / JNLM / OIDN comparison screenshots
Composite Result: Raw / JNLM / OIDN comparison screenshots
Feedback
My reason for the PR is that I think it'd be interesting if people could test the branch out and see if they achieve decent enough results for this implementation to act as the future fallback when OIDN is not installed. However, seeing as it got very close, I think it would be reasonable to also experiment with this algorithm and see if with some tweaks the result could be even closer than what I initially got.
For that I'd like to point out the following code in the shader itself:
https://github.com/DarioSamo/godot/blob/ed6a4e53e7420e93ca07bd48960e6eebe00c2f67/modules/lightmapper_rd/lm_compute.glsl#L753-L777
Some of the feature loss, such as the shadows and the particular areas shown in the other pictures, might be fixable by playing around with these values and finding a sweet spot. It'd be preferable not to expose them to end users, as it can be hard to understand what direct impact these values have, but that could be evaluated if it's deemed necessary to fit the needs of other scenes.
TODO