PBR / Clustered Forward Rendering #179
Hey, that's a great discussion to have. I suggest changing the title to something more actionable so that we can instinctively have an idea of the "lifetime" of this issue. An example would be "Make a plan regarding X" or "Investigate the possibility of Y". It's just a small guideline that helps keep track of where a discussion should go and end :)
Or maybe even just "Clustered Rendering". As @GabLotus said, in general I'd like to avoid issues without a clear definition of done. I'm fine with issues being big, but they should have clear outcomes. (I also fully admit that I've set a bad example with some of my past issues)
What I'd like to see happen with this task is to gather opinions and possibly find consensus on a high-level default render pipeline structure for bevy to target in the very near term and a little longer term (i.e. very near term = ~3 months, little longer term = ~12 months). Ideally we could come up with a very brief roadmap that provides value now and also provides a path to improve on it later. I would recommend keeping focus on forward rendering today, and favoring choices that can fold well into doing clustered forward rendering later.
I'm Fusha on Discord, basically just coming to throw my 👍 into the hat on clustered forward, but also to comment and mention a couple of other related things:
Hmm, I don't think you necessarily need to rely on one big uber shader. Godot does this, and quite frankly it's a mess. Plus, WSL won't support shader defines. Instead, if we really want a more modular, adaptive shader, we should probably build shaders from smaller pieces programmatically. The other option is to use shader includes as much as possible to dedup code and have different shaders as different files. Currently, though, the shader system in place does not support shader includes. I created an issue about it here: #185
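To illustrate the "build shaders from smaller pieces programmatically" idea, here is a minimal, hypothetical sketch. The `ShaderChunk` type and `assemble` function are illustrative names only, not Bevy API; the point is just that requested snippets are concatenated in order while shared helpers are deduplicated, which is the behavior shader includes would otherwise provide.

```rust
/// A named, reusable piece of shader source (illustrative type, not Bevy API).
struct ShaderChunk {
    name: &'static str,
    source: &'static str,
}

/// Concatenate the requested chunks in order, skipping repeats so shared
/// helper code (e.g. a BRDF function) is only emitted once.
fn assemble(chunks: &[ShaderChunk], wanted: &[&str]) -> String {
    let mut out = String::new();
    let mut emitted: Vec<&str> = Vec::new();
    for &name in wanted {
        if emitted.contains(&name) {
            continue; // dedup: this chunk was already included
        }
        if let Some(chunk) = chunks.iter().find(|c| c.name == name) {
            out.push_str(chunk.source);
            out.push('\n');
            emitted.push(name);
        }
    }
    out
}
```

A real implementation would also need to handle ordering constraints between chunks (declarations before uses), but the dedup-and-concatenate core is the same.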
I'm not familiar with using NPR tonemapping in this way; do you have any articles or papers on the subject? On mobile, reducing render passes becomes much more important. I'm not convinced we should limit the desktop by mobile (low-end) constraints, though. Perhaps going down the path of splitting the renderer into two different graphs would make more sense. Internally, we could build the graph based on user settings and hardware limits. Also, this is an interesting read:
And here I'm trying to add it in after the compute stuff is done. 😄 I don't mind waiting, but adding clustered shading doesn't change a lot of stuff.
What I had in mind for the 3-ish months was to spend some time improving and polishing a simple forward renderer: PBR, bloom, HDR... In particular, get shadows up and running, because shadows touch a lot of rendering systems (multiple views, multiple passes in a material, can be done in parallel with other rendering stages). Clustered forward rendering sounds to me like the direction we want to be headed, and it seems like there is consensus on that so far. Could prototyping/R&D for clustered forward happen in parallel with fleshing out the current pipeline? I think this would help limit the risk of things stalling out.
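For readers unfamiliar with the technique being discussed: the core of clustered forward rendering is subdividing the view frustum into a 3D grid of "clusters" (screen tiles times depth slices), assigning lights to clusters ahead of time, and then having each fragment shade only against the lights in its own cluster. A hedged sketch of the fragment-to-cluster mapping, with all names and parameters illustrative rather than taken from Bevy:

```rust
/// Map a fragment to its cluster index from its screen position and
/// view-space depth. Exponential depth slicing is used because it keeps
/// clusters roughly uniform in size in view space (a common choice).
fn cluster_index(
    frag_x: f32, frag_y: f32, view_z: f32, // fragment screen pos + view-space depth
    screen: (f32, f32),                    // screen width/height in pixels
    grid: (usize, usize, usize),           // cluster counts in x, y, z
    near: f32, far: f32,                   // camera depth range
) -> usize {
    let (nx, ny, nz) = grid;
    let tile_x = ((frag_x / screen.0) * nx as f32) as usize;
    let tile_y = ((frag_y / screen.1) * ny as f32) as usize;
    // Exponential slice: slice = log(z / near) / log(far / near) * nz
    let slice = ((view_z / near).ln() / (far / near).ln() * nz as f32) as usize;
    tile_x.min(nx - 1) + tile_y.min(ny - 1) * nx + slice.min(nz - 1) * nx * ny
}
```

The per-cluster light lists are then built on the CPU (or in a compute pass) each frame, which is why the approach scales to many lights while keeping a single forward shading pass.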
We could likely create a plugin for it (similar to
Check out this part of the Filament docs: https://google.github.io/filament/Filament.html#imagingpipeline/validation/scenereferredvisualization Also, tonemapping (at least global tonemapping, i.e. where each fragment only has information about itself, which is the case for most tonemapping operators used in games) can simply be done as the last step of the main uber shader; it doesn't necessarily need to be a separate render pass.
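To make the "last step of the main shader" point concrete: a global tonemapping operator maps each fragment's HDR color to display range using only that fragment's own value, so it needs no neighborhood information and therefore no extra fullscreen pass. A minimal sketch using the classic Reinhard operator purely as an example (not necessarily the operator Bevy would pick):

```rust
/// Reinhard tonemapping, c / (1 + c): compresses unbounded HDR radiance
/// into [0, 1) per channel, using only the fragment's own color.
fn reinhard(hdr: [f32; 3]) -> [f32; 3] {
    [
        hdr[0] / (1.0 + hdr[0]),
        hdr[1] / (1.0 + hdr[1]),
        hdr[2] / (1.0 + hdr[2]),
    ]
}
```

Local operators (which sample neighboring pixels, e.g. for bloom-aware mapping) are the case that genuinely requires a separate pass.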
Lots of great conversation happening here. I really appreciate the thoughtfulness and expertise you all are bringing to the table. I will defer to you all here when it comes to clustered. It seems like a good place to start. Ideally we experiment with multiple paradigms and build things in a way that makes them reusable across paradigms. The Armory project is a pretty good example of supporting forward and deferred with the same pieces. However, that isn't a hard requirement. We can always try to modularize later if we need to.

Uber shaders don't scare me as an output, but they definitely do scare me from an organizational standpoint. Using imports to create scoped (and ideally reusable) pieces of shader logic seems like the right call, with or without uber shaders. Uber shaders can perform quite well in some contexts (ex: the Dolphin emulator project had great success with their "uber shader" effort).

The Filament docs really are great. We should probably start a collection of rendering resources that we can all learn from. When you start implementing, please record in the repo what sources you used (both to give appropriate credit and to help new contributors).

I'm starting to consolidate my thoughts on what the Bevy development process should look like. I think the main bevy crates (and the bevy repo) should be for building our current best ideas for "final" implementations of core functionality. Ex: eventually bevy_pbr will contain the default PBR plugin that everyone uses. Pushing code there will be a signal that we have made a decision to take that crate in a given direction. But spending time discussing and worrying about building the 100% correct solution will stall us and force us to get caught up in theoreticals. Almost without exception, I think we should be building "fast and loose" prototype code outside of the main bevy repo, probably with some naming convention like

In the short term, I encourage you all to create and distribute your own crates for experimentation (while being respectful of the core bevy_XXX namespace). As specific implementations gain momentum and stability, we can then start discussing centralization of efforts. I'll try to give appropriate visibility into the various distributed projects to help direct people's attention and avoid duplicate work. I'll also be setting up working groups for specific focus areas (and PBR will be one of them).
This, I guess, raises the next question: how do we separate out the shaders into different crates? Ideally we should have a way of sharing the PBR implementation between a
Yeah, I agree that long term, breaking them up into separate crates could be beneficial (or alternatively, just separate modules in the same crate). Short term, I expect making crate divisions will hamper productivity. But if that workflow works for the implementors, I'm cool with it. As you saw, right now shader includes don't work. I don't see a huge problem with building "big shaders" first and then breaking them up later. But it's very possible we can make includes work with a small amount of effort. I just don't want to waste too much effort on that when naga is so close.
Yeah, these are my thoughts as well, and it's what I wanted to convey originally.
We had a short discussion in discord tonight RE: next short-term steps. I'll summarize here for further discussion: Now:
Later:
(I don't think this needs to block someone doing R&D for forward clustered rendering as a longer-term project on the side.)
I opened a draft PR #261, which is a somewhat-working implementation of Google's Filament. I think it's a good starting point for getting a feel for how we want PBR to work in bevy.
Thinking about bind groups and how limited we are on them, I came up with the following (WIP) non-exhaustive list: Textures:
Lighting:
This brings us to 13 textures, which gives us a little wiggle room for more. We can request more than 16, but remember that it will then no longer align with the WebGPU min spec, won't run on all hardware, and may not run on the web at all.
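The 16-texture figure above corresponds to WebGPU's guaranteed minimum for `maxSampledTexturesPerShaderStage`. A hypothetical budget check like the following (the helper and the example binding names are illustrative, not Bevy's actual layout) shows the arithmetic being discussed:

```rust
/// WebGPU's minimum guaranteed `maxSampledTexturesPerShaderStage`.
const WEBGPU_MIN_SAMPLED_TEXTURES_PER_STAGE: usize = 16;

/// Sum the planned texture bindings and report whether they fit the
/// minimum spec; returns (total, fits_min_spec).
fn fits_min_spec(texture_bindings: &[(&str, usize)]) -> (usize, bool) {
    let total: usize = texture_bindings.iter().map(|(_, n)| n).sum();
    (total, total <= WEBGPU_MIN_SAMPLED_TEXTURES_PER_STAGE)
}
```

Requesting a higher device limit is possible on capable hardware, but as noted above, the pipeline then stops being guaranteed to run everywhere.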
It might not be ready for use for a while, but it would be awesome if we could leverage Embark's recently announced Rust GPU project:
A
#1554 has merged, adding a lot of basic PBR functionality!
This is a bit far ahead of what's currently implemented, but some form of global illumination would be nice to have (something like voxel cone tracing, or GI (and soft shadows) via distance fields).
I'm not sure if it's a good idea to extend this issue or if we want to create a new one, but I think it would be good to look into the options for higher-level rendering features and figure out some kind of plan for what we'd like to implement. I think this would help provide focus for render feature implementation and also give more coverage for consideration in the renderer rework effort that is happening at the time of writing. For readers, I'm just a random interested party in the community. :)

**Main Target and High Fidelity Target**

Discussions have been held on Discord in the rendering channel that lean toward having a main render path that 'works anywhere' - no clear guidelines were decided upon, but @cart suggested perhaps 'works on 10-year-old desktop hardware' was not unreasonable. This is around the NVIDIA GeForce 500/600 series and AMD Radeon HD 6000/7000 series (HD 7000 was the start of the GCN architecture). Some others, me included, would like to use more advanced features of modern hardware and APIs to achieve graphical results with higher fidelity. I think @StarArawn noted the 'two target' approach first, similar to Unity's Universal Render Pipeline and High Definition Render Pipeline. It was suggested by people who know better that this would likely result in different architectures, as we see the high-fidelity state-of-the-art renderers tending to use a deferred architecture, more compute shaders, hardware-accelerated ray tracing, mesh shaders, etc. This seems like it would lead to a plan to focus on building the main 'works anywhere' target first, though of course the community can build in whatever order it wants.

NOTE: This is just a brain dump of some things I've been looking at recently. There are a lot more pieces to consider.
And we'd probably want to dig a bit deeper to consider pros and cons, and what can fit into the main

**Lighting**

**Ambient occlusion**

Purpose: Ambient occlusion addresses medium- to long-range occlusion of ambient light (which is light that has undergone many bounces and is in some sense 'omnidirectional'). Note that it should be used in conjunction with the occlusion texture in PBR models, as that occlusion texture provides more fine-grained, short-range occlusion information.

Options:
Status: I have a mostly-working SSAO implementation, but it relies on multiple branches to aid doing fullscreen passes (which are needed for bloom anyway), global resource bindings (the render resource nodes are going away / going to be reworked), and Draw/RenderPipelines taking a type parameter so that multiple such components can be added to Entities in order to be consumed in multiple separate render passes (I think this also goes away, or perhaps needs modifying, after the renderer rework).

**Global Illumination**

Global illumination is about accounting for the indirect contributions from 'directional' (i.e. non-ambient) light sources such as directional / point / area / etc. lights.

Options:
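On the point above about using screen-space AO together with the PBR material's occlusion texture: the material texture supplies fine, short-range occlusion while SSAO supplies the medium/long-range term, and a common, simple way to combine them is to multiply (taking the minimum is another option). A hedged sketch with an illustrative function name, not Bevy code:

```rust
/// Combine the material's baked occlusion (short-range) with a
/// screen-space AO term (medium/long-range). Both inputs are in [0, 1],
/// where 1.0 means fully unoccluded; multiplying lets either term darken
/// the result.
fn combined_ambient_occlusion(material_ao: f32, ssao: f32) -> f32 {
    (material_ao * ssao).clamp(0.0, 1.0)
}
```

The combined factor then scales only the ambient/indirect lighting contribution, not direct light, which is what distinguishes AO from shadowing.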
To add to GI methods:
Ambient occlusion:
It's been suggested on Discord that this should probably be a separate discussion. GI and AO could perhaps go into one. But I'd also like somewhere to discuss an overview and maintain the current state of renderer discussions. Other ideas that come up are dynamic sky / atmosphere, volumetric lighting and fog, crepuscular rays, depth of field, chromatic aberration and other physical camera things, colour grading, clouds, and weather. A number of those would be better implemented as plugins, and considering how to do that would impact the APIs of the core renderer. The point isn't to plan out exactly what and how to do everything up front, but rather to give some idea of what things are needed / desirable, what methods are good, what rough order they should be done in with respect to the visual benefits they bring, what can be done on top of bevy_render / bevy_pbr straight away and what needs more work there first, etc. @cart - where do you think such an overview should be discussed and a summary maintained?
This should probably start as a github discussion. Most of the features discussed are orthogonal to each other, so I don't see much value in having an official "global list and implementation order" for all render features. But it would be good to identify which features intersect with each other (and how). And collecting implementation details / algorithm options / requirements for each feature seems valuable. This might help inform renderer api decisions. Feel free to create a github discussion if you want to facilitate this "information collection and consolidation" effort. But when it comes time to start making "official plans", each feature should have its own RFC so we can discuss it in detail. |
As a note, @ChangeCaps implemented bloom here: #2876, and I have a branch that implements clustered-forward rendering: https://github.com/superdump/bevy/tree/clustered-forward-rendering
PR for clustered forward rendering is up:
Closing this out, since #3153 merged!
This is a Focus Area tracking issue
PBR is a standard-ish way of rendering realistically in 3D. There is both a lot of interest and a lot of brain-power in this area, so it makes sense to build PBR now. This focus area has the following (minimum) requirements:
Active Crates / Repos
Sub Issues
No active issues discussing subtopics for this focus area. If you would like to discuss a particular topic, look for a pre-existing issue in this repo. If you can't find one, feel free to make one! Link to it in this issue and I'll add it to the index.
Original Post (sorry @aclysma for stomping on this)
There was a discord conversation that I think is worth capturing. I'll do my best but I may miss some people or get some sentiments wrong. I also don't know everyone's github names.
StarToaster, fusha, and aclysma (me) all commented that clustered forward rendering was a good overall model to pursue. (Possibly matthewfcarlson too; I'm not sure if he was agreeing or just linking a helpful doc :D)
@cart suggested later:
To summarize, the main advantages of this approach are: