[Merged by Bors] - Use the infinite reverse right-handed perspective projection #2543
Conversation
Force-pushed from 27f668a to 3a0074c.
I fixed the bug.

```diff
diff --git a/pipelined/bevy_pbr2/src/render/pbr.wgsl b/pipelined/bevy_pbr2/src/render/pbr.wgsl
index ff427aa93..4f12c6792 100644
--- a/pipelined/bevy_pbr2/src/render/pbr.wgsl
+++ b/pipelined/bevy_pbr2/src/render/pbr.wgsl
@@ -531,7 +531,7 @@ fn fragment(in: FragmentInput) -> [[location(0)]] vec4<f32> {
 // # endif
     var V: vec3<f32>;
-    if (view.view_proj.w.w != 1.0) { // If the projection is not orthographic
+    if (view.projection.w.w != 1.0) { // If the projection is not orthographic
         // Only valid for a perpective projection
         V = normalize(view.world_position.xyz - in.world_position.xyz);
     } else {
```

The problem was that the view projection matrix's bottom-right value being 1 does not indicate an orthographic projection in all cases; that value can be 1 with perspective projections too. Rather, it is the projection matrix that has a bottom-right value of 1 for an orthographic projection. As such, this PR now also adds some useful matrices (view, inverse view, projection, inverse projection) to the ViewUniform, and adds back MeshUniform to be able to include an inverse transpose model matrix for correctly transforming normals. These matrices are standard built-in matrices in Unity (https://docs.unity3d.com/Manual/SL-UnityShaderVariables.html), and Unreal Engine has similar ones.
Force-pushed from 5c3c316 to 8b686c1.
We should update the View binding in the sprite shader (and the upcoming depth shader in #2727).
```diff
@@ -42,6 +42,10 @@ pub struct ExtractedView {
 #[derive(Clone, AsStd140)]
 pub struct ViewUniform {
     view_proj: Mat4,
+    view: Mat4,
+    inverse_view: Mat4,
```
I agree that ultimately storing pre-computed inverses of these matrices is valuable, but I think I'd like to hold off until after we add "shader imports". The rationale is that every additional field we add here needs to be copied (correctly) into custom shaders. The smaller this binding is, the easier it is to write a custom shader. Given that we don't currently use the inverses in our shaders, I think it makes sense to temporarily postpone these additions.
I feel like I disagree on this one. These matrices are standard to have available in other engines. Copying and pasting them from the PBR shader into another custom shader doesn't make any difference whether it's one matrix or 5. And views are few so the size argument doesn't really have an impact for this one I think?
While probably true in practice (all copy-pastes are approximately equal difficulty), I think a "large" binding that every shader needs to include has real implications for how people perceive Bevy. We want our "custom shaders" examples to be as small / terse / concise as possible. Users who see huge bindings with data irrelevant to the shader they are writing will almost certainly react negatively. Obviously "binding imports" are the long-term solution to this problem, but it's very possible that those won't land in 0.6.
I think we're more likely to find people who want to write a shader, wonder what the built-in bindings are, and wonder why a standard matrix that exists in other engines isn't accessible. They might then try to use an inverse function in WGSL and find that it doesn't work, implement their own and find it is slow, maybe make their own binding instead, and perhaps just complain at one or more points along this journey, not understanding why Bevy doesn't have the standard collection of matrices.
That said, I agree to disagree and I’ll do it.
Done. I kept the projection matrix as it is used, but view, inverse view, and inverse projection are gone.
```diff
@@ -10,7 +14,8 @@ var view: View;

 [[block]]
 struct Mesh {
-    transform: mat4x4<f32>;
+    model: mat4x4<f32>;
+    inverse_transpose_model: mat4x4<f32>;
```
Data transfer is expensive. Adding another 4x4 matrix to each mesh binding probably meaningfully registers in our frame time. Have you measured this relative to the cost of computing it in the shader? Also, according to this article, the inverse transpose isn't even needed for non-uniform scales; we can just do their "scaling" calculation in the shader.
Ok. I'll implement that tomorrow.
Inverses of 3x3 and 4x4 in the shader are expensive and naga doesn't support inverse yet. I've implemented them manually for 3x3 and 4x4 previously and they're pretty expensive. As there is an alternative, cheaper method, I'm fine with doing that.
Although, this would prevent non-standard model matrices where the axes are not orthogonal to each other. I don't know if we want to cater to that...
> Inverses of 3x3 and 4x4 in the shader are expensive and naga doesn't support inverse yet.

Riiight. Totally forgot about that. Pretty weird :)

> Although, this would prevent non-standard model matrices where the axes are not orthogonal to each other. I don't know if we want to cater to that...

Haha, yeah, I'm fine with not working in that context. That's pretty far off the reservation, and if/when it becomes a problem, we can just make it a configuration option.
Well, I thought I'd have a quick go at a naive benchmark. I made a cubic grid of cubes, 30 x 30 x 30 = 27000 cubes, and disabled vsync (on macOS; not 100% certain that works). I tried just multiplying the vec3 normal by the 3x3 model matrix (as a baseline, even though the special-case method would use a bit more computation to apply the inverse square scales), and separately multiplying by the 3x3 inverse transpose model matrix.

On my MacBook Pro 16 with Radeon Pro 5500M (~mobile GTX 1050Ti performance), I get about 17-17.5fps with the model matrix, and 18-19fps (???) with the inverse transpose of the model matrix. I don't know why the latter is faster, but it is. I made sure to change the minimum binding size.

So I feel like it's unnecessary to make this change, and it's better to just do it properly? 27000 * 64 = 1728000 bytes extra per frame.
I tried with 46^3 = 97336 cubes, which would be 6229504 bytes extra per frame, and I'm seeing similar results: ~4.9fps for just multiplying by the model matrix (and removing the inverse transpose matrix from the binding and shader, and reducing the minimum binding size) and ~5.2fps (being generous, it was more like 5.3fps) when using the inverse transpose.
One thing I just realised: MeshUniform uses dynamic offsets, so the alignment is required to be 256 bytes. So I don't think this will make any difference in practice as without the inverse transpose model matrix, the binding size is 80 (mat4x4 64 bytes + u32 4 bytes but rounded up to the next alignment of 16 = 80) and with it it is 144 bytes (2*64+4 = 132, rounded up to the next mod 16 is 144.) As both of these fit within the alignment, they will use the same amount of data per frame!
It still doesn't explain why using just the model matrix is slower, but it would explain why using the inverse transpose would not be slower.
Here's my example I was using for benchmarking if you want to test it yourself: https://gist.github.com/superdump/5a02caa6747de6892448dbed749dc767
I tested this on Intel UHD 630 as well and saw the same odd behaviour of 3.8-3.85fps with multiplying the model matrix, and 3.9-4.1, mostly around 4fps for multiplying the inverse transpose model matrix.
@superdump and I already discussed this on discord, but for anyone else paying attention: I was also able to repro this benchmark. Weirdly I was also able to reproduce that the current impl is faster. For now I see no reason not to roll with this impl.
…ction: This projection is industry standard and offers better precision distribution over the depth range than forward projections.

…forms: view, inverse view, projection, and inverse projection are commonly-used matrices derived from the view; model and inverse transpose model are commonly-used matrices for meshes. Normals should be transformed by the inverse transpose of the model matrix so they are transformed correctly under non-uniform scale.

A projection is not necessarily orthographic when the view projection matrix's bottom-right element is 1.0; rather, the projection matrix's bottom-right element is 1.0 for an orthographic projection. Now that the projection matrix is bound, this fix is possible.
Force-pushed from 8b686c1 to dc9e501.
Force-pushed from f0457e0 to cf8a4ff.
bors r+
Pull request successfully merged into pipelined-rendering. Build succeeded.
Objective
Forward perspective projections have poor floating point precision distribution over the depth range. Reverse projections fare much better, and with a reverse projection, instead of having to have a far plane, using an infinite far plane is not a problem. The infinite reverse perspective projection has become the industry standard. The renderer rework is a great time to migrate to it.
Solution
All perspective projections, including point lights, have been moved to using `glam::Mat4::perspective_infinite_reverse_rh()` and so have no far plane. As various depth textures are shared between orthographic and perspective projections, a quirk of this PR is that the near and far planes of the orthographic projection are swapped when the Mat4 is computed. This has no impact on 2D/3D orthographic projection usage, and provides consistency in shaders, texture clear values, etc. throughout the codebase.

Known issues
For some reason, when looking along -Z, all geometry is black. The camera can be translated up/down / strafed left/right and geometry will still be black. Moving forward/backward or rotating the camera away from looking exactly along -Z causes everything to work as expected.
I have tried to debug this issue but both in macOS and Windows I get crashes when doing pixel debugging. If anyone could reproduce this and debug it I would be very grateful. Otherwise I will have to try to debug it further without pixel debugging, though the projections and such all looked fine to me.