KHR_materials_transmission #1698
I think that we should make a painter's (depth sorting) algorithm normative - otherwise we risk objects 'popping' in and out.
Mike is correct that specific rendering techniques are out of scope for the normative glTF spec. This particular suggestion would prohibit many real-time implementations, which is not what we want.
Separately, 3D Commerce may require something along these lines for certification, but beware that you may be excluding more clients than you bargained for here.
I think it could pose a performance problem for a particular type of real-time implementation - mostly WebGL / script-based ones.
The cost would be performance - however, the gain would be predictable results.
Depending on the implementation (language and platform) you would get varying degrees of performance penalty; for instance, a C-based engine could do it on the CPU without too much overhead.
A Vulkan-based engine could do it on the GPU.
I would not want us to limit this due to current viewer implementation bottlenecks - it could be a good motivator to push development forward.
The rendering should not be normative.
Depending on the rendering method (forward vs. deferred, plus its variations), different and better algorithms do exist.
E.g. pixel sorting (a.k.a. order-independent transparency) should not be excluded.
From my point of view, we should point to non-normative approaches.
@rsahlin Also consider that many glTF implementations exist within rendering libraries that are used by 3rd-party applications. The library cannot dictate how the application structures its render passes or loops. That's part of what makes this extension particularly challenging, as it imposes some nontrivial best practices there. These should be as clearly spelled out as we can manage, but not marked normative. Little more than the glTF file structure and its meaningful interpretation is normative to glTF.
I think it is a simplification to call this the 'rendering' - to me it boils down to what is visible through the primitives using transmission.
If it's the sorting algorithm you object to, then I would suggest wording similar to:
'The background and other objects, including primitives using transmission, should be visible through the material (that is using transmission)'
As discussed on today's call, Sketchfab's refraction material has a similar baseline to what is proposed here as an implementation note: we only require the opaque scene to be sampled by the transmission material. Some highlights from their documentation:
And for convenience, here are some sample models that demonstrate their refraction material.
Sketchfab has established this as an acceptable baseline for their community and I would hope that our spec would be structured in such a way that their implementation is conformant. It seems like a reasonable baseline.
Seems to be a similar situation for Blender's realtime renderer (Eevee). It cannot show refractive objects behind other refractive objects, but can display transparent and opaque objects behind refractive objects. https://blenderartists.org/t/realistic-glass-in-eevee/1149937/29
Please make it clear that this is an implementation note:
This section is normative. The non-normative "Implementation Notes" are below this section.
Isn't this missing the G term and the normalization, like in f_specular?
f_transmission = (1-F) * T * baseColor * D_T * G_T / (4 * abs(dot(N, L)) * abs(dot(N, V)))
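For concreteness, here is a rough CPU-side sketch of that corrected term. It assumes the Trowbridge-Reitz (GGX) distribution for D_T and a Schlick-GGX Smith approximation for G_T, uses scalars instead of per-channel colors, and all function and variable names are mine, not from the spec.

```typescript
// Illustrative sketch of the proposed transmission term, assuming GGX for D_T
// and a Schlick-GGX Smith approximation for G_T. Names are hypothetical.

function dGGX(nDotH: number, alpha: number): number {
  // Trowbridge-Reitz (GGX) normal distribution function.
  const a2 = alpha * alpha;
  const d = nDotH * nDotH * (a2 - 1.0) + 1.0;
  return a2 / (Math.PI * d * d);
}

function gSmith(nDotL: number, nDotV: number, alpha: number): number {
  // One common Schlick-GGX approximation of the Smith geometry term.
  const k = alpha / 2.0;
  const gL = nDotL / (nDotL * (1.0 - k) + k);
  const gV = nDotV / (nDotV * (1.0 - k) + k);
  return gL * gV;
}

// f_transmission = (1 - F) * T * baseColor * D_T * G_T / (4 |N.L| |N.V|)
function fTransmission(
  fresnel: number, transmission: number, baseColor: number,
  nDotH: number, nDotL: number, nDotV: number, alpha: number
): number {
  const d = dGGX(nDotH, alpha);
  const g = gSmith(Math.abs(nDotL), Math.abs(nDotV), alpha);
  return (1.0 - fresnel) * transmission * baseColor * d * g /
    (4.0 * Math.abs(nDotL) * Math.abs(nDotV));
}
```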
Yes, quite right. Thank you.
I personally would benefit from clarity here with regards to D_T. This states that D_T is the Trowbridge-Reitz distribution function, but sampled along the view direction rather than the reflected direction. The microfacet distribution function, D, is calculated using dot(n, h). Is it correct that this should instead use dot(n, -v) in this calculation when we deal with transmission? Is it that simple?
I think the dot(n, h) term shouldn't come into play at all since the transmitted light isn't being reflected, but I could be wrong. The incoming light direction should be sampled along the view direction because you're looking through the material and are seeing completely unrefracted light. The transmitted light is then scaled by the Fresnel to account for the portion of light reflected off the inside surface of the material (and, therefore, not reaching your eye). Not sure at all if this is correct, so it would be good to have others (like @proog128) look at this.
To my understanding, the dot product is relevant as we still evaluate the microfacet distribution. There's a brief section on the topic in the DSPBR spec https://bit.ly/2NpmJVn (eq. 5 and fig. 4). We just use a modified half-vector, which uses the flipped light direction.
Yes, you're right. You'd flip it across the plane defined by the surface normal and then it would be centered around the same direction as the view.
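To illustrate the flip being described (helpers and names are mine, not from the spec text): mirror the light direction across the plane defined by the surface normal, then build the half-vector from the mirrored light and the view direction, so D can still be evaluated with dot(n, h).

```typescript
// Illustrative sketch only: compute a transmission half-vector by mirroring
// the light direction across the plane defined by the surface normal.
type Vec3 = [number, number, number];

const dot = (a: Vec3, b: Vec3): number => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const scale = (a: Vec3, s: number): Vec3 => [a[0] * s, a[1] * s, a[2] * s];
const add = (a: Vec3, b: Vec3): Vec3 => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const normalize = (a: Vec3): Vec3 => scale(a, 1.0 / Math.sqrt(dot(a, a)));

// l points from the surface toward the light, v toward the viewer, n is the normal.
function transmissionHalfVector(n: Vec3, v: Vec3, l: Vec3): Vec3 {
  // Mirror l to the same side of the surface as v.
  const lMirror = normalize(add(l, scale(n, 2.0 * dot(scale(l, -1), n))));
  // The usual half-vector construction, but with the mirrored light direction,
  // so the distribution can still be evaluated with dot(n, h).
  return normalize(add(lMirror, v));
}
```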
In this sentence you mention Trowbridge-Reitz by name, but we only suggest that model in Appendix B as non-normative. Should it be mentioned here? I would consider rewriting this as:
I'm leaning towards requiring a painter's algorithm (depth sorting) of triangles - in my opinion this should be normative.
Making depth sorting of triangles normative would be a huge step towards consistent results across viewers (bar blurring and possibly other effects).
Without it, whole objects may pop in and out of existence.
However, I'm not sure how to approach blurring, since I think this will be the most 'heavy' effect to implement for multiple layers/objects.
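To make the suggestion concrete, here is a minimal painter's-algorithm sketch. It is purely illustrative: it sorts per draw (by centroid) rather than per triangle, and the types and names are hypothetical.

```typescript
// Illustrative painter's-algorithm sketch: sort transmissive draws back-to-front
// by distance from the camera before blending. Types and names are hypothetical.
interface Draw {
  centroid: [number, number, number]; // world-space centroid of the primitive
}

function sortBackToFront(draws: Draw[], cameraPos: [number, number, number]): Draw[] {
  const dist2 = (p: [number, number, number]) => {
    const dx = p[0] - cameraPos[0];
    const dy = p[1] - cameraPos[1];
    const dz = p[2] - cameraPos[2];
    return dx * dx + dy * dy + dz * dz;
  };
  // Farthest first, so nearer primitives are blended over farther ones.
  return [...draws].sort((a, b) => dist2(b.centroid) - dist2(a.centroid));
}
```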
Depth sorting of triangles is not going to be reasonable on all platforms, due to the number of passes required (or to requiring linked lists of fragments, which isn't possible in WebGL). Besides, I think rendering techniques are out of scope of glTF. Yes, there will be differences between renderers, but as long as they do the blending in the way described, I think that's enough.
Blurring of the transmission is pretty cheap if they just sample the mips of the background texture. The background render texture will probably be required anyway to do the appropriate blending.
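As a sketch of that idea (the mapping below is illustrative, not something the extension mandates): pick a mip level of the opaque-scene render texture from the material roughness, so rougher transmission samples a blurrier background.

```typescript
// Rough sketch: map roughness to a mip LOD of the opaque-scene render texture,
// so rougher transmission samples a blurrier background. Illustrative only.
function transmissionLod(roughness: number, fbWidth: number, fbHeight: number): number {
  const maxLod = Math.log2(Math.max(fbWidth, fbHeight));
  return roughness * maxLod;
}
```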
I agree that sorting triangles is prohibitively expensive. Maybe a dedicated model viewer (note: model viewers are a subset of all glTF clients) can do that when the camera and object are stationary, but it's not a requirement we should have in the extension spec.
But on a similar point — the core specification says this about alpha blending:
In retrospect, I think we could have made a normative requirement that depth not be written for alpha blend materials. Should we specify this for transmission? Or is this not an issue, because transmissive objects are treated as opaque (or rendered in an entirely separate pass)?
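For reference, here is one common WebGL arrangement for the blended pass; this is illustrative only, not something the core spec or this extension currently requires.

```typescript
// One common arrangement (illustrative, not a spec requirement):
// blended/transmissive draws test against depth but do not write it.
function beginBlendedPass(gl: WebGL2RenderingContext): void {
  gl.enable(gl.DEPTH_TEST);
  gl.depthMask(false);            // keep the opaque depth buffer intact
  gl.enable(gl.BLEND);
  gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
}
```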
I agree that if all layers have to support blurring, an engine could require multiple passes (render to texture, minify, then sample), which could become a performance bottleneck.
However, 'just' sorting the triangles of transmission objects could be done in a single pass if effects that need to sample the objects behind are left out.
This would leave us with viewers that at least display transparent objects with the correct order and visibility.
Yes, the blurring itself is quite cheap - it's having to render out to texture, forcing multiple passes, plus generating the mip levels, that will be the heavy part.
I also do not like us being so tied to WebGL - just because something cannot be done in WebGL should not, in my opinion, mean that we leave it out.
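A sketch of the extra pass being described, in WebGL 2 terms (the function and the framebuffer/texture setup are illustrative; the texture is assumed to already be attached to the framebuffer):

```typescript
// Sketch of the extra pass under discussion: render the opaque scene to a
// texture, generate its mip chain, then draw transmissive objects sampling
// that texture for the background. Names are illustrative.
function captureOpaqueScene(gl: WebGL2RenderingContext, fbo: WebGLFramebuffer,
                            colorTex: WebGLTexture, drawOpaque: () => void): void {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  drawOpaque();                              // opaque pass into the texture
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.bindTexture(gl.TEXTURE_2D, colorTex);
  gl.generateMipmap(gl.TEXTURE_2D);          // mip levels used for roughness blur
}
```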
Hope "blurring" is just in short we do it physically - as far as possible - correct.
Sure, that's what I mean anyway. :-)
The point I'm trying to make is that we should require a minimum level of visibility through triangles using transmission.
Meaning that not only should the background show through - any other primitives (that may also use transmission) should also be visible.
However, which effects to apply to each of these primitives is optional - for instance the roughness that results in blurring (since it will require multiple passes on today's hardware).
I agree with this in principle but WebGL will have a huge influence since we want glTF to be viewable on the web.
I don't think this is a WebGL problem. Requiring that implementations sort triangles, in real time, is extremely burdensome on any platform. Yes, there are cases where it may be possible (e.g. few enough triangles), but we really cannot impose this at the data format level.
Is there a reason these requirements cannot be in the 3D Commerce Certification process, instead?