Customising Shaders #249
After looking at the SceneKit shader modifiers, our API could look e.g. like this:

material.shaderModifiers = {
// model space, before skinning, world transform and projection
geometry: `
uniform float uDisplaceAmplitude;
vertexData.position += vertexData.normal * uDisplaceAmplitude;
`,
// after reading all uniforms and texture values
// before lighting, values in view space
surface: `
vec3 textureTriplanar(vec3 worldPos, vec3 normal) {
...
}
surfaceData.baseColor = textureTriplanar(worldPos, worldNormal);
`
// after lighting (but before or after exposure and tonemapping?)
fragment: `
outputData.color.rgb = vec3(1.0) - outputData.color.rgb;
// but also
outputData.color.rgb = surfaceData.normal * 0.5 + 0.5;
`
}

Notes

Open questions:
material.shaderModifiers = {
// model space, before skinning, world transform and projection
geometry: {
    defs: `uniform float uDisplaceAmplitude;`,
    code: `vertexData.position += vertexData.normal * uDisplaceAmplitude;`
  }
}
Hmm, these are basic concepts in computer graphics. Geometry defines the shape of the thing being rendered, and surface parameters define how that shape responds to light. And they are not as detached from renderer names as it might first seem.
Same as currently: recompilation happens when any of the material parameters that modify the shader is changed.
I think so too.
With the shaderModifiers syntax above, it feels a bit vague as to where custom code is added (is it at the top, a uniform declaration, the beginning/end of main()...).
@dmnsgn which part of it is vague? If code, then the way SceneKit defines it is very clear: after the above values are defined/read but before they are used. E.g. vertex before skinning, surface after getBaseColor() (read uniform or texture) but before applying lighting, and frag at the very end. The problem is e.g. if you want to declare a function as well. Hence the proposal for code injection points.
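To make the placement concrete, here is a hedged sketch of what a generated vertex shader could look like with a geometry modifier spliced in (the struct, uniform names and surrounding code are assumptions; only the injection position follows the SceneKit description above):

attribute vec3 aPosition;
attribute vec3 aNormal;
uniform mat4 uProjectionMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uModelMatrix;
uniform float uDisplaceAmplitude; // declared by the user modifier

struct VertexData {
  vec3 position;
  vec3 normal;
};

void main() {
  // model space values filled in by the renderer
  VertexData vertexData = VertexData(aPosition, aNormal);
  // --- user "geometry" modifier injected here: after the data is defined,
  // --- but before skinning / world transform / projection consume it
  vertexData.position += vertexData.normal * uDisplaceAmplitude;
  // --- end of injected code ---
  gl_Position = uProjectionMatrix * uViewMatrix * uModelMatrix * vec4(vertexData.position, 1.0);
}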
How do I handle modifying vertex position before or after instancing if I only have one injection point? Same if I need to modify data.normalWorld before or after.
Yes, it's limiting, but it also makes it clear how things work and provides immunity from underlying shader code changes. Otherwise things get complicated fast just to cover the remaining 5–10% of use cases, and we end up where we are now: "copy the whole shader, modify what you want and hope it works in the X.1 release" https://twitter.com/bonzajplc/status/1177715088079368194?s=21
Another good thing I see coming out of it: we could clean up the code a bit, as right now values are defined and computed all over the place at different stages. E.g. reflectionWorld is added to the global PBRData struct only if we have reflection probes (material.frag.js#L207).
A good idea is to write a list of use cases:
Easy
Not clear how to prototype
@dmnsgn to answer your earlier question about uniforms and varyings: similar to what thi.ng/umbrella is doing, we could move to more modular shader definitions, like here: https://github.com/thi-ng/umbrella/blob/master/examples/webgl-ssao/src/shaders.ts#L20. This way a custom shader would be pushing its uniforms to …
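As a hedged sketch of what such a modular definition could look like on our side (field names are hypothetical; the shape just mirrors the linked thi.ng example, where declarations are data rather than strings):

const customShaderSpec = {
  vs: /* glsl */ `void main() {
    vNormal = aNormal;
    gl_Position = uProjection * uView * uModel * vec4(aPosition, 1.0);
  }`,
  fs: /* glsl */ `void main() {
    gl_FragColor = vec4(vNormal * 0.5 + 0.5, 1.0);
  }`,
  attribs: { aPosition: "vec3", aNormal: "vec3" },
  varying: { vNormal: "vec3" },
  uniforms: { uModel: "mat4", uView: "mat4", uProjection: "mat4" },
};
// attribute/varying/uniform declarations are generated from the data above,
// so the renderer knows every uniform a custom shader contributes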
For now this is what's happening: the Renderer has the following shader chunks. We customize the material shader with material.frag = renderer.shaders.pipeline.material.frag.replace(...) but that then gets used for the depth pre-pass:

return {
flags: flags,
vert: material.vert || DEPTH_PASS_VERT,
frag: material.frag || DEPTH_PRE_PASS_FRAG
}

What I think should happen instead (the depth pre-pass renders geometry and outputs depth and normals, then used for SSAO):

material.frag = `#ifdef DEPTH_PRE_PASS_ONLY
${renderer.shaders.pipeline.depthPrePass.frag.replace(...)}
#elseif DEPTH_PASS_ONLY
${renderer.shaders.pipeline.depthPass.frag.replace(...)}
#else
${renderer.shaders.pipeline.material.frag.replace(...)}
#endif`
Proper syntax:

var frag = `#ifdef DEPTH_PRE_PASS_ONLY
${renderer.shaders.pipeline.depthPrePass.frag}
#elif defined DEPTH_PASS_ONLY
${renderer.shaders.pipeline.depthPass.frag}
#else
${renderer.shaders.pipeline.material.frag.replace(
`gl_FragData[0] = encode(vec4(color, 1.0), uOutputEncoding);`,
`color = data.normalView * 0.5 + 0.5;
gl_FragData[0] = encode(vec4(color, 1.0), uOutputEncoding);`
)}
#endif`
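For this to pick the right branch, each program variant presumably gets the matching define prepended when it is compiled; a minimal sketch of that assumption:

// assumption: the same fragment source is compiled three times,
// once per pass, with a different define prepended
const prePassFrag = "#define DEPTH_PRE_PASS_ONLY\n" + frag;
const depthPassFrag = "#define DEPTH_PASS_ONLY\n" + frag;
const materialFrag = frag; // no define, falls through to the #else (PBR) branch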
There could be an additional "above the fold" point where you can insert GLSL outside of main(). Also, there would not be a need to even define them in GLSL if you could pass the type in along with the uniform:
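A hedged sketch of what passing the type along could look like (the object shape and type strings are hypothetical, not an existing pex-renderer API):

entity.material = components.material({
  type: 'standard',
  uniforms: {
    // the declaration `uniform float uDisplaceAmplitude;` could then be
    // generated and injected "above the fold" automatically
    uDisplaceAmplitude: { type: 'float', value: 0.5 },
    uTextureScale: { type: 'vec2', value: [1, 1] },
  },
});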
Currently you can do it by modifying various …
This example is interesting. I don't think you would want to write normal calculation code every time for every shader. This could perhaps be automated to take into account only the fact that the default normal is no longer correct. So for example, the presence of some GLSL in some field could trigger a define to calculate the normals. I think you'd want to limit this to just vertex, and have the surface stuff happen in a general way (vertex moved / normals invalid), automagically.
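A hedged sketch of what such a triggered define could guard in the fragment shader (the define name is made up; the derivative-based flat normal trick itself is the same one used in the worked example further down):

#ifdef RECOMPUTE_NORMALS
  // positions were displaced in a vertex hook, so the interpolated normal
  // is stale; rebuild a flat normal from screen-space derivatives
  vec3 dX = dFdx(data.positionView);
  vec3 dY = dFdy(data.positionView);
  data.normalView = normalize(cross(dX, dY));
#endif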
Also might be worth checking out (help welcome!) this. It's on the shelf atm, but I think a couple of these materials are completely wired up. Basically you can remove all mention of materials other than … The approach is exactly the same as how … State of the world:
You're free to process the template any way you want, but you get getters and setters so you don't have to do …
You get:
Note to self: if the above if/else doesn't work, try:

#ifdef DEPTH_PRE_PASS_ONLY
//copy pre pass shader
#endif
#ifdef DEPTH_PASS_ONLY
//copy depth shader
#endif
#ifndef DEPTH_PRE_PASS_ONLY
#ifndef DEPTH_PASS_ONLY
//pbr shader
#endif
#endif
@pailhead thanks for the comments. I'll definitely review the ThreeJS struggles. It seems though that everybody (three, babylonjs, unity) is moving towards shader graphs. I need to understand better "why now?", as there was a big push against that in the gamedev industry a few years back. One benefit of shader graphs would be that they fit naturally with http://nodes.io which is where pex-renderer is used the most.
I don't understand the idea of compiling shader graphs on the web. It seems that three's …

Heh, I also didn't realize I wasn't on three's page anymore :D
My latest preferred approach would be to have named hook/extension points + optional source string replacement, like BabylonJS Material Plugins. Curious how ThreeJS does it.
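For reference, Babylon material plugins override getCustomCode and return GLSL keyed by named injection points; roughly like this (adapted from memory of the Babylon docs, so treat details as approximate):

import { MaterialPluginBase } from "@babylonjs/core";

class InvertColorPlugin extends MaterialPluginBase {
  constructor(material) {
    // name, priority (order among plugins), defines
    super(material, "InvertColor", 200, { INVERT_COLOR: false });
  }
  getClassName() {
    return "InvertColorPlugin";
  }
  getCustomCode(shaderType) {
    if (shaderType === "fragment") {
      return {
        // named injection point in the stock shader
        CUSTOM_FRAGMENT_MAIN_END: `gl_FragColor.rgb = 1.0 - gl_FragColor.rgb;`,
      };
    }
    return null;
  }
}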
How would it work in practice?

entity.material = components.material({
type: 'standard',
baseColor: [1, 0, 0, 1],
vertHooks: {
AFTER_DECLARATIONS: /*glsl*/`
uniform vec2 uTextureScale;
`,
BEFORE_POSITIONOUT: /*glsl*/`
vTexCoord0 *= uTextureScale;
`
},
fragHooks: {
AFTER_DECLARATIONS: /*glsl*/`
uniform sampler2D uVertexNormalMap;
`,
BEFORE_PBR: /*glsl*/`
data.normalView = texture2D(uVertexNormalMap, data.texCoord0);
`
  },
});
It's a simple solution that could be battle-tested in the current project (smooth normals from a vertex texture) and reviewed afterwards. There are lots of complications to be considered beyond GLSL modification, e.g. where does the data come from? If we were to follow the Babylon approach with …:

entity.material = components.material({
type: 'standard',
baseColor: [1, 0, 0, 1],
vertHooks: { ... },
fragHooks: { ... },
uniformHook: (entity) => {
return {
uTextureScale: entity.material.textureScale,
uVertexNormalMap: entity.material.vertexNormalMap
}
  }
});

or even:

entity.material = components.material({
type: "standard",
baseColor: [1, 0, 0, 1],
hooks: {
vert: {
AFTER_DECLARATIONS: /*glsl*/ `
uniform vec2 uTextureScale;
`,
},
frag: {
AFTER_DECLARATIONS: /*glsl*/ `
uniform sampler2D uVertexNormalMap;
`,
},
uniforms: (entity) => {
return {
uTextureScale: entity.material.textureScale,
uVertexNormalMap: entity.material.vertexNormalMap,
};
},
  },
});
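Whichever variant wins, the render loop would presumably evaluate the uniforms hook per entity each frame and merge the result into the draw call; a minimal sketch (names assumed):

// hypothetical per-frame step for each entity using a hooked material
const hookUniforms = entity.material.hooks?.uniforms
  ? entity.material.hooks.uniforms(entity)
  : {};
const drawUniforms = { ...sharedUniforms, ...hookUniforms };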
I'll attempt a summary (WIP):

Hooks
Not a huge fan of hooks for a few reasons.
Cons:
Pros:

Full shader + #include declaration
Both three and lygia moved to #include chunkOfLibrary.
Pros:
Cons:

Shader graph
Pros:
Cons:

Side notes: Three/dgel can do automatic variables insertion.

My ideal scenario:
Adding some examples for context.

#include declaration

ThreeJS
The fact that all those are globally defined with no path/source doesn't look like much fun to debug.

export const vertex = /* glsl */`
#define PHONG
varying vec3 vViewPosition;
#include <common>
#include <uv_pars_vertex>
#include <uv2_pars_vertex>
#include <displacementmap_pars_vertex>
#include <envmap_pars_vertex>
#include <color_pars_vertex>
#include <fog_pars_vertex>
#include <normal_pars_vertex>
#include <morphtarget_pars_vertex>
#include <skinning_pars_vertex>
#include <shadowmap_pars_vertex>
#include <logdepthbuf_pars_vertex>
#include <clipping_planes_pars_vertex>`

or

void main() {
#include <clipping_planes_fragment>
vec4 diffuseColor = vec4( diffuse, opacity );
ReflectedLight reflectedLight = ReflectedLight( vec3( 0.0 ), vec3( 0.0 ), vec3( 0.0 ), vec3( 0.0 ) );
vec3 totalEmissiveRadiance = emissive;
#include <logdepthbuf_fragment>
#include <map_fragment>
#include <color_fragment>
#include <alphamap_fragment>
#include <alphatest_fragment>
#include <specularmap_fragment>

And then:

import alphamap_fragment from './ShaderChunk/alphamap_fragment.glsl.js';
import alphamap_pars_fragment from './ShaderChunk/alphamap_pars_fragment.glsl.js';
import alphatest_fragment from './ShaderChunk/alphatest_fragment.glsl.js';
import alphatest_pars_fragment from './ShaderChunk/alphatest_pars_fragment.glsl.js';
import aomap_fragment from './ShaderChunk/aomap_fragment.glsl.js';
import aomap_pars_fragment from './ShaderChunk/aomap_pars_fragment.glsl.js';
import begin_vertex from './ShaderChunk/begin_vertex.glsl.js';
import beginnormal_vertex from './ShaderChunk/beginnormal_vertex.glsl.js';
import bsdfs from './ShaderChunk/bsdfs.glsl.js';
import iridescence_fragment from './ShaderChunk/iridescence_fragment.glsl.js';
import bumpmap_pars_fragment from './ShaderChunk/bumpmap_pars_fragment.glsl.js';

One issue I see there is that it implies access to the library. For nodes.io, does it mean glslify-like code on the client side?

Lygia

#include "../../../lygia/lighting/envMap.glsl";
#include "../../../lygia/lighting/pbr.glsl";
#include "../../../lygia/lighting/shadow.glsl";
#include "../../../lygia/lighting/material/new.glsl"
#include "../../../lygia/lighting/atmosphere.glsl"
#include "../../../lygia/color/dither.glsl"
#include "../../../lygia/color/tonemap.glsl"
void main() {
vec4 final = vec4(0.0, 0.0, 0.0, 1.0);
vec3 lightPos = u_lightPosition;
vec3 normal = normalize(v_normal);
vec3 lightColor = vec3(1.0);
vec3 lightDir = normalize(lightPos - v_position);
vec3 viewDir = normalize(u_cameraPosition - v_position);
Material material = materialNew();
material.albedo.rgb = u_diffuseColor;
material.emissive.rgb = u_diffuseColor * 0.0;
material.normal = normalize(v_normal);
material.metallic = u_metallic;
material.roughness = u_roughness;
material.reflectance = u_reflectance;
material.ambientOcclusion = 0.5;
final = pbr(material);

Shader graph
Assumes that we have subgraphs and can e.g. abstract … Maybe an alternative is possible. What I like about hooks is that they support the most common scenarios:
I don't think the following use cases are desired, and they shouldn't be optimised for:
Side notes
Based on some parameter declaration?
Is the ShaderGraph goal https://nodetoy.co/?

ThreeJS Node Material:

generateStandardMaterial( builder, { colorNode, diffuseColorNode } ) {
const { material } = builder;
// METALNESS
let metalnessNode = this.metalnessNode ? float( this.metalnessNode ) : materialMetalness;
metalnessNode = builder.addFlow( 'fragment', label( metalnessNode, 'Metalness' ) );
builder.addFlow( 'fragment', assign( diffuseColorNode, vec4( mul( diffuseColorNode.rgb, invert( metalnessNode ) ), diffuseColorNode.a ) ) );
// ROUGHNESS
let roughnessNode = this.roughnessNode ? float( this.roughnessNode ) : materialRoughness;
roughnessNode = getRoughness(...
TLDR: not aiming for fully dynamic GPU compute graphs, just avoiding yet-another-pbr-version-with-this-one-small-change.

To answer the concerns about hooks:
We could limit it to those touch points, which would probably cover 90% of cases for me.
Just a few. See above.
No need. Entry points should be limited.
Not desired IMO. Any example use cases?
To be available only in standard PBR material.
As predicted, naming is hard... In GLSL, mark the place for code injection:

#define HOOK_VERT_DECLARATIONS_END

and then in the material JS code (naming options):

hooks: {
  vert: {
    HOOK_VERT_DECLARATIONS_END: `?`,
    VERT_DECLARATIONS_END: `?`,
    DECLARATIONS_END: `defaulting to this for now`
  }
}
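A minimal sketch of splicing those markers at shader-build time (assuming the marker stays a plain #define line in the source; the helper name is made up):

// replace each `#define HOOK_<STAGE>_<NAME>` marker with the user-provided
// chunk; untouched markers remain harmless empty defines
function injectHooks(src, stage, hooks = {}) {
  return Object.entries(hooks).reduce(
    (source, [name, glsl]) =>
      source.replace(`#define HOOK_${stage}_${name}`, glsl),
    src
  );
}

const vertSrc = injectHooks(baseVertSrc, "VERT", entity.material.hooks?.vert);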
For smooth normals I would need tangents or a noise derivative, so I skipped that for simplicity.

const sphereEntity = createEntity({
transform: transform({
position: [0, 1, 0],
}),
geometry: geometry(icosphere({ radius: 1, subdivisions: 4 })),
material: material({
baseColor: [1, 0, 1, 1],
metallic: 0,
roughness: 0.2,
castShadows: true,
receiveShadows: true,
hooks: {
vert: {
DECLARATIONS_END: /*glsl*/ `
${perlinNoiseGLSL}
uniform float uTime;
varying float vNoiseAmount;
`,
BEFORE_TRANSFORM: /*glsl*/ `
vNoiseAmount = 0.5 + 0.5 * cnoise(position.xyz * 2.0 + vec3(uTime, 0.0, 0.0));
position.xyz += normal.xyz * vNoiseAmount;
`,
},
frag: {
DECLARATIONS_END: /*glsl*/ `
varying float vNoiseAmount;
`,
BEFORE_TEXTURES: /*glsl*/ `
vec3 dX = dFdx(data.positionView);
vec3 dY = dFdy(data.positionView);
data.normalView = normalize(cross(dX, dY));
data.normalWorld = vec3(data.inverseViewMatrix * vec4(data.normalView, 0.0));
`,
BEFORE_LIGHTING: /*glsl*/ `
data.roughness = 0.1;
data.metallic = step(0.5, vNoiseAmount);
data.baseColor = mix(vec3(2.0, 0.4, 0.0), vec3(1.0), data.metallic);
`,
},
uniforms: (entity) => {
return {
uTime: (Date.now() % 10000) / 1000,
};
},
},
}),
});
Currently implemented: vertex shader and fragment shader (screenshots).
This works pretty well and I'm closing this for now, as I don't anticipate any changes in direction before the next stable release.
Currently we can set material.vert and material.frag, but that breaks as we also include them in the depth pre-pass shaders: vert: material.vert || DEPTH_PASS_VERT. The correct (needs testing) way is to define all 3 shaders and use the preprocessor to support all 3 cases. That's painful.
Related