Three.js: Promote Node-based Materials to Core and Polish

Created on 14 May 2019  ·  36 Comments  ·  Source: mrdoob/three.js

Description of the problem

@sunag did a great job on the node-based materials. While he got some criticism for his PRs when he was submitting them, usually dealing with the internal design of the code compiler, I've always thought his work was amazing.

We have adopted node-materials for our rendering recently. We have found that it leads to more artistic creativity. It also leads to a less complex code base -- the current uber materials in Three.js are just insanely complex and hard to maintain.

Thus I would propose that we move the node-based materials into core, and then slowly migrate most of the examples to use the node-based materials instead of the complex do-everything uber materials.

It is a big change, but I think it moves Three.JS forward.

Some of the changes I think are needed in order to make node-based materials first class citizens of Three.JS:

  • add the ability to get specialized layers out of the node-based materials easily (normal, depth, velocity, etc.). There is both an easy way and a hard way to do this. We'll start with the easy way.
  • add more advanced options to the BRDF/EDFs such as anisotropy, basic sub-surface scattering.
  • add ability to read AO from a pre-pass in the BRDF/EDF so it can only affect indirect diffuse rather than being layered on top and add an example of this.
  • rework the texture node a bit, it has some problematic dependencies that make the graph difficult to evaluate. It was suggested by @jojobyte of NVIDIA to follow the model of UE4's texture node.
  • have a separate clearcoat normal: https://github.com/mrdoob/three.js/issues/12867
  • add trilinear mapping example using the node-materials: https://github.com/mrdoob/three.js/issues/15048
  • add displacement map to the node-based BRDF/EDF (which would require a vertex shader change).
  • add bump to normal example using the node-materials.
  • add transmission factor to the root BRDF/EDF and an example to three.js: https://github.com/mrdoob/three.js/issues/15941
  • make it easy to support arbitrary UV channels per map in a very simple way. Lots of power and flexibility. Thus negating the need for this: https://github.com/mrdoob/three.js/pull/8278
  • allow for world-to-UV mapping and other forms of procedural UV transforms.

This ties into my current proposal to glTF to adopt node-based materials:

https://docs.google.com/document/d/1Y6JFE2FV164IFDe7_cYhp2gzhSapB76fUNPgmsI6DDY/edit

I am not the first to suggest this, but I cannot find the previous discussion. I think it is just a matter of time until this happens, so we might as well do it now. No other JavaScript 3D renderer has fully node-based materials. ;)

/ping @WestLangley @donmccurdy

Three.js version
  • [x] Dev
Browser
  • [x] All of them
OS
  • [x] All of them
Hardware Requirements (graphics card, VR Device, ...)
Suggestion

All 36 comments

At some point the project will start to work on a WebGPURenderer. How about starting with a node-based approach in that context, and using the time now for other topics like adding support for the remaining WebGL 2 features, further enhancing light probes, expanding the set of JSM modules, and focusing on XR-related work and project support in general?

I personally would leave the material system for WebGLRenderer as it is and use WebGPURenderer as an opportunity to make some clear cuts/major changes like introducing a new material system.

hi @bhouston, thank you very much for the encouragement...

Did you ever see the MeshStandardNodeMaterial?
https://github.com/mrdoob/three.js/blob/dev/examples/js/nodes/materials/MeshStandardNodeMaterial.js
https://github.com/mrdoob/three.js/blob/dev/examples/js/nodes/materials/nodes/MeshStandardNode.js

My idea is to finish this material and test it against all the three.js examples, then normalize the code by replacing the references, for example:

```js
// import three.js lib
THREE.MeshStandardMaterial = THREE.MeshStandardNodeMaterial;
// start/test example
```

And then do this with the other materials...

I would like to know what you think of this node, because it is based on your shaders
( referring more specifically to the code design ):
https://github.com/mrdoob/three.js/blob/dev/examples/js/nodes/misc/TextureCubeUVNode.js
This is the most complex node so far, and the shader was forked from the core.

add more advanced options to the BRDF/EDFs such as anisotropy, basic sub-surface scattering.

Have you thought of a way to add anisotropy to the physical materials? Or is there a path you would recommend?

add bump to normal example using the node-materials.

I think it was done here:
https://github.com/mrdoob/three.js/blob/93e72ba7b24958ddb0652bd33171edd14ed2d693/examples/webgl_materials_nodes.html#L987-L988

rework the texture node a bit, it has some problematic dependencies that make the graph difficult to evaluate. It was suggested by @jojobyte of NVIDIA to follow the model of UE4's texture node.

What exactly is that? Currently there is a cache system to reuse texture samples in the same UV.

makes it easy to support arbitrary UV channels per map in a very simple way. Lots of power and flexibility. Thus negating the need for this: #8278

I think something has been done in this sense here:
https://github.com/mrdoob/three.js/blob/93e72ba7b24958ddb0652bd33171edd14ed2d693/examples/webgl_materials_nodes.html#L912-L917

@sunag I believe strongly your stuff belongs in core. :)

Did you think any way to add anisotropy in physical materials? Or a path you recommend.

Clara.io has supported anisotropic roughness since 2014. I want to contribute it back to three.js, but I hate modifying our horribly complex uber shaders -- very few people understand the uber shaders because their complexity is so high, with so many nested defines spread everywhere.

Moving to node-based materials will simplify three.js's core so we can move to the next level.

Did you ever see the MeshStandardNodeMaterial?

I forgot about this, but yes, this is the easy way to move this to core! :) And then we can deprecate the old one and use this instead. I bet it could make three.js smaller in terms of its code base, or at least keep it the same size. Thus three.js would be better than every other open source JS-based 3D rendering library while staying the same code size.

What exactly is that? Currently there is a cache system to reuse texture samples in the same UV.

I referenced the wrong person; it wasn't jojobyte, but rather Jan Jordan, the NVIDIA MDL product manager. Basically he suggested a T2V type that is passed around. This allows for some different node structures that he says are better. I can explain later, as I want to keep this discussion focused.

makes it easy to support arbitrary UV channels per map in a very simple way. Lots of power and flexibility. Thus negating the need for this: #8278

I think something has been done in this sense here:

With regards to arbitrary UV channels, I know that your system supports it. :)

At some point the project will start to work at WebGPURenderer. How about starting with a node-based approach in this context and use the time now for other topics like adding support for remaining WebGL 2 features, further enhance light probes, enhance amount of JSM modules, focus on XR related stuff and the project support in general.

Why delay? @sunag made a system that works with WebGLRenderer right now. It is just a matter of adopting it. Sunag has made it relatively easy. I know there is still a bunch of work to polish it up and have it work correctly with the various post effect scenarios, etc. I can commit to doing this if we do it in the relatively near term. I've done a few heavy lift PRs in the past such as redesigning the lighting system and the animation system, so generally I can pull these off:

https://github.com/mrdoob/three.js/pull/7324
https://github.com/mrdoob/three.js/pull/6934

I'm just concerned about backwards compatibility and the necessary time and effort to convert the related example code, which is quite extensive, to be honest. I just wanted to point out a different approach that might be easier to handle and maybe more strategic.

So does this mean that the built-in materials would be replaced by node materials entirely, and user custom node materials would be treated the same way in the renderer?

So does this mean that the built-in materials would be replaced by node materials entirely

No, the built-in materials will be _implemented_ as node materials.

I'm just concerned about backwards compatibility and the necessary time effort to convert the related example code which is quite extensive to be honest. Just wanted to point out a different approach that might be easier to handle and is maybe more strategic.

The idea is that we polyfill MeshStandardMaterial so that it uses the node-based system to compile while keeping the current interface. That way we get rid of the insane set of defines and whatnot that our uber materials utilize, and instead use sunag's node-based compiler system. It is work, and I am sure we will find bugs, but it isn't impossible.
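For illustration, a rough sketch of what that could look like (ColorNode, FloatNode and MeshStandardNodeMaterial are from examples/js/nodes; the exact property wiring is illustrative, not a final API):

```js
// hedged sketch: keep the classic interface, compile via the node system
const material = new THREE.MeshStandardNodeMaterial();
material.color = new THREE.ColorNode( 0x808080 ); // replaces new THREE.Color( ... )
material.roughness = new THREE.FloatNode( 0.5 );  // replaces a plain number
material.metalness = new THREE.FloatNode( 1.0 );

// once behavior matches, existing code keeps working via the alias
THREE.MeshStandardMaterial = THREE.MeshStandardNodeMaterial;
```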

Looking at the overall plan, getting the uber-materials _out_ of the renderer codebase is big. Ideally the renderer should have no dependencies on either the uber-materials _or_ the node-based materials. If I'm only using a small custom ShaderMaterial, the entire threejs material system (nodes or otherwise) would ideally be tree-shakeable. Is that a realistic goal?


Some of the changes I think are needed in order to make node-based materials first class citizens of Three.JS...

Could we split this list, to identify tasks that would definitely be required up front? Reworking the texture node and full AO support might be candidates.

add bump to normal example using the node-materials.

Does this work with a procedural bump map?


Just to mention them here, a couple other issues I've run into with the current implementation:

  1. Defining a custom FunctionNode is tricky. A typical "noise" function may call other functions, or have overloaded versions of itself with different parameter types. The regex that parses FunctionNode for a function name breaks on this and returns ''. Is there a safe way to import something like a library of custom noise functions? Example.
  2. Organization of the Math nodes by number of arguments (Math1Node, Math2Node, ...) has bit me multiple times. If I'm writing a node graph imperatively, I'd prefer to think about what the node does rather than the number of arguments it has. Would a consolidated MathNode be possible?
  3. Optimizations, mentioned by @zadvorsky in a previous discussion:

The built-in three.js materials (standard, phong, etc) are implemented differently from Shader Materials. They have optimizations for WebGL state changes that do not apply to Shader Material. Without these optimizations, node based materials lose a lot of their oomph if you have a complex scene graph.

^This also applies to NodeMaterials.

node based materials lose a lot of their oomph if you have a complex scene graph

"oomph". Can you please explain what you are referring to?

I believe the issue was that in a scene reusing many otherwise-identical materials with different uniform values, state change optimizations that would apply to default materials do not apply to ShaderMaterial.

If I understand the current system correctly, having two identical Shader Material instances (in terms of vert and frag shader) with different uniforms would still cause a full useProgram switch, which is expensive, whereas having two MeshStandardMaterial instances would only update the uniforms (aside from differences that affect defines).

Any new system should, ideally, give custom materials the same optimizations as the built-in materials.

Organization of the Math nodes by number of arguments (Math1Node, Math2Node, ...)....

possible and interesting...

Optimizations, mentioned by @zadvorsky in a previous discussion:

Assuming I understood the question... If you use different materials with the same inputs and types but different nodes, three.js will share the same program without problems, as this benchmark shows:
https://threejs.org/examples/?q=nodes#webgl_performance_nodes

Defining a custom FunctionNode is tricky

That is a good question...

The correct approach today for multiple functions is this:
https://github.com/mrdoob/three.js/blob/51afa0b6ff54135cbb198124bf00691a5a06dac5/examples/js/nodes/misc/TextureCubeUVNode.js#L137-L158

Create a FunctionNode for each GLSL function and add the dependencies using the includes argument of FunctionNode:
https://github.com/mrdoob/three.js/blob/93e72ba7b24958ddb0652bd33171edd14ed2d693/examples/js/nodes/core/FunctionNode.js#L12
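For example, a minimal sketch of this pattern (assuming the FunctionNode( src, includes ) signature from the constructor linked above):

```js
// each GLSL function becomes its own FunctionNode...
var hash = new THREE.FunctionNode( [
	"vec3 hash( vec3 p ) {",
	"	return vec3( 1.0 );",
	"}"
].join( "\n" ) );

// ...and callers declare it via the includes argument, so the builder
// emits hash() before voronoi3D() in the generated shader
var voronoi3D = new THREE.FunctionNode( [
	"vec3 voronoi3D( const in vec3 x ) {",
	"	return hash( x );",
	"}"
].join( "\n" ), [ hash ] );
```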

I am thinking of starting development on a three.js shader language parser using GLSL syntax:

```es6
var tjslNodes = new TJSLNode( `{

	vec3 hash( vec3 p ) { return vec3( 1.0 ); }

	vec3 voronoi3D( const in vec3 x ) {
		return hash( x );
	}

}` );

// this would parse the code and return it as nodes

tjslNodes.methods.voronoi3D // returns a FunctionNode with a dependency on hash
tjslNodes.methods.hash      // returns a FunctionNode with no dependencies
```

Care is needed in this process to enable better automatic optimization and an auto-rename system (avoiding name conflicts).

And, for example, most of the root nodes like TextureCubeUVNode could be based on a proper shader language that turns the whole hierarchy into nodes.
Not identical, but similar to what happens in: https://github.com/mrdoob/three.js/tree/dev/src/renderers/shaders/ShaderChunk

If I understand the current system correctly, having two identical Shader Material instances (in terms of vert and frag shader) with different uniforms would still cause a full useProgram switch, which is expensive, whereas having two MeshStandardMaterial instances would only update the uniforms (aside from differences that affect defines).

I think that is easy to fix. Just have a content hash of the shader network that is independent of the uniforms and use that as a lookup into a shader cache of some sort. This would save both compilation and switching costs. This is a fixable problem and a straightforward optimization, I think.
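A hedged sketch of that idea (hashShaderGraph and compileProgram are hypothetical helpers, not existing three.js API):

```js
// cache compiled programs by a hash of the node graph's structure and types,
// deliberately excluding uniform *values* from the key
const programCache = new Map();

function getProgram( renderer, material ) {

	const key = hashShaderGraph( material ); // hypothetical: stable graph hash

	let program = programCache.get( key );

	if ( program === undefined ) {

		program = compileProgram( renderer, material ); // hypothetical compile step
		programCache.set( key, program );

	}

	return program;

}
```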

I believe the issue was that in a scene reusing many otherwise-identical materials with different uniform values, state change optimizations that would apply to default materials do not apply to ShaderMaterial.

I think it may be useful to introduce the concept of a reusable shader graph in which you can just change the uniforms on it as well. This would save a lot of memory allocations for otherwise identical material graphs. This is similar to the material instances in UE4: https://docs.unrealengine.com/en-us/Engine/Rendering/Materials/MaterialInstances

Thus one could have a material graph and then also a material graph instance which references the material graph but with replacement uniforms.
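A minimal sketch of what such an instance could look like (MaterialGraphInstance is hypothetical, named after the UE4 concept):

```js
// hypothetical: shares one compiled graph/program, overrides only uniforms
class MaterialGraphInstance {

	constructor( graph, overrides = {} ) {

		this.graph = graph; // compiled once, shared by all instances
		this.uniforms = Object.assign( {}, graph.defaultUniforms, overrides );

	}

}

const baseGraph = { defaultUniforms: { diffuse: new THREE.Color( 0xffffff ), roughness: 0.5 } };

// two instances, one program, different uniform values
const red = new MaterialGraphInstance( baseGraph, { diffuse: new THREE.Color( 0xff0000 ) } );
const blue = new MaterialGraphInstance( baseGraph, { diffuse: new THREE.Color( 0x0000ff ) } );
```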

Organization of the Math nodes by number of arguments (Math1Node, Math2Node, ...) has bit me multiple times. If I'm writing a node graph imperatively, I'd prefer to think about what the node does rather than the number of arguments it has. Would a consolidated MathNode be possible?

I think we should just have classes for each math node type. These can be defined very easily, so the code size stays about the same. Thus one would use SinNode rather than Math1Node( Sin ). It would just be syntactic sugar, really, but it would be easier to use.
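For example (assuming Math1Node's ( value, method ) constructor and its SIN constant from examples/js/nodes):

```js
// thin, name-per-operation wrapper over the generic math node
class SinNode extends THREE.Math1Node {

	constructor( a ) {

		super( a, THREE.Math1Node.SIN );

	}

}

// reads as what it does, not how many arguments it takes
const node = new SinNode( new THREE.TimerNode() );
```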

Could we split this list, to identify tasks that would definitely be required up front? Reworking the texture node and full AO support might be candidates.

I agree it should be different PRs. Otherwise it will be a 4-month PR that will never get accepted. A first PR would get it into place and ensure that the examples still work with no major performance degradation.

And then a bunch of follow-up minor PRs to clean up the loose ends. I think that once it is in place there will be a ton of extension and optimization ideas, which we will all work on for years.

I just want to get this into place so we can get rid of those insanely complex Uber materials.

If I understand the current system correctly, having two identical Shader Material instances (in terms of vert and frag shader) with different uniforms would still cause a full useProgram switch, which is expensive, whereas having two MeshStandardMaterial instances would only update the uniforms (aside from differences that affect defines).

Any new system should, ideally, give custom materials the same optimizations as the built-in materials.

That is incorrect. Programs are looked up by a fairly large string key, and it's mostly just the GLSL code (okay, there's a lot more, but uniforms aren't part of that key). Switching to a node-based model would have zero impact on that program re-use mechanism.

Switching to a node-based model would have zero impact on that program re-use mechanism.

I do not see how this could happen, as the benchmark shows:
https://threejs.org/examples/?q=nodes#webgl_performance_nodes
As you can see, natively it has 3 uniforms:
https://github.com/mrdoob/three.js/blob/d5743a467685b27bf98c80fe41d5e2890b7e5848/examples/js/nodes/materials/nodes/StandardNode.js#L14-L16
Moreover, in NodeMaterial you can define a custom property name for each node; this way, you can make the material identical to the current one, like:
https://github.com/mrdoob/three.js/blob/93e72ba7b24958ddb0652bd33171edd14ed2d693/examples/webgl_materials_nodes.html#L2552-L2553

But it is much more advantageous to invest in sorting the uniforms by type to share a hash than to keep working with the current system using names (labels), for example:

```js
// Material 1
mat.input1 = NodeColor;
mat.input2 = NodeFloat;
mat.input3 = NodeVector4;

// Material 2
mat.input1 = NodeFloat;
mat.input2 = NodeVector4;
mat.input3 = NodeColor;

// Material 3
mat.input1 = NodeFloat;
mat.input2 = NodeColor;
mat.input3 = NodeVector4;

// ...

// All of the materials above will share one program if the uniforms are
// sorted at code-build time (NodeBuilder). The resulting uniforms for all
// of these materials would be:
// NodeFloat   -> nodeU0: float
// NodeColor   -> nodeU1: vec3
// NodeVector4 -> nodeU2: vec4
```
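A sketch of that sorting step (the type ranking and key format here are illustrative, not NodeBuilder's actual implementation):

```js
// assign uniform slots by type rather than declaration order, so the three
// materials above all map to the same program key
function buildUniformKey( inputs ) {

	const rank = { float: 0, vec3: 1, vec4: 2 }; // illustrative type ordering

	return inputs
		.slice()
		.sort( ( a, b ) => rank[ a.type ] - rank[ b.type ] )
		.map( ( input, i ) => 'nodeU' + i + ': ' + input.type )
		.join( ', ' );

}

// all three orderings produce "nodeU0: float, nodeU1: vec3, nodeU2: vec4"
buildUniformKey( [ { type: 'vec3' }, { type: 'float' }, { type: 'vec4' } ] );
```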

@Usnul My benchmark is about sharing the same program across different materials, and about the time to build the shader (initialization time).

@bhouston
"Promote Node-based Materials to Core and Polish"

Ben.
Reading the title, I thought you wanted to promote the node-based materials in the Polish language... :smile:
Anyway, I think the idea to move them into the THREE core is a very good one.
From many points of view!

@bhouston

I want to contribute it back to three.js but I hate modifying our horribly complex uber shaders -- very few people understand the uber shaders because their complexity is so high, so many nested defines spread everywhere.

I have an idea of a three.js shader editor that allows the user to expand includes and either modify them or just collapse them again (if not modified). Macros could similarly be displayed in expanded form. A typical workflow would be to start out with a copy of one of the builtin ShaderLib materials, expand includes that are for a feature that would be interesting to modify, then modify the partially expanded code. Maybe this could be a part of the three.js Editor. @mrdoob

@zadvorsky

If I understand the current system correctly, having two identical Shader Material instances (in terms of vert and frag shader) with different uniforms would still cause a full useProgram switch, which is expensive, whereas having two MeshStandardMaterial instances would only update the uniforms (aside from differences that affect defines).

@Usnul

That is incorrect. Programs are looked up by a fairly large string key, and it's mostly just the GLSL code (okay, there's a lot more, but uniforms aren't part of that key).

(Usnul is right)

I have suggested two small improvements (#17116, #17117) to reduce the overhead of unneeded material initialization operations. Both PRs are accompanied by demonstrations of performance differences. However, both demonstrations depend on really extreme cases. In what I consider normal scenarios, the differences will likely be negligible. (But I still think my PRs are right.)

@sunag How do you envision the NodeMaterial API for shadows?

@sunag How do you envision the NodeMaterial API for shadows?

Today I provide a shadow input for users that still use a lightmap with multiply, or a customized shadow built from nodes. The next step is to be able to manipulate the shadows generated by native lights with something like node/accessors/ShadowNode, similar to LightNode; the ideal is to have control over cast and receive shadows (I have not gotten there yet). Regarding lights, I want to advance in this direction in the next updates, adding a customized light group per material and/or customizing the light formulas, as in the translucent example:

```js
// add a light group per material
material.light = new LightsNode( [ hemisphere, pointLight, directLight ] );

// change the shadow color to green, for example
material.shadow = new MulNode( new ShadowNode(), new Vector3Node( 0, 1, 0 ) );

// an extended light implementation
material.light = new TranslucentLightNode();
```

In keeping with the three.js philosophy, this can be a great improvement. Some people use it for backgrounds, and tree-shaking is good in this case.

I came here because @munrocket pointed out in the three.js forum that this proposal could help with tree-shaking, which would make the bundle of a three.js app smaller and improve boot-time performance 🙂

I think this would be awesome. One thing I wanted to add here:

We also encountered the problem of the big shader strings in the final bundle. Our three.js app is used in e-commerce, and since many e-commerce users are on mobile devices, boot-time performance is essential. In an e-commerce scenario, we just cannot afford to waste time on parsing unnecessary JavaScript code. Especially because all of our clients read the Amazon study which claims that they lose 1% in sales if latency increases by 100ms. Of course they also read the Google study which outlines that a 500ms delay decreases traffic by 20%. So we have no margin for wasting bandwidth and CPU cycles.

Therefore we created a "hack" to get the glsl shaders out of the final bundle and save them to a JSON file. This way we benefit from the faster parsing of JSON.

I just wanted to drop my two cents, and maybe it's totally unrelated to the initial issue. As pointed out above, I just came here because of the link in the three.js forum. But if three.js refactors all of the glsl stuff, maybe you could think about how to make things even more performant, and maybe it's an idea to put the glsl stuff into a data format other than JavaScript.

If someone is interested in our JSON hack, just let me know and I'll setup a repo 🙂

@tschoartschi

Therefore we created a "hack" to get the glsl shaders out of the final bundle and save it to a JSON file. This way we can benefit from the faster parsing of JSON.

This will not be productive. Chrome's JSON parser is fast, and JSON is a subset of JavaScript, so that makes sense. glsl is neither JSON nor JavaScript, so any motions you make there that involve encoding/decoding glsl to/from JSON will be a pure waste of space and CPU time.

This is off-topic, but I thought I would clarify for the hopefuls.

@Usnul I'll create a repo to show you what we do. I do not think it's a waste of CPU time, because what we do is essentially something like const shaders = JSON.parse('... string of shader lib ...'). Creating the '... string of shader lib ...' happens at build time. It's a little more involved, of course; that's why it makes more sense to see it in a repo, but that's basically the gist.
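Roughly, the pattern looks like this (a sketch of the described hack; the chunk names and file layout are illustrative):

```js
// build step (Node.js): collect the GLSL chunks, then emit a module whose
// only payload is a single string literal, which is cheap to tokenize
const fs = require( 'fs' );

const shaderChunks = { common: '/* glsl */', uv_pars_vertex: '/* glsl */' }; // illustrative

fs.writeFileSync( 'shaders.generated.js',
	'export const shaders = JSON.parse(' +
	JSON.stringify( JSON.stringify( shaderChunks ) ) +
	');'
);

// the generated runtime module then looks like:
// export const shaders = JSON.parse("{\"common\":\"/* glsl */\", ... }");
```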

Nevertheless, I think it would be awesome to have the glsl shader code in something more efficient than plain old JavaScript, because JavaScript is expensive to parse. And better support for tree-shaking would also be great. So maybe it's worth considering those aspects when refactoring the shader parts of the code base.

Aha, I see: so not the GLSL string, but the chunk library. I don't think it makes too much of a difference, but you might save a bit there. I'm not sure it's worth it though, since there is very little JS code that constitutes the chunk library.

The expensive part of parsing JS comes from building AST, not from tokenization. Tokenization is actually quite fast, that's why this "JSON trick" works in principle. If you have a very large string in JS, it's essentially a single literal string token, so it takes almost no toll on the AST building process.

Also worth mentioning, it's possible to be mindful of the ambiguity of token sequences and write JS code that's cheaper to parse. I'm not sure if it's worth doing this directly as a programmer writing code though.

@sunag Hey, great work on the node materials! Would it be possible to specify outputs on node materials to render additional scene data into multiple render targets with custom shader code?

@vanruesc Thanks. The output is converted automatically; would this be only for the purpose of typing (.ts)?

Sorry, I should've been more specific:

Currently, people have to use Scene.overrideMaterial to render scene data such as normals to a texture. This approach has some shortcomings, though (see #14577, #18533). Instead of doing it this way, I'd much rather use additional draw buffers to render out normals (and other custom data) without having to render the same scene multiple times.

I think the node material system has the potential to support this feature. It would make life a lot easier if users could specify custom outputs in addition to inputs. These output nodes could wrap the index into gl_FragData which could in turn be used to render specific data to the correct texture.
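Hypothetically, the API could look something like this (the outputs property is invented for illustration; NormalNode and PositionNode are existing accessor nodes in examples/js/nodes):

```js
// invented sketch: route additional node values to extra draw buffers
const material = new THREE.StandardNodeMaterial();

material.outputs = {
	1: new THREE.NormalNode( THREE.NormalNode.VIEW ),    // -> gl_FragData[ 1 ]
	2: new THREE.PositionNode( THREE.PositionNode.VIEW ) // -> gl_FragData[ 2 ]
};

// slot 0 remains the material's regular color output
```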

Do you think it would be possible to add something like this at some point or do you see any roadblocks ahead?

I should also mention that three doesn't support Multiple Render Targets yet. See #16390 for the relevant discussion on that topic.

I think the node material system has the potential to support this feature. It would make life a lot easier if users could specify custom outputs in addition to inputs. These output nodes could wrap the index into gl_FragData which could in turn be used to render specific data to the correct texture.

Something similar (render-to-texture) already exists: RTTNode.
You can create multiple RTTNodes for the same material. Does this meet your need?
https://threejs.org/examples/webgl_materials_nodes.html?e=rtt
https://threejs.org/examples/webgl_materials_nodes.html?e=temporal-blur

RTT is similar to but not the same as MRT. The RTTNode uses a fullscreen quad to render post processing effects into a render target. This is actually one step ahead of what I'd like to do. You need the original scene with all the meshes and skinned meshes to render view space normals, view space positions, roughness, specular factors, etc.

I want to be able to modify and extend built-in materials with additional shader code in a structured manner to render scene data into gl_FragData. For this I'd require access to the shader variables that store said scene data so that I can render that into additional draw buffers during the main render pass. These additional draw buffers would need to be defined first with something like an MRTNode.

The main goal is to render a complex scene as usual, but instead of just saving the scene colors and depth, you'd also render various additional textures at the same time and you'd only have to calculate everything once.

After that you could use these textures as inputs for RTT effects such as Screen Space Reflections. MRT is also essential for deferred shading pipelines.

Interesting... Maybe MRTMaterial, because the output is a material:

```js
let mtl = new MRTMaterial();
mtl.inputs[ 'albedo' ] = new SomeNode();
mtl.inputs[ 'normal' ] = new SomeNode();
```