This article describes how to visualize outlines for a WebGL scene as a post process, with example implementations for ThreeJS & PlayCanvas.
Note: I’ve written a follow-up that builds on this article with an improved technique that solves many of the artifacts present here. See Better outline rendering using surface IDs with WebGL.
There are a few common approaches that produce boundary-only outlines as shown on the left of the above picture.
- Drawing objects twice, such that the backfaces make up the outline, described here.
- A post process using the depth buffer, implemented in ThreeJS here.
- Similar post process implemented in PlayCanvas here.
Rendering the full outlines of a scene is particularly useful when you need to clearly see the geometry and structure of your scene. For example, the stylized aesthetic of Return of the Obra Dinn would be very hard to navigate without clear outlines.
The technique I describe here is similar to the post process shaders linked above, with the addition of a “normal buffer” in the outline pass that is used to find those inner edges.
Below is a link to a live demo of this technique implemented in ThreeJS. You can drag and drop any glTF model (as a single .glb/glTF file) to see the outline effect on your own test models:
You can also find the source code on GitHub: https://github.com/OmarShehata/webgl-outlines.
Overview of the technique
Our outline shader needs 3 inputs:
- The depth buffer
- The normal buffer
- The color buffer (the original scene)
Given these 3 inputs we will compute the difference between the current pixel’s depth value and its neighbors. A large depth difference tells us there’s a distance gap (this will typically give you the outer boundary of an object but not fine details on its surface).
We will do the same with the normal buffer. A difference in normal direction means a sharp corner. This is what gives us the finer details.
We then combine those differences to form the final outline, and combine that with the color buffer to add the outlines to the scene.
Tip: The live demo has a scaling factor for each of the normal & depth differences. You can set either to 0 to see the influence of each on the final set of outlines.
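To make the computation above concrete, here is a minimal plain-JavaScript sketch of the per-pixel logic. The getDepth and getNormal helpers are hypothetical stand-ins for texture reads; the real implementation is a fragment shader, shown later in this article.

```javascript
// Minimal sketch of the per-pixel outline computation.
// getDepth(x, y) returns a depth value; getNormal(x, y) returns a [x, y, z] normal.
function outlineValue(getDepth, getNormal, x, y) {
  const neighbors = [[1, 0], [-1, 0], [0, 1], [0, -1]];

  // Sum of absolute depth differences against the 4 neighbors:
  // large values mean a distance gap (outer boundary).
  const depth = getDepth(x, y);
  let depthDiff = 0;
  for (const [dx, dy] of neighbors) {
    depthDiff += Math.abs(depth - getDepth(x + dx, y + dy));
  }

  // Sum of distances between this normal and its 4 neighbors:
  // large values mean a sharp corner (inner edge).
  const normal = getNormal(x, y);
  let normalDiff = 0;
  for (const [dx, dy] of neighbors) {
    const n = getNormal(x + dx, y + dy);
    normalDiff += Math.hypot(n[0] - normal[0], n[1] - normal[1], n[2] - normal[2]);
  }

  // Combine both differences into the final outline value.
  return depthDiff + normalDiff;
}
```

A flat region (constant depth, constant normal) produces 0, i.e. no outline; any depth step or normal change at the pixel produces a positive value.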
Overview of the rendering pipeline
Here is how we’re going to set up our effect:
Render pass 1 captures the color of all objects in the scene in “Scene Buffer”.
It also outputs the depth of every pixel in a separate “Depth Buffer”.
Render pass 2 re-renders all objects in the scene with a normal material that colors each pixel using the object’s view-space normal. This is written to the “Normal Buffer”.
Finally, the outline pass is a post process that takes the 3 buffers and renders the result onto a fullscreen quad.
This can be further optimized by modifying the engine to combine the normal and depth buffers into one “NormalDepth”, similar to how Unity does it, to avoid the need for the 2nd render pass.
A final step not shown in the diagram is an FXAA pass, which we need because we’re rendering the scene onto an off-screen buffer, which disables the browser’s native antialiasing.
It’s difficult to describe this technique without reference to a specific engine since a core part of it is how to set up the rendering pipeline described above. The implementation details here will be specific to ThreeJS but you can see the PlayCanvas source code along with an editor project here:
1. Get the depth buffer
3D engines will typically draw all opaque objects into a depth buffer to ensure objects are rendered correctly without having to sort them back to front. All we have to do is get a reference to this buffer to pass it to our outline post process.
In ThreeJS, this means setting depthBuffer = true on the render target we’re creating, so that we capture the “scene color” and the “depth buffer” at the same time. See: https://threejs.org/docs/#api/en/renderers/WebGLRenderTarget.depthBuffer
In our demo, this render target is created in the source code linked above.
There are a few caveats to know when working with the depth buffer:
- You need to know how the values are “packed”. Given the limited precision, does the engine just linearly interpolate Z values from camera.near to camera.far? Does it do this in reverse? Or use a logarithmic depth buffer?
- The engine most likely already has some mechanisms for working with depth values that you can re-use. For ThreeJS, you can add #include <packing> to your fragment shader, which gives you access to these helper functions.
- For visualizing the depth buffer for debug purposes, you can collapse your camera’s near/far planes to tightly cover the bounds of the object, so the depth values are spread over a clearly visible range.
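To make the “packing” point concrete, here is a JavaScript transcription of ThreeJS’s perspectiveDepthToViewZ helper from the packing shader chunk, plus a small remap you might use for debug visualization (the second function is illustrative, not part of ThreeJS):

```javascript
// Transcription of ThreeJS's perspectiveDepthToViewZ (from the <packing>
// shader chunk): converts a non-linear [0, 1] depth-buffer sample back into
// view-space Z, which ranges from -near to -far.
function perspectiveDepthToViewZ(depthSample, near, far) {
  return (near * far) / ((far - near) * depthSample - far);
}

// Illustrative helper: remap view-space Z into a linear [0, 1] range
// for debug visualization.
function viewZToLinearDepth(viewZ, near, far) {
  return (-viewZ - near) / (far - near);
}
```

With near = 0.1 and far = 100, a depth sample of 0 maps back to -0.1 (the near plane) and a sample of 1 maps to -100 (the far plane).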
2. Create a normal buffer
If your engine supports outputting the normals of everything in the scene, you should use that directly. Otherwise, you’ll need to create a second render pass. This needs to be identical to the original render, with the only exception that all materials on all meshes are replaced by a “normal material” that renders the view space normals.
ThreeJS has a convenient scene.overrideMaterial property we can use for exactly this purpose. Instead of creating a new identical scene and a new identical camera, we can directly re-render the same scene with the given override material.
this.renderScene.overrideMaterial = new THREE.MeshNormalMaterial();
renderer.render(this.renderScene, this.renderCamera);
this.renderScene.overrideMaterial = null;
In our ThreeJS implementation this is encapsulated in CustomOutlinePass.js for convenience, but it is a completely separate render pass.
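The override pattern itself is small enough to sketch outside the engine. In this sketch, renderer, scene, camera, and normalMaterial are stand-ins for the real ThreeJS objects; the point is just that overrideMaterial is a property you set before the render call and clear afterwards:

```javascript
// Sketch of the normal-pass pattern: temporarily override every material,
// render, then restore. Works with any renderer that reads
// scene.overrideMaterial during render().
function renderNormalPass(renderer, scene, camera, normalMaterial) {
  scene.overrideMaterial = normalMaterial;
  renderer.render(scene, camera);
  scene.overrideMaterial = null;
}
```

Clearing the override afterwards matters: if it stays set, every subsequent pass of the same scene would also render with the normal material.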
3. Create the outline post process
The outline effect is a post process — we’ve already rendered the scene, now we need to take those buffers, combine them, and render the result onto a fullscreen quad. The result of that will either go directly to the screen or to the next pass in the pipeline (like FXAA).
We need to pass 3 uniforms: sceneBuffer, depthBuffer, and normalBuffer.
We create helper functions to read the depth at an offset from a given pixel. Then we sum up the difference between the current pixel’s depth value and its neighbors.
float depth = getPixelDepth(0, 0);
// Difference between depth of neighboring pixels and current.
float depthDiff = 0.0;
depthDiff += abs(depth - getPixelDepth(1, 0));
depthDiff += abs(depth - getPixelDepth(-1, 0));
depthDiff += abs(depth - getPixelDepth(0, 1));
depthDiff += abs(depth - getPixelDepth(0, -1));
The same thing is done for the normals as well. Since the normal is a 3-dimensional vector, we get the difference using the distance function:
vec3 normal = getPixelNormal(0, 0);
// Difference between normals of neighboring pixels and current.
float normalDiff = 0.0;
normalDiff += distance(normal, getPixelNormal(1, 0));
normalDiff += distance(normal, getPixelNormal(-1, 0));
normalDiff += distance(normal, getPixelNormal(0, 1));
normalDiff += distance(normal, getPixelNormal(0, -1));
To render only the outline at this point, we would do:
float outline = normalDiff + depthDiff;
gl_FragColor = vec4(vec3(outline), 1.0);
There are a few parameters here to tweak:
- We can include the diagonals in our neighbor sampling to get a more accurate outline.
- We can sample neighbors that are one or more pixels further away to get thicker outlines.
- We can multiply depthDiff and normalDiff by a scalar to control their influence on the final outline.
- We can bias depthDiff and normalDiff so that only really stark differences in depth or normal direction show up as an outline. This is what the “normal bias” and “depth bias” parameters control.
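As an illustration of the bias idea (not the demo’s exact math), one common approach is to scale the raw difference, clamp it to [0, 1], and raise it to a power, so that small differences fall away while strong edges survive:

```javascript
// Hypothetical bias function: scale a raw depth/normal difference, clamp it
// to [0, 1], then sharpen it with a power curve. A higher `bias` exponent
// suppresses small differences so only stark edges remain.
function biasedDiff(diff, multiplier, bias) {
  const scaled = Math.min(Math.max(diff * multiplier, 0), 1);
  return Math.pow(scaled, bias);
}
```

With bias = 1 the response is linear; with bias = 4, a raw difference of 0.5 contributes only 0.0625 to the outline.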
This is implemented in CustomOutlinePass.js.
4. Combine the outlines with your final scene
Finally, to combine the outline onto the scene, we mix the scene color with a chosen “outline color”, based on our outline value.
float outline = normalDiff + depthDiff;
vec4 outlineColor = vec4(1.0, 1.0, 1.0, 1.0); // white outline
gl_FragColor = vec4(mix(sceneColor, outlineColor, outline));
This is also where you can create any custom logic for how you combine your outline with your scene.
For example, in Return of the Obra Dinn, the outlines change color based on the lighting. To achieve this effect we would check the lighting direction against the surface normal in our normal buffer, and color the outline white if the surface is not in direct light, and black if it is facing the light source(s).
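That color selection can be sketched as follows; the function name and inputs are illustrative, not from the demo. We take the dot product of the surface normal (read from the normal buffer) and the light direction, and pick the outline color from its sign:

```javascript
// Pick an outline color depending on whether the surface faces the light,
// mimicking Return of the Obra Dinn's lit/unlit outline styling.
// normal and lightDir are [x, y, z] vectors in the same space.
function outlineColorForPixel(normal, lightDir) {
  const facing =
    normal[0] * lightDir[0] + normal[1] * lightDir[1] + normal[2] * lightDir[2];
  // Facing the light: black outline. Facing away / in shadow: white outline.
  return facing > 0 ? [0, 0, 0] : [1, 1, 1];
}
```

In a real shader this would be a mix() or step() on the dot product inside the outline pass, using a light direction passed in as a uniform.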
Thanks for reading! If you found this helpful, sign up to my newsletter to follow my work & stay in touch.
Thanks to Ronja Böhringer whose Outlines via Postprocessing article helped me understand this technique and adapt it for the web.
If you have any suggestions or corrections to the code or technique, open an issue on GitHub (https://github.com/OmarShehata/webgl-outlines/) or reach out to me directly. You can find my contact info at: https://omarshehata.me/