This article builds on my previous “How to render outlines in WebGL” tutorial. I’ll explain an improved technique that eliminates most artifacts of the previous article, and gives you more control over where exactly the lines are drawn in your geometry.
Below is an example of the artifacts with the previous technique (left) on the windows of the ship that are fixed using the newer method (right).
I learned about this from Ian MacLarty’s Twitter thread about how outline rendering works in Mars First Logistics:
Live demo & source code
Try out this outline rendering implementation in the demo below. You can drag and drop your own models into the viewer (as a single .glb/glTF file) or log in with Sketchfab and paste a model URL.
Source code for this demo: https://github.com/OmarShehata/webgl-outlines/tree/main/threejs
See also a minimal version without any of the debug parameters: https://github.com/OmarShehata/webgl-outlines/tree/main/threejs-outlines-minimal
How it works — an overview
The previous technique uses a combination of the normals & depth of the scene to detect edges in a post process pass. The new technique uses “surface IDs” instead of the normals.
We compute surface IDs for each mesh at startup time (or offline). We assign a globally unique ID for each “surface” of a mesh (more on that below). These IDs are stored as a vertex attribute, rendered to a buffer, and used as an input for the edge detection post process.
Here is a simple example where the normals are not sufficient to detect an edge:
The front face of the box has the same normal as the wall behind it, so we can’t detect edges in certain camera angles where they align.
Now let’s take a look at the version where each surface has a unique ID (here I’m visualizing these IDs by assigning a unique color to each one):
We can always detect an edge here.
Looking back at the ship example, we can see that this is why the outlines disappear on the windows of the ship as it rotates. Below is the normal buffer visualized on the left, and the surface IDs buffer visualized on the right:
The normals of the window interior align with the body of the ship for a few frames, so we can no longer detect an edge there, and the depth difference isn’t big enough to catch it either.
Whereas with the surface IDs, the two surfaces always have distinct values assigned to them, regardless of the camera angles.
How to compute surface IDs of a mesh
Ian MacLarty defines a surface as “a set of vertices that share triangles”. That means what we’re doing here is looking for all connected components in our mesh and assigning each a unique ID.
Let’s see how this works on a simple example. Here I have a cube in Blender with the top face selected, and the index of each vertex shown.
The top face of the cube is made of 2 triangles, and 6 vertices. The vertices “1” and “2” are shared between these two triangles, so we consider them part of the same “surface”.
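To make “sharing vertices” concrete, here is a minimal sketch in plain JavaScript. The index values are illustrative (they mirror the idea that the top face’s two triangles reuse vertices “1” and “2”), not the actual layout of the Blender cube:

```javascript
// Two triangles of the cube's top face, expressed as indices into the
// vertex list. They reuse vertices 1 and 2.
const triangleA = [0, 1, 2];
const triangleB = [1, 3, 2];

// Two triangles belong to the same surface if they share at least one vertex.
function shareVertices(triA, triB) {
  return triA.some((index) => triB.includes(index));
}

console.log(shareVertices(triangleA, triangleB)); // true
```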
If we take a look at another face of the cube, we’ll see that they do NOT share any vertices with the top face:
This means vertices “4” and “1” both sit right on top of each other, and so do “5” and “0”.
Why is the cube constructed this way? We would technically only need 8 vertices total for this cube if we removed all the duplicate vertices that sit on top of each other like this. Instead, this cube has a total of 24 vertices (each face has 4 distinct vertices, and we have 6 faces).
The reason geometry is often constructed this way is that it allows us to store a separate normal at each vertex. Vertices 0 and 1 have normals that point up, and vertices 4 and 5 have normals that point sideways.
Below are the normals visualized with the cube having 24 vertices (left) and if we combined them into 8 vertices (right). We can’t store all the normals correctly in the version on the right.
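The vertex counts are easy to verify programmatically. A quick sketch in plain JavaScript (positions only, normals omitted): each of the cube’s 8 corners touches 3 faces, so the 24-vertex cube stores every corner position 3 times.

```javascript
// The 8 corner positions of a unit cube.
const corners = [];
for (const x of [-1, 1])
  for (const y of [-1, 1])
    for (const z of [-1, 1]) corners.push([x, y, z]);

// Each corner touches 3 faces, so the 24-vertex cube stores each position
// 3 times (once per face, so each copy can carry that face's normal).
const positions = corners.flatMap((corner) => [corner, corner, corner]);

// Count unique positions by keying on the coordinates.
const uniquePositions = new Set(positions.map((p) => p.join(',')));

console.log(positions.length, uniquePositions.size); // 24 8
```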
Going back to our goal of finding distinct surfaces on the mesh: we’re using the fact that any triangles that are physically connected & face the same direction can all share vertices. So if we treat our mesh as a graph, with edges as links that connect vertices, then we can traverse this graph looking for connected components. This is what the FindSurfaces.js class does.
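Here is a minimal sketch of that traversal in plain JavaScript, using union-find over an indexed triangle list. The real implementation lives in FindSurfaces.js; the function and variable names below are illustrative:

```javascript
// Assign a surface ID to every vertex of an indexed mesh by finding
// connected components: vertices linked by a triangle get the same ID.
function computeSurfaceIds(indexArray, vertexCount) {
  // Union-find with path compression.
  const parent = Array.from({ length: vertexCount }, (_, i) => i);
  const find = (i) => (parent[i] === i ? i : (parent[i] = find(parent[i])));
  const union = (a, b) => { parent[find(a)] = find(b); };

  // Every triangle connects its three vertices into one component.
  for (let i = 0; i < indexArray.length; i += 3) {
    union(indexArray[i], indexArray[i + 1]);
    union(indexArray[i], indexArray[i + 2]);
  }

  // Map each component root to a compact surface ID, starting at 1
  // so 0 can mean "no surface" in the buffer.
  const idForRoot = new Map();
  return parent.map((_, vertex) => {
    const root = find(vertex);
    if (!idForRoot.has(root)) idForRoot.set(root, idForRoot.size + 1);
    return idForRoot.get(root);
  });
}
```

In Three.js you would then store the returned array as a `BufferAttribute` on the geometry so it can be rendered to the surface ID buffer.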
One important caveat here is the word “can” in the sentence: “any triangles that are physically connected & face the same direction CAN all share vertices”. It’s entirely possible that your mesh has triangles that should be sharing vertices but are not. In that case a false edge will be rendered. We’ll look at this case in the “Tweaking your geometry” section below.
Using the surface IDs in the edge detection post process
Once we have the surface IDs assigned to each vertex, we store them as a vertex attribute. We then have a render pass that renders all surface IDs in the scene to a buffer.
When drawing surface IDs to the screen, I found the best results are achieved by:
- Normalizing them between 0 and 1
- Writing them to a float texture
I found that a HalfFloat texture type provides sufficient precision. The shader for writing the surface ID to a buffer can be found in FindSurfaces.js.
In the edge detection post process, once we detect a non-zero difference between the surface ID value of one pixel and its neighbors, we consider that an edge. This happens in CustomOutlinePass.js.
The FindSurfaces class keeps track of a “max surface ID” that is set as a uniform when rendering the surface ID buffer so that normalizing the surfaceID happens in the shader. This allows new geometry to be added to the scene dynamically without having to go back and update the vertex attributes of all existing geometry.
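The comparison itself is simple. Below is a hedged sketch of the per-pixel logic in plain JavaScript; the actual version is a fragment shader in CustomOutlinePass.js, and the function and parameter names here are illustrative:

```javascript
// Given the normalized surface ID at a pixel and at its four neighbors,
// report whether this pixel lies on an edge. Any non-zero difference
// between IDs means two different surfaces meet here.
function isSurfaceIdEdge(center, left, right, up, down) {
  const epsilon = 1e-6; // tolerance for values stored in a half-float buffer
  return [left, right, up, down].some(
    (neighbor) => Math.abs(neighbor - center) > epsilon
  );
}

// Example: a pixel on surface 3 of 10 with one neighbor on surface 4.
console.log(isSurfaceIdEdge(0.3, 0.3, 0.4, 0.3, 0.3)); // true
console.log(isSurfaceIdEdge(0.3, 0.3, 0.3, 0.3, 0.3)); // false
```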
Tweaking your geometry for outline rendering
Because this method of detecting edges relies on how the geometry is constructed, you may find artifacts caused by the way your particular model was built.
To me this is one of the biggest advantages of this technique: you have very fine control, when authoring a model, over exactly which edges should be part of the outline rendering. You don’t have to fiddle with edge detection parameters that may fix one case in one model but produce artifacts in another. It is extra effort to fix these by hand, but it’s a good tradeoff for getting high quality outline results.
Let’s take a look at an example using the Tugboat model (originally from Google Poly). Originally the windows looked like this:
We’re seeing the 2 triangles that make up the windows, instead of the whole thing as one square. We can verify that this is a geometry problem by looking at the “Surface ID debug buffer” mode which shows us a unique color for each surface ID:
We want both triangles in the windows to have the same surface ID.
All we need to do is merge the vertices for these 2 triangles. I’ve had some trouble figuring out the best way to do this in Blender (even if I merge the vertices, the exported glTF often still has separate vertices for the triangles). My current method is to merge the triangles themselves:
1. Select the two (or more) faces in edit mode that should be considered one surface
2. Switch to vertex mode to automatically select all the vertices attached to those faces
3. Press “M” to merge vertices, and select “By Distance”. The distance can be very small since we only want to merge vertices that are right on top of each other
4. Finally, switch back to face selection mode, right click on the selected faces, and click “Dissolve faces”. This combines the 2 triangles in each window into one face, and ensures each window gets 1 surface ID computed.
Notes on other ways to compute surface IDs
Instead of manually fixing your geometry by merging vertices, you could pre-compute the IDs and bake them in as part of your model authoring workflow.
Christopher Sims talks about his method of getting perfect outlines by using a Blender add-on called IDMapper. This lets you bake unique IDs across your mesh as vertex colors, and also makes it easy to edit them. I believe these IDs are partially generated automatically.
What I’d love to see is a completely automatic method that lets you mark surfaces based on a threshold, in the same way the ThreeJS EdgesGeometry works. That class looks over all triangles, and checks the angle they make with the triangles they’re connected to. You could use that information to merge vertices for triangles that are all relatively flat. And you’d be able to tweak this threshold per model.
Edit: I’ve put together a tool to automate this, but it turns out automation has some drawbacks, namely that texture coordinates may get distorted: https://github.com/OmarShehata/webgl-outlines/tree/main/vertex-welder#vertex-welder
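A sketch of that thresholding idea in plain JavaScript: weld vertices that sit at the same position and whose normals differ by less than a threshold angle. This is a simplified illustration of the approach, not the vertex-welder tool itself, and it ignores other attributes like texture coordinates, which is exactly where the distortion mentioned above comes from:

```javascript
// Remap each vertex to an earlier coincident vertex whose (unit-length)
// normal is within `thresholdAngle` radians, so nearly-flat neighboring
// triangles end up sharing vertices and therefore a surface ID.
function weldByAngle(positions, normals, thresholdAngle) {
  const cosThreshold = Math.cos(thresholdAngle);
  const byPosition = new Map(); // position key -> kept vertex indices
  const remap = [];

  for (let i = 0; i < positions.length; i++) {
    const key = positions[i].join(',');
    const candidates = byPosition.get(key) || [];
    // Reuse an existing vertex if its normal is close enough to ours.
    const match = candidates.find((j) => {
      const [ax, ay, az] = normals[i];
      const [bx, by, bz] = normals[j];
      return ax * bx + ay * by + az * bz >= cosThreshold;
    });
    if (match !== undefined) {
      remap[i] = match;
    } else {
      remap[i] = i;
      byPosition.set(key, [...candidates, i]);
    }
  }
  return remap; // remap[i] is the vertex index that i should be replaced with
}
```

You would then rewrite the mesh’s index buffer through `remap`, collapsing near-flat seams while leaving hard corners (like the cube’s edges) intact.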
In any case, I think it’s clear that the best outline results come from methods that do some form of geometry processing instead of relying solely on screen space information, since geometry-based methods are much more consistent across camera angles while also giving you more control over all the edge cases (pun intended).
Thanks for reading! If you found this helpful, sign up to my newsletter to follow my work & stay in touch.
If you have any suggestions or corrections to the code or technique, open an issue on GitHub (https://github.com/OmarShehata/webgl-outlines/) or reach out to me directly. You can find my contact info at: https://omarshehata.me/