This is super interesting, thanks for sharing! It sounds like both approaches are trying to solve the same underlying problem of differentiating pixels that belong to different surfaces, but the linked shader above does so by relying on the depth buffer:
> However, instead of using central differences blindly, it picks either the left or right neighbor based on which one is estimated to belong to the same surface as the pixel for which we are currently computing a normal. This is done by examining a second pixel to the right and left, and comparing the current pixel's depth against the depth we'd expect at the current pixel if the neighbors formed a plane.
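For anyone skimming, here's roughly what that heuristic looks like in a fragment shader. This is a minimal GLSL sketch of the idea rather than the linked shader itself; `uDepth`, `uTexelSize`, `uInvProj`, and the unprojection helper are my own stand-in names:

```glsl
// Minimal sketch of the neighbor-picking heuristic described above -- not the
// linked shader verbatim. uDepth, uTexelSize, and uInvProj are stand-in names.
uniform sampler2D uDepth;     // scene depth texture
uniform vec2 uTexelSize;      // 1.0 / resolution
uniform mat4 uInvProj;        // inverse projection matrix

// Unproject uv + sampled depth back to view space (assumes GL's [-1, 1] NDC).
vec3 viewPos(vec2 uv) {
    float d = texture(uDepth, uv).r;
    vec4 clip = vec4(uv * 2.0 - 1.0, d * 2.0 - 1.0, 1.0);
    vec4 view = uInvProj * clip;
    return view.xyz / view.w;
}

vec3 reconstructNormal(vec2 uv) {
    vec2 dx = vec2(uTexelSize.x, 0.0);
    vec2 dy = vec2(0.0, uTexelSize.y);
    float dc = texture(uDepth, uv).r;

    // Sample one and two texels away on each side.
    float l1 = texture(uDepth, uv - dx).r, l2 = texture(uDepth, uv - 2.0 * dx).r;
    float r1 = texture(uDepth, uv + dx).r, r2 = texture(uDepth, uv + 2.0 * dx).r;
    float b1 = texture(uDepth, uv - dy).r, b2 = texture(uDepth, uv - 2.0 * dy).r;
    float t1 = texture(uDepth, uv + dy).r, t2 = texture(uDepth, uv + 2.0 * dy).r;

    // Extrapolate the depth each side "predicts" for the center pixel if its
    // two samples lay on a plane; the side whose prediction is closer is more
    // likely to be on the same surface, so use that neighbor for the tangent.
    vec3 pc = viewPos(uv);
    vec3 horiz = abs(2.0 * l1 - l2 - dc) < abs(2.0 * r1 - r2 - dc)
        ? pc - viewPos(uv - dx)    // left side wins
        : viewPos(uv + dx) - pc;   // right side wins
    vec3 vert = abs(2.0 * b1 - b2 - dc) < abs(2.0 * t1 - t2 - dc)
        ? pc - viewPos(uv - dy)    // bottom side wins
        : viewPos(uv + dy) - pc;   // top side wins

    return normalize(cross(horiz, vert));
}
```

One caveat: this sketch compares raw (non-linear) depth values, which is only an approximation of the planarity test; linearizing the depths first would presumably make the comparison more robust.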
I think this approach would definitely fix many of these artifacts. But one reason I still prefer the "baking surfaceIds as a vertex attribute" approach is that it lets the model author add or remove specific edges. Of course, that means doing manual work, which may not be desirable for many applications. I link to this at the end of the article, but Christopher Sims' demos are some of the best outline rendering I've seen, and they use the slightly more manual "bake it in the model" approach: https://twitter.com/csims314/status/1482123212188180490
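For contrast, here's a minimal sketch of how the surfaceId flavor typically fits together (all names here are illustrative, not from the article): an ID baked per surface as a vertex attribute gets written to an offscreen target, and the outline pass draws an edge wherever neighbors disagree:

```glsl
// Illustrative sketch of the surfaceId variant; attribute/uniform names are
// made up, and IDs are assumed to be small integers stored in a float target.
// Pass 1 (per model): flat out float vSurfaceId; vSurfaceId = aSurfaceId;
//                     then write it out: outColor.r = vSurfaceId;
uniform sampler2D uSurfaceIds; // output of pass 1
uniform vec2 uTexelSize;       // 1.0 / resolution

// Pass 2: any ID mismatch with a neighbor marks this pixel as an outline.
float outlineEdge(vec2 uv) {
    float c = texture(uSurfaceIds, uv).r;
    float l = texture(uSurfaceIds, uv - vec2(uTexelSize.x, 0.0)).r;
    float r = texture(uSurfaceIds, uv + vec2(uTexelSize.x, 0.0)).r;
    float b = texture(uSurfaceIds, uv - vec2(0.0, uTexelSize.y)).r;
    float t = texture(uSurfaceIds, uv + vec2(0.0, uTexelSize.y)).r;
    return (c != l || c != r || c != b || c != t) ? 1.0 : 0.0;
}
```

The appeal is that adding or removing an edge becomes a pure authoring decision: give two surfaces the same ID and the edge between them disappears, no depth heuristics required.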