WebGL / WebGPU Jan 2023 meetup highlights
The Khronos WebGL/WebGPU meetup is a recurring online event where the folks working on these web graphics APIs share updates, followed by talks from the community.
Here I’ve summarized my personal highlights & takeaways from this week’s meetup.
Recording link: https://www.youtube.com/watch?v=Jl06sOvMnvU
For my past event recaps, see April 2022, January 2022, July 2021.
WebGL Updates
Ken Russell, Khronos Group
Slides
Some of the things Ken was most excited to share:
- Wide color gamut support coming to WebGL — allows you to make use of newer displays with more vibrant color spaces
- Proposed Pixel Local Storage extension — it potentially has “dramatic performance improvements for many use cases”. Can follow its development here.
- Provoking vertex extension — can improve performance a lot for applications using flat shading.
- Proposed polygon offset clamp extension — an easy-to-use extension that reduces many artifacts in real-time shadows (a quick usage sketch follows this list)
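To give a feel for the polygon offset clamp idea, here's a minimal sketch of how it could be used from WebGL when rendering a shadow map. The extension is still a proposal, so the name and signature below follow the GL EXT_polygon_offset_clamp extension and are assumptions, not the final WebGL API.

```js
// Hedged sketch, assuming the proposed extension mirrors EXT_polygon_offset_clamp.
const canvas = document.querySelector('canvas');
const gl = canvas.getContext('webgl2');
const ext = gl.getExtension('EXT_polygon_offset_clamp');

gl.enable(gl.POLYGON_OFFSET_FILL);
if (ext) {
  // Like polygonOffset(factor, units), but the total depth offset is clamped,
  // which avoids over-biasing steep triangles and the artifacts that causes.
  ext.polygonOffsetClampEXT(1.0, 1.0, 0.005);
} else {
  gl.polygonOffset(1.0, 1.0); // fallback without the clamp
}
```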
Ken also shared that they are very close to shipping WebGL’s ANGLE/Metal backend. This is exciting because it’ll allow WebGL applications to remain fully supported on the latest Metal API across macOS & iOS.
I found this to be a good article on what pixel local storage is. It lets you write per-pixel data to fast on-chip storage, removing the need to write intermediate results out to a buffer for many lighting techniques, which uses less memory and is also much faster.
The OpenGL wiki has a good explanation of what the “provoking vertex” is; see also a discussion of it in the context of WebGL.
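To make that concrete, here's a rough sketch of how the extension is exposed to WebGL, based on the WEBGL_provoking_vertex draft; the exact names may still change, so treat this as illustrative rather than final.

```js
// Hedged sketch based on the WEBGL_provoking_vertex draft.
// With flat shading, a `flat` varying takes its value from a single
// "provoking" vertex per triangle; letting the app pick the first vertex
// can avoid duplicating vertices, which is where the performance win comes from.
const gl = document.querySelector('canvas').getContext('webgl2');
const ext = gl.getExtension('WEBGL_provoking_vertex');
if (ext) {
  ext.provokingVertexWEBGL(ext.FIRST_VERTEX_CONVENTION_WEBGL);
}

// In GLSL, flat shading uses the `flat` qualifier on a varying:
//   flat out vec3 v_faceColor;  // vertex shader
//   flat in  vec3 v_faceColor;  // fragment shader
```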
WebGPU Updates
Kelsey Gilbert, Mozilla
Slides (same as above)
- v1.0 is almost there! Currently targeting 2023 Q1, with the release expected around Chromium version 113 (see release schedule) on Windows, ChromeOS, and macOS (with Linux and Android later).
- Lots of progress on using WebGPU outside the browser: (1) Deno has built-in WebGPU support, and (2) Dawn, a cross-platform implementation of WebGPU, is “99% on par with Chromium” and has Node bindings.
Kelsey called for community support in developing WebGPU. The best ways to contribute right now are (1) providing feedback on the API & general usage through GitHub or Matrix chat (she emphasized that now is the time when feedback is most impactful & helpful), and (2) helping write conformance tests.
ThreeJS & WebGPU
Ricardo Cabello, Google
Mrdoob talked about how ThreeJS has been working towards supporting WebGPU. The biggest obstacle to making it a drop-in replacement for WebGL is that WebGL shaders must be written in GLSL, while WebGPU shaders must be written in WGSL.
The reason this is a problem for ThreeJS is that many developers use material.onBeforeCompile to edit the generated GLSL shader code before it is compiled, in order to create custom materials. WebGPU would be a breaking change here because you’d need to rewrite those modifications in WGSL.
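For context, this is roughly what that pattern looks like today. A minimal sketch: the shader chunk being replaced and the tint are just illustrative, not anything from the talk.

```js
import * as THREE from 'three';

const material = new THREE.MeshStandardMaterial({ color: 0x44aa88 });

// Patch the generated GLSL before three.js compiles it.
// String edits like this would have to be rewritten in WGSL for the
// WebGPU renderer, which is why this is a breaking change.
material.onBeforeCompile = (shader) => {
  shader.fragmentShader = shader.fragmentShader.replace(
    '#include <color_fragment>',
    `#include <color_fragment>
     diffuseColor.rgb *= vec3(1.0, 0.5, 0.5); // example tweak: tint the surface`
  );
};
```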
“NodeMaterial to the rescue!” says Mrdoob. Instead of writing your materials in GLSL or WGSL, or any shading language, you write them in this higher-level node system, and the engine can easily generate the right shader for the renderer/platform.
The talk walked through an example code snippet (source) from this iridescence material demo.
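As a rough illustration of what node-based material code looks like (the 'three/nodes' import path and the helper names here are approximate and vary between three.js releases, so treat this as a sketch rather than the demo's actual code):

```js
import { MeshPhysicalNodeMaterial, color, uv, mix } from 'three/nodes';

// Describe the surface as a graph of nodes instead of GLSL/WGSL strings.
// The renderer compiles this graph into whichever shading language it needs.
const material = new MeshPhysicalNodeMaterial();
material.colorNode = mix(color(0x0066ff), color(0xff0066), uv().y);
material.roughnessNode = uv().x;
```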
They’ve also built a nodes material editor playground you can use to interactively create and edit these kinds of materials:
https://threejs.org/examples/?q=nodes#webgl_nodes_playground
Mrdoob mentioned that they’ve been looking at MaterialX (https://materialx.org/) as an open standard for representing these kinds of materials. They’ve been trying to generally align how they design the materials in ThreeJS with this standard so they aren’t reinventing everything from scratch.
MaterialX already has a lot of support in various engines & authoring tools, and ThreeJS now has a loader for it. This is exciting because it means you can export your models from Blender, Maya, etc., and get them to look exactly the same on the web with ThreeJS.
A final resource Mrdoob shared was NodeToy, an interactive playground where you can create, edit, and share materials with a node editor. It even generates ThreeJS code or GLSL code for any of the materials on the site.
Mrdoob mentioned there’ll still be an option to write your own shaders if you want to, but for most use cases NodeMaterial will be the way to go.
A technical journey through Google Earth
John Anderson, Google
John talked about the evolution of Google Earth’s technical architecture over the years, starting with this awesome sneak peek of the very first time they tried running it with Emscripten and asm.js:
Originally Google Earth only worked in Chrome, running native code with PNaCl. This is why earth.google.com always had a “launch” button instead of launching directly into the app.
The switch to a fully WebAssembly version happened in 2020, and allowed it to work in all browsers. In 2021 the “launch” button was removed. Google Earth now runs on a shared C++ renderer across Web, Android, and iOS.
There were two really interesting optimizations that John explained:
1. Google Earth uses JavaScript <canvas> for correct text rendering
Instead of trying to include a whole font/text engine in the WASM application, which would have been too big and too difficult to maintain, they rely on the web browser, which can already do this. So to render text, the app draws the text into a JS canvas, reads it back, and then renders that inside the 3D Google Earth scene.
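A minimal sketch of the browser-side half of that idea (the function and sizes are illustrative, not Google Earth's actual code):

```js
// Draw a text label into an offscreen 2D canvas; the browser handles
// fonts, shaping, and fallback for us.
function renderLabel(text) {
  const canvas = document.createElement('canvas');
  canvas.width = 256;
  canvas.height = 64;
  const ctx = canvas.getContext('2d');
  ctx.font = '32px sans-serif';
  ctx.fillStyle = 'white';
  ctx.fillText(text, 8, 40);
  return canvas; // later used as the source of a WebGL texture (see below)
}
```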
2. Runtime performance was significantly improved by copying directly from the <canvas> to a WebGL texture
What they were initially doing for the text rendering was:
- Draw into a canvas
- Read the canvas as bytes, send it to the WASM app
- WASM app creates a WebGL texture with these bytes, and uploads it to the GPU
To speed this up, they:
- Draw into a canvas
- Copy the canvas image data directly to a WebGL texture, give the WASM app a reference to that texture
That reduced a lot of FPS jank.
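In WebGL terms, the fast path boils down to passing the canvas itself to texImage2D instead of reading pixels back and re-uploading them. A hedged sketch, assuming `gl` is an existing WebGL context and `canvas` is the label canvas from the earlier snippet; the WASM entry point in the comment is hypothetical:

```js
// Slow path (roughly): read the pixels out of the canvas, hand the bytes to
// the WASM side, and upload them to a texture from there.
//   const bytes = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
//   wasmModule.uploadLabelPixels(bytes); // hypothetical WASM entry point

// Fast path: let the browser copy the canvas straight into a WebGL texture.
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
// The WASM app then only needs a handle/ID for `texture`, not the pixel data.
```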
For an example of this kind of direct copy in WebGPU, see: https://gpuweb.github.io/gpuweb/#dictdef-gpuimagecopyexternalimage
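The WebGPU equivalent looks roughly like this (a sketch, assuming `device` and `canvas` already exist):

```js
// Copy the canvas contents straight into a GPU texture, no readback needed.
const texture = device.createTexture({
  size: [canvas.width, canvas.height],
  format: 'rgba8unorm',
  usage: GPUTextureUsage.TEXTURE_BINDING |
         GPUTextureUsage.COPY_DST |
         GPUTextureUsage.RENDER_ATTACHMENT,
});

device.queue.copyExternalImageToTexture(
  { source: canvas },
  { texture },
  [canvas.width, canvas.height]
);
```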
Q&A
There was a lot of interesting discussion during the Q&A at the end. One question in particular that stood out to me was whether we can expect a “devtools” debugger/inspector for WebGPU. It sounded like the answer was yes: there’s a lot of interest from both the WebGPU spec writers and the browser vendors in providing good debugging tools.
Kelsey mentioned debug groups & labels, a tool WebGPU has today that lets you label parts of your pipeline to make it easier to debug when things go wrong. You can read more about it here: https://gpuweb.github.io/gpuweb/explainer/#errors-errorscopes-labels
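As a quick illustration of what that looks like in code today, here's a minimal sketch (assuming `device` is an existing GPUDevice; the label strings and pass are just examples):

```js
// Labels show up in error messages and in native GPU debugging tools.
const buffer = device.createBuffer({
  label: 'terrain vertex buffer',
  size: 1024,
  usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
});

// Debug groups let you bracket sections of work on a command encoder.
const encoder = device.createCommandEncoder({ label: 'frame encoder' });
encoder.pushDebugGroup('shadow pass');
// ... encode the shadow pass here ...
encoder.popDebugGroup();
device.queue.submit([encoder.finish()]);
```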
Ken closed out the session by answering a question about how WebGPU is going to raise the bar for real-time graphics performance on the web, bringing it much closer to native applications.
“The future is bright,” he said. And I agree!
I hope you found this useful! You can find me on Twitter or on my website.
You can sign up to be notified of the future WebGL/WebGPU meetups here: https://www.khronos.org/news/subscribe/