That's real Rust code! You can call hello_triangle::vtx_main(0) on the CPU and get a Vec4f back. You can write unit tests against your shader logic. You can step through it in a debugger, though I hardly ever use debuggers with Rust - but you can!
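A minimal sketch of what "unit tests against your shader logic" could look like. The `Vec4f` type and `vtx_main` body here are stand-ins, not the actual `hello_triangle` code from the article; the point is just that a vertex-shader-shaped function is an ordinary Rust function you can call and assert on:

```rust
// Hedged sketch: shader-style logic as plain Rust, runnable on the CPU.
// `Vec4f` and `vtx_main` are illustrative stand-ins for the article's types.
#[derive(Debug, PartialEq, Clone, Copy)]
struct Vec4f { x: f32, y: f32, z: f32, w: f32 }

fn vtx_main(vertex_index: u32) -> Vec4f {
    // Hard-coded triangle positions, one per vertex index, as a
    // hello-triangle vertex shader typically does.
    let (x, y) = match vertex_index % 3 {
        0 => (0.0, 0.5),
        1 => (-0.5, -0.5),
        _ => (0.5, -0.5),
    };
    Vec4f { x, y, z: 0.0, w: 1.0 }
}

fn main() {
    // Called like any other Rust function: no GPU, no driver, just a value.
    let v = vtx_main(0);
    assert_eq!(v.w, 1.0);
    println!("{v:?}");
}
```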
gnafuthegreat | 19 hours ago
This is one where I only know some of the words and am not the target audience, but I enjoyed the writing and the excitement and I read it anyway. Keep up the good work!
pervognsen | 18 hours ago
Using a strict subset of Rust that can be compiled and run on the CPU as normal Rust code is a strong concept and very generally applicable.
What you really want to avoid is a pseudo-subset that looks and smells like a well-defined subset but where the illusion falls apart as soon as you leave the happy path. That depressingly common half-way commitment is deep in the uncanny valley and usually leads to a terrible user experience.
If you take the approach of running the Rust subset code on the CPU honestly (i.e. don't translate to your own IR and then lower it back to Rust) you're much less likely to end up in that situation. It sounds like that's what they're doing here, which I find very encouraging.
On the flip side, the major advantage of using the same common IR for both CPU and GPU execution (as opposed to the aforementioned "honest" pass-through execution model) is that it's easier to keep the semantics in sync, especially as the subset grows more complex. If it were me, I would substitute for that with a random program generator for the subset and differential fuzzing.
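The differential-testing idea above can be sketched in a few lines: feed the same random inputs to two executors of the subset and assert the results agree. The two "executors" here are trivial placeholders (a plain expression vs. a fused-multiply-add lowering of it), and the RNG is a toy LCG so the sketch needs no crates; a real harness would generate random subset programs and run them through the CPU and GPU paths.

```rust
// Hedged sketch of differential testing between two executors of the
// same function. Both evaluators are placeholders, not the project's.
fn reference_eval(x: f32) -> f32 { x * x + 1.0 }
fn lowered_eval(x: f32) -> f32 { x.mul_add(x, 1.0) } // stand-in "lowered" form

fn main() {
    // Tiny deterministic LCG so the sketch is dependency-free.
    let mut state: u64 = 42;
    for _ in 0..1000 {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let x = (state >> 40) as f32 / 16777216.0; // uniform-ish in [0, 1)
        let (a, b) = (reference_eval(x), lowered_eval(x));
        // Allow a few ULPs of slack: fma rounds once, x*x+1.0 rounds twice.
        assert!((a - b).abs() <= f32::EPSILON * 4.0, "diverged at {x}: {a} vs {b}");
    }
    println!("ok");
}
```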
allie | 5 hours ago
One of the pain points of using any shading language, at least for me, is duplicating data structures in Rust and in the shaders --- countless times I've gotten the padding wrong, and I end up bytemucking garbage or specifying the wrong offsets/strides if it's vertex data.
What particularly excites me about writing shaders in Rust is being able to share those data structures and use structural, compile-time information about shaders to type buffers and check things like "is it valid to bind a buffer slice made of Thing to a bind group slot that is a buffer of Thing" or "does this type have a valid uniform buffer offset" or "what are the strides and offsets for the vertex buffer if my vertices look like Thing and the buffer is [Thing]" --- I don't know how in scope this kind of thing is for wgpu, or indeed wgsl-rs (I don't think any of the examples involve sharing Rust data structures between the CPU and shader side), but I applaud any work that makes something like this more possible.
EDIT: apparently I've never heard of wgsl_to_wgpu lmao. Still would be nice to keep it all Rust.
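The padding pitfall described above can be made concrete. In this sketch (field names are illustrative), a `#[repr(C)]` Rust struct packs a `vec2`-like field directly after a `vec3`-like one at offset 12, while WGSL's layout rules give `vec3<f32>` 16-byte alignment, so the corresponding WGSL struct member would sit at offset 16 --- exactly the mismatch that turns a bytemuck cast into garbage:

```rust
// Hedged sketch of the CPU/GPU layout mismatch. Names are illustrative.
use std::mem::{offset_of, size_of};

#[repr(C)]
#[derive(Clone, Copy)]
struct Vertex {
    pos: [f32; 3], // WGSL vec3<f32>: size 12, but alignment 16
    uv: [f32; 2],  // WGSL vec2<f32>: size 8, alignment 8
}

fn main() {
    // Rust packs `uv` immediately after `pos`...
    assert_eq!(offset_of!(Vertex, uv), 12);
    assert_eq!(size_of::<Vertex>(), 20);
    // ...but per WGSL struct layout, `uv` would land at offset 16
    // (rounded up past vec3's 16-byte alignment), so copying this
    // struct byte-for-byte into such a buffer reads the wrong bytes.
    println!("uv offset on the Rust side: {}", offset_of!(Vertex, uv));
}
```

(`std::mem::offset_of!` is stable as of Rust 1.77.)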
pervognsen | 28 minutes ago
Yeah, this has been a minor quality-of-life issue since the early days of shader programming. It slightly favors C/C++ since you can do what almost all graphics programmers do in the C/C++ world and use header files with the C preprocessor to share struct definitions between shader code and engine code, e.g. https://docs.daxa.dev/wiki/shader-integration/. But honestly, it shouldn't be that hard to adapt that approach directly to Rust if you restrict yourself to a simple type definition subset of GLSL/HLSL that can be parsed with a proc macro to emit the corresponding Rust type definitions and metadata. That should also let you pull off your idea for a more strongly typed API for data binding.
Edit: Nevermind, that's exactly what wgsl_to_wgpu does but for WGSL instead of GLSL/HLSL.
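A toy version of the single-source-of-truth idea: here a `macro_rules!` macro (much simpler than the proc macro the comment describes, and entirely hypothetical) defines a `#[repr(C)]` Rust struct and can emit a matching WGSL declaration from one definition, with the Rust/WGSL type pairing spelled out per field:

```rust
// Hedged sketch: one definition yields both a Rust struct and WGSL text.
// A real proc macro would parse WGSL/GLSL/HLSL instead of this toy syntax.
macro_rules! shared_struct {
    ($name:ident { $($field:ident : $rust_ty:ty => $wgsl_ty:expr),* $(,)? }) => {
        #[repr(C)]
        #[derive(Clone, Copy, Debug)]
        pub struct $name {
            $(pub $field: $rust_ty,)*
        }
        impl $name {
            /// Render the corresponding WGSL struct declaration.
            pub fn wgsl_decl() -> String {
                let mut s = format!("struct {} {{\n", stringify!($name));
                $(s.push_str(&format!("    {}: {},\n", stringify!($field), $wgsl_ty));)*
                s.push_str("}\n");
                s
            }
        }
    };
}

shared_struct!(Camera {
    view_proj: [[f32; 4]; 4] => "mat4x4<f32>",
    position:  [f32; 4]      => "vec4<f32>",
});

fn main() {
    println!("{}", Camera::wgsl_decl());
}
```

Note this toy version does nothing about alignment/padding differences between `#[repr(C)]` and WGSL layout; a real implementation would emit offset metadata too, which is what would enable the strongly typed binding API.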
polywolf | 8 hours ago
This is really cool!! Huge upgrade over writing raw WGSL and using wgsl_to_wgpu to create the bindings. Might see if I can adopt it in my own project next...