Show HN: Eyot, A programming language where the GPU is just another thread

73 points by steeleduncan a month ago on hackernews | 14 comments

LorenDB | a month ago

This reminds me that I'd love to see SYCL get more love. Right now, out of the computer hardware manufacturers, it seems that only Intel is putting any effort into it.

sourcegrift | a month ago

Don't mean to be a Rust fanatic or whatever, but does anyone know of anything similar for Rust?

embedding-shape | a month ago

Not similar in the way of "Decorate any function and now it's a thread on the GPU", but Candle has been pretty neat for experimenting with ML in Rust, and it's easy to move things between CPU and GPU. It's more of a library than a DSL, though: https://github.com/huggingface/candle
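
For a rough idea, moving work between devices looks something like this (a minimal, untested sketch; it assumes candle-core built with CUDA support, and cuda_if_available falls back to the CPU when no GPU is around):

    use candle_core::{Device, Tensor};

    fn main() -> candle_core::Result<()> {
        // Build two tensors on the CPU first.
        let a = Tensor::randn(0f32, 1.0, (64, 64), &Device::Cpu)?;
        let b = Tensor::randn(0f32, 1.0, (64, 64), &Device::Cpu)?;

        // Pick a GPU if one is available, otherwise stay on the CPU.
        let device = Device::cuda_if_available(0)?;

        // Moving tensors between devices is a single call.
        let a = a.to_device(&device)?;
        let b = b.to_device(&device)?;

        // The same ops run wherever the tensors live.
        let c = a.matmul(&b)?;
        println!("{:?}", c.shape());
        Ok(())
    }

The nice part is that nothing about the ops changes when you switch devices; only the tensor placement does.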

[OP] steeleduncan | a month ago

I'm not totally sure what it is, but I believe there is something for running Rust code on the GPU easily

ModernMech | a month ago

You could use wgpu to replicate this demo.

https://wgpu.rs

notnullorvoid | a month ago

It seems somewhat similar to rust-gpu: https://github.com/Rust-GPU/rust-gpu

wingertge | a month ago

I hate doing self-promotion, but this is basically exactly what CubeCL does. CubeCL is a bit more limited because as a proc macro we can't see any real type info, but it's the closest thing I'm aware of. Other solutions need a bunch of boilerplate and custom (nightly-only) compiler backends.
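
For a flavour of it, a kernel is just an annotated Rust function, something like this (simplified and typed from memory, so treat the exact names as approximate):

    use cubecl::prelude::*;

    // Elementwise doubling; ABSOLUTE_POS is the flat global thread index.
    #[cube(launch_unchecked)]
    fn double<F: Float>(input: &Array<F>, output: &mut Array<F>) {
        if ABSOLUTE_POS < input.len() {
            output[ABSOLUTE_POS] = input[ABSOLUTE_POS] + input[ABSOLUTE_POS];
        }
    }

The macro also generates a launch function you call from host code through a runtime client, which is where most of the boilerplate that other approaches need disappears.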

MeteorMarc | a month ago

That is fun: it lends C-style block markers (curly braces) and Python-style line separation (new lines). No objection.

maxloh | a month ago

JavaScript and Kotlin do that too.

[OP] steeleduncan | a month ago

It uses the same trick as Go [1]. The grammar has semicolons, but the tokeniser silently inserts them for ease of use. I think quite a few languages do it now.

[1] https://go.dev/doc/effective_go#semicolons

NuclearPM | a month ago

Lends? What does that mean?

CyberDildonics | a month ago

Every time someone does something with threading and makes it a language feature, it always seems like it could just be done with stock C++.

Whatever this is doing could be wrapped up in another language.

Either way, it's arguable whether that is even a good idea, since dealing with a regular thread in the same memory space, getting data to and from the GPU, and doing computations on the GPU are all completely separate concerns with different latency characteristics.

shubhamintech | a month ago

The latency point matters more than it looks, imo. GPU work isn't just async CPU work at a different speed; the cost model is completely different. In LLM inference, the hard scheduling problem is batching non-uniform requests, where prompt lengths and generation lengths vary, and treating that like normal thread scheduling leads to terrible utilization. Would be curious if Eyot has anything to say about non-uniform work units.

[OP] steeleduncan | a month ago

Not right now, it is far too early days. I'm currently working through bugs and missing stdlib to get a simple backpropagation network running efficiently. Once I'm happy with that, I'd like to move on to more complex models.