I Wrote a Scheme in 2025

74 points by mplant a day ago on lobsters | 20 comments

natfu | a day ago

On the garbage collector, @wingo has plenty of war stories. It's a deep undertaking of its own!

[OP] mplant | a day ago

Very deep. The garbage collector has been rewritten several times and it's still pretty slow! Unfortunately I think its maximum speed is somewhat limited by using reference counts, but at the moment that's the best that can be done in Rust.

Well, I could also use a conservative collector like Boehm, but that feels wrong!
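As an aside for readers, here is a toy Rust sketch (generic, not scheme-rs's actual object representation) of the two classic costs of reference counting: every handle copy has to bump a shared count, and cycles never reach zero on their own, so a pure Rc-style collector needs extra machinery to reclaim them.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A cons-cell-like pair whose cdr can point back into the heap.
struct Pair {
    car: i64,
    cdr: RefCell<Option<Rc<Pair>>>,
}

fn main() {
    let a = Rc::new(Pair { car: 1, cdr: RefCell::new(None) });
    // Every clone is a count update; with Arc these become atomic ops,
    // which is part of what caps throughput versus a tracing collector.
    let b = Rc::new(Pair { car: 2, cdr: RefCell::new(Some(a.clone())) });

    // Tie the knot: a -> b -> a. Both counts are now 2 and will never drop
    // to zero on their own, so this cycle leaks unless something extra
    // (a cycle collector, a tracing pass, or Weak links) breaks it.
    *a.cdr.borrow_mut() = Some(b.clone());
    println!("rc(a) = {}, rc(b) = {}", Rc::strong_count(&a), Rc::strong_count(&b));
}
```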

sjamaan | 11 hours ago

I've also written about CHICKEN Scheme's garbage collector, in case you're interested.

jmmv | a day ago

Nice! How does this compare to https://github.com/mattwparas/steel? (I'm no Lisper but I'm curious, and the thought of "rebasing" EndBASIC to be powered by a Scheme instead of a non-standard BASIC dialect has crossed my mind.)

[OP] mplant | a day ago

The implementations differ substantially: Steel has a bytecode VM + JIT, whereas scheme-rs is a pure JIT compiler, and scheme-rs intends to implement R6RS completely, while Steel is more of its own dialect based on R5RS and R7RS.

adamo | 14 hours ago

You are living the dream!

pervognsen | 21 hours ago

Seeing those books reminds me of my old prized copy of Lisp in Small Pieces, but apparently not so prized that I hung onto it over the years. Aside from the unmatched technical content, it was a truly excellent translation from the original French, and some of the delightful little unidiomatic turns of phrase like "sites of call" instead of "call sites" only added to its charm.

obsoleszenz | 19 hours ago

This is really cool! What I'm thinking of, though, is a bit of a different angle: what I would like to see (or make one day ^^) is a PreScheme that gets JITed so it can be used in soft realtime contexts, e.g. for scripting MIDI bindings. I'm currently writing an open source DJ application in Rust and would like to have low-latency, user-scriptable MIDI bindings. And honestly, I think something lispy without a GC could be a good fit for this.

(Repo of the DJ software, for anyone curious: https://codeberg.org/OpenDJLab/LibreDJ)
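A hypothetical sketch (invented names, not LibreDJ's actual code) of the constraint behind that wish: the user-scripted bindings get "compiled" ahead of time into a fixed dispatch table, so the soft-realtime MIDI callback neither allocates nor touches a GC.

```rust
// One entry per (channel, CC) pair; boxed closures stand in for whatever
// a JITed PreScheme-style script would compile down to.
type Action = Box<dyn Fn(u8) + Send>;

struct Bindings {
    // Fixed-size table, filled in at setup time, read-only on the hot path.
    table: Vec<Option<Action>>,
}

impl Bindings {
    fn new() -> Self {
        Bindings { table: (0..16 * 128).map(|_| None).collect() }
    }

    fn bind(&mut self, channel: u8, cc: u8, action: Action) {
        self.table[channel as usize * 128 + cc as usize] = Some(action);
    }

    // Hot path: no allocation, no locks, just an indexed call.
    fn handle(&self, channel: u8, cc: u8, value: u8) {
        if let Some(action) = &self.table[channel as usize * 128 + cc as usize] {
            action(value);
        }
    }
}

fn main() {
    let mut bindings = Bindings::new();
    bindings.bind(0, 0x10, Box::new(|v: u8| println!("crossfader -> {v}")));
    bindings.handle(0, 0x10, 64);
}
```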

trenchant | 4 hours ago

Does it have to be JITed / GC-free? Or just fast enough?

philzook | 23 hours ago

Interesting reading you've got in that picture! Do you have a list somewhere? I've never seen those Dybvig papers.

[OP] mplant | 20 hours ago

It’s sort of in the alt text for the image, but I’ll post it here when I get back to my computer

algernon | 21 hours ago

Ooooh! A Scheme! Embeddable in Rust, with a sane-looking API! I like it. Also uses Cranelift, which is promising (one of my beefs with Steel was that it was prohibitively slow, but Cranelift-based languages, in my experience, fare a lot better, so fingers crossed).

Next time I have time to play with embedding another language, I'm gonna try embedding scheme-rs in iocaine. Can't wait. Need more parentheses in my life.

mattwparas | 20 hours ago

For what it's worth, Steel has gotten faster; it now has a JIT powered by Cranelift too. Could you elaborate on "prohibitively slow"? What were you trying it on?

algernon | 20 hours ago

I'll have to revisit Steel too, then. The last time I tried it was last September, and it was a few orders of magnitude slower than mlua (which is about an order of magnitude slower than Roto for my use case - see below).

The use case is that I have a little tool - iocaine - a thing that sits between a reverse proxy (or is embedded in one) and the backends, and runs a small script for every request. That script decides what to do with the request (let it through, serve it a challenge, or garbage, or whatever you want), and can even generate a response. The most expensive parts (garbage generation, and various matchers) are implemented in Rust, so the script's calling a ton of Rust code, and there's not much that can be JIT'ed about it I guess. The script runs for a short time, and I run a lot of them in parallel.

So what I'm looking for is something where I can set up a runtime once, then use it in async code in parallel, to run isolated scripts on a performance critical fast path.

I'm doing my benchmarks on my desktop (AMD Ryzen 7 3700X + 32GiB RAM, running NixOS). The raw throughput doesn't matter much, though: I'm deploying iocaine on a tiny 2 vCPU + 4GiB RAM VPS, a lot weaker than my desktop. It's the relative performance compared to mlua and Roto that I care about when exploring new languages to embed into iocaine. If it can bring numbers at least at the mlua level, yay! But Lua is already a big compromise - I only added it to iocaine because I wanted a lisp, and Fennel was there.
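For readers unfamiliar with the "set up a runtime once, then call a precompiled script per request" pattern being discussed, here is a minimal sketch using mlua. It is a generic illustration, not iocaine's actual code; the script and names are invented.

```rust
use mlua::{Function, Lua, Result};

fn main() -> Result<()> {
    // One-time setup: create the runtime and compile the per-request script
    // into a reusable function instead of re-interpreting it each time.
    let lua = Lua::new();
    let decide: Function = lua
        .load(r#"
            function(path)
                -- toy decision logic standing in for the real ruleset
                if path:match("^/admin") then return "challenge" end
                return "allow"
            end
        "#)
        .eval()?;

    // Hot path: call the precompiled function for each incoming request.
    for path in ["/index.html", "/admin/login"] {
        let verdict: String = decide.call(path)?;
        println!("{path} -> {verdict}");
    }
    Ok(())
}
```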

mattwparas | 20 hours ago

Got it - so the engine is Send and Sync now with the "sync" flag enabled. My crates.io publish is also quite behind, so maybe that version doesn't have the flag yet. I should probably improve the docs a lot to make experiments like yours easier. It's also definitely optimized to not re-interpret the script, but rather to load the script as a module (or as a function) and then call that function repeatedly. If I can help that experiment in any way, feel free to reach out to me here, on GitHub, or Discord/Matrix (or email) if you're interested in trying it again. I'm pretty sure the performance would be better than 20 messages a second if you're reusing an engine instance.

Regardless, give scheme-rs a shot. It has an async API, which is certainly an advantage over Steel for this kind of thing if you need to yield back to the runtime at any point.

algernon | 20 hours ago

Yeah, that 20 req/sec was with creating a new engine for every request - it was never going to perform well. I've added Steel back to my experiment list, and will drop you an email (or a message on matrix) next time I take it for a spin, thanks!

[OP] mplant | 20 hours ago

I hope it’s fast enough to suit your needs! I haven’t done a ton of benchmarking, but the ones I did seemed reasonable. If you have any problems, please file an issue and I’ll work on prioritizing it.

algernon | 20 hours ago

I ran scheme-rs's benchmarks, and those looked promising indeed. It will be a while until I have time for more experiments, but taking scheme-rs for a test drive is on top of the list.

Out of curiosity, are you okay with issues reported via email, or any other method that doesn't involve GitHub? (for all practical purposes, I do not have a GitHub account)

[OP] mplant | 20 hours ago

Great! And yes, you can find my email on my website
