Maybe if you’ve got some ancient software that’s missing source code and only runs under conditions X, Y, and Z, you could continue to offer it on the web and build around it like that? Not sure if that would be practical at all, but it could be interesting.
I use bellard.org/jslinux to test compilation of strange code sometimes[1], since it came with compilers that are different versions from what I have installed locally, and it's easier to open up a browser than starting a VM.
Most such emulators have Internet access on the IP level. Therefore, this is a very cheap way to test anything on the Internet.
apk add nmap
nmap your.domain.com
However, the speed is heavily throttled. You can even use ssh and login to your own server.
It can also be used as a very cheap way to provide a complete build environment on a single website, for example to teach C/C++. Or to learn the shell. You don't have to install anything.
I use a similar emulator (v86) as a way to share my hobby OS. Approximately zero people, even my friends, are going to boot my hobby OS on real hardware; I did manage to convince some of them to run it in qemu, but it's difficult. A browser environment shows the thing quite well; and easy networking is cool too.
My hobby OS itself is not very useful, but it's fun if you're in the right mood.
Agentic workloads create and then run code. You don't want to just run that code in a "normal" environment like a container, or even a very well protected VM. There are other options, of course - e.g. gvisor, crosvm, firecracker, etc. - but this one is uncommon enough to have a small number of attackers trying to hack it.
What's wrong with a well protected VM? Especially compared to something where the security selling point is "no one uses it" (according to your argument; I don't know how secure this actually is)
Yeah, but GP was replying to a comment saying "you don't want to run code in a well protected VM". Which is of course complete nonsense to say, and GP was right to question it.
Because unless you can fund several teams - kernel, firmware(bios,etc), GPU drivers, qemu, KVM, extra hardening(eg. qemu runs under something like bpfilter) + a red team, security through obscurity is cheaper. The attack surface area is just too large.
What is this "security through obscurity" you're talking about? We're talking about running linux in a VM running in a browser. That has just as much attack surface (and in some ways, more) as running linux in a hypervisor.
Obscurity is a shrinking moat unless you are upstreaming changes regularly, and most uncommon emulators lag behind on the boring but needed patches compared to QEMU or Firecracker. If you shift to a niche emulator for security, you really need a plan to audit the new attack surface it brings. Even a weird stack tends to attract attackers once it gets popular enough or just irritates someone determined.
Working with VMs always felt difficult because of this, whereas authoring was built into Docker. Now you can use Apptron to author and embed a Linux system on a web page. This aspect is usable, but it's only going to get better.
The tech docs at bellard.org/jslinux/tech.html describe the TEMU config format. Each VM is just a .cfg file - you can see the existing ones linked on the main page. The VFsync integration (vfsync.org) is probably your best bet for getting files into the VM without rebuilding the disk image.
For a classroom with Windows PCs this is close to ideal - zero install, no admin rights, works in any browser. Students get a real gcc toolchain and shell without touching the host OS.
We are a playful species. People enjoy play. If we didn't have to work for a living but still enjoyed food security that is all most of us would do. But we are also a very exploitative species, some more than others. Companies have made billions of dollars on top of Fabrice Bellard's works, qemu, ffmpeg etc.
These companies don't have any imagination. Their management has no vision. They could not create anything new and wonderful if they tried. People like Fabrice do, and we are all richer for it. If you're asking about the practical use, you are likely in the exploitative mindset, which is understandable on HN. The hacker/geek mindset enjoys this for what it is.
https://infinitemac.org/ is an example of a good use: users can try out old versions of Mac OS, to see what's changed and what software used to be available for old versions. It doesn't use JSLinux, but other emulators [1]
I guess for the author it's learning about how Linux can be ported to the browser. For us, it's more of a nice amusement.
But then again, I've never understood why Buddhist monks create sand mandalas[1] and then let them be blown away (the mandalas not the monks!).
I think one should see it from the authors PoV instead of thinking "what is in it for me". If I were to use this, then to create digital sand mandalas in the browser! ;)
It looks like container2wasm uses a forked version of Bochs to get the x86-64 kernel emulation to work. If one pulled that out separately and patched it a bit more to have the remaining feature support it'd probably be the closest overall. Of course one could say the same about patching anything with enough enthusiasm :).
> Access to Internet is possible inside the emulator. It uses the websocket VPN offered by Benjamin Burns (see his blog). The bandwidth is capped to 40 kB/s and at most two connections are allowed per public IP address. Please don't abuse the service.
Sorry for the off-topic, but what a bliss to see Windows 2000 interface. And what an absolute abomination from hell pretty much all the modern UIs are.
The thing I most want to use this (or some other WASM Linux engine) for is running a coding agent against a virtual operating system directly in my browser.
Claude Code / Codex CLI / etc are all great because they know how to drive Bash and other Linux tools.
The browser is probably the best sandbox we have. Being able to run an agent loop against a WebAssembly Linux would be a very cool trick.
I had a play with v86 a few months ago but didn't quite get to the point where I hooked up the agent to it - here's my WIP: https://tools.simonwillison.net/v86 - it has a text input you can use to send commands to the Linux machine, which is pretty much what you'd need to wire in an agent too.
In that demo try running "cat test.lua" and then "lua test.lua".
Has there ever been any other topic that was not only the subject of the majority of submissions, but also had a subset of users repeatedly butting into completely unrelated discussions to go "b-but what about <thing>? we need to talk about <thing> here too! how can I relate this to <thing>? look at my <thing> product!"?
You can't just roll in to a random post to tell people about your revolutionary new AI agent for the 50th time this week and expect them not to be at least mildly annoyed.
I'm with you, but he wasn't telling us about his agent, he was saying "this is a cool technology and I've been wanting to use it to make a thing". The thing just happened to be LLM-adjacent.
Almost all of his comments "just happen" to be LLM-adjacent. At some point it stops "just happening" and it becomes clear that certain people (or their AI bots) are frequenting discussion spaces for the sole purpose of seeking out opportunities to bring up AI and self-promote.
Simon has been here since way before LLMs were a thing, and it's fairly obvious (to me, at least) that he's genuinely excited about LLMs, he's not just spamming sales or anything.
The entire thing is just quotes and a retelling of events. The closest thing to a "take" I could find is this:
> I have no idea how this one is going to play out. I’m personally leaning towards the idea that the rewrite is legitimate, but the arguments on both sides of this are entirely credible.
Which effectively says nothing. It doesn't add anything to the discussion around the topic, informed or not, and the post doesn't seem to serve any purpose beyond existing as an excuse to be linked to and siphon attention away from the original discussion (I wonder if the sponsor banner at the top of the blog could have something to do with that...?)
Literally just a quote from his fellow member of the "never stops talking about AI" club, Karpathy. No substance, no elaboration, just something someone else said or did pasted on his blog followed by a short agreement. Again, doesn't add anything or serve any real purpose, but was for some reason submitted to HN[1], and I may be misremembering but I believe it had more upvotes/comments than the original[2] at one point.
I think my coverage of the Mark Pilgrim situation added value in that most people probably aren't aware that Mark Pilgrim removed himself from internet life in 2011, which is relevant to the chardet story.
That second Karpathy example is from my link blog. Here's my post describing how I try to add something new when I write about things on my link blog: https://simonwillison.net/2024/Dec/22/link-blog/
In the case of that Karpathy post I was amplifying the idea that "Claw" is now the generic name for that class of software, which is notable.
> I mean I don’t have to remember the horrible git command line anymore
Every time I see a comment like this, I have to wonder what the heck other devs were doing. Don’t you know there were shell aliases, and snippet managers, and a ton of other tools already? I never had to commit special commands to memory, and I could always reference them faster than it takes to query any LLM.
The point I’m making is there are tons of solutions. Deterministic, fast, low-energy, customisable. Which is why I said “I have to wonder what the heck other devs were doing”. As in, have you never looked for a solution to your frustration? Hard to believe there was nothing out there before which wouldn’t have improved your Git command-line experience. Like, say, one of the myriad GUI tools which exist.
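To make that concrete, the kind of thing meant here is a couple of lines in `~/.gitconfig` (the alias names below are just illustrative examples, not a recommendation):

```ini
[alias]
    st = status -sb
    lg = log --oneline --graph --decorate
    amend = commit --amend --no-edit
```

After that, `git st`, `git lg`, and `git amend` are deterministic, instant, and cost nothing to run.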
> Because it’s custom there is no standard curriculum you could point me to etc.
Not true. There are tons of resources out there not only explaining the solutions but even how different people use them and why.
If I sat with you for ten minutes and you explained to me the exact difficulties you have, I doubt I couldn't have suggested something.
It's very much a bimodal distribution: an enthusiast subset and an allergic subset. It's impossible to satisfy both, but that's the dynamic of HN anyhow: guaranteed to dissatisfy everybody! It's a strange game; the only way to win is to complain.
It's normal for HN to be preoccupied with the major technical trend of the moment, and this is unquestionably the biggest technical trend in many years.
People can argue about where to insert it in the list, but it is certainly in the top 5 of many decades (smartphones, web, PCs, etc.) That's why it's inescapable.
Your complaint isn't really about simonw's comment, but rather the fact that it was heavily upvoted - in other words, you were dissenting from the community reaction to the comment. That's understandable; in fact it's a fundamental problem with forums and upvoting systems: the same few massive topics suck in all the smaller ones until we get one big ball of topic mud: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....
~20x slower for a naive recursive Fibonacci implementation in Python (1300 ms for fib(30) in this VM vs 65 ms on bare metal; for comparison, CPython compiled directly to WASM without VM overhead does it in 140 ms).
~2500x slower for 1024x1024 matrix multiplication with NumPy (0.25 GFLOPS in VM vs 575 GFLOPS on bare metal).
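For reference, a minimal sketch of the kind of microbenchmark described above (fib(30) is the only detail carried over from the comment; absolute numbers will vary by machine):

```python
import time

def fib(n):
    # deliberately naive doubly-recursive Fibonacci - a pure
    # function-call/integer workload, which is what the ~20x figure measures
    return n if n < 2 else fib(n - 1) + fib(n - 2)

start = time.perf_counter()
result = fib(30)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"fib(30) = {result} in {elapsed_ms:.0f} ms")
```

Running the same script on bare metal, under CPython-in-WASM, and inside the emulated VM gives the three data points being compared.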
This is not correct. You are using WebVM here, not BrowserPod.
WebVM is based on x86 emulation and JIT compilation, which at this time lowers vector instructions as scalar. This explains the slowdowns you observe. WebVM is still much faster than v86 in most cases.
BrowserPod is based on a pure WebAssembly kernel and WebAssembly payload. Performance is close to native speed.
> The thing I most want to use this (or some other WASM Linux engine) for is running a coding agent against a virtual operating system directly in my browser.
Apptron uses v86 because it's fast. Would love for somebody to add 64-bit support to v86. However, Apptron is not tied to v86. We could add Bochs like c2w, or even JSLinux for 64-bit; I just don't think it will be fast enough to be useful for most.
Apptron is built on Wanix, which is sort of like a Plan9-inspired ... micro hypervisor? Looking forward to a future where it ties different environments/OS's together.
https://www.youtube.com/watch?v=kGBeT8lwbo0
tl;dr: devcontainers let you completely containerize your development environment. You can run them natively on Linux, on rented machines (there are some providers, such as GitHub Codespaces), or in a VM (which is what you'll be stuck with on a Mac anyway - but reportedly performance is still great).
All CLI dev tools (including things like Neovim) work out of the box, and many GUI IDEs support working with devcontainers too. In that case the GUI usually isn't containerized, or at least doesn't live in the same container - though on Linux you can containerize it as well with Flatpak, and GitHub Codespaces runs VS Code fully in the browser for you, which is another way to sandbox it on both ends.
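For the unfamiliar, getting started is mostly a matter of adding a `.devcontainer/devcontainer.json` to the repo; a minimal one looks something like this (the image and command here are placeholders, not project-specific):

```json
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  "postCreateCommand": "make deps"
}
```

Supporting editors detect the file and offer to reopen the project inside the container.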
This is interesting (and I've seen it mentioned in some editors), but how do I use it? It would be great if it had bubblewrap support, so I don't have to use Docker.
Do you know if there's a cli or something that would make this easier? The GitHub org seems to be more focused on the spec.
Maybe not, but some here see the blog itself as the product being promoted here.
Even in this thread alone https://news.ycombinator.com/item?id=47314929 some commenters are clearly annoyed with the way AI is being shoved into every place where they don't want it.
I don't care, but I can see why many here are getting tired of it.
Please don't cross into personal attack on this site. We ban accounts that do that, and you've unfortunately done it repeatedly in this thread. Current comment was the worst case of this by far, but https://news.ycombinator.com/item?id=47317411, for example, is also on the wrong side of the line.
While this may be a better sandbox, actually having a separate computer dedicated to the task seems like a better solution still and you will get better performance.
Besides, prompt injection and simpler exploits should be addressed before building a virtual computer in a browser, and if you are simulating a whole computer you take a huge performance hit as another trade-off.
On the other hand using the browser sandbox that also offers a UI / UX that the foundation models have in their apps would ease their own development time and be an easy win for them.
I run agents as a separate Linux user. So they can blow up their own home directory, but not mine. I think that's what most people are actually trying to solve with sandboxing.
(I assume this works on Macs too, both being Unixes, roughly speaking :)
> The thing I most want to use this (or some other WASM Linux engine) for is running a coding agent against a virtual operating system directly in my browser.
Well, there it is, the dumbest thing I'll read on the internet all week.
Most of the engineering in Linux revolves around efficiently managing hardware interfaces to build up higher-level primitives, upon which your browser builds even higher-level primitives, that you want to use to simulate an x86 and attached devices, so you can start the process again? Somewhere (everywhere), hardware engineers are weeping. I'll bet you can't name a single advantage such a system would have over cloud hosting or a local Docker instance.
Even worse, you want this so your cloud-hosted imaginary friend can boil a medium-sized pond while taking the joyful bits of software development away from you, all for the enrichment of some of the most ethically-challenged members of the human race, and the fawning investors who keep tossing other people's capital at them? Our species has perhaps jumped the shark.
> Well, there it is, the dumbest thing I'll read on the internet all week.
Rude.
In case you're open to learning, here's why I think this is useful.
The big lesson we've learned from Claude Code, Codex CLI et al over the past twelve months is that the most useful tool you can provide to an LLM is Bash.
Last year there was enormous buzz around MCP - Model Context Protocol. The idea was to provide a standard for wiring tools into LLMs, then thousands of such tools could bloom.
Claude Code demonstrated that a single tool - Bash - is actually much more interesting than dozens of specialized tools.
Want to edit files without rewriting the whole thing every time? Tell the agent to use sed or perl -e or python -c.
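As a toy illustration of that pattern (the file name and contents here are made up), an agent can make a targeted in-place edit with the kind of one-liner it would run via `python -c`, rather than rewriting the whole file:

```python
import pathlib
import tempfile

# stand-in for a real project file; an agent would target an existing path
p = pathlib.Path(tempfile.mkdtemp()) / "config.txt"
p.write_text("debug = false\n")

# the actual edit - a surgical find-and-replace instead of a full rewrite
p.write_text(p.read_text().replace("debug = false", "debug = true"))
print(p.read_text().strip())  # debug = true
```

The point is that any of sed, perl, or python works once the agent has a shell and a filesystem; no bespoke "edit file" tool is needed.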
Look at the whole Skills idea. The way Skills work is you tell the LLM "if you need to create an Excel spreadsheet, go read this markdown file first and it will tell you how to run some extra scripts for Excel generation in the same folder". Example here: https://github.com/anthropics/skills/tree/main/skills/xlsx
That only works if you have a filesystem and Bash style tools for navigating it and reading and executing the files.
This is why I want Linux in WebAssembly. I'd like to be able to build LLM systems that can edit files, execute skills and generally do useful things without needing an entire locked down VM in cloud hosting somewhere just to run that application.
Here's an alternative swipe at this problem: Vercel have been reimplementing Bash and dozens of other common Unix tools in TypeScript purely to have an environment agents know how to use: https://github.com/vercel-labs/just-bash
I'd rather run a 10MB WASM bundle with a full existing Linux build in it than reimplement it all in TypeScript, personally.
> Linux RISC-V virtual machine, powered by the Cartesi Machine emulator, running in the browser via WebAssembly
> a single 32MiB WebAssembly file containing the emulator, the kernel and Alpine Linux operating system. Networking supports HTTP/HTTPS requests, but is subject to CORS restrictions
My demo here loads 12.7MB (if you watch the browser network panel) to get to a usable Linux machine, it even has Lua! https://tools.simonwillison.net/v86
> while taking the joyful bits of software development away from you
Quick question: by "joyful bits of software development," do you mean the bit where you design robust architectures, services, and their communication/data concepts to solve specific problems, or the part where you have to assault a keyboard for extended periods of time _after_ all that interesting work so that it all actually does anything?
Because I sure know which of these has been "taken from me," and it's certainly not the joyful one.
Out of interest I tried running my Primes benchmark [1] on the x86_64 and x86 Alpine images and the riscv64 Buildroot image, all in Chrome on an M1 Mac Mini. All results are from the 2nd run, so that all needed code is already cached locally.
x86_64:
localhost:~# time gcc -O primes.c -o primes
real 0m 3.18s
user 0m 1.30s
sys 0m 1.47s
localhost:~# time ./primes
Starting run
3713160 primes found in 456995 ms
245 bytes of code in countPrimes()
real 7m 37.97s
user 7m 36.98s
sys 0m 0.00s
localhost:~# uname -a
Linux localhost 6.19.3 #17 PREEMPT_DYNAMIC Mon Mar 9 17:12:35 CET 2026 x86_64 Linux
x86 (i.e. 32 bit):
localhost:~# time gcc -O primes.c -o primes
real 0m 2.08s
user 0m 1.43s
sys 0m 0.64s
localhost:~# time ./primes
Starting run
3713160 primes found in 348424 ms
301 bytes of code in countPrimes()
real 5m 48.46s
user 5m 37.55s
sys 0m 10.86s
localhost:~# uname -a
Linux localhost 4.12.0-rc6-g48ec1f0-dirty #21 Fri Aug 4 21:02:28 CEST 2017 i586 Linux
riscv64:
[root@localhost ~]# time gcc -O primes.c -o primes
real 0m 2.08s
user 0m 1.13s
sys 0m 0.93s
[root@localhost ~]# time ./primes
Starting run
3713160 primes found in 180893 ms
216 bytes of code in countPrimes()
real 3m 0.90s
user 3m 0.89s
sys 0m 0.00s
[root@localhost ~]# uname -a
Linux localhost 4.15.0-00049-ga3b1e7a-dirty #11 Thu Nov 8 20:30:26 CET 2018 riscv64 GNU/Linux
Conclusion: as seen also in QEMU (also started by Bellard!), RISC-V is a *lot* easier to emulate than x86. If you're building code specifically to run in emulation, use RISC-V: builds faster, smaller code, runs faster.
Note: quite different gcc versions, with x86_64 being 15.2.0, x86 9.3.0, and riscv64 7.3.0.
MIPS (the arch of which RISC-V is mostly a copy) is even easier to emulate: unlike RV, it does not scatter immediate bits all over the instruction word, which makes it easier for an emulator to extract immediates. If you need emulated perf, MIPS is the easiest of all.
There are two interesting differences of ISA between MIPS and RISC-V: that MIPS does not have branch on condition, only on zero/non-zero and that MIPS has 16 bit immediates with appropriate sign extension (all zeroes for ORI, all ones for ANDI). The first difference makes MIPS programs about 10% larger and second difference makes MIPS programs smaller (RISC-V immediates are 11.5 bits due to mandatory sign extension, 13 bits are required to cover 95% of immediates in MIPS-like scheme), a percent or so, I think.
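A quick sanity check on those immediate widths (the 11.5-bit "effective" figure is the comment's estimate; the ranges below are just the raw encodings):

```python
def simm_range(bits):
    # value range of a sign-extended immediate field of the given width
    return -(1 << (bits - 1)), (1 << (bits - 1)) - 1

# RISC-V I-type: 12 bits, always sign-extended
print(simm_range(12))        # (-2048, 2047)

# MIPS addi-style: 16 bits, sign-extended...
print(simm_range(16))        # (-32768, 32767)
# ...while ori/andi zero-extend, covering 0..65535 for bitwise ops
print((0, (1 << 16) - 1))    # (0, 65535)
```

The mixed extension rules are what let MIPS cover more of the commonly used immediates despite a fixed 16-bit field.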
Interesting to see the gcc version gap between the targets. The x86_64 image shipping gcc 15.2.0 vs 7.3.0 on riscv64 makes the performance comparison less apples-to-apples than it looks - newer gcc versions have significantly better optimization passes, especially for register allocation.
> If you're building code specifically to run in emulation, use RISC-V: builds faster, smaller code, runs faster.
I don't really think this bears out in practice. RISC-V is easy to emulate but this does not make it fast to emulate. Emulation performance is largely dominated by other factors where RISC-V does not uniquely dominate.
I've been using the x86_64 Alpine jslinux browser image in Chrome for the last 4 hours - pulling code down via git, building several large packages from source, editing and altering code, and running their test suites. This VM may be 50 times slower than native, but it is rock solid - worked perfectly and is stable. It's simply remarkable.
I am almost sure it was done so carefully that, with only a little work, you could extract it from the abominations that are the WHATWG-cartel web engines using a direct-to-OS abstraction layer.
Is JSLinux still an interpreter, or does it JIT compile these days?
Or are modern JS JITs so good that this is no longer a relevant distinction? I.e., is the performance of a JITted x86 interpreter effectively equivalent to an x86-to-JavaScript translator whose output is then itself JIT-compiled?
maxloh | a day ago
We have Windows PCs in the classroom.
[1] For example:
https://www.ioccc.org/2020/yang/index.html#:~:text=tcc%200.9...
https://www.ioccc.org/2018/yang/index.html#:~:text=tcc%200.9...
peterburkimsher | 22 hours ago
Any advice on how to create a JSLinux clone with a specific file pre-installed and auto-launching would be much appreciated!
[1] https://blog.persistent.info/2025/03/infinite-mac-os-x.html
[1]: https://en.wikipedia.org/wiki/Sand_mandala
maxloh | a day ago
For a more open-source version, check out container2wasm (which supports x86_64, riscv64, and AArch64 architectures): https://github.com/container2wasm/container2wasm
zoobab | 11 hours ago
It's not open source? If that's the case, it should be in his FAQ.
https://bellard.org/jslinux/tech.html
westurner | a day ago
From "Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents" (2026) https://news.ycombinator.com/item?id=46825119 :
>>> How to run vscode-container-wasm-gcc-example with c2w, with joelseverin/linux-wasm?
>> linux-wasm is apparently faster than c2w
From "Ghostty compiled to WASM with xterm.js API compatibility" https://news.ycombinator.com/item?id=46118267 :
> From joelseverin/linux-wasm: https://github.com/joelseverin/linux-wasm :
>> Hint: Wasm lacks an MMU, meaning that Linux needs to be built in a NOMMU configuration
From https://news.ycombinator.com/item?id=46229385 :
>> There's a pypi:SystemdUnitParser.
jraph | a day ago
This thing is really inescapable these days.
simonw | a day ago
I should have replied there instead, my mistake.
stavros | 21 hours ago
I'm excited about them and I think discussion on how to combine two exciting technologies are exactly what I'd like to see here.
This seems to be a pattern, at least in recent times. Here's another egregious example: https://simonwillison.net/2026/Feb/21/claws/
[1] https://news.ycombinator.com/item?id=47099160
[2] https://news.ycombinator.com/item?id=47096253
Towaway69 | 13 hours ago
It's turtles all the way down ....
;)
fsloth | 12 hours ago
I mean I don’t have to remember the horrible git command line anymore, which already improves my experience as a dev 50%.
It’s not all hype bs this time.
fsloth | 3 hours ago
Because it’s custom there is no standard curriculum you could point me to etc.
So it’s great you’ve found a setup that works for you, but I hope you realize it’s silly to become indignant that I don’t share it.
fsloth | 2 hours ago
So the only time I need the terminal, it’s for something non-obvious.
”There are tons of resources”
This is not a standard curriculum as such though.
I’ve tried to come to terms with posix for 25 years and am so happy I don’t need to anymore. That’s just me!
apignotti | a day ago
For a full-stack demo see: https://vitedemo.browserpod.io/
To get an idea of our previous work: https://webvm.io
otterley | 18 hours ago
johndough | 12 hours ago
~20x slower for a naive recursive Fibonacci implementation in Python (1300 ms for fib(30) in this VM vs 65 ms on bare metal; for comparison, CPython compiled directly to WASM without VM overhead does it in 140 ms).
~2500x slower for 1024x1024 matrix multiplication with NumPy (0.25 GFLOPS in VM vs 575 GFLOPS on bare metal).
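A sketch of how the fib(30) timing above can be taken (not necessarily the exact harness used; absolute numbers are machine-dependent):

```shell
python3 - <<'EOF'
import time

def fib(n):
    # Naive exponential-time recursion, deliberately chosen as a CPU stress test
    return n if n < 2 else fib(n - 1) + fib(n - 2)

t0 = time.perf_counter()
assert fib(30) == 832040
print(f"fib(30): {(time.perf_counter() - t0) * 1000:.0f} ms")
EOF
```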
apignotti | 12 hours ago
WebVM is based on x86 emulation and JIT compilation, which at this time lowers vector instructions to scalar code. This explains the slowdowns you observe. WebVM is still much faster than v86 in most cases.
BrowserPod is based on a pure WebAssembly kernel and WebAssembly payload. Performance is close to native speed.
johndough | 10 hours ago
apignotti | 10 hours ago
johndough | 9 hours ago
The performance is pretty amazing. fib(35) runs in 60 ms, compared to 65 ms in Node.js on the desktop.
But I can't find a shell. Is there only support for NodeJS at the moment?
apignotti | 9 hours ago
See the launch blog post for our full timeline: https://labs.leaningtech.com/blog/browserpod-10
Also, could I ask you to quickly edit your previous comment to clarify you were benchmarking against the older project?
johndough | 8 hours ago
the_mitsuhiko | 23 hours ago
That exists: https://github.com/container2wasm/container2wasm
Unfortunately I found the performance to be enough of an issue that I did not look much further into it.
stingraycharles | 18 hours ago
pancsta | 11 hours ago
d_philla | 23 hours ago
progrium | 23 hours ago
Apptron uses v86 because it's fast. I would love for somebody to add 64-bit support to v86. However, Apptron is not tied to v86. We could add Bochs, like c2w does, or even JSLinux for 64-bit; I just don't think it would be fast enough to be useful for most.
Apptron is built on Wanix, which is sort of like a Plan9-inspired ... micro hypervisor? Looking forward to a future where it ties different environments/OS's together. https://www.youtube.com/watch?v=kGBeT8lwbo0
kantord | 21 hours ago
tl;dr: devcontainers let you completely containerize your development environment. You can run them natively on Linux, on rented machines (from providers such as GitHub Codespaces), or in a VM (which is what you'll be stuck with on a Mac anyway, though reportedly performance is still great).
All CLI dev tools (including things like Neovim) work out of the box, and many/most GUI IDEs support working with devcontainers. In that case the GUI is usually not containerized, or at least doesn't live in the same container, although on Linux you can do that too with Flatpak. GitHub Codespaces, for instance, runs VS Code fully in the browser, which is another way to sandbox things on both ends.
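A minimal sketch of the setup, assuming Docker plus the reference `@devcontainers/cli` are installed (the image name comes from the official devcontainers images; everything else here is illustrative):

```shell
npm install -g @devcontainers/cli

mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{ "image": "mcr.microsoft.com/devcontainers/base:ubuntu" }
EOF

# Build/start the container for the current workspace, then run a command in it
devcontainer up --workspace-folder .
devcontainer exec --workspace-folder . -- uname -a
```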
stavros | 21 hours ago
Do you know if there's a cli or something that would make this easier? The GitHub org seems to be more focused on the spec.
bakugo | 21 hours ago
iamjackg | 20 hours ago
pelcg | 20 hours ago
Even in this thread alone https://news.ycombinator.com/item?id=47314929 some commenters are clearly annoyed with the way AI is being shoved into every place where they do not want it.
I don't care, but I can see why many here are getting tired of it.
dang | 2 hours ago
https://news.ycombinator.com/newsguidelines.html
zitterbewegung | 20 hours ago
Besides, prompt injection or simpler exploits should be addressed before building a virtual computer in a browser, and if you are simulating a whole computer you take a huge performance hit as another trade-off.
On the other hand, using the browser sandbox, which also offers the UI/UX that the foundation models have in their apps, would ease their own development time and be an easy win for them.
johnhenry | 19 hours ago
ZeWaka | 17 hours ago
andai | 17 hours ago
(I assume this works on Macs too, both being Unixes, roughly speaking :)
repstosb | 12 hours ago
Well, there it is, the dumbest thing I'll read on the internet all week.
Most of the engineering in Linux revolves around efficiently managing hardware interfaces to build up higher-level primitives, upon which your browser builds even higher-level primitives, that you want to use to simulate an x86 and attached devices, so you can start the process again? Somewhere (everywhere), hardware engineers are weeping. I'll bet you can't name a single advantage such a system would have over cloud hosting or a local Docker instance.
Even worse, you want this so your cloud-hosted imaginary friend can boil a medium-sized pond while taking the joyful bits of software development away from you, all for the enrichment of some of the most ethically-challenged members of the human race, and the fawning investors who keep tossing other people's capital at them? Our species has perhaps jumped the shark.
simonw | 7 hours ago
Rude.
In case you're open to learning, here's why I think this is useful.
The big lesson we've learned from Claude Code, Codex CLI et al over the past twelve months is that the most useful tool you can provide to an LLM is Bash.
Last year there was enormous buzz around MCP - Model Context Protocol. The idea was to provide a standard for wiring tools into LLMs, then thousands of such tools could bloom.
Claude Code demonstrated that a single tool - Bash - is actually much more interesting than dozens of specialized tools.
Want to edit files without rewriting the whole thing every time? Tell the agent to use sed or perl -e or python -c.
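For instance, a targeted in-place edit might look like this (the file and setting names are invented for illustration):

```shell
# A hypothetical config file the agent needs to tweak
printf 'debug = true\nport = 8080\n' > app.conf

# Flip one flag in place with sed instead of rewriting the whole file
sed -i 's/debug = true/debug = false/' app.conf

# Or make the edit with an inline Python one-liner
python3 -c "
import pathlib
p = pathlib.Path('app.conf')
p.write_text(p.read_text().replace('port = 8080', 'port = 9090'))
"
```

(That's GNU sed; BSD/macOS sed wants `-i ''`.)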
Look at the whole Skills idea. The way Skills work is you tell the LLM "if you need to create an Excel spreadsheet, go read this markdown file first and it will tell you how to run some extra scripts for Excel generation in the same folder". Example here: https://github.com/anthropics/skills/tree/main/skills/xlsx
That only works if you have a filesystem and Bash style tools for navigating it and reading and executing the files.
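A toy sketch of that layout (paths and contents made up, loosely modeled on the linked repo): the agent discovers and uses the skill with the same generic tools it uses for everything else.

```shell
mkdir -p skills/xlsx/scripts
cat > skills/xlsx/SKILL.md <<'EOF'
To create an Excel file, run: python3 scripts/make_xlsx.py OUT.xlsx
EOF

# The "tool call" is just ordinary shell: list skills, read the instructions
ls skills/
cat skills/xlsx/SKILL.md
```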
This is why I want Linux in WebAssembly. I'd like to be able to build LLM systems that can edit files, execute skills and generally do useful things without needing an entire locked down VM in cloud hosting somewhere just to run that application.
Here's an alternative swipe at this problem: Vercel have been reimplementing Bash and dozens of other common Unix tools in TypeScript purely to have an environment agents know how to use: https://github.com/vercel-labs/just-bash
I'd rather run a 10MB WASM bundle with a full existing Linux build in it than reimplement it all in TypeScript, personally.
lioeters | 6 hours ago
We'll get there I'm sure of it. In case you hadn't seen: https://github.com/edubart/webcm
> Linux RISC-V virtual machine, powered by the Cartesi Machine emulator, running in the browser via WebAssembly
> a single 32MiB WebAssembly file containing the emulator, the kernel and Alpine Linux operating system. Networking supports HTTP/HTTPS requests, but is subject to CORS restrictions
simonw | 5 hours ago
lioeters | 5 hours ago
thepasch | 6 hours ago
Quick question: by "joyful bits of software development," do you mean the bit where you design robust architectures, services, and their communication/data concepts to solve specific problems, or the part where you have to assault a keyboard for extended periods of time _after_ all that interesting work so that it all actually does anything?
Because I sure know which of these has been "taken from me," and it's certainly not the joyful one.
yjftsjthsd-h | 3 hours ago
Cheaper than renting a server, more isolated than a container.
shevy-java | a day ago
AlecMurphy | a day ago
zb3 | 23 hours ago
Even though it has no JIT. Truly magic :)
stjo | 21 hours ago
brucehoult | 20 hours ago
x86_64:
x86 (i.e. 32 bit):
riscv64:
Conclusion: as seen also in QEMU (also started by Bellard!), RISC-V is a *lot* easier to emulate than x86. If you're building code specifically to run in emulation, use RISC-V: builds faster, smaller code, runs faster.
Note: quite different gcc versions, with x86_64 being 15.2.0, x86 9.3.0, and riscv64 7.3.0.
[1] http://hoult.org/primes.txt
dmitrygr | 20 hours ago
brucehoult | 19 hours ago
Also MIPS code is much larger.
dmitrygr | 8 hours ago
thesz | 7 hours ago
There are two interesting ISA differences between MIPS and RISC-V: MIPS does not have branch on arbitrary conditions, only on zero/non-zero, and MIPS has 16-bit immediates with appropriate extension (all zeroes for ORI, all ones for ANDI). The first difference makes MIPS programs about 10% larger; the second makes them smaller, by a percent or so, I think (RISC-V immediates are effectively 11.5 bits due to mandatory sign extension, while 13 bits would be needed to cover 95% of immediates in a MIPS-like scheme).
anthk | 10 hours ago
http://blog.schmorp.de/2015-06-08-emulating-linux-mips-in-pe...
vexnull | 19 hours ago
brucehoult | 16 hours ago
> newer gcc versions have significantly better optimization passes
So what you're saying is that with a modern compiler RISC-V would win by even more?
TBH I doubt much has changed with register allocation on register-rich RISC ISAs since 2018. On i386, yeah, quite possible.
saagarjha | 9 hours ago
I don't really think this bears out in practice. RISC-V is easy to emulate, but that does not make it fast to emulate. Emulation performance is largely dominated by other factors, in which RISC-V has no unique advantage.
lxgr | 7 hours ago
camel-cdr | 5 hours ago
testifye | 16 hours ago
joey5403 | 11 hours ago
hashkitly | 10 hours ago
bonzini | 10 hours ago
(For APX I have patches at https://lore.kernel.org/qemu-devel/20260301144218.458140-1-p... but I have never tested them on system emulation).
sylware | 8 hours ago
lxgr | 7 hours ago
Or are modern JS JITs so good that this is no longer a relevant distinction, i.e. is the performance of a JITted x86 interpreter effectively equivalent to a JITting x86-to-Javascript translator where the result is then itself JIT interpreted?
cxplay | 6 hours ago
bvrmn | 5 hours ago
lasgawe | 2 hours ago