I'm curious what practical purpose you could have for running a js execution engine in an environment that already contains a (substantially faster) js execution engine? Is it just for the joy of doing it (if so good for you, absolutely nothing wrong with that).
It allows you, for example, to create bindings, as I did for the raylib graphics library. exaequOS can run any program that can be built to WebAssembly. It will soon support WASI p1 and p2, so many programming languages will be usable for creating programs targeting exaequOS.
Is there not a way to use the browser's native js execution environment for that? You lose a non-trivial amount of performance running js inside quickjs inside of wasm vs the browser's native js engine. I wouldn't be surprised if that's 10 or even 20 times slower, and of course it requires loading more code into the browser (slower startup, more ram usage). Maybe you don't care about that, but all of that is pretty orthogonal to the environments an embedded engine like this is intended for.
Maybe with plugins. The WebAssembly way is cross-platform. You would be very surprised by the performance of WebAssembly. I have built a Fibonacci test program in Rust that runs faster when built for WASI than for the native target on my MacBook.
This is because the execution is very predictable, so the JIT in the runtime can emit optimized code with the knowledge of how the code is going to run. Embedding unpredictable code (like a JavaScript interpreter) typically has substantially worse performance when executing under a JIT. This is in addition to the fact that QuickJS (despite being pretty good) can't match the performance of sophisticated JIT implementations like V8 or JavaScriptCore.
WebAssembly also runs in places other than the web, where there isn't a JavaScript interpreter at hand. It'd be nice to have a fast JavaScript engine that integrates inside the WebAssembly sandbox, and can call and be called by other languages targeting WebAssembly.
That way, programs that embed WebAssembly in order to be scriptable can let people use their choice of languages, including JavaScript.
It's unfortunate that he uploaded this without a notable commit history; it would be interesting to see how long it takes a programmer of his caliber to bring up a project like this.
That said, judging by the license file this was based on QuickJS anyway, making it a moot comparison.
He is extremely productive in his specialty: the intersection of programming languages and systems programming. I don't think that makes him superhuman.
It's more a model of what a really talented person who applies themselves building things they enjoy building can do.
I prefer thinking of it this way: if Bellard can make a small JS engine from scratch by himself, what's really stopping you from knocking out that library you are thinking about?
If he's anything like me (doubtful but roll with it), the commit history when prototyping is probably something like "commit", "commit", "fixed a bug", etc.
If there were some form of "developed contributions to computing" award, his name would definitely be up there. I think there could be a need for such an award - for people who have reliably created the foundations of modern computing. Otherwise it's almost always things from an academic context, which can be a little too abstract.
I think this is such an important point. I know all about Bellard's main works. I actually have no idea what he looks like, I've also never seen an interview with him, and I've never read about his specific philosophies when it comes to different software engineering topics. In a world of never-ending bloviations from "influencers" and "thought leaders" it's so awesome to see a real example of true excellence.
The Turing Award is given for breakthroughs in computer science, not for "most productive programmer of all time", and it wouldn't be appropriate for Bellard.
Between ffmpeg and qemu, I always think of https://xkcd.com/2347/ when I see Fabrice's work. Especially since ffmpeg provides the backbone of almost all video streaming systems today.
Being an engineer and coding at this stage/level is just remarkable. Sadly this tradecraft is missing in most (big?) companies as you get promoted away into oblivion.
I mean, you can do all that now, so that's not the problem. The problem would be convincing millions of people to switch, when 99.99999% of them couldn't care less.
My idea is to use Markdown over HTTP(S). It's relatively easy to implement a Markdown renderer, compared to an HTML renderer. It's possible to browse that kind of website with an HTML browser via a very simple wrapper on either the client or server side, so it's backwards compatible. It's rich enough for a lot of websites with actually useful content.
Now I know that Markdown generally can include HTML tags, so probably it should be somewhat restricted.
It could allow a second web to be implemented in a way that stays compatible with simple browsers.
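To give a feel for how thin that wrapper could be, here's a toy server-side sketch in Node.js that turns a tiny Markdown subset (ATX headings, links, plain paragraphs) into HTML for legacy browsers. It deliberately ignores everything else, including the embedded-HTML question above:

// toy wrapper: serve Markdown sites to plain-HTML browsers (Node.js, no dependencies)
const http = require("node:http");

function mdToHtml(md) {
  const inline = s => s
    .replace(/&/g, "&amp;").replace(/</g, "&lt;")                 // escape raw HTML first
    .replace(/\[([^\]]+)\]\(([^)]+)\)/g, '<a href="$2">$1</a>');  // [text](url)
  return md.split(/\n{2,}/).map(block => {
    const h = block.match(/^(#{1,6})\s+(.*)$/);                   // ATX headings
    if (h) return "<h" + h[1].length + ">" + inline(h[2]) + "</h" + h[1].length + ">";
    return "<p>" + inline(block) + "</p>";
  }).join("\n");
}

http.createServer((req, res) => {
  // a real wrapper would fetch req.url from the Markdown-only origin instead
  const md = "# Hello\n\nPlain text with a [link](https://example.com).";
  res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
  res.end(mdToHtml(md));
}).listen(8080);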
HTML4 + CSS (2?) + JavaScript is already a huge platform, very much not trivial to implement. Like, you can already do something like that with niche browsers like Links, but it's obviously not working, so something else is needed...
A few too many 9s there I think. You're estimating that only 1 person in every 10 million could care less. So that's fewer than 50 such people in the USA, for example.
But most "apps" are just webviews running overcomplicated websites in them, many of which are using all the crazy features that the GP post wants to strip out.
And, I don't have to run a binary to try your product. The web has a lot of flaws, but it's a good way to deliver properly sandboxed applications with low hassle on the part of the user. I've built my fair share of native vs web apps, and I vastly prefer working on web apps. As a user, I vastly prefer web apps for most things. Not all things, but most. No, I don't want to install your crappy app on my computer and risk you doing something irresponsible. I'll keep you sandboxed in a browser tab that I can easily "uninstall" by closing.
I have several web apps installed over the native alternatives. Discord is the most prominent one; I've found their native app has been getting shittier by the day over recent months, while the web app remains as snappy as any Safari page. Plus I can run an adblocker and other extensions in the web app which improve the experience.
I will pick a web app over a proprietary "native" app every time. That way, it can stay in a sandbox where it belongs. Discord, Zoom, Meet, Trello, YouTube, and various others, all stay in sandboxed browser tabs.
Well worth it. Even the very best web apps struggle to be as good as a decent native app, let alone mediocre web apps. The native operating system blows the web out of the water as an app platform.
This would never happen because there's zero incentive to do this.
Browsers are complex because they solve a complex problem: running arbitrary applications in a secure manner across a wide range of platforms. So any "simple" browser you can come up with just won't work in the real world (yes, that means being compatible with websites that normal people use).
> that means being compatible with websites that normal people use
No, new adhering websites would emerge and word of mouth would do the rest : normal people would see this fast nerd-web and want rid of their bloated day-to-day monster of a web life.
Just like all those normal people want rid of their bloated day-to-day monster of a web and therefore go and do something like, say, install an ad blocker?
Oh right. 99% of people don't do even that, much less switch their life over to entirely new websites.
In 2025, depending on the study, somewhere between 31.5% and 42.7% of internet users now block ads. Nearly one-third of Americans (32.2%) use ad blockers, with desktop leading at 37%.
I have to disagree; AMP showed that even Google had an internal conflict with the results of WHATWG. It's naturally quite hard to reach agreement on a subset when many parties will prefer to fall back to everything, but there are situations like the first iPhone, ebooks, TV browsing, etc., where normal people buy simpler things and the groups that use the simpler subset achieve more in total than those stuck with the complex-only format.
(There are even a lot of developers who will drop a feature as soon as 10% of users bring its support stats on caniuse.com down below ~90%.)
I think both wearables and AI assistants could be an incentive on one hand, also pushing towards a more HATEOAS web. However, I guess we haven't really figured out how to replace ad revenue as the primary incentive to make things as complex as possible.
In the earlier days of the web, there were a lot more plugins you'd install to get around on most websites: not just Flash, but things like PDF viewers, Real Video, etc. You'd regularly have to install new codecs, etc. To say nothing of the days when there were some sites you'd have to use a different browser for. A movement towards more of a standards-driven web (in the sense of de facto, not academic, standards) is what made most of this possible.
I think there needs to be a split between the web browser as a document renderer and link follower, and the web browser as a portable target for GUI applications. But frankly my biggest gripe is that you need HTML, JS, and CSS: three distinct languages that are extremely dissimilar in syntax and semantics, and you need all three of them (or some bastard cross-compiler for your JSX to convert from one format into them). Just make a decent scripting language and interface for the browser and you don't need that nonsense.
I understand this has been tried before (flash, silverlight, etc). They weren't bad ideas, they were killed because of companies that were threatened by the browser as a standard target for applications.
I think this is the ideal direction, mainly because a lot of the web's current tech problems stem from websites that don't need app-level features using them. I was in web dev at the advent of SPA-style navigation and understand why everyone switched to it, but at the same time I feel like it's the source of many if not most of the bugs and performance issues that frustrate the average user.
Years ago I wrote a tiny xhtml-basic browser for a job. It was great. Some of my best work. But then the iPhone came out and xhtml-basic died practically overnight.
There could be a way:
This HTML-lite spec would be a subset of the current standard, so that if you open an HTML-lite page in a normal browser it would still work, but an HTML-lite browser would only open HTML-lite sites. Apart from scratching a tech itch, it could be used in places where a full browser isn't needed, especially if you control the content generation:
- TV screens UI
- the embedded Chromium thing some game engines ship (the Steam store page kind of thing)
- some electron apps / lighter cross platform engine
- less sucky QML
- I think WeChat or something has its own XML-based app framework (so this could be useful to people wanting to build an everything-app platform)
- a much richer Markdown format?
WML/WAP got a bad rap I think, largely because of the way it was developed and imposed/introduced.
But it was not insane, and it represented a clarity of thought that then went missing for decades. Several things that were in WML are quite reminiscent of interactions designed in web components today.
Lots of comments talking about how existing browsers can already do this, but the big benefit that current browsers can't give you is the sheer level of speed and efficiency that a highly restricted "lite web" browser could achieve, especially if the restrictions are made with efficiency in mind.
The embedded use case is obvious, but it'd also be excellent for things like documentation — with such a browser you could probably have a dozen+ doc pages open with resource usage below that of a single regular browser tab. Perfect for things that you have sitting open for long periods of time.
Better not. That already exists for QuickJS and QuickJS-NG, and parsing JS is no light task by any means. Even Edbrowse https://github.com/cmb/edbrowse can grind an N270 netbook to a halt because of some sites with JS (both with qjs and qjs-ng). So Dillo would be no better.
Also, legacy machines couldn't run it as fast as they'd need to.
Is MQJS faster or lighter than other engines, though? It says the engine itself takes very little memory, but that doesn't say how it performs running all that bloated JS out there. Well, it also has "quick" in the name.
How it performs with existing JS doesn’t really matter in the context of my post, though.
For a “lite web” browser that’s built for a thin, select slice of the web stack (HTML/CSS/JS), dragging around the heft of a full fat JS engine like V8 is extreme overkill, because it’s not going to be running things like React but instead enabling moderate use of light enhancing scripts — something like a circa-2002 website would skew toward the heavy side of what one might expect for a “lite web” site.
The JS engine for such a browser could be trimmed down and aggressively optimized, likely even beyond what has been achieved with MQJS and similar engines, especially if one is willing to toss out legacy compatibility and not keep themselves beholden to every design decision of standard JS.
I would actually merge HTML and JS into a single language and bring in the layout part of CSS too (something like having grid and flexbox be elements themselves instead of display styles; Typst kind of showed this is possible in a nice way), and keep CSS only for the styling part.
Not likely to happen. There is geminiprotocol with gemtext though for those of us that are fine with that level of simplicity.
Work towards an eventual feature freeze and final standardisation of the web would be fantastic though, and a huge benefit to pretty much everyone other than maybe the Chrome developers.
Be the change you want to see in the world. If you want to use a specific subset of HTML, CSS and JS, go ahead, make a website using it and offer a browser for similar-spec sites.
On a phone at the moment so I can't try it out, but in regards to this "stricter mode" it says global variables must be declared with var. I can't tell if that means that's just the only way to declare a global or if not declaring var makes it scoped in this mode. Based on not finding anything skimming through the examples, I assume the former?
It also talks about the global object not being a place to add properties. So where you might do `window.foo = ...` or `globalThis.foo = ...` to make something from the local context global, in this dialect I guess you would have to declare any global you wanted to set with a `var` up front and then set it by assignment, e.g.
// global, initialized by SomeConstructor
var fooInstance;

class SomeConstructor {
  constructor() {
    // assign to the pre-declared global rather than adding a property to globalThis
    fooInstance = this;
  }
  static getInstance() {
    if (fooInstance != null) return fooInstance;
    return new SomeConstructor();
  }
}
Clarification added later: One of my key interests at the moment is finding ways to run untrusted code from users (or generated by LLMs) in a robust sandbox from a Python application. MicroQuickJS looked like a very strong contender on that front, so I fired up Claude Code to try that out and build some prototypes.
I had Claude Code for web figure out how to run this in a bunch of different ways this morning. I have working prototypes of calling it as a Python FFI library (via ctypes), as a compiled Python module, and compiled to WebAssembly and called from Deno, Node.js, Pyodide, and Wasmtime: https://github.com/simonw/research/blob/main/mquickjs-sandbo...
Down to -4. Is this generic LLM-dislike, or a reaction to perceived over-self-promotion, or something else?
No matter how much you hate LLM stuff I think it's useful to know that there's a working proof of concept of this library compiled to WASM and working as a Python library.
I didn't plan to share this on HN but then MicroQuickJS showed up on the homepage so I figured people might find it useful.
(If I hadn't disclosed I'd used Claude for this I imagine I wouldn't have had any down-votes here.)
Forget about the AI bit. Do you think it's interesting that MicroQuickJS can be used from Python via FFI or as a compiled module, and can also be compiled to WebAssembly and called from Node.js and Deno and from Pyodide running in a browser?
... and that it provides a useful sandbox in that you can robustly limit both the memory and time allowed, including limiting expensive regular expression evaluation?
I included the AI bit because it would have been dishonest not to disclose how I used AI to figure this all out.
Usually I watch your stuff very closely (and positively) because you're pushing the edges of how LLMs can be useful for code (and are a lot more honest/forthright than most enthusiasts about it Going Horribly Wrong and how much work you need to do to keep on top of it). This one... looks like a grab bag of random things that don't seem like things anyone would actually want to do? Mentioning the sandboxing bit in the first post would have helped a lot, or anything that said why those particular modes are interesting.
I have been in a similar boat, and here is some software I recommend or would discuss with you:
https://github.com/libriscv/libriscv (I talked with the author of this project, fwsgonzo, who is amazing), and they told me that this has the lowest latency of any sandbox they know of, at only a minor cost in performance.
Btw, for sandboxing, KVM itself feels good too. I had discussed it with them in their Discord server when they mentioned they were working on a minimal KVM server, which has since been open-sourced (https://github.com/varnish/tinykvm).
Honestly Simon, Deno hosting / the way Deno works is another good, interesting tidbit for sandboxing. I wish something like Deno's sandboxing capabilities came to Python, since Python fans would appreciate it.
I will try to look more into your GitHub repository too once I get more free time.
> Unfortunately it means those languages will be the permanent coding platforms.
not really,
I suspect training volume has a role in debugging a certain class of errors, so there is an advantage to python/ts/sql in those circumstances: if, as an old boss once told me, you code by the bug method :)
The real problems I've had that hint at training data vs logic have been with poorly documented old versions of current languages.
To me, the most amazing capability is not the code they generate but the facility for natural language analysis.
my experience is that agent tools enable polyglot systems because we can now use the right tool for the job, not just the most familiar.
It's interesting but I don't think it belongs as a comment under this post. I can use LLMs to create something tangential for each project posted on HN, and so can everyone else. If we all started doing this then the comment section will quickly become useless and not on point.
Offtopic, but I went to your website and saw that you created hackernews-mute; recently I was commenting about how someone must have created such an extension and ranted about it. So kudos to you for having created it earlier on.
Have a nice day! Awesome stuff; I'll keep an eye on your blog. Does your blog/website use Mataroa by any chance? There are some similarities, even if both are minimalist. Overall, very nice!
Thank you! I don't use Mataroa but I can see the similarities. My current blog setup is a Python script that parses content written in markdown and emits HTML. The CSS is inspired by the other minimal blogs I see here.
Thanks a lot for checking out my blog/project. Have a great day!
but there is signal in what people are inspired to do upon seeing a new project-- why not simply evaluate the interestingness level of these sorts of mashups on their own terms? it actually feels very "hacker"-y to go out and show people possibilities like this. i have no particular comment on how "interesting" the derivative projects are in this case, but i have a feeling that if his original post had been framed more like "i think it's super interesting how easy it is to use via FFI on various runtimes X & Y (oh btw in the spirit of full transparency: i used ai to help me, and you can see more details at <link>)", it would have landed better. especially because i think everyone who peruses HN with some regularity is likely to know of simon's work in at least some capacity, and i am-- speaking only for myself-- essentially always interested in any sort of project he embarks on, especially his llm-assisted experiments and stuff. but hey-- at the end of the day, all of this value judgment is realized as plainly as possible with +1 / -1 (up- and down-vote) and i guess it just is what it is. if number bad, HN no like. shrug.
I agree that there is signal, and that phrasing his original post as you pointed out would have been better.
My issue is that the cost, in terms of time, for these experiments have really gone down with LLMs. Earlier, if someone played around with the posted project, we knew they spent a reasonable amount of time on it, and thus care about the subject. With LLMs, this is not the case.
That’s true - the assumed time is different now. We have to judge it on the content/findings of the experiment, rather than the fact that someone experimented with it. I share your frustration with random GitHub repos though. It used to be that if someone created a new GitHub repository with a few commits, there was likely to be some intelligence or quality behind it, but now I commonly stumble across vibe-coded slop with AI-slop READMEs. So maybe you are describing a similar reaction here in HN posts.
Tangential would be “I wrote a Fibonacci function in this and it worked, just like it said on the tin!”
Compiling this to wasm and calling it from python as a sandboxed runtime isn’t tangential. I wouldn’t have known from reading the project’s readme that this was possible, and it’s a really interesting use case. We might as well get mad at simonw for using an IDE while he explored the limits of a new library.
Simon, although I find it interesting, and I respect you in this field, I still feel like the reason people call out AI usage or downvote in this case is that, in my honest opinion, it would also be more interesting to see people actually write the code, and more so maintain it, and create a whole community/GitHub project around MicroQuickJS wasm itself.
I read this post of yours https://simonwillison.net/2025/Dec/18/code-proven-to-work/ and although there is a point to be made that what you are doing isn't a job, and I myself create prototypes of code using AI, long term (in my opinion) what really matters is the maintenance and the claim (like your article says, in a way) that I can pinpoint a person who's responsible for the code working.
If I find any bug right now, I wouldn't blame it on you but on the AI, and I have a varying amount of trust in it.
My opinion on the matter is that AI can be considered good for prototyping, but long term it definitely isn't, and I am sure that you share a similar viewpoint.
Perhaps you could write a blog post about the nuances of AI? I imagine that a lot of people might share a similar AI policy where it's okay to tinker with it. I am part of the new generation and, truth be told, I don't think there is much long-term incentive unless someone realizes the value of not using AI, because using AI just feels so lucrative, especially for the youngsters.
I am 17 years old and I am going into a decent college (with, might I add, immense competition to begin with). I have passion for such topics, only to get dissuaded because the benchmark of solving assignments etc. is now met by AI, and the signal from universities themselves is shrinking. They haven't shrunk to the point of being irrelevant; rather, you still need a university to try to get a job, but companies have frozen hiring, which some attribute to LLMs.
If you ask me, long term it feels like more people might associate themselves with hobbyist computing, even using AI (to be honest, sort of like PewDiePie), without being in the industry.
I am not sure what the future holds for me (or for any of us as a matter of fact) but I guess the point I am trying to state is that there is nuance to the discussion from both sides
No, I mean post it as an HN post, and if anybody cares to see it, they'll upvote that and comment in there. That, instead of piggybacking on other posts to get visibility.
I love it; I find the note interesting and educational, and it adds to the discussion in context. Guess you're bound to get a few haters when you share your work in public, but I for one appreciate all your posts, comments, articles, open-source projects.
Your tireless experimenting (and especially documenting) is valuable and I love to see all of it. The avant garde nature of your recent work will draw the occasional flurry of disdain from more jaded types, but I doubt many HN regulars would think you had anything but good intentions! Guess I am basically just saying.. keep it up.
Your github research/ links are an interesting case of this. On one hand, late AI adopters may appreciate your example prompts and outputs. But it feels like trivially reproducible noise to expert LLM users, especially if they are unaware of your reputation for substantive work.
The HN AI pushback then drowns out your true message in favor of squashing perceived AI fluff.
This particular case is a very solid use-case for that approach though. There are a ton of important questions to answer: can it run in WebAssembly? What's the difference to regular JavaScript? Is it safe to use as a sandbox against attacks like the regex thing?
Those questions can be answered by having Claude Code crunch along, produce and execute a couple of dozen files of code and report back on the results.
I think the knee-jerk reaction pushing back against this is understandable. I'd encourage people not to miss out on the substance.
And again you're linking to your site. Maybe try pasting the few relevant sentences instead of constantly pushing your content in almost every comment. That's what people find annoying. Maybe link to other people's stuff more, or just write what you think here on HN.
If someone wants to read your blog, they will, they know it exists, and some people even submit your new articles here. There's no need to do what you're doing. Every day you're irritating more people with this behavior, and eventually the substance won't matter to them anymore, so you're acting against your own interests.
Unless you want people to develop the same kind of ad blindness mechanism, where they automatically skip anything that looks like self promotion. Some people will just see a comment by simonw and do the same.
A lot of people have told you this in many threads, but it seems you still don’t get it.
I think you're misreading what the "normalization" problem actually is and why my comment got a lot of upvotes.
You're not pushing against an arbitrary taboo where people dislike self-links in principle. People already accept self-links on HN when they're occasional and clearly relevant. What people are reacting to is the pattern: when "my answer is a link to my site" becomes your default state, it stops reading like a helpful reference and starts reading like your distribution strategy.
And that's why "I'm determined to normalize it" probably won't work: you can't normalize your way out of other people's experience of friction. If your behavior reliably adds a speed bump to reading threads, forcing people to context-switch, click out, and wonder if they're being marketed to, then the community will develop the shortcut I mentioned in my previous comment, which basically is: this is self-promo, so just ignore it.
If your goal is genuinely to share useful ideas, you're better off meeting people where they are: put the relevant 2-6 sentences directly in the comment, and then add something like "I wrote more about it on my blog" or whatever and if anyone is interested they will scroll through your blog (you have it in your profile so anyone can find it with one click) or ask for a link.
Otherwise you're not "normalizing" anything, you're training readers to stop paying attention to you. And I assure you once that happens, it's hard to undo, because people won't relitigate your intent every time. They'll just scroll.
It's a process that's already started, but you can still reverse it.
No, I'm determined to normalize it. I would like a LOT more people to have personal websites where they write at length about things, and then share links to what they have already written where it is relevant to the conversation.
I'm actively pushing back against the "don't promote your site, don't link to it, restate your content in the comments instead" thing.
I am willing to take on risk to my personal reputation and credibility in support of my goal here.
There might be a bit of growth-hacking resistance here, and maybe some StackOverflow culture as well. Neither should be leveled at you, IMHO. I've followed and admired your work since Datasette launched, and I think you're exhibiting remarkably good judgment in how you discuss topics with links to deeper discussion, and it's in keeping with a long tradition of good practices for the web. Thanks for working to normalize the practice.
Well, that's your choice. You can do whatever you want with your reputation, but you can't change human nature and that's essentially what you're trying to do. People don't want HN to turn into another LinkedIn style feed full of AI slop, spam and self promotion. That's exactly what your attempt to "normalize" this behavior would encourage (and I'm confident it won't catch on, sorry).
If everyone starts dropping their "relevant content" in the comments, most of it won't be relevant, and a lot of it will be spam. People don't have time to sift through hundreds of links in the comments and tens of thousands of words when the whole point of HN is that discussion and curation work in the opposite direction.
If your content is good, someone else will submit it as a story. Your blog is probably already read by thousands of people from HN; if they think a particular post belongs in the discussion in some comment, they'll link it. That's why other popular HN users who blog don't constantly promote or link their own content here, unlike you. They know that you don't need to do it yourself, and doing it repeatedly sends the wrong signal (which is obvious, and plenty of socially aware people have already pointed it out to you in multiple threads).
Trying to normalize that kind of self promoting is like normalizing an annoying mosquito buzz, most people simply don't want it and no amount of "normalizing" will change that.
This is simonw though. I look forward to his thoughts on a topic and would find it annoying if he was forced to summarize his research in a HN thread and then not link to it.
The difference between LinkedIn slop and good content is not the presence or absence of a link to one’s own writing, but the substance and quality of the writing.
If simonw followed these rules you want him to follow, he would be forced to make obscure references to a blog post that I would then need to Google or hope that his blog post surfaces on HN in the next few days. It seems terribly inefficient.
I agree with you that self-promotion is off-putting, and when people post their competing project on a Show HN post, I don’t click those links. But it’s not because they are linking to something they wrote. It’s because they are engaged in “self-promotion”, usually in an attempt to ride someone else’s coattails or directly compete.
If simonw plugged datasette every chance he got, I’d be rolling my eyes too, but linking to his related experiments and demos isn’t that.
Counterpoint to the sibling comment: posting your own site is fine. Your contributions are substantial, and your site is a well-organized repository of your work. Not everything fits (or belongs) in a comment.
I'd chalk up the -4 to generic LLM hate, but I find examples of where LLMs do well to be useful, so I appreciated your post. It displays curiosity, and is especially defensible given your site has no ads, loads blazingly fast, and is filled with HN-relevant content, and doesn't even attempt to sell anything.
I didn't downvote you. You're one of "the AI guys" to me on HN. The content of your post is fine, too, but even if it were sketchy, I'd've given you the benefit of the doubt.
A lot of HN people got cut by AI in one way or another, so they seem to have personal beefs with AI. I am talking about not only job shortages but also general humbling of the bloated egos.
> I am talking about not only job shortages but also general humbling of the bloated egos.
I'm gonna give you the benefit of the doubt here. Most of us do not dislike genAI because we were fired or "humbled". Most of us dislike it because of a) the terrible environmental impacts, b) the terrible economic impacts, and c) the general non-production-readiness of the results once you get past common, well-solved problems.
Your stated understanding comes off a little bit like "they just don't like it because they're jealous".
It is not about artists per se, it is about manipulative entities. For any manipulation to succeed, one has to create a fog, disorientation, muddy waters.
AI brings clarity. This results in a lot of pain for those who tried to hijack the game in one way or another.
From the psychological point of view, AI is a mirror of one's personality. Depending on who you are, you see different reflections: someone sees a threat, others see the enlightenment.
Do you mean that kind of clarity when no audio/video evidence is a proof of anything anymore?
> This results in a lot of pain for those who tried to hijack the game in one way or another.
I'm not quite sure if any artists, designers, musicians and programmers whose work was used to train AI without their consent tried to manipulate anyone or hijack anything. Care to elaborate?
I think the people interacting with this post are just more likely to appreciate the raw craftsmanship and talent of an individual like Bellard, and coincidentally might be more critical of the machinery that in their perception devalues it. I count myself among them, but didn’t downvote, as I generally think your content is of high quality.
On the contrary, it's pretty possible that LLMs themselves will be perceived as a quaint historic artefact and join the ranks of mechanical turks, zeppelins, segways, google glasses and blockchains.
I appreciate all your work and I did not downvote you. One suggestion, though, is that the README looks very AI generated, which makes the project feel low effort, like you just said “hey Claude do a security analysis of this package”. I don’t think this is actually what you did, but it’s hard to know. It’s also very difficult to identify the highlights. Just a few handwritten sentences would be better.
The README is indeed AI generated, as is everything else in that simonw/research repository - it's my public demo of the asynchronous research process I use.
I don't know why people are downvoting your comment, but it could be considered a low-effort post: here's (a link to) something I prompted AI with, here's (a link to) what it produced (the whole repo).
I would guess people don't know how you expect them to evaluate this, so it comes off as spamming us with a bunch of AI slop.
(That C can be compiled to WASM or wrapped as a python library isn't really something that needs a proof-of-concept, so again it could be understood as an excuse to spam us with AI slop.)
What is the purpose of compiling this to WebAssembly? What WebAssembly runtimes are there where there isn't already an easily accessible (substantially faster) js execution environment? I know wasmtime exists and is not tied to a js execution engine like basically every other WebAssembly implementation, but users of wasmtime aren't restricted from taking on dependencies like v8 or jsc. Usually WebAssembly is used to provide sandboxing, something a js execution environment is already designed to provide, and is only used when the code that requires sandboxing is native code, not javascript. It sounds like a good way to waste a lot of performance for some additional sandboxing, but I can't imagine why you would ever design a system that way if you could choose a different (already available and higher performance) sandbox.
I want to build features - both client- and server-side - where users can provide JavaScript code that I then execute safely.
Just having a WebAssembly engine available isn't enough for this - something has to take that user-provided string of JavaScript and execute it within a safe sandbox.
Generally that means you need a JavaScript interpreter that has itself been compiled to WebAssembly. I've experimented with QuickJS itself for that in the past - demo here: https://tools.simonwillison.net/quickjs - but MicroQuickJS may be interesting as a smaller alternative.
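To make that concrete, the host side ends up looking roughly like this from Node.js. The export names here (mjs_new_runtime / mjs_eval / malloc) are made up for illustration, placeholders for whatever a real build exposes, and are not MicroQuickJS's actual C API; the point is that the untrusted source only ever exists inside the module's linear memory:

const fs = require("node:fs");

async function evalInWasmInterpreter(source) {
  // a real build will also need its expected imports (WASI, etc.); omitted in this sketch
  const { instance } = await WebAssembly.instantiate(fs.readFileSync("mquickjs.wasm"), {});
  const { memory, malloc, mjs_new_runtime, mjs_eval } = instance.exports; // hypothetical names

  // copy the untrusted JS source into the sandbox's linear memory as a NUL-terminated string
  const bytes = new TextEncoder().encode(source + "\0");
  const ptr = malloc(bytes.length);
  new Uint8Array(memory.buffer, ptr, bytes.length).set(bytes);

  // whatever the script does stays inside that linear memory; the host only
  // sees what the engine's C API chooses to hand back
  return mjs_eval(mjs_new_runtime(), ptr);
}

evalInWasmInterpreter("6 * 7").then(console.log);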
If there's a better option than that I'd love to hear about it!
This is generally the purpose of JavaScript execution environments like v8 or jsc (or quickjs although I understand not trusting that as a sandbox to the same degree). They are specifically intended for executing untrusted scripts (eg web browsers). Web assembly’s sandboxing comes from js sandboxing, since it was originally a feature of the same programs for the same reasons. Wrapping one sandbox in another is what I’m surprised by.
Is it though? I have not personally used these libraries, but a cursory google search reveals several options:
- cloudflare/STPyV8: [0] From cloudflare, intended for executing untrusted code.
- Pythonmonkey: [1] Embeds spidermonkey. Not clearly security focused, but sandboxing untrusted code is literally the point of browser js engines.
It's a little less clear how you would do this from node, but the v8 embedding instructions should work https://v8.dev/docs/embed even if nodejs is already a copy of v8.
... whoa, I don't know how I missed it but I hadn't seen STPyV8 before.
I'd seen PyV8 and ruled it out as very unmaintained.
One of my requirements for a sandbox is that it needs to be maintained by a team of professionals who are running a multi-million dollar business on it. Cloudflare certainly counts! I wonder what they use STPyV8 for themselves?
PythonMonkey looks promising too: the documentation says "MVP as of September 2024" so clearly it's intended to be stable for production use at this point.
I’m sure you are aware the sandbox that requires maintaining is v8 itself. Of course there are ways for the wrapper to break the sandbox by providing too much in the global context, but short of that, which the application code could easily do as well, I don’t see why a wrapper should require significant resources to maintain beyond consuming regular updates from upstream. Is there some other reason you hold such a high bar for what is basically just Python glue code for the underlying v8 embed API?
1. The ability to restrict the amount of memory that the sandboxed code can use
2. The ability to set a reliable time limit on execution after which the code will be terminated
My third requirement is that a company with real money on the line and a professional security team is actively maintaining the library. I don't want to be the first person to find out about any exploits!
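For contrast, here's a minimal sketch of what those first two limits look like using nothing but Node's worker_threads. It only illustrates the resource-limit mechanics, not real isolation (the worker can still reach Node APIs), which is exactly why a professionally maintained sandbox library or an interpreter compiled to WebAssembly is attractive:

const { Worker } = require("node:worker_threads");

function runUntrusted(code, { memoryMb = 32, timeoutMs = 1000 } = {}) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(code, {
      eval: true,                                           // treat `code` as the worker script
      resourceLimits: { maxOldGenerationSizeMb: memoryMb }, // requirement 1: memory cap
    });
    const timer = setTimeout(() => {                        // requirement 2: hard time limit
      worker.terminate();
      reject(new Error("time limit exceeded"));
    }, timeoutMs);
    worker.on("message", value => { clearTimeout(timer); resolve(value); });
    worker.on("error", err => { clearTimeout(timer); reject(err); });
  });
}

// the untrusted code reports its result via parentPort
runUntrusted(`
  const { parentPort } = require("node:worker_threads");
  parentPort.postMessage(2 + 2);
`).then(console.log, console.error);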
Oh that looks neat! It appears to have the memory limits I want (engine.MaxIsolateMemory) and a robust CPU limit: sandbox.MaxCPUTime
One catch: the sandboxing feature isn't in the "community edition", so only available under the non-open-source (but still sometimes free, I think?) Oracle GraalVM.
As I noted in another comment Figma has used QuickJS to run JS inside Wasm ever since a security vulnerability was discovered in their previous implementation.
In a browser environment it's much easier to sandbox Wasm successfully than to sandbox JS.
That’s very interesting! Have they documented the reasoning for that approach? I would have expected iframes to be both a simpler and a faster sandboxing mechanism, especially in compute-bound cases. Maybe the communication overhead is too high in their workload?
I'm not an embedded systems guy (besides using esp32 boards) so this might be a dumb question but does something like this open up the possibility of programming an esp32/arduino board with Javascript, like Micro/Circuit Python?
Sort of related: About ten years ago there was a device called the Tessel by Technical Machine which you programmed with Javascript, npm, the whole nine yards. It was pretty clever - the javascript got transpiled to Lua VM bytecode and ran in the Lua VM on the device (a Cortex M3 I believe). I recently had Claude rewrite their old Node 0.8 CLI tools in Rust because I wasn't inclined to do the javascript archeology needed to get the old tools up and running. Of course then I put the Tessel back in its drawer, but fun nonetheless.
This engine restricts JS in all of the ways I wished I could restrict the language back when I was working on JSC.
You can’t restrict JS that way on the web because of compatibility. But I totally buy that restricting it this way for embedded systems will result in something that sparks joy
I bet MQJS will also be very popular. Quite impressive that bro is going to have two JS engines to brag about in addition to a lot of other very useful things!
Yes, quite! Monsieur Bellard is a legend of computer programming. It would be hard to think of another programmer whose body of public work is more impressive than FB.
Unfortunate that he doesn't seem to write publicly about how he thinks about software. I've never seen him as a guest on any podcast either.
I have long wondered who the "Charlie Gordon" who seems to collaborate with him on everything is. Googling the name brings up a young ballet dancer from England, but I doubt that's the person in question.
> It would be hard to think of another programmer whose body of public work is more impressive than FB.
I am of the firm belief that "Monsieur Fabrice Bellard" is not one person but a group of programmers writing under this nom de plume like "Nicolas Bourbaki" was in Mathematics ;-)
I don't know of any other programmer who has similar breadth and depth in so many varied domains. Just look at his website - https://bellard.org/ and https://en.wikipedia.org/wiki/Fabrice_Bellard No self-aggrandizing stuff etc. but only tech. He is an ideal for all of us to strive for.
Watson's comment on how Sherlock Holmes made him feel can be rephrased in this context as;
"I trust that I am not more dense than my neighbours [i.e. fellow programmers], but I was [and am] always oppressed with a sense of my own stupidity in my dealings with [the works of Fabrice Bellard]."
Since you’re on the topic, what ever happened to the multi threading stuff you were doing on JSC? Did it stop when you left Apple? Is the code still in JSC or did it get taken out?
I remember LZEXE from those olden days. When I discovered the author of FFmpeg and QEMU also created LZEXE, I was so impressed. I've been using his software for my entire computing career.
It's similar to the respect I have for the work of Anders Hejlsberg, who created Turbo Pascal, with which I learned to program; and also C# and TypeScript.
Fabrice, if you're reading this, please consider replacing Rust with your own memory-safe language instead.
The design intent of Rust is a powerful idea, and Rust is the best of its class, but the language itself is under-specified[1] which prevents basic, provably-correct optimizations[0]. At a technical level, Rust could be amended to address these problems, but at a social level, there are now too many people who can block the change, and there's a growing body of backwards compatibility to preserve. This leads reasonable people to give up on Rust and use something else[0], which compounds situations like [2] where projects that need it drop it because it's hard to find people to work on it.
Having written low-level high-performance programs, Fabrice Bellard has the experience to write a memory safe language that allows hardware control. And he has the faculties to assess design changes without tying them up in committee. I covet his attentions in this space.
I think Rust might trigger a new generation of languages that are developed with the hindsight of Rust.
The principle of zero cost abstractions avoids a slow slide of compromising abstraction cost, but I think there could be small cost abstractions that would make for a more pragmatic language. Having Rust to point at to show what performance you could be achieving would aid in avoiding bloating abstractions.
I thought Bellard might be behind even llama.cpp (that would be completely expected for Bellard) but it's actually another great who's done that: Georgi Gerganov: https://github.com/ggerganov
For all the praise he gets here, few seem interested in his methods: writing complete programs, based on robust computer science, with minimal dependencies and tooling.
When I first read the source for his original QuickJS implementation I was amazed to discover he created the entirety of JavaScript in a single xxx thousand line C file (more or less).
That was a sort of defining moment in my personal coding; a lot of my websites and apps are now single file source wherever possible/practical.
I honestly think the single file thing is best reserved for C, given how bad the language support for modularity is.
I've had the inverse experience dealing with a many thousand line "core.php" file way back in the day helping debug an expressionengine site (back in the php 5.2ish days) and it was awful.
Unless you have an editor which can create short links in a hierarchical tree from semantic comments to let you organize your thoughts, digging through thousands of lines of code all in the same scope can be exceptionally painful.
C has no problems splitting programs in N files, to be honest.
The reason FB (and myself, for what it is worth) often write large single-file programs (Redis was split after N years of being a single file) is that with enough programming experience you know one very simple thing: complexity is not about how many files you have, but about the internal structure and conceptually separated module boundaries.
At some point you mainly split for compilation time and to orient yourself better, instead of having to seek around a very large mega-file. Pointing to some program as well written because it's a single file strongly correlates with not being a very expert programmer.
I split to enforce encapsulation by defining interfaces in headers based on incomplete structure types. So it helps me with the conceptually separated module boundaries. Super fast compilation is another benefit.
The file granularity you chose was at the perfect level for somebody to approach the source code and understand how Redis worked. It was my favorite codebases to peruse and hack. It’s been a decade and my memory palace there is still strong.
It reminded me how important organization is to a project and certainly influenced me, especially applied in areas like Golang package design. Deeply appreciate it all, thank you.
It may not be immediately obvious how to approach modularity since it isn't directly accomplished by explicit language features. But, once you know what you're doing, it's possible to write very large programs with good encapsulation, that span many files, and which nevertheless compile quite rapidly (more or less instantaneously for an incremental build).
I'm not saying other languages don't have better modularity, but to say that C's is bad misses the mark.
Unironically, JavaScript is quite good for single-file projects (albeit a package.json is usually needed).
You can do a huge website entirely in a single file with NodeJS; you can stick re-usable templates in vars and abuse multi-line strings (template literals) for all your various content and markup. If you get crafty you can embed client-side code in your 'server.js' too, or take it to the next level and use C++ multi-line string literals to wrap all your JS, i.e. client.js, server.js and package.json, in a single .cpp file.
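A minimal sketch of that pattern, using only Node built-ins (the template lives in a var, the markup and a bit of client-side code live in one multi-line template literal):

// server.js -- an entire (tiny) site in one file
const http = require("node:http");

// re-usable "template" kept in a variable; one template literal holds all the markup
const page = ({ title, body }) => `<!doctype html>
<html>
  <head><meta charset="utf-8"><title>${title}</title></head>
  <body>
    <h1>${title}</h1>
    ${body}
    <script>
      // client-side code can live in the same string too
      document.body.append(" (rendered at " + new Date().toLocaleTimeString() + ")");
    </script>
  </body>
</html>`;

http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
  res.end(page({ title: "Single-file site", body: "<p>Everything lives in server.js.</p>" }));
}).listen(8080, () => console.log("http://localhost:8080"));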
Is there any reasonably large, more or less meaningful single-source project (or a normal one with an amalgamation version) that could be compiled directly with rustc -o executable src.rs? Just to compare build time / memory consumption.
Yes, that's why I asked about possible Rust support for creating such a version of a normal project. The main issue is that I'm unaware of comparably large Rust projects without third-party dependencies.
I believe ripgrep has only or mostly dependencies that the main author also controls. It's structured so that ripgrep depends on regex crates by the same author, for example.
Because he chose the hardest path. Difficult problems, no shortcuts, ambitious, taking time to complete. Our environment in general is the opposite of that.
We spend a lot of time doing busy work that's part of the process but doesn't actually move the needle. We write a lot of code that manages abstractions, but doesn't do a lot. All of this busy work feels like progress, but it's avoiding the hard work of actually writing working code.
We underestimate how inefficient working in teams is compared with individuals. We don't value skill and experience and how someone who understands a problem well can be orders of magnitude more productive.
I agree: he loves to "roll your own" a lot. Re: minimal dependencies - the codebase has a software FP implementation including printing and parsing, and some home-rolled math routines for trigonometric and other transcendental functions.
Honestly, it's a reminder that, for the time it takes, it's incredibly fun to build from scratch and understand through-and-through your own system.
Although you have to take detours from, say, writing a bytecode VM, to writing FP printing and parsing routines...
You are absolutely wrong here. Most of us wish that somebody would get him to sit for an in-depth interview and/or get him to write a book on his thinking, problem-solving approach, advice etc. i.e. "we want to pick his brain".
But he is not interested and seems to live on a different plane :-(
Sure. You’ll notice no libraries, no CI, no issue tracker, written in C, no landing page, no dashboard.
So much of the discussion here is about professional practice around software. You can become an expert in this stuff without ever actually learning to write code. We need to remember that most of these tools are a cost that only pays off for managing collaboration between teams. The smaller the team, the less stuff you need.
I also have insights from reading his C style but they may be of less interest.
I think it’s also impressive that he identifies a big and interesting problem to go after that’s usually impactful.
You can call 1000 average programmers and see if they can write MicroQuickJS in the same amount of time, or call one average programmer and see if he/she can write MicroQuickJS to the same quality in his/her lifetime. 10X, 100X or 1000X measures the productivity of us mortals, not someone like Fabrice Bellard.
People only deny the existence of such people based on their own ego, believing that no one could possibly be worth 10x more or produce 10x more than they can. Those who have seen those people know full well these people exist.
It's kind of crazy it ever became an accepted world view, given how every field has a 10xer who is rather famous for it, whether it be someone who dominates in sport, an academic like Paul Erdős or Euler, a programmer like Fabrice or Linus Torvalds, a leader like Napoleon, or any number of famous inventors throughout history.
For all the praise he's receiving, I think his web design skills have gone overlooked. bellard.org is fast, responsive and presents information clearly. Actually I think the fancier the website, the shittier the software. Examples: Tarsnap - minimal website, brilliant software. Discord - Whitespacey, animation-heavy abomination of a website. Software: hundreds of MB of JS slop, government wiretap+botnet for degenerates.
As a maintainer of an ASN.1 compiler, I think his ASN.1 compiler must be quite awesome (it's not open source), and it's brilliant of him to make it proprietary. I bet he makes good money from it.
Always interesting when people as talented as Bellard manage to (apparently) never write a "full-on" GUI-fronted application, or more specifically, a program that sits between a user with constantly shifting goals and workflows and a "core" that can get the job done.
I would not want to dismiss or diminish by any amount the incredible work he has done. It's just interesting to me that the problems he appears to pick generally take the form of "user sets up the parameters, the program runs to completion".
Reading some of these comments, it's clear very few in here have ever written a production customer-facing full-stack app. "JavaScript is really good for a single file app!!!" OK, maybe if you're rendering static HTML... these are not serious people.
When reading through the projects list of JS restrictions for "stricter" mode, I was expecting to see that it would limit many different JS concepts. But in fact none of the things which are impossible in this subset are things I would do in the course of normal programming anyway. I think all of the JS code I've written over the past few years would work out of the box here.
I was surprised by this one that only showed up lower in the document:
- Date: only Date.now() is supported. [0]
I certainly understand not shipping the JS date library, especially in an embedded environment, both for code-size and practicality reasons (it's not a great date library), but that would be an issue in many projects (even if you don't use it, libraries you use almost certainly do).
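To illustrate what coping with that looks like, here's a sketch that derives a UTC date string from the epoch milliseconds Date.now() returns, using Hinnant's civil-from-days algorithm. It assumes only basic arithmetic, Math.floor and string concatenation are available in the restricted dialect, and it's only valid for post-1970 timestamps:

function pad2(n) { return (n < 10 ? "0" : "") + n; }

// convert epoch milliseconds to "YYYY-MM-DD hh:mm:ss UTC" without the Date object
function formatUtc(ms) {
  var secs = Math.floor(ms / 1000);
  var days = Math.floor(secs / 86400);
  var rem = secs - days * 86400;
  // civil-from-days: turn a day count (day 0 = 1970-01-01) into year/month/day
  var z = days + 719468;
  var era = Math.floor(z / 146097);
  var doe = z - era * 146097;
  var yoe = Math.floor((doe - Math.floor(doe / 1460) + Math.floor(doe / 36524) - Math.floor(doe / 146096)) / 365);
  var doy = doe - (365 * yoe + Math.floor(yoe / 4) - Math.floor(yoe / 100));
  var mp = Math.floor((5 * doy + 2) / 153);
  var d = doy - Math.floor((153 * mp + 2) / 5) + 1;
  var m = mp < 10 ? mp + 3 : mp - 9;
  var year = yoe + era * 400 + (m <= 2 ? 1 : 0);
  return year + "-" + pad2(m) + "-" + pad2(d) + " " +
         pad2(Math.floor(rem / 3600)) + ":" + pad2(Math.floor((rem % 3600) / 60)) + ":" + pad2(rem % 60) + " UTC";
}

// console.log or print, depending on what the embedding exposes
console.log(formatUtc(Date.now()));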
Good catch. I didn't realize that there was a longer list of restrictions below the section called "Stricter mode", and it seems like a lot of String functions I use are missing too.
Anyone know how this compares to Espruino? The target memory footprint is in the same range, at least. (I know very little about the embedded js space, I just use shellyplugs and have them programmed to talk to BLE lightswitches using some really basic Espruino Javascript.)
Well, as Jeff Atwood famously said [0], "any application that can be written in JavaScript, will eventually be written in JavaScript". I guess that applies to embedded systems too
Fabrice is an absolute legend. Most people would be content with just making QEMU, but this guy makes TinyC and FFmpeg and QuickJS and MicroQuickJS and a bunch of other huge projects.
I am envious that I will never be anywhere near his level of productivity.
Not to detract from his status as a legend, but I think the kind of person that singlehandedly makes one of these projects is exactly the kind of person that would make the others.
I forgot about FFmpeg (thanks for the reminder), but my first thought was "yup that makes perfect sense".
I know it's not true, but it would be funny if Bellard had had access to AI for 15 years (time-traveler, independent invention, classified researcher) and that was the cause of his superhuman productivity.
And FFmpeg, the standard codec suite for Unix today. And QEMU, the core of KVM. Plus TCC, a great small compiler compared to GCC/Clang, although cparser has better C99 coverage. Oh, and some DVB transmitter reusing the MHz radiation from a computer screen by tweaking the Vidtune values from X. It's similar to what Tempest for Eliza does.
People talk about how productive Fabrice Bellard is, but I don't think anyone appreciates just how productive he is.
Here's the commit history for this project
b700a4d (2025-12-22T1420) - Creates an empty project with an MIT license
295a36b (2025-12-22T1432) - Implements the JavaScript engine, the C API, the REPL, and all documentation
He went from zero to a complete JS implementation in just 12 minutes!
I couldn't do that even if you gave me twice as much time.
Okay, but seriously, this is super cool, and I continue to be amazed by Fabrice. I honestly do think it would be interesting to do an analysis of a day or week of Fabrice's commits to see if there's something about his approach that others can apply besides just being a hardworking genius.
That doesn't mean anything. I quite often start with writing a proof-of-concept, and only initialize the git repository when I'm confident the POC will actually lead to something useful. Common sense says that those files already existed at the time of the first commit.
If this had been available in 2010, Redis scripting would have been JavaScript and not Lua. Lua was chosen based on the implementation requirements, not on the language ones... (small, fast, ANSI-C). I appreciate certain ideas in Lua, and people love it, but I was never able to like Lua, because it departs from a more Algol-like syntax and semantics without good reasons, for my taste. This creates friction for newcomers. I love friction when it opens new useful ideas and abstractions that are worth it, if you learn SmallTalk or FORTH and for some time you are lost, it's part of how the languages are different. But I think for Lua this is not true enough: it feels like it departs from what people know without good reasons.
It wouldn't fix the issue of semantics, but "language skins"[1][2] are an underexplored area of programming language development.
People go through all this effort to separate parsing and lexing, but never exploit the ability to just plug in a different lexer that allows for e.g. "{" and "}" tokens instead of "then" and "end", or vice versa.
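To make that concrete, here's a minimal JavaScript sketch of the idea, assuming a toy grammar; the token names, keyword tables and whitespace-only tokenizer are made up for illustration, not taken from any real language:

// Two "skins" that lex to the same canonical token stream, so one parser
// serves both; only the surface keywords differ.
const SKINS = {
  curly: { blockOpen: "{", blockClose: "}" },
  wordy: { blockOpen: "then", blockClose: "end" },
};

function lex(source, skinName) {
  const skin = SKINS[skinName];
  const tokens = [];
  // Deliberately naive: split on whitespace, just enough to show the mapping.
  for (const word of source.split(/\s+/).filter(Boolean)) {
    if (word === skin.blockOpen) tokens.push({ type: "BLOCK_OPEN" });
    else if (word === skin.blockClose) tokens.push({ type: "BLOCK_CLOSE" });
    else tokens.push({ type: "WORD", value: word });
  }
  return tokens;
}

// Both surface forms produce identical canonical tokens:
console.log(JSON.stringify(lex("if x { y }", "curly")) ===
            JSON.stringify(lex("if x then y end", "wordy"))); // true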
Not "never exploit"; Reason and BuckleScript are examples of different "language skins" for OCaml.
The problem with "skins" is that they create variety where people strive for uniformity to lower the cognitive load. OTOH transparent switching between skins (about as easy as changing the tab sizes) would alleviate that.
That's precisely the point of using tabs for indentation: you don't need to fight over it, because it's a local display preference that does not affect the source code at all, so everyone can just configure whatever they prefer locally without affecting other people.
The idea of "skins" is apparently to push that even further by abstracting the concrete syntax.
Do you mean that files produced with "wide" tabs might have hard newlines embedded more readily in longer lines? Or that maybe people writing with "narrow" tabs might be comfortable writing 6-deep if/else trees that wrap when somebody with their tabs set to wider opens the same file?
I don't see why? Your window width will presumably be tailored to accommodate common scenarios in your preferred tab width.
More than that, in the general case for common C like languages things should almost never be nested more than a few levels deep. That's usually a sign of poorly designed and difficult to maintain code.
Lisps are a notable exception here, but due to limitations (arguably poor design) with how the most common editors handle lines that contain a mix of tabs and spaces you're pretty much forced to use only spaces when writing in that family of languages. If anything that language family serves as case in point - code written with an indentation width that isn't to one's preference becomes much more tedious to adapt due to alternating levels of alignment and indentation all being encoded as spaces (ie loss of information which automated tools could otherwise use).
I find it tends to be a structural thing, Tabs for indenting are fine, hell I prefer tabs for indenting. But use tabs for spacing and columnar layout and the format tends to break on tab width changes. Honestly not a huge deal but as such I tend to avoid tabs for layout work.
I completely agree, hence my point about Lisps. In terms of the abstraction a tab communicates a layer of indentation, with blocks at different indentation levels being explicitly decoupled in terms of alignment.
Unfortunately the discussion tends to be somewhat complicated by the occasional (usually automated) code formatting convention that (imo mistakenly) attempts to change the level of indentation in scenarios where you might reasonably want to align an element with the preceding line. For example, IDEs for C like languages that will add an additional tab when splitting function arguments across multiple lines. Fortunately such cases are easily resolved but their mere existence lends itself to objections.
I love the idea of "tabs for indents, spaces for alignment", but I don't even bring it up anymore because it (the combination of the two) sets so many people off. I also like the idea of elastic tabs, but that requires editor buy-in.
All that being said, I'm very much a "as long as everyone working on the code does it the same, I'll be fine" sort of person. We use spaces for everything, with defined indent levels, where I am, and it works just fine.
> OTOH transparent switching between skins (about as easy as changing the tab sizes) would alleviate that.
That's one of my hopes for the future of the industry: people will be able to just choose the code style and even syntax family (which you're calling skin) they prefer when editing code, and it will be saved in whatever is the "default" for the language (or even something like the Unison Language: store the AST directly, which allows cool stuff like de-duplicating definitions and content-addressable code - an idea I first found out about in the amazing talk by Joe Armstrong, "The mess we're in" [1]).
Rust, in particular, would perhaps benefit a lot given how a lot of people hate its syntax... but also Lua for people who just can't stand the Pascal-like syntax and really need their C-like braces to be happy.
> transparent switching between skins (about as easy as changing the tab sizes)
One of my pet "not today but some day" project ideas. In my case, I wanted to give Python/Gdscript syntax to any & all the curly languages (a potential boon to all users of non-Anglo keyboard layouts), one by one, via a VSCode extension that implements a virtual filesystem over the real one, translating the syntaxes back and forth during the load/edit/save cycle. Then run the whole live LSP in the background against the underlying real source files and resurface that in the same extension, with line-number mappings etc.
Anyone, please steal this idea and run with it, I'm too short on time for it for now =)
I want to do the opposite: Give curly braces to all the indentation based languages. Explicit is better than implicit, auto format is better than guessing why some block of code was executed outside my if statement.
That example only shows the opposite of what it sounds like you’re saying, although you could be getting at a few different true things. Anyway:
- Every property access in JavaScript is semantically coerced to a string (or a symbol, as of ES6). All property keys are semantically either strings or symbols.
- Property names that are the ToString() of a 31-bit unsigned integer are considered indexes for the purposes of the following two behaviours:
- For arrays, indexes are the elements of the array. They’re the properties that can affect its `length` and are acted on by array methods.
- Indexes are ordered in numeric order before other properties. Other properties are in creation order. (In some even nicher cases, property order is implementation-defined.)
{ let a = {}; a['1'] = 5; a['0'] = 6; Object.keys(a) }
// ['0', '1']
{ let a = {}; a['1'] = 5; a['00'] = 6; Object.keys(a) }
// ['1', '00']
This indeed is not Algol (or rather C) heritage, but Fortran heritage, not memory offsets but indices in mathematical formulae. This is why R and Julia also have 1-based indexing.
Lots of systems I grew up with were 1-indexed and there's nothing wrong with it. In the context of history, C is the anomaly.
I learned the Wirth languages first (and then later did a lot of programming in MOO, a prototype OO 1-indexed scripting language). Because of that early experience I still slip up and make off by 1 errors occasionally w/ 0 indexed languages.
(Actually both Modula-2 and Ada aren't strictly 1 indexed since you can redefine the indexing range.)
Pascal, frankly, allowed you to index arrays by any enumerable type; you could use Natural (1-based), or could use 0..whatever. Same with Modula-2; writing it, I freely used 0-based indexing when I wanted to interact with hardware where it made sense, and 1-based indexes when I wanted to implement some math formula.
It's fine, I can see the advantages. I just think it's a weird level of blindness to act like 1 indexing is some sort of aberration. It's really not. It's actually quite friendly for new or casual programmers, for one.
I think the objection is not so much blindness as the idea that professional tools should not generally be tailored to the needs of new or casual users at the expense of experienced users.
Is there any actual evidence that new programmers really find this hard? Python is renowned for being beginner friendly and I've never heard of anyone suggesting it was remotely a problem.
There are only a few languages that are purely for beginners (LOGO and BASIC?) so it's a high cost to annoy experienced programmers for something that probably isn't a big deal anyway.
As I understand it Julia changed course and is attempting to support arbitrary index ranges, a feature which Fortran enjoys. (I'm not clear on the details as I don't use either of them.)
Let’s hope that they don’t also replicate ISO Fortran’s design flaws with lower array bounds, which contain enough pitfalls and portability problems that I don’t recommend their use.
> Lots of systems I grew up with were 1-indexed and there's nothing wrong with it. In the context of history, C is the anomaly.
The problem is that Lua is effectively an embedded language for C.
If Lua never interacted with C, 1-based indexing would merely be a weird quirk. Because you are constantly shifting across the C/Lua barrier, 1-based indices become a disaster.
There's nothing wrong with 1-based indexing. The only reason it seems wrong to you is because you're familiar with 0-based, not because it's inherently worse.
That's simply untrue. 1-based indexing is inherently worse because it leads to code that is less elegant and harder to understand. And slightly less efficient but that's a minor factor.
I read this comment, about to snap back with an anecdote how I as a 13 year old was able to learn Lua quite easily, and then I stopped myself because that wasn't productive, then pondered what antirez might think of this comment, and then I realized that antirez wrote it.
I think the older you are the harder Lua is to learn. GP didn't say it made wrong choices, just choices that are gratuitously different from other languages in the Algol family.
I’m tickled that one of my favorite developers is commenting on another of my favorites work. Would be great if Nicolas Cannasse were also in this thread!
> it feels like it departs from what people know without good reasons.
Lua was first released in 1993. I think that it's pretty conventional for the time, though yeah it did not follow Algol syntax but Pascal's and Ada's (which were more popular in Brazil at the time than C, which is why that is the case)!
Ruby, which appeared just 2 years later, departs a lot more, arguably without good reasons either? Perl, which is 5 years older and was very popular at the time, is much more "different" than Lua from what we now consider mainstream.
I'm reviving _why's syck right now. Turns out my fork from 2013 was still the most advanced. It doesn't implement the latest YAML specs, and all of their new insecurities, which is a good thing. And it's much, much faster than the sax-like libyaml.
But since syck uses the Ruby hashtable internally, I got stuck in the gem for a while. It fell out of their stdlib, and is not really maintained either. PHP had the latest updates for it. And Perl (me) extended it to be more recursion safe, and added more policies (what to do on duplicate keys: skip or overwrite).
So the Ruby bindings are troublesome because of its GC, which with threading now requires a global VM instance. And using the Ruby alloc/free pairs.
PHP, Perl, Python, Lua, IO, Cocoa, all no problem. Just Ruby, because of its too tight coupling. Looks like I have to finally decouple it from Ruby.
I don't think you understand his point. Ruby has a different syntax because it presents different/more language features than a very basic C-like language; it's inspired by Lisp/SmallTalk, after all. Lua doesn't but still decided to change its looks a lot, according to him.
The Redis test suite is still written in Tcl: https://news.ycombinator.com/item?id=9963162 (although more recently antirez said somewhere he wished he'd written it in C for speed)
In 1994 at the second WWW conference we presented "An API to Mosaic". It was TCL embedded inside the (only![1]) browser at the time - Mosaic. The functionality available was substantially similar to what Javascript ended up providing. We used it in our products especially for integrating help and preferences - for example HTML text could be describing color settings, you could click on one, select a colour from the chooser and the page and setting in our products would immediately update. In another demo we were able to print multiple pages of content from the start page, and got a standing ovation! There is an alternate universe where TCL could have become the browser language.
For those not familiar with TCL, the C API is flavoured like main. Callbacks take a list of strings argv style and an argc count. TCL is stringly typed which sounds bad, but the data comes from strings in the HTML and script blocks, and the page HTML is also text, so it fits nicely and the C callbacks are easy to write.
[1] Mosaic Netscape 0.9 was released the week before
> it feels like it departs from what people know without good reasons.
Lua is a pretty old language. In 1993 the world had not really settled on C style syntax. Compared to Perl or Tcl, Lua's syntax seems rather conventional.
Some design decisions might be a bit unusual, but overall the language feels very consistent and predictable. JS is a mess in comparison.
> because it departs from a more Algol-like syntax
Huh? Lua's syntax is actually very Algol-like since it uses keywords to delimit blocks (e.g. if ... then ... end)
That's what matters to me, not how similar Lua is to other languages, but that the language is well-designed in its own system of rules and conventions. It makes sense, every part of it contributes to a harmonious whole. JavaScript on the other hand.
When speaking of Algol or C-style syntax, it makes me imagine a "Common C" syntax, like taking the best, or the least common denominator, of all C-like languages. A minimal subset that fits in your head, instead of what modern C is turning out to be, not to mention C++ or Rust.
I don't write modern C for daily use, so I can't really say. But I've been re-learning and writing C99 more these days, not professionally but for personal use - and I appreciate the smallness of the language. Might even say C peaked at C99. Call me crazy, but C-like languages after C99, like Java, PHP, etc., all seem misguided for how unnecessarily big and complex they are. It might be that I'm becoming more like a caveman programmer as I get older; I prefer dumb primitive tools.
C11 adds a couple of nice things like static asserts which I use sometimes to document assumptions I make.
They did add some optional sections like bounds checking that seem to have flopped, partly for being optional, partly for being half-baked. Having optional sections in general seems like a bad idea.
IDK about C11; but C99 doesn't change a lot compared to ANSI C. You can read The C Programming Language 2nd edition and pick up C99 in a week. It adds booleans, some float/complex math ops, an optional floating point definition and a few more goodies.
C++ by comparison is a behemoth. If C++ died and, for instance, the FLTK guys rebased their libraries onto C (and Boost, for instance), it would be a big loss at first, but Chromium and the like rewritten in C would slim down a bit, the complexity would plummet, and similar projects would use far less CPU and RAM.
It's not just about the binary size; C++ today makes even the Common Lisp standard (even with UIOP and some de facto standard libraries from Quicklisp) look pretty much human-manageable, and CL has always been a one-thousand-page-thick standard with tons of bloat compared to Scheme or its sibling Emacs Lisp. Go figure.
C++ is a katamari ball of programming trends and half baked ideas. I get why google built golang, as they were already pretty strict about what parts of the c++ sediments you were allowed to use.
Not Google actually, but the same people behind C, AWK and Unix (and 9front, which is "Unix 2.0"; it has a simpler C with no POSIX bloat, and its compilers basically embody the Go philosophy: cross-compile from any arch to any arch, CSP concurrency...).
Originally Go used the Ken C compilers for Plan 9. It still uses CSP. The syntax is from Limbo/Inferno, and probably the GC came from Limbo too.
If anything, Go was created for Google by reusing a big chunk of Plan 9 and Inferno's design, in some cases quite directly, as the concurrency model shows. Or the cross-compiling suite.
A bit like MacOS X under Apple. We all know it wasn't born in a vacuum. It borrowed Mach, the NeXTStep API and the FreeBSD userland and they put the Carbon API on top for compatibility.
Before that, the classic MacOS had nothing to do with Unix, C, Objective C, NeXT or the Mach kernel.
Mac OS X is to NeXT what Go is for Alef/Inferno/Plan9 C.
Just as every macOS user is using something like NeXTSTEP with the Macintosh UI design brought into the 21st century, Go users are using a similar, futuristic version of the Limbo/Alef programming languages with a bit of Plan 9's concurrency and automatic cross-compilation.
That's wonderful how you tied those threads together to describe Go's philosophical origins. I'm having a great time exploring the links. And the parallel with NeXTSTEP is fascinating too, I've been interested in that part of software history since learning that Tim Berners-Lee created WorldWideWeb.app on the NeXTcube.
I have known for a very long time that C (and co.) inherited its syntax from Algol.
But only after a long time did I actually check what Algol looked like. To my surprise, Algol does not look anything like C to me.
I would be quite interested in the expanded version of “C has inherited syntax from Algol”
Edit: apparently the inheritance from Algol is a formula: lexical scoping + value-returning functions (expression based) - parenthesitis. Only the last item is about the visual part of the syntax.
I don't love a good deal of Lua's syntax, but I do think the authors had good reasons for their choices and have generally explained them. Even if you disagree, I think "without good reasons" is overly dismissive.
Personally though, I think the distinctive choices are a boon. You are never confused about what language you are writing because Lua code is so obviously Lua. There is value in this. Once you have written enough Lua, your mind easily switches in and out of Lua mode. Javascript, on the other hand, is filled with poor semantic decisions which for me, cancel out any benefits from syntactic familiarity.
More importantly, Lua has a crucial feature that Javascript lacks: tail call optimization. There are programs that I can easily write in Lua, in spite of its syntactic verbosity, that I cannot write in Javascript because of this limitation. Perhaps this particular JS implementation has tco, but I doubt it reading the release notes.
I have learned as much from Lua as I have from Forth (Smalltalk doesn't interest me) and my programming skill has increased significantly since I switched to it as my primary language. Lua is the only lightweight language that I am aware of with TCO. In my programs, I have banned the use of loops. This is a liberation that is not possible in JS or even C, where TCO cannot be relied upon.
In particular, Lua is an exceptional language for writing compilers. Compilers are inherently recursive and thus languages lacking TCO are a poor fit (even if people have been valiantly forcing that square peg through a round hole for all this time).
Having said all that, perhaps as a scripting language for Redis, JS is a better fit. For me though Lua is clearly better than JS on many different dimensions and I don't appreciate the needless denigration of Lua, especially from someone as influential as you.
I'd love to hear more how it is, the state of the library ecosystem, language evolution (wasn't there a new major version recently?), pros/cons, reasons to use it compared to other languages.
About tail-calls, in other languages I've found sometimes a conversion of recursive algorithm to a flat iterative loop with stack/queue to be effective. But it can be a pain, less elegant or intuitive than TCO.
Lua isn't my primary programming language now, but it was for a while. My personal experience on the library ecosystem was:
It's definitely smaller than many languages, and this is something to consider before selecting Lua for a project. But, on the positive side: With some 'other' languages I might find 5 or 10 libraries all doing more or less the same thing, many of them bloated and over-engineered. But with Lua I would often find just one library available, and it would be small and clean enough that I could easily read through its source code and know exactly how it worked.
Another nice thing about Lua when run on LuaJIT: extremely high CPU performance for a scripting language.
In summary: A better choice than it might appear at first, but with trade-offs which need serious consideration.
Yeah, you can usually write a TCO based algorithm differently without recursion though it's often more messy of an implementation... In practice, with JS, I find that if I know I'm going to wind up more/less than 3-4 calls deep I'll optimize or not to avoid the stack overflow.
Also worth noting that some features in JS may rely on application/environment support and may raise errors that you cannot catch in JS code. This is often fun to discover and painful to try to work around.
Does the language give any guarantee that TCO was applied? In other words can it give you an error that the recursion is not of tail call form? Because I imagine a probability of writing a recursion and relying on it being TCO-optimized, where it's not. I would prefer if a language had some form of explicit TCO modifier for a function. Is there any language that has this?
At least in Lua the rule is simply "the last thing a function does", which is unambiguous. `return f()` is always a tail call and `return f() + 1` never is.
Formally JavaScript is specified as having TCO as of ES6, although for unfortunate and painful reasons this is spec fiction - Safari implements it, but Firefox and Chrome do not. Neither did QuickJS last I checked and I don't think this does either.
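To see the gap between spec and reality, here's a tiny example (plain JavaScript, nothing assumed beyond the engine behaviour described above):

"use strict"; // ES2015 only requires proper tail calls in strict mode

// The recursive call is in tail position: nothing remains to be done in the
// caller's frame after it returns.
function count(n) {
  if (n === 0) return "done";
  return count(n - 1);
}

// Per the spec this must run in bounded stack space. In practice it completes
// in JavaScriptCore (Safari) but overflows the stack in V8 and SpiderMonkey,
// which never shipped proper tail calls.
console.log(count(1e6));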
ES is now ES2025, not ES6/2015. There are still platforms that don't even fully implement enough to shim out ES5 completely, let alone ES6+. Portions of ES6 require buy in from the hosting/runtime environment that aren't even practical for some environments... so I feel the statement itself is kind of ignorant.
It wouldn't be the first time the specs have gone too far and beyond their perimeter.
C's "register" variables used to have the same issue, and even "inline" has been downgraded to a mere hint for the compiler (which can ignore it and still be a C compiler).
inline and register still have semantic requirements that are not just hints. Taking the address of a register variable is illegal, and inline allows a function to be defined in multiple .c files without errors.
I don't think you're wrong per se. This is a "correct" way of thinking of the situation, but it's not the only correct way and it's arguably not the most useful.
A more useful way to understand the situation is that a language's major implementations are more important than the language itself. If the spec of the language says something, but nobody implements it, you can't write code against the spec. And on the flip side, if the major implementations of a language implement a feature that's not in the spec, you can write code that uses that feature.
A minor historical example of this was Python dictionaries. Maybe a decade ago, the Python spec didn't specify that dictionary keys would be retrieved in insertion order, so in theory, implementations of the Python language could have returned the keys in any order at all.
But the CPython implementation did return all the keys in insertion order, and very few people were using anything other than the CPython implementation, so some codebases started depending on the keys being returned in insertion order without even knowing that they were depending on it. You could say that they weren't writing Python, but that seems a bit pedantic to me.
In any case, Python later standardized that as a feature, so now the ambiguity is solved.
It's all very tricky though, because for example, I wrote some code a decade ago that used GCC's compare-and-swap extensions, and at least at that time, it didn't compile on Clang. I think you'd have a stronger argument there that I wasn't writing C--not because what I wrote wasn't standard C, but because the code I wrote didn't compile on the most commonly used C compiler. The better approach to communication in this case, I think, is to simply use phrases that communicate what you're doing: instead of saying "C", say "ANSI C", "GCC C", "Portable C", etc.--phrases that communicate which implementations of the language you're supporting. Saying you're writing "C" isn't wrong, it's just not communicating a very important detail: which compilers can compile your code. I'm much more interested in effectively communicating what compilers can compile a piece of code than pedantically gatekeeping what's C and what's not.
I'm saying it isn't very useful to argue about whether a feature is a feature of the language or a feature of the implementation, because the language is pretty useless independent of its implementation(s).
There actually was a time when Python dictionary keys weren't guaranteed to be in the order they were inserted, as implemented in CPython, and the order would not be preserved.
You could not reliably depend on that implementation detail until much later, when optimizations were implemented in CPython that just so happened to preserve dictionary key insertion order. Once that was realized, it was PEP'd and made part of the spec.
Python’s dicts for many years did not return keys in insertion order (since Tim Peters improved the hash in iirc 1.5 until Raymond Hettinger improved it further in iirc 3.6).
After the 3.6 change, they were returned in order. And people started relying on that - so at a later stage, this became part of the spec.
You’re wrong in the way in which many people are wrong when they hear about a thing called “tail-call optimization”, which is why some people have been trying to get away from the term in favour of “proper tail calls” or something similar, at least as far as R5RS[1]:
> A Scheme implementation is properly tail-recursive if it supports an unbounded number of active tail calls.
The issue here is that, in every language that has a detailed enough specification, there is some provision saying that a program that makes an unbounded number of nested calls at runtime is not legal. Support for proper tail calls means that tail calls (a well-defined subgrammar of the language) do not ever count as nested, which expands the set of legal programs. That’s a language feature, not (merely) a compiler feature.
I still think that the language property (or requirement, or behavior as seen from within the language itself) that we're talking about in this case is "unbounded nested calls", and that the language spec doesn't (shouldn't) assume that such a property will be satisfied in a specific way, e.g. switching the call to a branch, as TCO usually means.
Unbounded nested calls as long as those calls are in tail position, which is a thing that needs to be defined—trivially, as `return EXPR(EXPR...)`, in Lua; while Scheme, being based around expressions, needs a more careful definition, see link above.
Otherwise yes. For instance, Scheme implementations that translate the Scheme program into portable C code (not just into bytecode interpreted by C code) cannot assume that the C compiler will translate C-level tail calls into jumps and thus take special measures to make them work correctly, from trampolines to the very confusingly named “Cheney on the M.T.A.”[1], and people will, colloquially, say those implementations do TCO too. Whether that’s correct usage... I don’t think really matters here, other than to demonstrate why the term “TCO” as encountered in the wild is a confusing one.
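The trampoline variant is easy to sketch in plain JavaScript, which also shows why it gets lumped in with "TCO" colloquially: the stack stays flat, at the cost of allocating a thunk per step (all names here are illustrative):

// Driver loop: keep invoking zero-argument "thunks" until a non-function
// value comes back. Each step returns the next step instead of calling it.
function trampoline(step) {
  let result = step;
  while (typeof result === "function") {
    result = result();
  }
  return result;
}

function sumTo(n, acc = 0) {
  if (n === 0) return acc;
  return () => sumTo(n - 1, acc + n); // the "tail call", deferred as a thunk
}

console.log(trampoline(sumTo(1e6))); // 500000500000, no stack overflow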
Cheney on the MTA is a great paper/algorithm, and I'd like to add (for the benefit of the lucky ten thousand just learning about this) that it's pun on a great old song: Charlie on the MTA ( https://www.youtube.com/watch?v=MbtkL5_f6-4 ). The joke is that in both cases it will never return, either because the subway fare is too high or because you don't want to keep the call stack around.
I sort of see what you are getting at but I am still a bit confused:
If I have a program that based on the input given to it runs some number of recursions of a function and two compilers of the language, can I compile the program using both of them if compiler A has PTC and compiler B does not no matter what the actual program is? As in, is the only difference that you won’t get a runtime error if you exceed the max stack size?
I think features of the language can make it much easier (read: possible) for the compiler to recognize when a function is tail call optimizable. Not every recursion will be, so it matters greatly what the actual program is.
It is a feature of the language (with proper tail calls) that a certain class of calls defined in the spec must be TCOd, if you want to put things that way. It’s not just that it’s easier for the compiler to recognize them, it’s that it has to.
(The usual caveats about TCO randomly not working are due to constraints imposed by preexisting ABIs or VMs; if you don’t need to care about those, then the whole thing is quite straightforward.)
That is correct, the difference is only visible at runtime. So is the difference between garbage collection (whether tracing or reference counting) and lack thereof: you can write a long-lived C program that calls malloc() throughout its lifetime but never free(), but you’re not going to have a good time executing it. Unless you compile it with Fil-C, in which case it will work (modulo the usual caveats regarding syntactic vs semantic garbage).
> For me though Lua is clearly better than JS on many different dimensions and I don't appreciate the needless denigration of Lua, especially from someone as influential as you.
Is it needless? It's useful specifically because he is someone influential, and someone might say "Lua was antirez's choice when making redis, and I trust and respect his engineering, so I'm going to keep Lua as a top contender for use in my project because of that" and him being clear on his choices and reasoning is useful in that respect. In any case where you think he has a responsibility to be careful what he says because of that influence, that can also be used in this case as a reason he should definitely explain his thoughts on it then and now.
> More importantly, Lua has a crucial feature that Javascript lacks: tail call optimization. There are programs that I can easily write in Lua, in spite of its syntactic verbosity, that I cannot write in Javascript because of this limitation. Perhaps this particular JS implementation has tco, but I doubt it reading the release notes.
> [...] In my programs, I have banned the use of loops. This is a liberation that is not possible in JS or even c, where TCO cannot be relied upon.
This is not a great language feature, IMO. There are two ways to go here:
1. You can go the Python way, and have no TCO, not ever. Guido van Rossum's reasoning on this is outlined here[1] and here[2], but the high level summary is that TCO makes it impossible to provide acceptably-clear tracebacks.
2. You can go the Chicken Scheme way, and do TCO, and ALSO do CPS conversion, which makes EVERY call into a tail call, without language user having to restructure their code to make sure their recursion happens at the tail.
Either of these approaches has its upsides and downsides, but TCO WITHOUT CPS conversion gives you the worst of both worlds. The only upside is that you can write most of your loops as recursion, but as van Rossum points out, most cases that can be handled with tail recursion, can AND SHOULD be handled with higher-order functions. This is just a much cleaner way to do it in most cases.
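As a small illustration of that point, here's a sketch with a trivial sum so the two forms can be compared directly (nothing here is specific to any one engine):

const xs = Array.from({ length: 1000 }, (_, i) => i + 1);

// Tail recursion: needs proper tail calls to be safe for large inputs.
function sumRec(arr, i = 0, acc = 0) {
  return i === arr.length ? acc : sumRec(arr, i + 1, acc + arr[i]);
}

// Higher-order function: the "can and should" case, safe everywhere.
const sumHof = xs.reduce((acc, x) => acc + x, 0);

console.log(sumRec(xs) === sumHof); // true, both are 500500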
And the downsides to TCO without CPS conversion are:
1. Poor tracebacks.
2. Having to restructure your code awkwardly to make recursive calls into tail calls.
3. Easy to make a tail call into not a tail call, resulting in stack overflows.
I'll also add that the main reason recursion is preferable to looping is that it enables all sorts of formal verification. There's some tooling around formal verification for Scheme, but the benefits to eliminating loops are felt most in static, strongly typed languages like Haskell or OCaml. As far as I know Lua has no mature tooling whatsoever that benefits from preferring recursion over looping. It may be that the author of the post I am responding to finds recursion more intuitive than looping, but my experience contains no evidence that recursion is inherently more intuitive than looping: which is more intuitive appears to me to be entirely a function of the programmer's past experience.
In short, treating TCO without CPS conversion as a killer feature seems to me to be a fetishization of functional programming without understanding why functional programming is effective, embracing the madness with none of the method.
EDIT: To point out a weakness to my own argument: there are a bunch of functional programming language implementations that implement TCO without CPS conversion. I'd counter by saying that this is a function of when they were implemented/standardized. Requiring CPS conversion in the Scheme standard would pretty clearly make Scheme an easier to use language, but it would be unreasonable in 2025 to require CPS conversion because so many Scheme implementations don't have it and don't have the resources to implement it.
EDIT 2: I didn't mean for this post to come across as negative on Lua: I love Lua, and in my hobby language interpreter I've been writing, I have spent countless hours implementing ideas I got from Lua. Lua has many strengths--TCO just isn't one of them. When I'm writing Scheme and can't use a higher-order function, I use TCO. When I'm writing Lua and can't use a higher order function, I use loops. And in both languages I'd prefer to use a higher order function.
EDIT 3: Looking at Lua's overall implementation, it seems to be focused on being fast and lightweight.
I don't know why Lua implemented TCO, but if I had to guess, it's not because it enables you to replace loops with recursion, it's because it... optimizes tail calls. It causes tail calls to use less memory, and this is particularly effective in Lua's implementation because it reuses the stack memory that was just used by the parent call, meaning it uses memory which is already in the processor's cache.
The thing is, a loop is still going to be slightly faster than TCOed recursion, because you don't need to move the arguments to the tail call function into the previous stack frame. In a loop your counters and whatnot are just always using the same memory location, no copying needed.
Where TCO really shines is in all the tail calls that aren't replacements for loops: an optimized tail call is faster than a non-optimized tail call. And in real world applications, a lot of your calls are tail calls!
I don't necessarily love the feature, for the reasons that I detailed in the previous post. But it's not a terrible problem, and I think it at makes sense as an optimization within the context of Lua's design goals of being lightweight and fast.
> I think the distinctive choices are a boon. You are never confused about what language you are writing because Lua code is so obviously Lua. There is value in this.
This. And not just Lua, but having a different kind of syntax for scripting languages or very high level languages signals that it is something entirely different, and not C as in a systems programming language.
The syntax is also easier for people who don't intend to make programming their profession, but simply want something done. It used to be the case in the old days that people would design simple PLs for beginners - the ActionScript/Flash era, and even HyperCard before that. Unfortunately the industry is no longer interested in that, and if anything intends to make everything as complicated as possible.
I do not think your compiler argument in support of TCO is very convincing.
Do you really need to write compilers with limitless nesting? Or is nesting, say, 100,000 deep enough, perhaps?
Also, you'll usually want to allocate some data structure to create an AST for each level. So that means you'll have some finite limit anyway. And that limit is a lot easier to hit in the real world, as it applies not just to nesting depth, but to the entire size of your compilation unit.
TCO is not just for parse trees or ASTs, but in imperative languages without TCO this is the only place you are "forced" to use recursion. You can transform any loop in your program into recursion if you prefer, which is what the author does.
JavaScript in 2010 was a totally different beast, standardization-wise. Lots of sharp corners and blank spaces were still there.
So, even if an implementation like MicroQuickJS existed in 2010, it's unlikely that too many people would have chosen JS over Lua, given all the shortcomings that JavaScript had at the time.
While you're not wrong that JS has come a long way in that time, it's not the case that it was an extremely unusual choice at the time - Ryan Dahl chose it for node in 2009.
Lua has been a wild success considering it was born in Brazil, and not some high wealth, network-effected country with all its consequent influential muscle (Ruby? Python? C? Rust? Prolog? Pascal? APL? Ocaml? Show me which one broke out that wasn't "born in the G7"). We should celebrate its plucky success which punches waaay above its adoption weight. It didn't blindly lockstep ALGOL citing "adooooption!!", but didn't indulge in revolution either, and so treads a humble path of cooperative independence of thought.
Come to think of it I don't think I can name a single mainstream language other than Lua that wasn't invented in the G7.
It sounds like you're trying to articulate why you don't like Lua, but it seems to just boil down to syntax and semantics unfamiliarity?
I see this argument a lot with Lua. People simply don't like its syntax because we live in a world where C style syntax is more common, and the departure from that seem unnecessary. So going "well actually, in 1992 when Lua was made, C style syntax was more unfamiliar" won't help, because in the current year, C syntax is more familiar.
The first language I learned was Lua, and because of that it seems to have a special place in my heart or something. The reason for this is because in around 2006, the sandbox game "Garry's Mod" was extended with scripting support and chose Lua for seemingly the same reasons as Redis.
The game's author famously didn't like Lua, its unfamiliarity, its syntax, etc. He even modified it to add C style comments and operators. His new sandbox game "s&box" is based on C#, which is the language closest to his heart I think.
The point I'm trying to make is just that Lua is familiar to me and not to you for seemingly no objective reason. Had Garry chosen a different language, I would likely have a different favorite language, and Lua would feel unfamiliar and strange to me.
In that case, my point about Garry not liking Lua despite choosing it for Garrysmod, for seemingly the same reason as antirez is very appropriate.
I haven't read antirez'/redis' opinions about Lua, so I'm just going off of his post.
In contrast I do know more about what Garry's opinion on Lua is as I've read his thoughts on it over many years. It ultimately boils down to what antirez said. He just doesn't like it, it's too unfamiliar for seemingly no intentional reason.
But Lua is very much an intentionally designed language, driven in cathedral-style development by a bunch of professors who seem to obsess about language design. Some people like it, some people don't, but over 15 years of talking about Lua to other developers, "I don't like the syntax" is ultimately the fundamental reason I hear from developers.
So my main point is that it just feels arbitrary. I'm confident the main reason I like Lua is because garry's mod chose to implement it. Had it been "MicroQuickJS", Lua would likely feel unfamiliar to me as well.
If I am remembering correctly, there was a moment where Garry was seriously considering using Squirrel instead of Lua. I think he experimented with JavaScript too.
I’m not sure it’s still the case but he modified Lua to be zero indexed and some other tweaks because they annoyed him so much, so it’s possible if you learned GMod Lua you learned Garry’s Lua.
Of course his heart has been with C# for many years now.
I also strongly disliked Lua's syntax at first, but now I feel like the metatables and pcall and all that stuff are kinda worth it. I like everything about Lua except some of the awkward syntax, and I find it so much better than JS, but I haven't been a web dev in over a decade.
Initially I agreed, just because so many other languages do it that way.
But if you ignore that and clean slate it, IMO, 1 based makes more sense. I feel like 0 based mainly gained foothold because of C's bastardization of arrays vs pointers and associated tricks. But most other languages don't even support that.
You can only see :len(x)-1 so many times before you realize how ridiculous it is.
0 based has a LOT of benefits whereas the reasoning, if I recall, for 1-indexing in Lua was to make the language more approachable to non-devs.
Having written a game in it (via LÖVE), the 1-indexing was a continued source of problems. On the other hand, I rarely need to use len-1, especially since most languages expose more readable methods such as `last()`.
Python got this right. Zero-based indexing combined with half-open slice notation means as a practical matter you don't see too many -1s in the code. Certainly far fewer than when I wrote a game in Löve for a gamejam, where screen co-ordinates are naturally zero-indexed, which has implications for everything onscreen (tile indices, sprites, ...)
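JavaScript's Array.prototype.slice happens to follow the same zero-based, half-open convention, so the effect is easy to show here too:

const a = [10, 20, 30, 40, 50];
const mid = 2;
const left = a.slice(0, mid);  // [10, 20]     - exactly `mid` elements, no -1
const right = a.slice(mid);    // [30, 40, 50] - the rest, no overlap, no gap
console.log(left.length + right.length === a.length); // true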
I remember seeing this a long time ago and liking it, I just didn't have a use for it at the time. How does it stack up against LuaJIT for perf, memory, and threading? It also looks like it could be worth porting the compiler to Zig, which excels at both compiler writing and cross platform tooling.
+1 for the incredibly niche (but otherwise make-it-or-break-it) fact that PUC-Rio is and likely always will be strict C89 (i.e. ANSI C). I think this was (and still is?) most relevant to gamedev on Windows using older versions of MSVC, which has until recently been a few pennies short of a full C99 implementation.
I did once manage to compile Lua 5.4 on a Macintosh SE with 4MB of RAM, and THINK C 5.0 (circa 1991), which was a sick trick. Unfortunately, it took about 30 seconds for the VM to fully initialize, and it couldn't play well with the classic MacOS MMU-less handle-based memory management scheme.
In that sense Premake looks significantly better than CMake with its esoteric constructs.
Having regular and robust PL to implement those 10% of configuration cases that cannot be defined with "standard" declarations is the way to go, IMO.
I for one would be would be very interested in a Redbean[0] implementation with MicroQuickJS instead of Lua, though I lack the resources to create it myself.
[0] https://redbean.dev/ - the single-file distributable web server built with Cosmopolitan as an αcτµαlly pδrταblε εxεcµταblε
LuaJIT’s C FFI integration is super useful in a scripting language and I’ve replaced numerous functions previously written in things like Bash with it.
it also helps that it has ridiculously high performance for a scripting language
> It only supports a subset of Javascript close to ES5 [...]
I have not read the code of the solver, but solving YouTube's JS challenge is so demanding that the team behind yt-dlp ditched their JS emulator written in Python.
It looks like if you write the acceptable MQJS subset of JS+types, then run your code through a checker+stripper that doesn't try to inject implementations of TS's enums and such it should just work?
It’s not even the languages or runtimes that inhibit embedded adoption but the software to hardware tooling. Loader scripts, HAL/LL/CMSIS, flashing, etc. They all suck.
I don't think you can transpile arbitrary TS into mqjs's JS subset. Maybe you can lint your code in such a way that certain forbidden constructs fail the lint step, but I don't think you can do anything to avoid runtime errors (i.e. writing to an array out of its bounds).
If anyone wants to try out MicroQuickJS in a browser here's a simple playground interface for executing a WebAssembly compiled version of it: https://tools.simonwillison.net/microquickjs
Thanks for sharing! The link to the PR looks like a wrong paste. I found https://github.com/simonw/tools/pull/181 which seems to be what was intended to be shared instead.
I was interested to try Date.now() since this is mentioned as being the only part of the Date implementation that is supported but was surprised to find it always returns 0 for your microquickjs version - your quickjs variant appears to return the current unix time.
Good catch. WebAssembly doesn't have access to the current time unless the JavaScript host provides it through injecting a function, so the WASM build would need to be hooked up specially to support that.
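For anyone curious what "hooked up specially" might look like, here's a rough sketch of the host side; the import names ("env", "host_now_ms") and the .wasm file name are placeholders, not the actual MicroQuickJS build interface:

// The clock has to be supplied by the JS host as a WebAssembly import.
const imports = {
  env: {
    host_now_ms: () => Date.now(), // the wasm side would call this for Date.now()
  },
};

WebAssembly.instantiateStreaming(fetch("microquickjs.wasm"), imports)
  .then(({ instance }) => {
    // instance.exports would then expose whatever entry points the build defines.
  });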
This makes me wonder, is there analysis of the syntax and if so can't it pick the lightest implementation? I see how light dillo is on the same page as chrome and I don't know why a web browser of the caliber of chrome does so much worse than a browser worked by a handful of people.
They are very similar in terms of ROM footprint (esp: 128K vs mqjs: 100K) and min RAM (esp: 8K vs mqjs: 10K), but spec coverage need to be examined in detail to see the actual difference.
Interesting. I wonder if mqjs would make it feasible to massively parallelize JavaScript on the GPU. I’m looking for a way to run thousands of simultaneous JS interpreters, each with an isolated heap and some shared memory. There are some research projects [1, 2] in this direction, but they are fairly experimental.
>> Arrays cannot have holes. Writing an element after the end is not allowed:
a = []
a[0] = 1; // OK to extend the array length
a[10] = 2; // TypeError
If you need an array like object with holes, use a normal object instead
Guess I'm a bit fuzzy on this, I wouldn't use numeric keys to populate a "sparse array", but why would it be a problem to just treat it as an iterable with missing values undefined? Something to do with how memory is being reserved in C...? If someone jumps from defining arr[0] to arr[3] why not just reserve 1 and 2 and inform that there's a memory penalty (ie that you don't get the benefit of sparseness)?
Guidance towards correct usage: eg. If you allow `a[10] = 2` and just make the Array dense, the user might not even realise the difference and will assume it's sparse. Next they perform `a[2636173747] = 3` and clog up the entire VM memory or just plain crash it from OOM. Since it's likely that the small indexes appear in testing and the large indexes appear in production, it is better to make the misunderstanding an explicit error and move it "leftwards" in time, so that it doesn't crash production at an inopportune moment.
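And the workaround the quoted docs suggest is cheap to use. I haven't checked which of these helpers exist in MicroQuickJS's subset, but in standard JavaScript it looks like this:

// When you genuinely want sparse, keyed storage, use a plain object (or a Map)
// instead of relying on array holes.
const sparse = {};
sparse[0] = 1;
sparse[2636173747] = 3; // just another property key; no giant allocation

// Iterate only the entries that actually exist:
for (const [key, value] of Object.entries(sparse)) {
  console.log(key, value); // "0" 1, then "2636173747" 3
}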
Very excited about this. I was programming an ESP32 for a project recently and was like, computer chips are fast enough, why can't I just write TypeScript?
As a fellow (but way junior) JavaScript engine developer I'm really happy to see the stricter mode, and especially Arrays being dense while Objects don't treat indexed properties specially at all: it is my opinion that this is where we should drive JavaScript towards, slow and careful though it may be.
In my engine Arrays are always dense from a memory perspective and Objects don't special case indexes, so we're on the same page in that sense. I haven't gotten around to creating the "no holes" version of Array semantics yet, and now that we have an existing version of it I believe I'll fully copy out Bellard's semantics: I personally mildly disagree with throwing errors on over-indexing since it doesn't align with TypedArrays, but I'd rather copy an existing semantic than make a nearly identical but slightly different semantic of my own.
Fabrice, Mr Bellard, O Indefatigable One, if you are reading this, I would love for you to make a JavaScript that compiles to assembly and works across Windows PE, macOS and Linux. Surrendering the various efficiencies of the V8 JIT bytecode in favor of AOT is entirely acceptable for the concision, speed and the chance to "begin again" that this affords. In fact, I believe you may already be working on such an idea! If you are not (highly doubtful) I encourage you to ponder it, and if we are so lucky and the universe wills it, you shall turn the hand of your incomparable craftsmanship upon this worthy goal, and doubtless such a magnificent creation shall be realized by you in a surprisingly short amount of time!
Not sure about the impact of these, I guess it depends on the context where this engine is used. But there already seem to be exploits for the engine:
As a long-time Espruino user I was immediately interested.
At first glance Espruino has broader coverage including quite a bit of ES6 and even up to parts of ES2020. (https://www.espruino.com/Features). And obviously has a ton of libraries and support for a wide range of hardware.
For a laugh, and to further annoy the people annoyed by @simonw's experiments, I got Cursor to butcher it and run as a REPL on an ESP32-S3 over USB-Serial using ESP-IDF.
Sounds like this would have been a good option for PLJS in PostgreSQL (currently using QuickJS); not sure if it'd be appropriate to consider a switch or if that would/could improve availability... IMO interaction with JSON(B) and other JSON and objects is the biggest usefulness.
Figma for example used QuickJS, the prior version of the library this post is about, to sandbox user authored Javascript plugins: https://www.figma.com/blog/an-update-on-plugin-security/
It's pretty handy for things like untrusted user authored JS scripts that run on a user's client.
https://www.macplus.net/depeche-82364-interview-le-createur-...
https://www.mo4tech.com/fabrice-bellard-one-man-is-worth-a-t... (few quotes, more like a profile piece)
He keeps a low profile and lets his work speak for itself.
He really is brilliant.
One such award is the Turing Award [1], given "for contributions of lasting and major technical importance to computer science."
[1] https://en.wikipedia.org/wiki/Turing_Award
AIUI the Turing award is primarily CS focused.
Being an engineer and coding at this stage/level is just remarkable- sadly this trade craft is missing in most (big?) companies as you get promoted away into oblivion.
Now I know that Markdown generally can include HTML tags, so it would probably have to be somewhat restricted.
It could make it possible to implement a second web, compatible with simple browsers.
With a markdown over HTTP browser I could already almost browse Github through the READMEs and probably other websites.
Markdown is really a loved and now quite popular format. It is sad gemini created a separate closed format instead of just adopting it.
Browsers are complex because they solve a complex problem: running arbitrary applications in a secure manner across a wide range of platforms. So any "simple" browser you can come up with just won't work in the real world (yes, that means being compatible with websites that normal people use).
No, new adhering websites would emerge and word of mouth would do the rest: normal people would see this fast nerd-web and want to be rid of their bloated day-to-day monster of a web life.
One can still hope...
Oh right. 99% of people don't do even that, much less switch their life over to entirely new websites.
In 2025, depending on the study, it is said that 31.5~42.7% of internet users now block ads. Nearly one-third of Americans (32.2%) use ad blockers, with desktop leading at 37%.
(There are even a lot of developers who will drop a feature outright as soon as you can get 10% of users to bring its stats on caniuse.com down below ~90%.)
I understand this has been tried before (flash, silverlight, etc). They weren't bad ideas, they were killed because of companies that were threatened by the browser as a standard target for applications.
https://en.wikipedia.org/wiki/Progressive_enhancement
But it was not insane, and it represented a clarity of thought that then went missing for decades. Several things that were in WML are quite reminiscent of interactions designed in web components today.
Gemini is not a good or sensible design. It's reactionary more than it is informed.
The embedded use case is obvious, but it'd also be excellent for things like documentation — with such a browser you could probably have a dozen+ doc pages open with resource usage below that of a single regular browser tab. Perfect for things that you have sitting open for long periods of time.
Also, legacy machines couldn't run it as fast as they could.
For a “lite web” browser that’s built for a thin, select slice of the web stack (HTML/CSS/JS), dragging around the heft of a full fat JS engine like V8 is extreme overkill, because it’s not going to be running things like React but instead enabling moderate use of light enhancing scripts — something like a circa-2002 website would skew toward the heavy side of what one might expect for a “lite web” site.
The JS engine for such a browser could be trimmed down and aggressively optimized, likely even beyond what has been achieved with MQJS and similar engines, especially if one is willing to toss out legacy compatibility and not keep themselves beholden to every design decision of standard JS.
aziis98 | 22 hours ago
Or maybe just make it all a single lispy language
1313ed01 | 22 hours ago
Work towards an eventual feature freeze and final standardisation of the web would be fantastic though, and a huge benefit to pretty much everyone other than maybe the Chrome developers.
fud101 | 4 hours ago
GaryBluto | an hour ago
zamadatix | a day ago
frabert | a day ago
lioeters | a day ago
MobiusHorizons | 23 hours ago
simonw | a day ago
I had Claude Code for web figure out how to run this in a bunch of different ways this morning - I have working prototypes of calling it as a Python FFI library (via ctypes), as a Python compiled module and compiled to WebAssembly and called from Deno and Node.js and Pyodide and Wasmtime https://github.com/simonw/research/blob/main/mquickjs-sandbo...
PR and prompt I used here: https://github.com/simonw/research/pull/50 - using this pattern: https://simonwillison.net/2025/Nov/6/async-code-research/
simonw | a day ago
No matter how much you hate LLM stuff I think it's useful to know that there's a working proof of concept of this library compiled to WASM and working as a Python library.
I didn't plan to share this on HN but then MicroQuickJS showed up on the homepage so I figured people might find it useful.
(If I hadn't disclosed I'd used Claude for this I imagine I wouldn't have had any down-votes here.)
colesantiago | a day ago
In this particular case AI has nothing to do with Fabrice Bellard.
We can have something different on HN like what Fabrice Bellard is up to.
You can continue AI posting as normal in the coming days.
simonw | a day ago
... and that it provides a useful sandbox in that you can robustly limit both the memory and time allowed, including limiting expensive regular expression evaluation?
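For anyone wondering what "expensive regular expression evaluation" means here: catastrophic backtracking. A quick illustration of the problem in Python's re module, purely as an illustration and nothing to do with MicroQuickJS's own regex engine:

    import re, time

    pattern = re.compile(r'^(a+)+$')          # nested quantifiers invite backtracking

    for n in (20, 22, 24, 26):
        s = 'a' * n + 'b'                     # the trailing 'b' forces every split to be tried
        t0 = time.perf_counter()
        pattern.match(s)                      # returns None, but only after ~2**n attempts
        print(n, round(time.perf_counter() - t0, 3), 'seconds')

    # Each extra character roughly doubles the runtime, which is why a sandbox
    # wants a hard wall-clock or instruction budget rather than trusting the
    # regex to finish on its own.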
I included the AI bit because it would have been dishonest not to disclose how I used AI to figure this all out.
eichin | a day ago
simonw | a day ago
I'm currently on a multi-year side-quest to find safe ways to execute untrusted user-provided code in my Python and web applications.
As such, I pay very close attention to any new language or library that looks like it might be able to provide a robust sandbox.
MicroQuickJS instantly struck me as a strong candidate for that, and initial prototyping has backed that up.
None of that was clear from my original comment.
Imustaskforhelp | 23 hours ago
https://github.com/libriscv/libriscv (I talked with the author of this project, fwsgonzo, who is amazing), and they told me that, of the sandboxes they know, this one has the lowest latency, at only a minor cost in performance.
Btw, for sandboxing, KVM itself feels good too. I had discussed it with them in their Discord server when they mentioned they were working on a minimal KVM server, which has since been open sourced (https://github.com/varnish/tinykvm)
Honestly Simon, Deno hosting / the way Deno works is another interesting tidbit for sandboxing. I wish something like Deno's sandboxing capabilities came to Python, since Python fans could appreciate it.
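As a rough sketch of what "Deno-style sandboxing from Python" can look like today: shell out to a deno binary (assumed to be on PATH), lean on its deny-by-default permission model, and enforce a wall-clock limit from the Python side. Treat the details here as assumptions rather than a recipe:

    import subprocess

    def run_untrusted_js(source: str, timeout_s: float = 2.0) -> str:
        # Deno denies file, network and env access unless --allow-* flags are given;
        # --no-prompt turns interactive permission prompts into hard errors.
        result = subprocess.run(
            ["deno", "run", "--no-prompt", "-"],   # "-" = read the script from stdin
            input=source,
            capture_output=True,
            text=True,
            timeout=timeout_s,                     # crude wall-clock limit from the host side
        )
        if result.returncode != 0:
            raise RuntimeError(result.stderr)
        return result.stdout

    print(run_untrusted_js("console.log(1 + 2)"))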
I will try to look more into your GitHub repository too once I have more free time.
claar | 23 hours ago
AtlasBarfed | 23 hours ago
Unfortunately it means those languages will be the permanent coding platforms.
justatdotin | 21 hours ago
not really,
I suspect training volume has a role in debugging a certain class of errors, so there is an advantage to python/ts/sql in those circumstances: if, as an old boss once told me, you code by the bug method :)
The real problems I've had that hint at training data vs logic have been with poorly documented old versions of current languages.
To me, the most amazing capability is not the code they generate but the facility for natural language analysis.
my experience is that agent tools enable polyglot systems because we can now use the right tool for the job, not just the most familiar.
alabhyajindal | a day ago
Imustaskforhelp | a day ago
Maybe we HN users have minds in sync :)
https://news.ycombinator.com/item?id=46359396#46359695
Have a nice day! Awesome stuff, I'll keep an eye on your blog. Does your blog/website use Mataroa by any chance? There are some similarities, even though both are minimalist. Overall, nice!
alabhyajindal | 23 hours ago
Thanks a lot for checking out my blog/project. Have a great day!
KomoD | 23 hours ago
Maybe someone finds it useful: https://paste.ubuntu.com/p/rD6Dz7hN2V/
Imustaskforhelp | 21 hours ago
Thanks for sharing it.
keeganpoppen | a day ago
alabhyajindal | 23 hours ago
My issue is that the cost, in terms of time, of these experiments has really gone down with LLMs. Earlier, if someone played around with the posted project, we knew they had spent a reasonable amount of time on it, and thus cared about the subject. With LLMs, this is not the case.
TheTaytay | 13 hours ago
rpdillon | 22 hours ago
TheTaytay | 13 hours ago
Compiling this to wasm and calling it from Python as a sandboxed runtime isn't tangential. I wouldn't have known from reading the project's readme that this was possible, and it's a really interesting use case. We might as well get mad at simonw for using an IDE while he explored the limits of a new library.
Imustaskforhelp | a day ago
I read this post of yours https://simonwillison.net/2025/Dec/18/code-proven-to-work/ and although there is a point to be made that what you are doing isn't a job, and I myself create prototypes of code using AI, long term (in my opinion) what really matters is maintenance and accountability (like your article says, in a way: that I can pinpoint a person who's responsible for the code working).
If I found a bug right now, I wouldn't blame it on you but on the AI, and I have a varying amount of trust in it.
My opinion on the matter is that AI can be considered good for prototyping, but long term it definitely isn't, and I am sure you share a similar viewpoint.
I think AI is so polarizing that any nuance stops existing. Read my recent comment (although warning, it's long) (https://news.ycombinator.com/item?id=46359684)
Perhaps you could write a blog post about the nuance of AI? I imagine a lot of people might share a similar AI policy, where it's okay to tinker with it. I am part of the new generation and, truth be told, I don't think there is much incentive long term not to use AI, because using AI just feels so lucrative, especially for youngsters.
I am 17 years old and going into a decent college (with, might I add, immense competition to begin with). I have passion for such topics, only to get dissuaded because the benchmark of solving assignments etc. is now met by AI, and the signal from universities themselves is shrinking. It hasn't shrunk to the point that they are irrelevant, but rather you need a university to try to get a job, yet companies have frozen hiring, which some attribute to LLMs.
If you ask me, long term it feels like more people might associate themselves with hobbyist computing, even using AI (sort of like PewDiePie, to be honest), without being in the industry.
I am not sure what the future holds for me (or for any of us, as a matter of fact), but I guess the point I am trying to make is that there is nuance to the discussion from both sides.
Have a nice day!
halfmatthalfcat | a day ago
If you care that much, write a blog post and post that. We don't need low-effort LLM show-and-tell all day, every day.
simonw | 22 hours ago
halfmatthalfcat | 22 hours ago
simonw | 21 hours ago
yeasku | 11 hours ago
lioeters | 18 hours ago
petercooper | a day ago
claar | a day ago
Your github research/ links are an interesting case of this. On one hand, late AI adopters may appreciate your example prompts and outputs. But it feels like trivially reproducible noise to expert LLM users, especially if they are unaware of your reputation for substantive work.
The HN AI pushback then drowns out your true message in favor of squashing perceived AI fluff.
simonw | a day ago
My simonw/research GitHub repo is deliberately separate from everything else I do because it's entirely AI-generated. I wrote about that here: https://simonwillison.net/2025/Nov/6/async-code-research/#th...
This particular case is a very solid use-case for that approach though. There are a ton of important questions to answer: can it run in WebAssembly? What's the difference to regular JavaScript? Is it safe to use as a sandbox against attacks like the regex thing?
Those questions can be answered by having Claude Code crunch along, produce and execute a couple of dozen files of code and report back on the results.
I think the knee-jerk reaction pushing back against this is understandable. I'd encourage people not to miss out on the substance.
lossolo | 23 hours ago
If someone wants to read your blog, they will, they know it exists, and some people even submit your new articles here. There's no need to do what you're doing. Every day you're irritating more people with this behavior, and eventually the substance won't matter to them anymore, so you're acting against your own interests.
Unless you want people to develop the same kind of ad blindness mechanism, where they automatically skip anything that looks like self promotion. Some people will just see a comment by simonw and do the same.
A lot of people have told you this in many threads, but it seems you still don’t get it.
simonw | 20 hours ago
lossolo | 19 hours ago
You're not pushing against an arbitrary taboo where people dislike self links in principle. People already accept self links on HN when they're occasional and clearly relevant. What people are reacting to is the pattern when "my answer is a link to my site" becomes your default state, it stops reading like helpful reference and starts reading like your distribution strategy.
And that's why "I'm determined to normalize it" probably won't work, because you can't normalize your way out of other people's experience of friction. If your behavior reliably adds a speed bump to reading threads, forcing people to context-switch, click out, and wonder if they're being marketed to, then the community will develop the shortcut I mentioned in my previous comment, which basically is: this is self-promo, so just ignore.
If your goal is genuinely to share useful ideas, you're better off meeting people where they are: put the relevant 2-6 sentences directly in the comment, and then add something like "I wrote more about it on my blog" or whatever and if anyone is interested they will scroll through your blog (you have it in your profile so anyone can find it with one click) or ask for a link.
Otherwise you're not "normalizing" anything, you're training readers to stop paying attention to you. And I assure you once that happens, it's hard to undo, because people won't relitigate your intent every time. They'll just scroll. It's a process that's already started, but you can still reverse it.
simonw | 18 hours ago
I'm actively pushing back against the "don't promote your site, don't link to it, restate your content in the comments instead" thing.
I am willing to take on risk to my personal reputation and credibility in support of my goal here.
rpdillon | 17 hours ago
lossolo | 16 hours ago
If everyone starts dropping their "relevant content" in the comments, most of it won't be relevant, and a lot of it will be spam. People don't have time to sift through hundreds of links in the comments and tens of thousands of words when the whole point of HN is that discussion and curation work in the opposite direction.
If your content is good, someone else will submit it as a story. Your blog is probably already read by thousands of people from HN, if they think a particular post belongs in the discussion in some comment, they'll link it. That's why other popular HN users who blog don't constantly promote or link their own content here, unlike you. They know that you don't need to do it yourself, and doing it repeatedly sends the wrong signal (which is obvious and plenty of socially aware people have already pointed out to you in multiple threads).
Trying to normalize that kind of self promoting is like normalizing an annoying mosquito buzz, most people simply don't want it and no amount of "normalizing" will change that.
TheTaytay | 13 hours ago
The difference between LinkedIn slop and good content is not the presence or absence of a link to one’s own writing, but the substance and quality of the writing.
If simonw followed these rules you want him to follow, he would be forced to make obscure references to a blog post that I would then need to Google or hope that his blog post surfaces on HN in the next few days. It seems terribly inefficient.
I agree with you that self-promotion is off-putting, and when people post their competing project on a Show HN post, I don’t click those links. But it’s not because they are linking to something they wrote. It’s because they are engaged in “self-promotion”, usually in an attempt to ride someone else’s coattails or directly compete.
If simonw plugged datasette every chance he got, I’d be rolling my eyes too, but linking to his related experiments and demos isn’t that.
rpdillon | 22 hours ago
I'd chalk up the -4 to generic LLM hate, but I find examples of where LLMs do well to be useful, so I appreciated your post. It displays curiosity, and is especially defensible given that your site has no ads, loads blazingly fast, is filled with HN-relevant content, and doesn't even attempt to sell anything.
gaigalas | 22 hours ago
You can safely assume so. Bellard is the creator of jslinux. The news here would be if it _didn't_.
> What's the difference to regular JavaScript?
It's in the project's README!
> Is it safe to use as a sandbox against attacks like the regex thing?
This is not a sandbox design. It's a resource-constrained design like cesanta/mjs.
---
If you vibe coded a microcontroller emulation demo, perhaps there would be less pushback.
SeanAnderson | a day ago
garganzol | 22 hours ago
A lot of HN people got cut by AI in one way or another, so they seem to have personal beefs with AI. I am talking about not only job shortages but also general humbling of the bloated egos.
foobarchu | 21 hours ago
I'm gonna give you the benefit of the doubt here. Most of us do not dislike genAI because we were fired or "humbled". Most of us dislike it because of a) the terrible environmental impacts, b) the terrible economic impacts, and c) the general non-production-readiness of results once you get past common, well-solved problems.
Your stated understanding comes off a little bit like "they just don't like it because they're jealous".
wartywhoa23 | 11 hours ago
Especially so when it concerns AI theft of human music and visual art.
"Those pompous artists, who do they think they are? We'll rob them of their egos".
The problem is that these ego-accusations don't quite come from egoless entities.
garganzol | 6 hours ago
AI brings clarity. This results in a lot of pain for those who tried to hijack the game in one way or another.
From the psychological point of view, AI is a mirror of one's personality. Depending on who you are, you see different reflections: someone sees a threat, others see the enlightenment.
wartywhoa23 | 4 hours ago
Do you mean that kind of clarity when no audio/video evidence is a proof of anything anymore?
> This results in a lot of pain for those who tried to hijack the game in one way or another.
I'm not quite sure if any artists, designers, musicians and programmers whose work was used to train AI without their consent tried to manipulate anyone or hijack anything. Care to elaborate?
alex_suzuki | 21 hours ago
TheTaytay | 13 hours ago
(Keep posting please. Downvotes due to mentioning LLMs will be perceived as a quaint historic artifact in the not so distant future…)
wartywhoa23 | 10 hours ago
TeodorDyakov | 6 hours ago
johnfn | an hour ago
yeasku | 11 hours ago
johnfn | an hour ago
dangoodmanUT | 6 hours ago
johnfn | an hour ago
simonw | 3 minutes ago
simonw | 2 minutes ago
jmull | an hour ago
I would guess people don't know how you expect them to evaluate this, so it comes off as spamming us with a bunch of AI slop.
(That C can be compiled to WASM or wrapped as a python library isn't really something that needs a proof-of-concept, so again it could be understood as an excuse to spam us with AI slop.)
MobiusHorizons | 23 hours ago
simonw | 22 hours ago
Just having a WebAssembly engine available isn't enough for this - something has to take that user-provided string of JavaScript and execute it within a safe sandbox.
Generally that means you need a JavaScript interpreter that has itself been compiled to WebAssembly. I've experimented with QuickJS itself for that in the past - demo here: https://tools.simonwillison.net/quickjs - but MicroQuickJS may be interesting as a smaller alternative.
If there's a better option than that I'd love to hear about it!
MobiusHorizons | 21 hours ago
simonw | 21 hours ago
The web is littered with libraries that half do that and then have a note in the README that says "do not rely on this as a secure sandbox".
MobiusHorizons | 20 hours ago
It's a little less clear how you would do this from node, but the v8 embedding instructions should work https://v8.dev/docs/embed even if nodejs is already a copy of v8.
[0]: https://github.com/cloudflare/stpyv8 [1]: https://docs.pythonmonkey.io
simonw | 20 hours ago
I'd seen PyV8 and ruled it out as very unmaintained.
One of my requirements for a sandbox is that it needs to be maintained by a team of professionals who are running a multi-million dollar business on it. Cloudflare certainly counts! I wonder what they use STPyV8 for themselves?
... on closer inspection it doesn't seem to have the feature I care most about: the ability to constrain memory usage (see comment here https://github.com/cloudflare/stpyv8/blob/57e881c7fbe178c598...) - and there's no built-in support for time limits either, you have to cancel tasks from a separate thread: https://github.com/cloudflare/stpyv8/issues/112
PythonMonkey looks promising too: the documentation says "MVP as of September 2024" so clearly it's intended to be stable for production use at this point.
MobiusHorizons | 13 hours ago
simonw | 13 hours ago
1. The ability to restrict the amount of memory that the sandboxed code can use
2. The ability to set a reliable time limit on execution after which the code will be terminated
My third requirement is that a company with real money on the line and a professional security team is actively maintaining the library. I don't want to be the first person to find out about any exploits!
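For what it's worth, the first two requirements are exactly the knobs a WebAssembly host can offer regardless of which JS engine is inside the module. A minimal sketch with wasmtime-py follows; the mquickjs.wasm filename and the _start export are assumptions about whatever WASI build you produce, not something the project documents, and the fuel call is set_fuel on recent wasmtime-py versions (add_fuel on older ones):

    from wasmtime import Config, Engine, Store, Module, Linker, WasiConfig, Trap

    config = Config()
    config.consume_fuel = True                      # deterministic execution budget (time limit)
    engine = Engine(config)

    store = Store(engine)
    store.set_limits(memory_size=32 * 1024 * 1024)  # cap guest linear memory at ~32 MB
    store.set_fuel(50_000_000)                      # trap once the budget is spent
    store.set_wasi(WasiConfig())

    module = Module.from_file(engine, "mquickjs.wasm")   # hypothetical WASI build of the engine
    linker = Linker(engine)
    linker.define_wasi()
    instance = linker.instantiate(store, module)

    try:
        instance.exports(store)["_start"](store)    # hypothetical entry point
    except Trap as trap:
        print("guest stopped:", trap)               # out-of-fuel / out-of-memory shows up here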
santadays | 18 hours ago
https://www.graalvm.org/latest/security-guide/sandboxing/
simonw | 4 hours ago
One catch: the sandboxing feature isn't in the "community edition", so only available under the non-open-source (but still sometimes free, I think?) Oracle GraalVM.
kettlecorn | 17 hours ago
In a browser environment it's much easier to sandbox Wasm successfully than to sandbox JS.
MobiusHorizons | 13 hours ago
EDIT: found this from your other comment: https://www.figma.com/blog/an-update-on-plugin-security/ they do not address any alternatives considered.
incognito124 | 20 hours ago
sublimefire | 8 hours ago
But there are other ways, e.g. run the logic isolated within gvisor/firecracker/kata.
[1] github.com/microsoft/CCF under src/js/core
MattGrommes | a day ago
hebejebelus | a day ago
niutech | 4 hours ago
halfmatthalfcat | a day ago
* espruino (https://www.espruino.com/)
* elk (https://github.com/cesanta/elk)
* DeviceScript (Microsoft Research's now defunct effort, https://github.com/microsoft/devicescript)
niutech | 4 hours ago
15155 | a day ago
cxr | 23 hours ago
<https://www.moddable.com/faq#comparison>
If you take a look at the MicroQuickJS README, you can see that it's not a full implementation of even ES5, and it's incompatible in several ways.
Just being able to run JS also isn't going to automatically give you any bindings for the environment.
chunkles | a day ago
self_awareness | a day ago
One strategy is to wait for the US to wake up, then post during their morning.
The other strategy is to post the same thing periodically until there is a response.
Wowfunhappy | 20 hours ago
pizlonator | a day ago
You can’t restrict JS that way on the web because of compatibility. But I totally buy that restricting it this way for embedded systems will result in something that sparks joy
groundzeros2015 | a day ago
pizlonator | 23 hours ago
I bet MQJS will also be very popular. Quite impressive that bro is going to have two JS engines to brag about in addition to a lot of other very useful things!
alexdowad | 21 hours ago
Yes, quite! Monsieur Bellard is a legend of computer programming. It would be hard to think of another programmer whose body of public work is more impressive than FB.
Unfortunate that he doesn't seem to write publicly about how he thinks about software. I've never seen him as a guest on any podcast either.
I have long wondered who the "Charlie Gordon" who seems to collaborate with him on everything is. Googling the name brings up a young ballet dancer from England, but I doubt that's the person in question.
Uehreka | 18 hours ago
Not many, but these do come to mind: Linus Torvalds, Ken Thompson, Dennis Ritchie, Donald Knuth, Rob Pike. But yeah, it’s rarefied air up there.
rramadass | 15 hours ago
See also - https://news.ycombinator.com/item?id=46372370
rramadass | 15 hours ago
I am of the firm belief that "Monsieur Fabrice Bellard" is not one person but a group of programmers writing under this nom de plume like "Nicolas Bourbaki" was in Mathematics ;-)
I don't know of any other programmer who has similar breadth and depth in so many varied domains. Just look at his website - https://bellard.org/ and https://en.wikipedia.org/wiki/Fabrice_Bellard No self-aggrandizing stuff etc. but only tech. He is an ideal for all of us to strive for.
Watson's comment on how Sherlock Holmes made him feel can be rephrased in this context as;
"I trust that I am not more dense than my neighbours [i.e. fellow programmers], but I was [and am] always oppressed with a sense of my own stupidity in my dealings with [the works of Fabrice Bellard]."
PS: Fabrice Bellard: Portrait of a Super-Productive Programmer - https://web.archive.org/web/20210128085300/https://smartbear...
PPS: Fabrice Bellard: A Computer Science Pioneer - https://www.scribd.com/document/511765517/Fabrice-Bellard-In... (pretty good long article)
lioeters | 14 hours ago
rramadass | 13 hours ago
TimTheTinker | an hour ago
virtualwhys | 13 hours ago
Maybe Bellard identifies with the genius, but fears the loss of it.
larodi | 7 hours ago
dotandimet | 4 hours ago
jacobp100 | 23 hours ago
pizlonator | 23 hours ago
achierius | 14 hours ago
sophiebits | 3 hours ago
andai | 8 hours ago
Well, now we can run this thing in WASM and get, I imagine, sane runtime errors :)
ddtaylor | a day ago
- FFmpeg: https://bellard.org
- QEMU: https://bellard.org/qemu/
- JSLinux: https://bellard.org/jslinux/
- TCC: https://bellard.org/tcc/
- QuickJS: https://bellard.org/quickjs/
Legendary.
justmarc | a day ago
sedatk | a day ago
lioeters | 21 hours ago
It's similar to the respect I have for the work of Anders Hejlsberg, who created Turbo Pascal, with which I learned to program; and also C# and TypeScript.
vatsachak | a day ago
Guy is a genius. I hope he tries Rust someday
hsaliak | a day ago
vatsachak | 21 hours ago
elevation | 23 hours ago
The design intent of Rust is a powerful idea, and Rust is the best of its class, but the language itself is under-specified[1] which prevents basic, provably-correct optimizations[0]. At a technical level, Rust could be amended to address these problems, but at a social level, there are now too many people who can block the change, and there's a growing body of backwards compatibility to preserve. This leads reasonable people to give up on Rust and use something else[0], which compounds situations like [2] where projects that need it drop it because it's hard to find people to work on it.
Having written low-level high-performance programs, Fabrice Bellard has the experience to write a memory safe language that allows hardware control. And he has the faculties to assess design changes without tying them up in committee. I covet his attentions in this space.
[0]: https://databento.com/blog/why-we-didnt-rewrite-our-feed-han...
[1]: https://blog.polybdenum.com/2024/06/07/the-inconceivable-typ...
[2]: https://daniel.haxx.se/blog/2024/12/21/dropping-hyper/
Lerc | 23 hours ago
The principle of zero cost abstractions avoids a slow slide of compromising abstraction cost, but I think there could be small cost abstractions that would make for a more pragmatic language. Having Rust to point at to show what performance you could be achieving would aid in avoiding bloating abstractions.
saagarjha | 18 hours ago
I don’t think it can, no.
userbinator | 15 hours ago
d4rkp4ttern | 3 hours ago
c0brac0bra | a day ago
ddtaylor | a day ago
simonw | a day ago
ronsor | a day ago
brabel | 22 hours ago
groundzeros2015 | a day ago
drschwabe | a day ago
That was a sort of defining moment in my personal coding; a lot of my websites and apps are now single file source wherever possible/practical.
zdragnar | 23 hours ago
I've had the inverse experience dealing with a many thousand line "core.php" file way back in the day helping debug an expressionengine site (back in the php 5.2ish days) and it was awful.
Unless you have an editor which can create short links in a hierarchical tree from semantic comments to let you organize your thoughts, digging through thousands of lines of code all in the same scope can be exceptionally painful.
antirez | 23 hours ago
The reason FB (and myself, for what it is worth) often write single-file large programs (Redis was split after N years of being a single file) is that with enough programming experience you know one very simple thing: complexity is not about how many files you have, but about the internal structure and conceptually separated module boundaries.
At some point you mainly split for compilation time and to better orient yourself, instead of having to seek around a very large mega-file. Pointing the finger at a well-written program because it's a single file strongly correlates with not being a very expert programmer.
uecker | 22 hours ago
lelanthran | 20 hours ago
You probably don't need this, but ...
https://www.lelanthran.com/chap13/content.html
neomantra | 17 hours ago
It reminded me how important organization is to a project and certainly influenced me, especially applied in areas like Golang package design. Deeply appreciate it all, thank you.
sfpotter | 22 hours ago
It may not be immediately obvious how to approach modularity since it isn't directly accomplished by explicit language features. But, once you know what you're doing, it's possible to write very large programs with good encapsulation, that span many files, and which nevertheless compile quite rapidly (more or less instantaneously for an incremental build).
I'm not saying other languages don't have better modularity, but to say that C's is bad misses the mark.
drschwabe | 20 hours ago
You can do a huge website entirely in a single file with NodeJS; you can stick re-usable templates in vars and abuse multi-line strings (template literals) for all your various content and markup. If you get crafty you can embed client-side code in your 'server.js' too, or take it to the next level and use C++ multi-line string literals to wrap all your JS, i.e. client.js, server.js and package.json, in a single .cpp file.
lelanthran | 20 hours ago
You don't program much in C, do you?
kvemkon | 22 hours ago
qznc | 22 hours ago
https://sqlite.org/src/doc/trunk/README.md
kvemkon | 22 hours ago
nine_k | 22 hours ago
kvemkon | 22 hours ago
kzrdude | 20 hours ago
nine_k | 22 hours ago
benterix | 23 hours ago
groundzeros2015 | 22 hours ago
We underestimate how inefficient working in teams is compared with individuals. We don't value skill and experience and how someone who understands a problem well can be orders of magnitude more productive.
nxobject | 23 hours ago
Honestly, it's a reminder that, for the time it takes, it's incredibly fun to build from scratch and understand through-and-through your own system.
Although you have to take detours from, say, writing a bytecode VM, to writing FP printing and parsing routines...
pjc50 | 23 hours ago
It doesn't necessarily translate to people who are less brilliant.
chrisweekly | 23 hours ago
rramadass | 15 hours ago
You are absolutely wrong here. Most of us wish that somebody would get him to sit for an in-depth interview and/or get him to write a book on his thinking, problem-solving approach, advice etc. i.e. "we want to pick his brain".
But he is not interested and seems to live on a different plane :-(
noufalibrahim | 12 hours ago
Can you elaborate a little about the methods you mention and how you analysed them?
groundzeros2015 | 3 hours ago
So much of the discussion here is about professional practice around software. You can become an expert in this stuff without ever actually learning to write code. We need to remember that most of these tools are a cost that only pays off when managing collaboration between teams. The smaller the team, the less stuff you need.
I also have insights from reading his C style but they may be of less interest.
I think it’s also impressive that he identifies a big and interesting problem to go after that’s usually impactful.
MontyCarloHall | a day ago
rossant | 23 hours ago
attractivechaos | 21 hours ago
thechao | 21 hours ago
zipy124 | 9 hours ago
It's kind of crazy it ever became an accepted world view, given how every field has a famous 10xer, whether it's someone who dominates in sport, an academic like Paul Erdős or Euler, a programmer like Fabrice or Linus Torvalds, a leader like Napoleon, or any number of famous inventors throughout history.
didip | a day ago
encom | 23 hours ago
The math checks out.
mightybyte | 22 hours ago
https://www.ioccc.org/authors.html#Fabrice_Bellard
rasz | 22 hours ago
played with implementing analog modem DSP in software in 1999 (linmodem is ~50-80% there, sadly never finished)
probably leading to
played with implementing SDR (again DSP) using VGA output to transmit DVB-T/NTSC/PAL in 2005
probably leading to
Amarisoft SDR 5G base station, commercial product started in 2012 - his current job https://www.amarisoft.com/company/about-us
userbinator | 22 hours ago
https://bellard.org/ffasn1/
cryptonector | 10 hours ago
PaulDavisThe1st | 20 hours ago
I would not want to dismiss or diminish by any amount the incredible work he has done. It's just interesting to me that the problems he appears to pick generally take the form of "user sets up the parameters, the program runs to completion".
fullstackchris | 13 hours ago
tracker1 | 2 hours ago
kallistisoft | 19 hours ago
Real people have to sleep at some point!
xgkickt | 18 hours ago
bborud | 7 hours ago
mobilio | 4 hours ago
Reubend | a day ago
MobiusHorizons | 23 hours ago
- Date: only Date.now() is supported. [0]
I certainly understand not shipping the JS date library, especially in an embedded environment, both for code-size and practicality reasons (it's not a great date library), but that would be an issue in many projects (even if you don't use it, libraries you use almost certainly do).
https://github.com/bellard/mquickjs/blob/main/README.md#:~:t...
Reubend | 20 hours ago
MobiusHorizons | 13 hours ago
As I read it, these are supported es5 extensions, not missing as part of stricter mode.
eichin | a day ago
ea016 | a day ago
[0] https://en.wikipedia.org/wiki/Jeff_Atwood
tacone | 22 hours ago
arendtio | 21 hours ago
https://bellard.org/jslinux/vm.html?cpu=riscv64&url=fedora33...
kzrdude | 20 hours ago
tombert | 19 hours ago
I am envious that I will never be anywhere near his level of productivity.
umvi | 17 hours ago
avaer | 16 hours ago
I forgot about FFmpeg (thanks for the reminder), but my first thought was "yup that makes perfect sense".
tombert | 15 hours ago
keepamovin | 15 hours ago
AI will let 10,000 Bellards bloom - or more.
anthk | 11 hours ago
underdeserver | 20 hours ago
https://www.destroyallsoftware.com/talks/the-birth-and-death...
mtlynch | a day ago
Here's the commit history for this project
b700a4d (2025-12-22T1420) - Creates an empty project with an MIT license
295a36b (2025-12-22T1432) - Implements the JavaScript engine, the C API, the REPL, and all documentation
He went from zero to a complete JS implementation in just 12 minutes!
I couldn't do that even if you gave me twice as much time.
Okay, but seriously, this is super cool, and I continue to be amazed by Fabrice. I honestly do think it would be interesting to do an analysis of a day or week of Fabrice's commits to see if there's something about his approach that others can apply besides just being a hardworking genius.
tremon | 23 hours ago
mordnis | 22 hours ago
voxelghost | 19 hours ago
https://www.xkcd.com/378/
andoando | 23 hours ago
ronsor | 23 hours ago
schappim | a day ago
antirez | a day ago
cxr | 23 hours ago
People go through all this effort to separate parsing and lexing, but never exploit the ability to just plug in a different lexer that allows for e.g. "{" and "}" tokens instead of "then" and "end", or vice versa.
1. <https://hn.algolia.com/?type=comment&prefix=true&query=cxr%2...>
2. <https://old.reddit.com/r/Oberon/comments/1pcmw8n/is_this_sac...>
nine_k | 23 hours ago
The problem with "skins" is that they create variety where people strive for uniformity to lower the cognitive load. OTOH transparent switching between skins (about as easy as changing the tab sizes) would alleviate that.
cibyr | 23 hours ago
dbdr | 22 hours ago
The idea of "skins" is apparently to push that even further by abstracting the concrete syntax.
lucketone | 22 hours ago
This has limits.
Files produced with tab=2 and others with tab=8 might end up with quite different results regarding nesting.
(pain is still on the menu)
philsnow | 18 hours ago
fc417fc802 | 14 hours ago
More than that, in the general case for common C like languages things should almost never be nested more than a few levels deep. That's usually a sign of poorly designed and difficult to maintain code.
Lisps are a notable exception here, but due to limitations (arguably poor design) with how the most common editors handle lines that contain a mix of tabs and spaces you're pretty much forced to use only spaces when writing in that family of languages. If anything that language family serves as case in point - code written with an indentation width that isn't to one's preference becomes much more tedious to adapt due to alternating levels of alignment and indentation all being encoded as spaces (ie loss of information which automated tools could otherwise use).
somat | 14 hours ago
fc417fc802 | 13 hours ago
Unfortunately the discussion tends to be somewhat complicated by the occasional (usually automated) code formatting convention that (imo mistakenly) attempts to change the level of indentation in scenarios where you might reasonably want to align an element with the preceding line. For example, IDEs for C like languages that will add an additional tab when splitting function arguments across multiple lines. Fortunately such cases are easily resolved but their mere existence lends itself to objections.
RHSeeger | 5 hours ago
All that being said, I've very much a "as long as everyone working on the code does it the same, I'll be fine" sort of person. We use spaces for everything, with defined indent levels, where I am, and it works just fine.
brabel | 23 hours ago
That's one of my hopes for the future of the industry: people will be able to just choose the code style and even syntax family (which you're calling a skin) they prefer when editing code, and it will be saved in whatever is the "default" for the language (or even something like the Unison Language: store the AST directly, which allows cool stuff like de-duplicating definitions and content-addressable code, an idea I first came across in the amazing talk by Joe Armstrong, "The mess we're in" [1]).
Rust, in particular, would perhaps benefit a lot given how a lot of people hate its syntax... but also Lua for people who just can't stand the Pascal-like syntax and really need their C-like braces to be happy.
[1] https://www.youtube.com/watch?v=lKXe3HUG2l4
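The "store the structure, not the text" idea can be sketched with nothing more than Python's standard library; this is only an illustration of the concept, not how Unison actually does it:

    import ast, hashlib

    source_a = "def add(x,y):   return x+y"
    source_b = "def add(x, y):\n    return x + y"

    tree_a, tree_b = ast.parse(source_a), ast.parse(source_b)

    def address(tree):
        # A content address computed over structure, so formatting is irrelevant.
        return hashlib.sha256(ast.dump(tree).encode()).hexdigest()[:12]

    print(address(tree_a) == address(tree_b))   # True: same definition, same address
    print(ast.unparse(tree_a))                  # re-rendered in one canonical style (3.9+)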
nine_k | 22 hours ago
Some languages have tools for more or less straightforward skinning.
Clojure to Tamil: https://github.com/echeran/clj-thamil/blob/master/src/clj_th...
C++ to distorted Russian: https://sizeof.livejournal.com/23169.html
dualogy | 10 hours ago
One of my pet "not today but some day" project ideas. In my case, I wanted to give Python/GDScript syntax to any and all of the curly languages (a potential boon to all users of non-Anglo keyboard layouts), one by one, via a VSCode extension that implements a virtual filesystem over the real one and translates the syntaxes back and forth during the load/edit/save cycle. Then run the usual live LSP in the background against the underlying real source files and resurface its results in the same extension, with line-number mapping etc.
Anyone, please steal this idea and run with it, I'm too short on time for it for now =)
Orygin | 7 hours ago
procaryote | 22 hours ago
rao-v | 22 hours ago
twic | 19 hours ago
kevin_thibedeau | 21 hours ago
IshKebab | 23 hours ago
idle_zealot | 23 hours ago
krackers | 22 hours ago
idle_zealot | 22 hours ago
aidenn0 | 21 hours ago
[edit]
minitech | 20 hours ago
- Every property access in JavaScript is semantically coerced to a string (or a symbol, as of ES6). All property keys are semantically either strings or symbols.
- Property names that are the ToString() of a 31-bit unsigned integer are considered indexes for the purposes of the following two behaviours:
- For arrays, indexes are the elements of the array. They’re the properties that can affect its `length` and are acted on by array methods.
- Indexes are ordered in numeric order before other properties. Other properties are in creation order. (In some even nicher cases, property order is implementation-defined.)
nvlled | 16 hours ago
chrisweekly | 23 hours ago
coliveira | 23 hours ago
jancsika | 14 hours ago
nine_k | 22 hours ago
IshKebab | 21 hours ago
cmrdporcupine | 20 hours ago
Lately, yes, Julia and R.
Lots of systems I grew up with were 1-indexed and there's nothing wrong with it. In the context of history, C is the anomaly.
I learned the Wirth languages first (and then later did a lot of programming in MOO, a prototype OO 1-indexed scripting language). Because of that early experience I still slip up and make off by 1 errors occasionally w/ 0 indexed languages.
(Actually both Modula-2 and Ada aren't strictly 1 indexed since you can redefine the indexing range.)
It's funny how orthodoxies grow.
nine_k | 20 hours ago
teo_zero | 20 hours ago
cmrdporcupine | 19 hours ago
fc417fc802 | 13 hours ago
IshKebab | 10 hours ago
There are only a few languages that are purely for beginners (LOGO and BASIC?) so it's a high cost to annoy experienced programmers for something that probably isn't a big deal anyway.
fc417fc802 | 12 hours ago
pklausler | 3 hours ago
bsder | 12 hours ago
The problem is that Lua is effectively an embedded language for C.
If Lua never interacted with C, 1-based indexing would merely be a weird quirk. Because you are constantly shifting across the C/Lua barrier, 1-based indices becomes a disaster.
bigstrat2003 | 18 hours ago
IshKebab | 6 hours ago
sltkr | 3 hours ago
rweichler | 23 hours ago
aidenn0 | 21 hours ago
le-mark | 19 hours ago
brabel | 23 hours ago
Lua was first released in 1993. I think that it's pretty conventional for the time, though yeah it did not follow Algol syntax but Pascal's and Ada's (which were more popular in Brazil at the time than C, which is why that is the case)!
Ruby, which appeared just 2 years later, departs a lot more, arguably without good reasons either? Perl, which is 5 years older and was very popular at the time, is much more "different" than Lua from what we now consider mainstream.
rwmj | 22 hours ago
Perl, Python, OCaml, Lua and Rust were all fine (Rust wasn't around in 2010 of course).
rurban | 11 hours ago
But since syck uses the ruby hashtable internally, I got stuck in the gem for a while. It fell out of their stdlib, and is not really maintained either. PHP had the latest updates for it. And perl (me) extended it to be more recursion safe, and added more policies (what to do on duplicate keys: skip or overwrite).
So the ruby bindings are troublesome because of its GC, which with threading now requires a global vm instance. And because of using the ruby alloc/free pairs.
PHP, perl, python, Lua, IO, cocoa: all no problem. Just ruby, because of its too-tight coupling. Looks like I finally have to decouple it from ruby.
zeckalpha | 21 hours ago
rapind | 20 hours ago
I doubt we ever would have heard about Ruby without its syntax decisions. From my understanding its entire raison d'être was readability.
rwmj | 10 hours ago
vidarh | 6 hours ago
nurettin | 15 hours ago
a-french-anon | 10 hours ago
jhgb | an hour ago
Not quite sure what you mean by that; all of Lua, Pascal, and Ada follow Algol's syntax much more closely than C does.
rwmj | 22 hours ago
andrewshadura | 22 hours ago
dontdoxxme | 20 hours ago
The Redis test suite is still written in Tcl: https://news.ycombinator.com/item?id=9963162 (although more recently antirez said somewhere he wished he'd written it in C for speed)
rogerbinns | 21 hours ago
For those not familiar with TCL, the C API is flavoured like main. Callbacks take a list of strings argv style and an argc count. TCL is stringly typed which sounds bad, but the data comes from strings in the HTML and script blocks, and the page HTML is also text, so it fits nicely and the C callbacks are easy to write.
[1] Mosaic Netscape 0.9 was released the week before
spacechild1 | 22 hours ago
Lua is a pretty old language. In 1993 the world had not really settled on C style syntax. Compared to Perl or Tcl, Lua's syntax seems rather conventional.
Some design decisions might be a bit unusual, but overall the language feels very consistent and predictable. JS is a mess in comparison.
> because it departs from a more Algol-like syntax
Huh? Lua's syntax is actually very Algol-like since it uses keywords to delimit blocks (e.g. if ... then ... end)
lioeters | 22 hours ago
That's what matters to me, not how similar Lua is to other languages, but that the language is well-designed in its own system of rules and conventions. It makes sense, every part of it contributes to a harmonious whole. JavaScript on the other hand.
When speaking of Algol or C-style syntax, it makes me imagine a "Common C" syntax, like taking the best, or the least common denominator, of all C-like languages. A minimal subset that fits in your head, instead of what modern C is turning out to be, not to mention C++ or Rust.
procaryote | 21 hours ago
lioeters | 21 hours ago
procaryote | 11 hours ago
They did add some optional sections like bounds checking that seem to have flopped, partly for being optional, partly for being half-baked. Having optional sections in general seems like a bad idea.
whou | 8 hours ago
uecker | 7 hours ago
anthk | 11 hours ago
https://en.wikipedia.org/wiki/C99
C++ by comparison is a behemoth. If C++ died and, for instance, the FLTK guys rebased their libraries onto C (and Boost likewise), it would be a big loss at first, but Chromium and the like rewritten in C would slim down a bit, the complexity would plummet, and similar projects would use far less CPU and RAM.
It's not just about the binary size; next to C++ today, even the Common Lisp standard (even with UIOP and some de facto standard libraries from Quicklisp) looks pretty much human-manageable, and CL has always been a thousand-page-thick standard with tons of bloat compared to Scheme or its sibling Emacs Lisp. Go figure.
procaryote | 11 hours ago
anthk | 6 hours ago
Also, the Limbo language is basically pre-Go.
bmacho | 2 hours ago
anthk | an hour ago
https://en.wikipedia.org/wiki/Alef_(programming_language)
https://en.wikipedia.org/wiki/Limbo_(programming_language)
https://en.wikipedia.org/wiki/Newsqueak
https://en.wikipedia.org/wiki/Communicating_sequential_proce...
https://doc.cat-v.org/bell_labs/new_c_compilers/new_c_compil...
It was amalgamated at Google.
Originally Go used the Ken Thompson C compilers for Plan 9. It still uses CSP. The syntax is from Limbo/Inferno, and probably the GC came from Limbo too.
If anything, Golang was created for Google by reusing a big chunk of Plan 9's and Inferno's design, in some cases even directly, as it shows with the concurrency model. Or the cross-compiling suite.
A bit like Mac OS X under Apple. We all know it wasn't born in a vacuum. It borrowed Mach, the NeXTSTEP API and the FreeBSD userland, and they put the Carbon API on top for compatibility.
Before that, the classic MacOS had nothing to do with Unix, C, Objective-C, NeXT or the Mach kernel.
Mac OS X is to NeXT what Go is to Alef/Inferno/Plan 9 C. Just as every macOS user is effectively using something like NeXTSTEP with the Macintosh UI design for the 21st century, Go users are effectively using a similar, futuristic version of the Limbo/Alef programming languages with a bit of Plan 9's concurrency and automatic cross-compilation.
lioeters | 35 minutes ago
lucketone | 22 hours ago
But only after a long time did I try to check what Algol actually looked like. To my surprise, Algol does not look anything like C to me.
I would be quite interested in an expanded version of "C has inherited syntax from Algol".
Edit: apparently the inheritance from Algol is a formula: lexical scoping + value-returning functions (expression-based) - parenthesitis. Only the last item is about the visual part of the syntax.
The Algol alternatives were: COBOL, Fortran, Lisp, APL.
spacechild1 | 21 hours ago
Of course, C also inherited syntax from Algol, but so did most languages.
norir | 22 hours ago
Personally though, I think the distinctive choices are a boon. You are never confused about what language you are writing because Lua code is so obviously Lua. There is value in this. Once you have written enough Lua, your mind easily switches in and out of Lua mode. Javascript, on the other hand, is filled with poor semantic decisions which for me, cancel out any benefits from syntactic familiarity.
More importantly, Lua has a crucial feature that Javascript lacks: tail call optimization. There are programs that I can easily write in Lua, in spite of its syntactic verbosity, that I cannot write in Javascript because of this limitation. Perhaps this particular JS implementation has TCO, but reading the release notes, I doubt it.
I have learned as much from Lua as I have from Forth (SmallTalk doesn't interest me) and my programming skill has increased significantly since I switched to it as my primary language. Lua is the only lightweight language that I am aware of with TCO. In my programs, I have banned the use of loops. This is a liberation that is not possible in JS or even C, where TCO cannot be relied upon.
In particular, Lua is an exceptional language for writing compilers. Compilers are inherently recursive and thus languages lacking TCO are a poor fit (even if people have been valiantly forcing that square peg through a round hole for all this time).
Having said all that, perhaps as a scripting language for Redis, JS is a better fit. For me though Lua is clearly better than JS on many different dimensions and I don't appreciate the needless denigration of Lua, especially from someone as influential as you.
lioeters | 22 hours ago
I'd love to hear more how it is, the state of the library ecosystem, language evolution (wasn't there a new major version recently?), pros/cons, reasons to use it compared to other languages.
About tail-calls, in other languages I've found sometimes a conversion of recursive algorithm to a flat iterative loop with stack/queue to be effective. But it can be a pain, less elegant or intuitive than TCO.
alexdowad | 21 hours ago
It's definitely smaller than many languages, and this is something to consider before selecting Lua for a project. But, on the positive side: With some 'other' languages I might find 5 or 10 libraries all doing more or less the same thing, many of them bloated and over-engineered. But with Lua I would often find just one library available, and it would be small and clean enough that I could easily read through its source code and know exactly how it worked.
Another nice thing about Lua when run on LuaJIT: extremely high CPU performance for a scripting language.
In summary: A better choice than it might appear at first, but with trade-offs which need serious consideration.
tracker1 | 3 hours ago
Also worth noting that some features in JS may rely on application/environment support and may raise errors that you cannot catch in JS code. This is often fun to discover and painful to try to work around.
xonix | 22 hours ago
Does the language give any guarantee that TCO was applied? In other words, can it give you an error when the recursion is not in tail-call form? Because I can imagine writing a recursion and relying on it being TCO-optimized when it's not. I would prefer if a language had some form of explicit TCO modifier for a function. Is there any language that has this?
stellartux | 21 hours ago
ZiiS | 21 hours ago
draven | 21 hours ago
alexisread | 21 hours ago
https://github.com/ablevm/able-forth/blob/current/forth.scr
I do prefer this as it keeps the language more regular (fewer surprises)
garaetjjte | 17 hours ago
shawn_w | 21 hours ago
Scheme is pretty lightweight.
bch | 12 hours ago
[0] https://wiki.tcl-lang.org/page/NRE
shawn_w | 3 hours ago
fnord123 | 8 hours ago
NuclearPM | 4 hours ago
unclad5968 | 21 hours ago
I'm fairly certain antirez is the author of redis
rapind | 20 hours ago
pedroza_alex | 20 hours ago
bakkoting | 20 hours ago
tracker1 | 3 hours ago
teo_zero | 20 hours ago
I'm not familiar with Lua, but I expect tco to be a feature of the compiler, not of the language. Am I wrong?
naasking | 20 hours ago
teo_zero | 18 hours ago
C's "register" variables used to have the same issue, and even "inline" has been downgraded to a mere hint for the compiler (which can ignore it and still be a C compiler).
201984 | 15 hours ago
tracker1 | 3 hours ago
kerkeslager | 18 hours ago
A more useful way to understand the situation is that a language's major implementations are more important than the language itself. If the spec of the language says something, but nobody implements it, you can't write code against the spec. And on the flip side, if the major implementations of a language implement a feature that's not in the spec, you can write code that uses that feature.
A minor historical example of this was Python dictionaries. Maybe a decade ago, the Python spec didn't specify that dictionary keys would be retrieved in insertion order, so in theory, implementations of the Python language could do something like:
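    # Hypothetical, spec-permitted behaviour of some non-CPython implementation:
    d = {"a": 1, "b": 2, "c": 3}
    print(list(d))   # could legally have printed ['c', 'a', 'b'] or any other order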
But the CPython implementation did return all the keys in insertion order, and very few people were using anything other than the CPython implementation, so some codebases started depending on the keys being returned in insertion order without even knowing that they were depending on it. You could say that they weren't writing Python, but that seems a bit pedantic to me. In any case, Python later standardized that as a feature, so now the ambiguity is solved.
It's all very tricky though, because for example, I wrote some code a decade ago that used GCC's compare-and-swap extensions, and at least at that time, it didn't compile on Clang. I think you'd have a stronger argument there that I wasn't writing C--not because what I wrote wasn't standard C, but because the code I wrote didn't compile on the most commonly used C compiler. The better approach to communication in this case, I think, is to simply use phrases that communicate what you're doing: instead of saying "C", say "ANSI C", "GCC C", "Portable C", etc.--phrases that communicate what implementations of the language you're supporting. Saying you're writing "C" isn't wrong, it's just not communicating a very important detail: what implementations of the compiler can compile your code. I'm much more interested in effectively communicating what compilers can compile a piece of code than pedantically gatekeeping what's C and what's not.
kristianp | 16 hours ago
kerkeslager | 15 hours ago
I'm saying it isn't very useful argue about whether a feature is a feature of the language or a feature of the implementation, because the language is pretty useless independent of its implementation(s).
heavyset_go | 12 hours ago
You could not reliably depend on that implementation detail until much later, when optimizations were implemented in CPython that just so happened to preserve dictionary key insertion order. Once that was realized, it was PEP'd and made part of the spec.
beagle3 | 10 hours ago
After the 3.6 change, they were returned in order. And people started relying on that - so at a later stage, this became part of the spec.
mananaysiempre | 18 hours ago
> A Scheme implementation is properly tail-recursive if it supports an unbounded number of active tail calls.
The issue here is that, in every language that has a detailed enough specification, there is some provision saying that a program that makes an unbounded number of nested calls at runtime is not legal. Support for proper tail calls means that tail calls (a well-defined subgrammar of the language) do not ever count as nested, which expands the set of legal programs. That’s a language feature, not (merely) a compiler feature.
[1] https://standards.scheme.org/corrected-r5rs/r5rs-Z-H-6.html#...
teo_zero | 17 hours ago
I still think that the language property (or requirement, or behavior as seen from within the language itself) that we're talking about in this case is "unbounded nested calls", and that the language spec doesn't (shouldn't) assume that such a property will be satisfied in a specific way, e.g. switching the call to a branch, as TCO usually means.
mananaysiempre | 10 hours ago
Otherwise yes. For instance, Scheme implementations that translate the Scheme program into portable C code (not just into bytecode interpreted by C code) cannot assume that the C compiler will translate C-level tail calls into jumps and thus take special measures to make them work correctly, from trampolines to the very confusingly named “Cheney on the M.T.A.”[1], and people will, colloquially, say those implementations do TCO too. Whether that’s correct usage... I don’t think really matters here, other than to demonstrate why the term “TCO” as encountered in the wild is a confusing one.
[1] https://www.plover.com/misc/hbaker-archive/CheneyMTA.html
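Of those "special measures", the trampoline is the easiest to sketch. A minimal version in Python (a language with no TCO), purely to show the shape of the trick, not how any real Scheme-to-C compiler does it:

    def trampoline(step, *args):
        # Run a tail-recursive computation in constant stack space: each step
        # either returns a final value or a (next_function, next_args) thunk.
        result = step(*args)
        while isinstance(result, tuple) and callable(result[0]):
            step, args = result
            result = step(*args)
        return result

    def count_down(n, acc=0):
        if n == 0:
            return acc
        return (count_down, (n - 1, acc + n))   # the "tail call", expressed as data

    print(trampoline(count_down, 1_000_000))     # 500000500000, and no RecursionError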
Y_Y | an hour ago
NuclearPM | 4 hours ago
IgorPartola | 15 hours ago
If I have a program that, based on the input given to it, runs some number of recursive calls of a function, and two compilers for the language, can I compile the program using both of them, no matter what the actual program is, if compiler A has PTC and compiler B does not? As in, is the only difference that you won't get a runtime error if you exceed the max stack size?
Zacharias030 | 11 hours ago
mananaysiempre | 9 hours ago
(The usual caveats about TCO randomly not working are due to constraints imposed by preexisting ABIs or VMs; if you don’t need to care about those, then the whole thing is quite straightforward.)
mananaysiempre | 10 hours ago
kbenson | 19 hours ago
Is it needless? It's useful specifically because he is someone influential, and someone might say "Lua was antirez's choice when making redis, and I trust and respect his engineering, so I'm going to keep Lua as a top contender for use in my project because of that" and him being clear on his choices and reasoning is useful in that respect. In any case where you think he has a responsibility to be careful what he says because of that influence, that can also be used in this case as a reason he should definitely explain his thoughts on it then and now.
kerkeslager | 18 hours ago
> [...] In my programs, I have banned the use of loops. This is a liberation that is not possible in JS or even c, where TCO cannot be relied upon.
This is not a great language feature, IMO. There are two ways to go here:
1. You can go the Python way, and have no TCO, not ever. Guido van Rossum's reasoning on this is outlined here[1] and here[2], but the high level summary is that TCO makes it impossible to provide acceptably-clear tracebacks.
2. You can go the Chicken Scheme way, and do TCO, and ALSO do CPS conversion, which makes EVERY call into a tail call, without the language user having to restructure their code to make sure their recursion happens at the tail.
Either of these approaches has its upsides and downsides, but TCO WITHOUT CPS conversion gives you the worst of both worlds. The only upside is that you can write most of your loops as recursion, but as van Rossum points out, most cases that can be handled with tail recursion, can AND SHOULD be handled with higher-order functions. This is just a much cleaner way to do it in most cases.
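To make that concrete, here is a small Lua sketch; the fold helper is not part of Lua's standard library, it's defined here purely for illustration:

    -- Tail-recursive sum: the accumulator has to be threaded through by hand.
    local function sum_rec(t, i, acc)
      if i > #t then return acc end
      return sum_rec(t, i + 1, acc + t[i])  -- tail call
    end

    -- Higher-order alternative: the iteration pattern lives in one reusable helper.
    local function fold(t, init, f)
      local acc = init
      for _, v in ipairs(t) do
        acc = f(acc, v)
      end
      return acc
    end

    local t = { 1, 2, 3, 4 }
    print(sum_rec(t, 1, 0))                            -- 10
    print(fold(t, 0, function(a, b) return a + b end)) -- 10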
And the downsides to TCO without CPS conversion are:
1. Poor tracebacks.
2. Having to restructure your code awkwardly to make recursive calls into tail calls.
3. Easy to accidentally turn a tail call into a non-tail call, resulting in stack overflows (see the sketch below).
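On that third point, a minimal Lua illustration of how easy the breakage is:

    -- Proper tail call: the recursive call is the very last thing that happens,
    -- so the implementation can reuse the current stack frame.
    local function count_ok(n, acc)
      if n == 0 then return acc end
      return count_ok(n - 1, acc + 1)
    end

    -- Looks nearly identical, but the '1 +' runs after the call returns,
    -- so every frame must be kept alive and deep inputs overflow the stack.
    local function count_broken(n)
      if n == 0 then return 0 end
      return 1 + count_broken(n - 1)
    end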
I'll also add that the main reason recursion is preferable to looping is that it enables all sorts of formal verification. There's some tooling around formal verification for Scheme, but the benefits to eliminating loops are felt most in static, strongly typed languages like Haskell or OCaml. As far as I know Lua has no mature tooling whatsoever that benefits from preferring recursion over looping. It may be that the author of the post I am responding to finds recursion more intuitive than looping, but my experience contains no evidence that recursion is inherently more intuitive than looping: which is more intuitive appears to me to be entirely a function of the programmer's past experience.
In short, treating TCO without CPS conversion as a killer feature seems to me to be a fetishization of functional programming without understanding why functional programming is effective, embracing the madness with none of the method.
EDIT: To point out a weakness to my own argument: there are a bunch of functional programming language implementations that implement TCO without CPS conversion. I'd counter by saying that this is a function of when they were implemented/standardized. Requiring CPS conversion in the Scheme standard would pretty clearly make Scheme an easier to use language, but it would be unreasonable in 2025 to require CPS conversion because so many Scheme implementations don't have it and don't have the resources to implement it.
EDIT 2: I didn't mean for this post to come across as negative on Lua: I love Lua, and in my hobby language interpreter I've been writing, I have spent countless hours implementing ideas I got from Lua. Lua has many strengths--TCO just isn't one of them. When I'm writing Scheme and can't use a higher-order function, I use TCO. When I'm writing Lua and can't use a higher order function, I use loops. And in both languages I'd prefer to use a higher order function.
[1] https://neopythonic.blogspot.com/2009/04/tail-recursion-elim...
[2] https://neopythonic.blogspot.com/2009/04/final-words-on-tail...
kerkeslager | 12 hours ago
I don't know why Lua implemented TCO, but if I had to guess, it's not because it enables you to replace loops with recursion, it's because it... optimizes tail calls. It causes tail calls to use less memory, and this is particularly effective in Lua's implementation because it reuses the stack memory that was just used by the parent call, meaning it uses memory which is already in the processor's cache.
The thing is, a loop is still going to be slightly faster than TCOed recursion, because you don't need to move the arguments to the tail call function into the previous stack frame. In a loop your counters and whatnot are just always using the same memory location, no copying needed.
Where TCO really shines is in all the tail calls that aren't replacements for loops: an optimized tail call is faster than a non-optimized tail call. And in real world applications, a lot of your calls are tail calls!
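Roughly, in Lua terms (a sketch of the shape, not a benchmark):

    -- Loop version: the counter and accumulator live in the same stack slots
    -- for the whole iteration; nothing is copied.
    local function sum_loop(n)
      local acc = 0
      while n > 0 do
        acc = acc + n
        n = n - 1
      end
      return acc
    end

    -- Tail-call version: each 'return sum_tc(...)' reuses the current frame,
    -- but the new argument values still have to be moved into place.
    local function sum_tc(n, acc)
      if n == 0 then return acc end
      return sum_tc(n - 1, acc + n)
    end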
I don't necessarily love the feature, for the reasons that I detailed in the previous post. But it's not a terrible problem, and I think it makes sense as an optimization within the context of Lua's design goals of being lightweight and fast.
ksec | 12 hours ago
This. And not just Lua: having a different kind of syntax for scripting languages or very high-level languages signals that it is something entirely different, not C as in a systems programming language.
The syntax is also easier for people who don't intend to make programming their profession, but simply want something done. It used to be the case in the old days that people would design simple PLs for beginners: the ActionScript / Flash era, and even HyperCard before that. Unfortunately the industry is no longer interested in that, and if anything intends to make everything as complicated as possible.
vanviegen | 11 hours ago
Do you really need to write compilers with limitless nesting? Or is nesting, say, 100,000 levels deep enough?
Also, you'll usually want to allocate some data structure to create an AST for each level. So that means you'll have some finite limit anyway. And that limit is a lot easier to hit in the real world, as it applies not just to nesting depth, but to the entire size of your compilation unit.
znpy | 10 hours ago
I scrolled most of this subthread and the GP doesn't seem to be replying to any of the replies they got.
justin66 | 2 hours ago
Rather, you no longer see what they're doing clearly.
TimTheTinker | an hour ago
Some highly recursive programming styles are really just using the call stack as a data structure... which is valid but can be restrictive.
jimbokun | an hour ago
I suppose if you don’t understand recursion.
garganzol | 22 hours ago
So, even if an implementation like MicroQuickJS had existed in 2010, it's unlikely that many people would have chosen JS over Lua, given all the shortcomings that JavaScript had at the time.
vegabook | 18 hours ago
Come to think of it I don't think I can name a single mainstream language other than Lua that wasn't invented in the G7.
CapsAdmin | 18 hours ago
I see this argument a lot with Lua. People simply don't like its syntax because we live in a world where C-style syntax is more common, and the departure from that seems unnecessary. So going "well actually, in 1992 when Lua was made, C-style syntax was less familiar" won't help, because in the current year, C syntax is more familiar.
The first language I learned was Lua, and because of that it seems to have a special place in my heart or something. The reason for this is because in around 2006, the sandbox game "Garry's Mod" was extended with scripting support and chose Lua for seemingly the same reasons as Redis.
The game's author famously didn't like Lua, its unfamiliarity, its syntax, etc. He even modified it to add C style comments and operators. His new sandbox game "s&box" is based on C#, which is the language closest to his heart I think.
The point I'm trying to make is just that Lua is familiar to me and not to you for seemingly no objective reason. Had Garry chosen a different language, I would likely have a different favorite language, and Lua would feel unfamiliar and strange to me.
CapsAdmin | 17 hours ago
I haven't read antirez'/redis' opinions about Lua, so I'm just going off of his post.
In contrast, I do know more about Garry's opinion of Lua, as I've read his thoughts on it over many years. It ultimately boils down to what antirez said: he just doesn't like it, and the unfamiliarity seems to serve no intentional purpose.
But Lua is very much an intentionally designed language, driven in cathedral-style development by a bunch of professors who seem to obsess about language design. Some people like it, some people don't, but over 15 years of talking about Lua to other developers, "I don't like the syntax" is ultimately the fundamental reason I hear from developers.
So my main point is that it just feels arbitrary. I'm confident the main reason I like Lua is that Garry's Mod chose to implement it. Had it been "MicroQuickJS", Lua would likely feel unfamiliar to me as well.
n42 | 5 hours ago
I’m not sure it’s still the case, but he modified Lua to be zero-indexed and made some other tweaks because they annoyed him so much, so it’s possible that if you learned GMod Lua you actually learned Garry’s Lua.
Of course his heart has been with C# for many years now.
silisili | 12 hours ago
Initially I agreed, just because so many other languages do it that way.
But if you ignore that and start with a clean slate, IMO 1-based makes more sense. I feel like 0-based mainly gained a foothold because of C's bastardization of arrays vs pointers and the associated tricks. But most other languages don't even support that.
You can only see :len(x)-1 so many times before you realize how ridiculous it is.
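For what it's worth, the usual Lua idioms sidestep the off-by-one arithmetic entirely:

    local t = { "a", "b", "c" }

    print(t[1])        -- first element
    print(t[#t])       -- last element: no len(x)-1
    for i = 1, #t do   -- every valid index, inclusive on both ends
      print(i, t[i])
    end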
junon | 9 hours ago
Having written a game in it (via LÖVE), I found the 1-indexing to be a continued source of problems. On the other hand, I rarely need to use len-1, especially since most languages expose more readable methods such as `last()`.
atdt | 16 hours ago
https://lists.wikimedia.org/hyperkitty/list/wikitech-l@lists...
Hendrikto | 8 hours ago
Thank god it wasn’t then.
kiririn | 5 hours ago
https://www.compuphase.com/pawn/pawn.htm
nxobject | 4 hours ago
I did once manage to compile Lua 5.4 on a Macintosh SE with 4MB of RAM, and THINK C 5.0 (circa 1991), which was a sick trick. Unfortunately, it took about 30 seconds for the VM to fully initialize, and it couldn't play well with the classic MacOS MMU-less handle-based memory management scheme.
c-smile | 3 hours ago
For example, Premake[1] uses Lua as it is - without a custom syntax parser, but with a set of domain-specific functions.
This is pure Lua:
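(A typical premake5.lua, in the style of Premake's own documentation; the project and file names below are made up for illustration.)

    -- premake5.lua: 'workspace', 'project', 'files', 'filter' etc. are just
    -- ordinary Lua functions that Premake provides.
    workspace "HelloWorld"
      configurations { "Debug", "Release" }

    project "HelloWorld"
      kind "ConsoleApp"
      language "C"
      files { "src/**.h", "src/**.c" }

      filter "configurations:Debug"
        defines { "DEBUG" }
        symbols "On"

      filter "configurations:Release"
        defines { "NDEBUG" }
        optimize "On"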
In that sense Premake looks significantly better than CMake with its esoteric constructs. Having a regular and robust PL to implement those 10% of configuration cases that cannot be defined with "standard" declarations is the way to go, IMO.
[1] https://premake.github.io/docs/What-Is-Premake
TimTheTinker | 3 hours ago
[0] https://redbean.dev/ - the single-file distributable web server built with Cosmopolitan as an αcτµαlly pδrταblε εxεcµταblε
lacoolj | 2 hours ago
Good to see you alive and kicking. Happy holidays
MangoToupe | an hour ago
This would have been a catastrophic loss. Lua is better than JavaScript in every single way except for ordinal indexing.
pmarreck | 27 minutes ago
it also helps that it has ridiculously high performance for a scripting language
foresto | a day ago
https://github.com/yt-dlp/yt-dlp/wiki/EJS
(Note that Bellard's QuickJS is already a supported option.)
qbane | 23 hours ago
> It only supports a subset of Javascript close to ES5 [...]
I have not read the code of the solver, but solving YouTube's JS challenge is so demanding that the team behind yt-dlp ditched their JS emulator written in Python.
simonw | 22 hours ago
It's a variant of my QuickJS playground here: https://tools.simonwillison.net/quickjs
The QuickJS page loads 2.28 MB (675 KB transferred). The MicroQuickJS one loads 303 KB (120 KB transferred).
azakai | 19 hours ago
emcc -O3
(and maybe even adding --closure 1)
edit: actually the QuickJS playground already looks optimized - just the MicroQuickJS one could be improved.
simonw | 18 hours ago
https://github.com/simonw/research/pull/5
That's now live on https://tools.simonwillison.net/microquickjs
theandrewbailey | 21 hours ago
Just in time for RAM to become super expensive. How easy would it be to shove this into Chromium and Electron?
jraph | 21 hours ago
The good news is that it would probably not matter much for chromium's memory footprint anyway...
bmc7505 | 19 hours ago
[1]: https://github.com/SamGinzburg/VectorVisor
[2]: https://github.com/beehive-lab/ProtonVM
noduerme | 12 hours ago
Guess I'm a bit fuzzy on this: I wouldn't use numeric keys to populate a "sparse array", but why would it be a problem to just treat it as an iterable whose missing values are undefined? Something to do with how memory is being reserved in C...? If someone jumps from defining arr[0] to arr[3], why not just reserve 1 and 2 and note that there's a memory penalty (i.e. that you don't get the benefit of sparseness)?
rounce | 11 hours ago
0: https://www.espruino.com/ESP32
aapoalas | 6 hours ago
In my engine Arrays are always dense from a memory perspective and Objects don't special case indexes, so we're on the same page in that sense. I haven't gotten around to creating the "no holes" version of Array semantics yet, and now that we have an existing version of it I believe I'll fully copy out Bellard's semantics: I personally mildly disagree with throwing errors on over-indexing since it doesn't align with TypedArrays, but I'd rather copy an existing semantic than make a nearly identical but slightly different semantic of my own.
franze | 4 hours ago
Demo Page https://mquickjs-claude-code.franzai.com/
Show HN https://news.ycombinator.com/item?id=46376296
polyrand | 3 hours ago
https://x.com/itszn13/status/2003707921679679563
https://x.com/itszn13/status/2003808443761938602
conoro | 3 hours ago
At first glance, Espruino has broader coverage, including quite a bit of ES6 and even parts of ES2020 (https://www.espruino.com/Features). And it obviously has a ton of libraries and support for a wide range of hardware.
For a laugh, and to further annoy the people annoyed by @simonw's experiments, I got Cursor to butcher it and run as a REPL on an ESP32-S3 over USB-Serial using ESP-IDF.
Blink is now running so my work here is done :-)
stackedinserter | an hour ago