Yawn, the 90/10 excuse again. And 'shipping it everywhere' is a blatant lie: there is still no Linux release. It looks like you're talking about Claude Code as if it were Claude. 'Claude' would be the desktop app...
Or: Why can't I log in to Claude on my laptop? It opens a browser with an indefinite spinner, and when I click "Login" on the website, it forwards me to register instead. Not really selling it as the future of coding if their fundamentals are this screwed up!
I use Claude Code in Zed via ACP and have issues all the time. It pushes me towards using the CLI, but I don’t want to do things that way because it’s a vibe-coding workflow. I want to be in the driver’s seat, see what the agent has done, and be able to apply or reject hunks.
I’m in the same situation. Zed’s Claude Code is better in terms of control, but it’s wildly buggy and unreliable. Definitely not a drop-in replacement.
I use Opus 4.6 (for complex refactoring), Gemini 3.1 Pro (for html/css/web stuff) and GPT Codex 5.3 (workhorse, replaced Sonnet for me because in Copilot it has larger context) mostly.
For small tools. But also for large projects.
Current projects are:
1) .NET C#, Angular, Oracle database. Around 300k LoC.
2) Full stack TypeScript with Hono on backend, React on frontend glued by trpc, kysely and PostgreSQL. Around 120k LoC.
Works well in both. I'm using plan mode and agent mode.
What helps a ton are e2e Playwright tests, which the agent executes after each code change.
My only complaint is that it tends to stutter after many sessions/hours. A restart fixes it.
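The "agent runs the e2e suite after each change" loop can be wired up as a simple post-edit hook. A minimal sketch in Python; the Playwright command, the 10-minute timeout, and the failure convention are assumptions about one possible setup, not any tool's documented hook API:

```python
# Hedged sketch: run the e2e suite after each agent edit and report
# pass/fail so a wrapper script or hook can feed failures back to the agent.
# The default command and the timeout are assumptions about the project setup.
import subprocess

def run_e2e(cmd=("npx", "playwright", "test", "--reporter=line")):
    """Run the test suite; return (passed, combined output)."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=600)
    except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
        return False, str(exc)  # missing tool or a hang counts as a failure
    return proc.returncode == 0, proc.stdout + proc.stderr
```

Called from a post-edit hook, exiting non-zero when `passed` is false is enough to make the agent treat a broken e2e run as feedback rather than silently moving on.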
As long as we're on the subject, I'll take the opportunity here to vent about how embarrassingly buggy and unusable VS Code is in general. It throws me for a loop that pros voluntarily use it on the rare occasions I'm forced to use it instead of JetBrains.
I’m in the same boat. I use it to save me from going to a browser to look up simple syntax references, and that’s about it. Its agent mode is terrifying, and asking it anything remotely complex has been a fool’s errand.
If the AI is writing 100% of the code, it is literally free (as in time) for them to move their apps over to native. They should have spent the tokens they used on that C compiler on the native apps instead; it would have made for a much more convincing marketing story as well.
Electron isn't that bad. Apps like VSCode and Obsidian are among the highest quality and most performant apps I have installed. The Claude app's problem is not Electron; it's that it just sucks, badly. Stop blaming every problem on a lack of nativeness.
Maybe Electron isn’t that bad. Maybe there are some great Electron apps. But there’s a big chunk that went unsaid: Most Electron apps suck. Does correlation here imply causation? Maybe not, but holy fuck isn’t it frustrating to be a user of Electron apps.
I think you’re missing the point a little, friendo. It’s not that Electron is bad; it’s that Electron itself is an abstraction for cross-platform support. If code can be generated for free, then the question is: why do we need it to begin with? Why can’t Claude write the app in Win32, SwiftUI, and GTK?
The answer, of course, is that it can’t do that and maintain compatibility between all three well enough; it’s high effort, and each has its own idiosyncrasies.
I don't know whether Electron fits in this case, but I can say Claude isn't equally proficient at all toolchains. I recently had Claude Code (Opus 4.6, agent teams) build an image-manipulation webapp in Python, Go, Rust, and Zig.
In Python it was very nearly a one-shot; there was an issue with one watermark not showing up on one API endpoint that I had to give it a couple of kicks at the can to fix. Go it was able to get, but it needed 5+ attempts at rework. Rust took ~10+, and Zig maybe 15+.
They were all given the same prompt, though they all likely would have done much better if I had it build a test suite, or at least a manual testing recipe for it to follow.
To build with GTK you are hit with the GPL, which sucks. To build with SwiftUI you have to pay a developer fee to Apple; to build with Win32 you have to pay a developer fee to Microsoft. Both suck. And don't forget mobile: for Android you pay Google.
That is why everyone jumped to building in Electron: it is based on web standards that are free, running on Chromium, which is sort of tied to Google, but you are not tied to Google and don't have to pay them a fee. You can also easily provide roughly the same experience on mobile, skipping the Android shenanigans.
>"to build win32 you have to pay developer fee to Microsoft"
Not really; you can self-sign, but your native application will be met with a system prompt trying to scare users away. This is maddening, of course, and I wish MS, Apple, and whatever others would die for this thing alone. You vultures leveraged huge support from developers writing for your platforms, but no, of course that's not enough; now let's rip money from the hands that fed you.
VSCode takes 1 GB of memory to open the same files that Sublime can do in just 200 MB. It is not remotely performant software, it sucks at performance.
I too thought VSCode's being web based would make it much slower than Sublime. So I was surprised when I found on my 2019 and 2024 ~$2,500-3,000 MacBook Pros that Sublime would continually freeze up or crash while viewing the same 250 MB - 1 GB plain text log files that VSCode would open just fine and run reliably on.
At most, VS Code might say that it has disabled lexing, syntax coloring, etc. due to the file size. But I don't care about that for log files...
It still might be true that Visual Studio Code uses more memory for the same file than Sublime Text would. But for me, it's more important that the editor runs at all.
Judging by the state of most software I use, customers genuinely could not care less about bugs. Software quality is basically never a product differentiator at this point.
I'm not saying zero actual people care, I'm saying that not enough people care to actually differentiate. Is Windows getting better now that you switched? Then it doesn't matter you left.
I mean, Microsoft has recently made a statement that they're aware people are mad and they're working on it, so, no, I don't think they care that I personally hate the software but they do care that there are a number of people like me. Whether that moves the needle, I don't know, but what I do know is right now I'm using non-slop non-electron software and it's so much more pleasant. I think it's worth protecting.
I think that's too broad of a blanket statement. Plenty of people including myself choose Apple products in part for their software quality over Windows and Linux. However there are other factors like network effects or massive marketing campaigns, sales team efforts etc that are often far greater.
We just don't know how bad it will get with AI coding, though. Do you think the average consumer won't care about software quality when their bank software "loses" a big transaction? Or when their TV literally stops turning on? People will tolerate shitty software if they have to, when it's minor annoyances, but it makes them unhappy, and they won't tolerate big problems for long.
You are 100% correct, people as a whole don't really care. I can prove it, Excel exists. Not only does it exist but a huge chunk of the world runs on it.
I've seen every kind of wrong actively in production, and 99/100 times no one cared enough to fix it. Even when it was losing money.
The article already concludes that coding agents have uses in areas where they already do well. What specifically leads you to think they should instead not be used?
The claim that "code is free now" is somehow struck down by Anthropic choosing Electron is silly and deserves ridicule.
I guess I don't understand how people don't see something like $20k plus an engineer-month producing CCC as the actual flare shot into the night that it is. Enough to make this penny-ante shit about "hurr hurr, they could've written a native app" asinine.
They took a solid crack at GCC, one of the most complex things *made by man*, armed with a bunch of compute, some engineers guiding a swarm, and some engineers writing tests. Does it fail at key parts? Yes. Is it a MIRACLE and a WARNING that it exists at all? YES. Do you know what you would have gotten with an engineer-month and $20k in compute trying to write GCC from scratch in 2 weeks in 2024? A whole heck of a lot less than they got.
This notion that everything is the same just didn't survive contact with 2025, and we're in 2026 now. All of software is already changing, and HN is full of wanking about all the wrong stuff.
The title is indeed silly and a poor choice but it's not the argument actually made in the article.
The title doesn't even seem to be intended as a shot into the night, despite that being how most of HN took it. I.e., the author isn't saying "don't use agents because Claude Code is written in Electron"; they are genuinely asking why one would still have their agents write an Electron app over native ones when using coding agents.
The central argument of the piece is still fundamentally silly. What, truly, do we know about the organization that produced the Claude desktop app by virtue of the fact that they built it in Electron?
Really truly what do we know about them based on that decision? I submit the answer is basically nothing.
Instead, we’re sort of coasting on priors and vibes about “native” tool kits being better. And that’s just catnip for people on the Internet who want to talk shit about code and don’t know what the fuck they’re talking about.
If native is a stand-in for better in your mind, and you conclude that they made a choice that was worse because it's not native, then you can conclude that they are bad somehow. But the connective tissue there is not whatever their design choice was (and of course we have no vision into the actual choices). It's the un-investigated prior: that native is good and cross-platform is bad. That's really what people are arguing in this thread.
And the only reason we don't see how completely fucking ridiculous that is, is that we are also interested in talking about how AI is bad.
So everyone gets two bites of the cookie and nobody has to defend an actual argument. It's so silly that I don't think we can claim the piece is actually much more moderate and subtle than everyone is reading it to be. Because that's kind of a dastardly position too: it allows the main argument to be advanced, and whenever it is questioned, one can retreat to claims of nuance.
I read the article more as an indictment of the promises being made vs reality. If we’re being told these agents are so good, why aren’t these companies eating their own dog food to the same degree they’re telling us to eat it?
Likewise, OpenAI's browser is still only available on macOS, four months after launch, despite being built on a mature browser engine that already runs on everything under the sun. Seems like low-hanging fruit, and yet...
I'm guessing you're saying no one wants it? Because otherwise, launching only on an OS with ~3% market share (on top of a cross-platform engine) will prevent the vast majority of adoption, yes.
A few years ago, maybe. Tauri makes better sense for this use case today: like Electron but with system webviews, so at least it doesn't bloat your system with extra copies of Chrome. And it strongly encourages Rust for the application core over JS/Node.
Electron has never made sense. It is only capable of making poorly performing software which eats the user's RAM for no good reason. Any developer who takes pride in his work would never use a tool as bad as Electron.
What bugs are you seeing? I use Claude Code a lot on an Ubuntu 22.04 system and I've had very few issues with it. I'm not sure really how to quantify the amount of use; maybe "ccusage" is a good metric? That says over the last month I've used $964, and I've got 6-8 months of use on it, though only the last ~3-5 at that level. And I've got fairly wide use as well: MCP, skills, agents, agent teams...
there's currently ~6k open issues and ~20k closed ones on their issue tracker (https://github.com/anthropics/claude-code/issues). certainly a mix of duplicates / feature requests, but 'buggy mess' seems appropriate
maybe we don't have AGI to prevent all bugs. but surely some of these could have been caught with some good old fashioned elbow grease and code review.
I can see it in my team. We've all been using Claude a lot for the last 6 months. It's hard to measure the impact, but I can tell our systems are as buggy as ever. AI isn't a silver bullet.
I can't tell if this is sarcasm, but if not: you can't rely on the thing that produced invalid output to validate its own output. That is fundamentally insufficient, despite it potentially catching some errors.
I mean there is some wisdom to that, most teams separate dev and qa and writers aren't their own editors precisely because it's hard for the author of a thing to spot their own mistakes.
When you merge them into one it's usually a cost saving measure accepting that quality control will take a hit.
That’s something that more than half of humans would disagree with (exact numbers vary but most polls show that more than 75% of people globally believe that humans have a soul or spirit).
But ignoring that, if humans are machines, they are sufficiently advanced machines that we have only a very modest understanding of and no way to replicate. Our understanding of ourselves is so limited that we might as well be magic.
This but unironically. Of course review your own work. But QA is best done by people other than those who develop the product. Having another set of eyes to check your work is as old as science.
What if "the thing" is a human and another human validates the output? Is that its own output (i.e., that of a human) or not? And doesn't this apply to LLMs, in that you don't review the code within the same session you used to generate it?
I think a human and an LLM are fundamentally different things, so no. Otherwise you could make the argument that only something extraterrestrial could validate our work, since LLMs, like all machines, are also our outputs.
> you cant rely on the thing that produced invalid output to validate it's own output
I've been coding an app with the help of AI. At first it created some pretty awful unit tests and then over time, as more tests were created, it got better and better at creating tests. What I noticed was that AI would use the context from the tests to create valid output. When I'd find bugs it created, and have AI fix the bugs (with more tests), it would then do it the right way. So it actually was validating the invalid output because it could rely on other behaviors in the tests to find its own issues.
The project is now at the point that I've pretty much stopped writing the tests myself. I'm sure it isn't perfect, but it feels pretty comprehensive at 693 tests. Feel free to look at the code yourself [0].
I'm not saying you can't do it, I'm just saying it's not sufficient on its own. I run my code through an LLM and it occasionally catches stuff I missed.
Thanks for the clarification. That's the difference though, I don't need it to catch stuff I missed, I catch stuff it misses and I tell it to add it, which it dutifully does.
I can't tell if that is sarcasm. Of course you can use the same model to write tests. That's a different problem altogether, with a different series of prompts altogether!
When it comes to code review, though, it can be a good idea to pit multiple models against each other. I've relied on that trick from day 1.
I have had other LLMs QA the work of Claude Code and they find bugs. It's a good cycle, but the bugs almost never get fixed in one-shot without causing chaos in the codebase or vast swaths of rewritten code for no reason.
Only if they are supremely lazy. It’s possible to use these tools in a diligent way, where you maintain understanding and control of the system but outsource the implementation of tasks to the LLM.
An engineer should be code reviewing every line written by an LLM, in the same way that every line is normally code reviewed when written by a human.
Maybe this changes the original argument from software being “free”, but we could just change that to mean “super cheap”.
The venn diagram for "bad things an LLM could decide are a good idea" and "things you'll think to check that it tests for" has very little overlap. The first circle includes, roughly, every possible action. And the second is tiny.
You really do have to verify and validate the tests. Worse you have to constantly battle the thing trying to cheat at the tests or bypass them completely.
But once you figure that out, it's pretty effective.
There’s no way you or the AI wrote tests to cover everything you care about.
If you did, the tests would be at least as complicated as the code (almost certainly much more so), so looking at the tests isn’t meaningfully easier than looking at the code.
If you didn’t, any functionality you didn’t test is subject to change every time the AI does any work at all.
As long as AIs are either non-deterministic or chaotic (suffer from prompt instability), the code is the spec. Non-determinism is probably solvable, but prompt instability is a much harder problem.
> As long as AIs are either non-deterministic or chaotic
You just hit the nail on the head.
LLMs are stochastic. We want deterministic code. The way you do that is by bolting on deterministic linting, unit tests, AST pattern checks, etc. You can transform it into a deterministic system by validating and constraining output.
One day we will look back on the days before we validated output the same way we now look at ancient code that didn't validate input.
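A minimal sketch of that kind of deterministic gate, assuming Python output; the banned-call list and function names are illustrative, not any real tool's API. The stochastic generator can propose anything, but only candidates that parse and pass the AST pattern check are accepted:

```python
# Hedged sketch: a deterministic acceptance gate for LLM-generated Python.
# The banned-call list is an illustrative example of an AST pattern check.
import ast

BANNED_CALLS = {"eval", "exec"}  # patterns we never want in generated code

def passes_gate(source: str) -> bool:
    """Accept a candidate snippet only if it parses and avoids banned calls."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False  # not even valid Python: reject deterministically
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                return False
    return True
```

The generator stays stochastic, but the accept/reject decision is now a pure function of the candidate text, which is the constraint being argued for above.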
None of those things make it deterministic though. And they certainly don’t make it non-chaotic.
You can have all the validation, linters, and unit tests you want and a one word change to your prompt will produce a program that is 90%+ different.
You could theoretically test every single possible thing that an outside observer could observe, and the code being different wouldn’t matter, but then your tests would be 100x longer than the code.
> None of those things make it deterministic though.
In the information-theoretical sense you're correct, of course. I mean, it's a variation on the halting problem, so there will never be any guarantee of bug-free code. Heck, the same is true of human code and its foibles. However, in the "does it work or not" sense, I'm not sure why we care?
If the gate only passes the digits 0-9 sent within 'x' seconds, and the code's job is to send a digit between 0 and 9, how is it non-deterministic?
Let's say the linter says it's good, it passes the regression tests, you've validated that it only outputs what it's supposed to and does it in a reasonable amount of time, and maybe you're even super paranoid so you ran it through some mutation tests just to be sure that invalid inputs didn't lead to unacceptable outputs. How can it really be non-deterministic after all that? I get that it could still be doing some 'other stuff' in the background, or doing it inefficiently, but if we care about that we just add more tests for that.
I suppose there's the impossible-problem edge case. I.e., you might never get an answer that works and satisfies all constraints. It's happened to me with vibe-coding several times, and once it resulted in the agent tearing up my codebase, so I learned to include an escape hatch for when it's stuck between constraints ("email user123@corpo.com if stuck for 'x' turns, then halt"). Now it just emails me and waits for further instruction.
To me, perfect is the enemy of good and good is mostly good enough.
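The digit-within-x-seconds gate described above can be made concrete. A hedged sketch; the time budget, the rejection-as-`None` convention, and the function names are my assumptions, not the commenter's:

```python
# Hedged sketch of the "gate": however the producer was generated, its
# output is accepted only if it is a digit 0-9 produced within a time budget.
import time

def gated(producer, budget_s: float = 1.0):
    """Run `producer`; reject anything that isn't a digit 0-9 in time."""
    start = time.monotonic()
    result = producer()
    elapsed = time.monotonic() - start
    if elapsed > budget_s:
        return None  # too slow: rejected regardless of value
    if isinstance(result, int) and 0 <= result <= 9:
        return result
    return None  # out of range or wrong type: rejected
```

Whatever the generated code does internally, the observable contract enforced at the gate is fixed, which is the sense in which the system as a whole behaves deterministically.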
> If the gate only passes the digits 0-9 sent within 'x' seconds, and the code's job is to send a digit between 0 and 9, how is it non-deterministic?
If that’s all the code does, sure you could specify every observable behavior.
In reality, though, there are tens of thousands of "design decisions" that a programmer or LLM is going to make when translating a high-level spec into code. Many of those decisions aren't even things you'd care about, but users will notice the cumulative impact of them constantly flipping.
In a real world application where you have thousands of requirements and features interacting with each other, you can’t realistically specify enough of the observable behavior to keep it from turning into a sloshy mess of shifting jank without reviewing and understanding the actual spec, which is the code.
I think about this a lot, and do everything I can to avoid having Claude write production code while keeping the expected tempo up. To date, this has mostly ended up having me use it to write project plans, generate walkthroughs, and write unit and integration tests. The terrifying scenario for me is getting paged and then not being able to actually reason about what is happening.
I find that writing good tests is my ticket to understanding the problem in depth, be careful about outsourcing that part. Plus from what I have seen LLM generated tests are often low quality.
I find this such a weird stance to take. Every system I work on and bug I fix has broad sets of code that I didn't write in it. Often I didn't write any of the code I am debugging. You have to be able to build a mental map as you go even without ai.
Yeah. Everyone sort of assumes that not having personally written the code means they can’t debug it.
When is the last time you had an on call blow up that was actually your code?
Not that I’m some savant of code writing — but for me, pretty much never. It’s always something I’ve never touched that blows up on my Saturday night when I’m on call. Turns out it doesn’t really change much if it’s Sam who wrote it … or Claude.
Yeah but now you get an LLM to help you understand the code base 100x faster.
Remember, they're not just good for writing code. They're amazing at reading code and explaining to you how the architecture works, the main design decisions, how the files fit together, etc.
Sam might be 7 beers deep, or maybe he's available. In my org, oncall is just who gets the 2am phone call. They can try to contact anyone else if needed.
Claude is there as long as you're paying, and I hope he doesn't hallucinate an answer.
Because it's remarkably easier to write bugs in a codebase you know nothing about, and we usually try to prevent bugs entirely, not debug them after they are found. The whole premise of what you're saying depends on knowing bugs exist before they hit prod. I inherit people's legacy apps; that almost never happens.
In sufficiently complicated systems, the 10xer who knows nothing about the edge cases of state could do a lot more damage than an okay developer who knows all the gotchas. That's why someone departing a project is such a huge blow.
When you work on a pre-existing codebase, you don't understand the code yet, but presumably somebody understood parts of it while building it. When you use AI to generate code, you guarantee that no one has ever understood the code being summoned. Don't ignore this difference.
Usually all code has an owner though. If I encounter a bug the first thing I often do is look at git blame and see who wrote the code then ask them for help.
I agree, but you don't have to outsource your thinking to AI in order to benefit from AI.
Use AI as a sanity check on your thinking. Use it to search for bugs. Use it to fill in the holes in your knowledge. Use it to automate grunt work, free your mind and increase your focus.
There are so many ways that AI can be beneficial while staying in full control.
I went through an experimental period of using Claude for everything. It's fun but ultimately the code it generates is garbage. I'm back to hand writing 90% of code (not including autocomplete).
You can still find effective ways to use this technology while keeping in mind its limitations.
The better the code is, the less detailed a mental map is required. It's a bad sign if you need too much deep knowledge of multiple subsystems and their implementation details to fix one bug without breaking everything. Conversely, if drive-by contributors can quickly figure out a bug they're facing and write a fix by only examining the place it happens with minimal global context, you've succeeded at keeping your code loosely-coupled with clear naming and minimal surprises.
100% agree. I’ve seen it with my own sessions with code agents. You gain speed in the beginning but lose all context on the implementation which forces you to use agents more.
It’s easy to see the immediate speed boost, it’s much harder to see how much worse maintaining this code will be over time.
What happens when everyone in a meeting about implementing a feature has to say “I don’t know we need to consult CC”. That has a negative impact on planning and coordination.
Dude, I blame all bugs on ai at this point. I suspect one could roughly identify AI’s entry into the game based on some metric of large system outages. Assume someone has already done this but…probably doesn’t matter.
I love the fact that we just got a model really capable of doing sustained coding (let me check my notes here...) 3 months ago, with a significant bump 15 days ago.
And now the comments are "If it is so great why isn't everything already written from scratch with it?"
People are getting caught up in the "fast (but slow) diffusion" that Dario has spoken to. Adoption of these tools has been fast but not instant, and people will poke holes via "well, it hasn't done X yet".
For my own work I've focused on using the agents to help clean up our CI/CD and make it more robust, specifically because the rest of the company is using agents more broadly. Seems like a way to leverage the technology in a non-slop-oriented way.
But nobody says code is free(?). Certainly not Claude; that experimental compiler cost $20K to build. The openclaw author admitted in a Lex Fridman interview that he spends tens of thousands of dollars on tokens each month.
If the author tried native macOS development with an agent for an hour, they wouldn't know where to begin explaining how different agentic web development is from native. It was better a year ago; you could actually get it to build a native app.
Now all the models over-think everything; they do things they like and ignore hard constraints. They picked all that up in training: all these behaviours, hiding mistakes, shameful silence, going “woke”, and doing what they think should be done despite your wishes.
All this is ameliorated in web development, but for native it has made things a lot worse.
And visual testing: compare the easy, automated in-browser ride with re-testing it yourself for the 50th time.
Agreed! I built a macOS Postgres client with just Claude Code[1]. It could use some UI improvements, but it runs much better than other apps I’ve tried (specifically what it’s replacing for me: RazorSQL), and the binary is smaller than 20 MB.
Eh, didn't even Microsoft give up and just ship a React-based start menu at one point? The range of "native" on Windows 11 is quite wide; it starts with an ancient Windows 3.1 ODBC dialog box.
I don't know why anyone uses Tauri - disk space is cheap but having to handle QA and supporting quirks for every possible browser engine the users' system could ship with certainly is not.
I'm pretty sure Tauri uses almost as much RAM, you just don't see it because it gets assigned to some kind of system process associated with the webview. Most of the RAM used by a browser is per-tab.
My native macOS app was using well over 1 GB the other day, while my Electron notes app was at 1/5 of that. There's an Electron tax for sure, but people are wildly mixing up application-architecture issues and bugs with the framework tax.
It's a RAM issue all right - browsers are set up in a multiprocess manner to allow sharing resources between tabs while sandboxing every single one.
So the footprint of the whole browser might be heavy, but each individual tab (origin) adds only a little extra.
Unfortunately both Tauri and Electron suck in this regard - they replicate the entire browser infrastructure per app and per instance, with each running just a single 'tab'.
And I share your concern for both disk space and RAM - but the solution here is to move away from browser tech, not picking a slightly differently packaged browser.
Yeah, like you don't need to write three different clients. You can write a native macOS client and ship your Electron client for the irrelevant platforms.
Also I refuse to download and run Node.js programs due to the security risk. Unfortunately that keeps me away from opencode as well, but thankfully Codex and Vibe are not Node.js, and neither is Zed or Jetbrains products.
Because Anthropic has never claimed that code is free?
It's pretty easy to argue your point if you pick a strawman as your opponent.
They have said that you can be significantly more productive (which seems to be the case for many) and that most of their company primarily uses LLM to write code and no longer write it by hand. They also seems to be doing well w.r.t. competition.
There are legitimate complaints to be made against LLMs, pick one of them - but don't make up things to argue against.
I'm not sure coding has ever been the hard part. Hard part (to me) has always been to be smart enough to know what, exactly, I (or somebody else) want. Or has someone heard of a case when someone says something like "These requirements are perfectly clear and unambiguous and do not have any undefined edge/corner cases. But implementing that is still really hard, much harder than what producing this unicorn requirements spec was"?
But they already know what they want; they have it. Rewriting it to be more efficient and less buggy should be the lowly coding that is supposedly solved.
Cause it's (allegedly) cheap and you can do much better? Avoiding rewriting things should become a thing of the past if these tools work as advertised.
For some people the relevant properties of "thing" include not needing overpowered hardware to run it comfortably. So "thing" does not just "exist", at least not in the form of electron.
Actual reason: there's far more training data available for electron apps than native apps.
And despite what Anthropic and OpenAI want you to think, these LLMs are not AGI. They cannot invent something new. They are only as good as the training data.
Maybe code is free, but code isn't all that goes into building software. Minimally, you have design, code, integrate, test, document, launch.
Claude is going to help mostly with code, much less with design. It might help to accelerate integration, if the application is simple enough and the environment is good enough. The fact is, going cross-platform native trebles effort in areas that Claude does not yet have a useful impact.
Here is what worries me the most at the moment: we're in a period of hype. Fire all the developers, we have agents, everybody can code now, the barrier isn't just low, it's gone. Great. Roll forward a year, and we have trillions of lines of code no human wrote. At some point, like with a big PR, the agent's driver will just say yes to every change. Nobody can understand the code easily, because nobody wrote it. It works, kinda, but how? Who knows? Roll forward another few years, and the people who were coding just because it's a "job" forget whatever skill they had. I've already heard the phrase "I didn't code in like 10 months, bruh" a few times...
Then what?
Not saying I'm not using AI - because I am. I'm using it in the IDE so I can stay close to every update and understand why it's there, and disagree with it if it shouldn't be there. I'm scared to be distanced from the code I'm supposed to be familiar with. So I use the AI to give me superpowers but not to completely do my job for me.
I think the idea is that by the time those trillions of lines of code start to cause maintenance problems, the models will be good enough to deal with those problems.
That won't solve the problem that humans will lose the skill to write code. It will become a hobbyist pastime. Like people listening to 8-tracks now...
This post and this entire thread are HN-sniping to the millionth degree. We have all the classics here:
- AI bad
- JavaScript bad
- Developers not understanding why Electron has utility because they don't understand the browser as a fourth OS platform
- Electron eats my ram oh no posted from my 2gb thinkpad
My guy, if you can't see the problem with a $300B SF company that of course claims to #HireTheBest having a dumpy UX due to its technical choices, I don't really know what to tell you. Same goes for these companies having npm as an out-of-the-box dependency for their default CLI tools. I'm going to assume anyone who thinks that every user's machine is powerful enough to run Electron apps, or even support bloated deps, hasn't written any serious software. And that's fine in general (to each their own!), but these companies publicly, strongly claim to be the best and to hire the best. These are not small 10-person startups.
Presumably these competent people could look at Electron, think about building their own cross-platform application on top of Chromium, and conclude that this free-as-in-code-and-beer tool fit their needs.
Bun exists, and building a UI on top of that should be well within the power of the money they have. No one is saying to rebuild the universe, but the current state is embarrassing.
You could build the same TUI in the same amount of time with the same effort and end up with an overall better product. Spend a little more and it would be even better. Why can't we expect more from companies that have more?
I've used 3/5 of those programs significantly and have issues with all of them relating to software quality. Especially Discord; it's so bad that I have 4 different servers actively trying alternatives.
When discord and slack started the company was not large so it definitely could have been a lack of resources. Could also have been a bad design choice.
I'm not alone on that either at all. This is a pretty common opinion.
Claude had a chance to really show something special here and they blew it.
They all have millions of active users doing complex tasks in them every day! It's laughable that you expect me to take your vague complaints about Discord seriously, not just as complaints but as dispositive signs that Electron was a bad design choice.
Discord could have been a lack of resources as I previously said. They weren't a billion dollar company when the application was conceived.
Regardless the only thing keeping those millions of people at this point is lock-in. Even then people are actively looking for ways to move away from it. I'm witnessing the migration now and am looking forward to the day I don't have to hard restart the client 2-3 times a day.
Popularity is not an indicator of anything other than marketing skills. How popular a product is has absolutely nothing to do with its technical merits, as can be seen from garbage like Microsoft Teams.
They don't have to reinvent electron. They shouldn't need to use a whole virtualized operating system to call their web API with a fancy UI.
Projects with a much smaller budget than Anthropic's have achieved much better x-plat UI without relying on Electron [1]. There are more sensible options like Qt and whatnot for rendering UIs.
You can even engineer your app to have a single core with all the business logic as a single shared library. Then write UI wrappers using SwiftUI, GTK, and whatever microsoft feels like putting out as current UI library (I think currently it's WinUI2) consuming the core to do the interesting bits.
Heck, there are people who built GUI toolkits from scratch to support their own needs [2].
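The shared-core shape described above can be sketched as follows. This is a toy illustration in Python purely for brevity; in practice the core would be a shared library with a C ABI and the wrappers would be SwiftUI/GTK/WinUI, and every class and method name here is made up:

```python
# Hypothetical sketch: all business logic lives in one UI-free core, with
# thin per-platform wrappers that only forward events to it. In a real
# build the core would be a C-ABI shared library, not a Python class.

class ChatCore:
    """Single cross-platform core: one codebase, one test suite."""
    def __init__(self):
        self.messages = []

    def send(self, text):
        self.messages.append(("user", text))
        # Real logic (API calls, streaming, persistence) would live here.
        reply = f"echo: {text}"
        self.messages.append(("assistant", reply))
        return reply


class MacWrapper:        # would be SwiftUI in practice
    def __init__(self, core):
        self.core = core

    def on_submit(self, text):
        return self.core.send(text)


class GtkWrapper:        # would be GTK on Linux in practice
    def __init__(self, core):
        self.core = core

    def on_submit(self, text):
        return self.core.send(text)


core = ChatCore()
print(MacWrapper(core).on_submit("hi"))  # echo: hi
```

The point of the split is that each platform ships only the thin wrapper layer, while features, tests, and bug fixes land once in the core.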
1. Anthropic had no problem spending tens of thousands of dollars of tokens re-writing the C compiler a couple weeks ago before abandoning it within hours of launch, despite promising that fixes were coming in the following days.
2. Regardless, are you arguing that re-writing Chromium would have been a good solution to the original suggestion of native apps? Aren't there better existing approaches from companies that don't claim to have the best coders and aren't worth hundreds of millions, billions, tens of billions, or hundreds of billions of dollars? I'm unsure why you made that specific suggestion. Wouldn't pointing to an existing product's native approach be a better one?
Who both has a computer too slow to handle Electron applications, and is spending $20 a month on Claude Code?
>There are downsides though. Electron apps are bloated; each runs its own Chromium engine. The minimum app size is usually a couple hundred megabytes. They are often laggy or unresponsive. They don’t integrate well with OS features.
A few hundred megabytes to a few GB sounds like an end-user problem. They can either make room or not use your application.
You can easily buy a laptop for around 400 USD that will run Claude code just fine, along with several other electron apps.
Don't get me wrong, native everything ( which would probably mean sacrificing Linux support) would be a bit better, but it's not a deal breaker.
Me, because my work gave me a crappy dell that can barely run the stripe dashboard in the browser. I could put in a request for a Mac or something faster but this is the standard machine everyone gets for the company. It helps me be sympathetic to my users to make sure what I develop is fast enough for them because I definitely am going to make it fast enough for me so I don’t shoot my brains out during development.
We should repeat it over and over until all these Electron apps are replaced by proper native apps. It's not just performance: they look like patched websites, have inconsistent style and bad usability, and are packed with bugs that our OSes solved decades ago. It's like Active Desktop™ all over again. Working in a native Mac app just feels better.
No, they are also inconsistent: Slack, VS Code, Zed, Claude, ChatGPT, Figma, Notion, Zoom, Docker Desktop, to quote some that I use daily. They all have different UI patterns and design. The only thing they have in common is that they are slow, laggy, difficult to use, and don't respond quickly to the window manager.
Compare that to other software on the Mac such as Pages, Xcode, Tower, Transmission, Pixelmator, Mp3tag, TablePlus, Postico, Paw, Handbrake, etc. (the others I use): those are a delight to work with and give me the computing experience I was looking for when buying a Mac.
Well put. What world are folks living in where it wouldn’t be the obvious choice.
Code is not the cost. Engineers are. Bugs come from hindsight not foresight. Let’s divide resources between OSs. Let all diverge.
> They are often laggy or unresponsive. They don’t integrate well with OS features.
> (These last two issues can be addressed by smart development and OS-specific code, but they rarely are. The benefits of Electron (one codebase, many platforms, it’s just web!) don’t incentivize optimizations outside of HTML/JS/CSS land.)
Give stats. Often, rarely. What apps? I’d say rarely, often. People code bad native UIs too, or get constrained in features.
Claude offers a CLI tool. What product manager would say no to Electron in that situation?
This article makes no sense in context. The author surely gets that.
I didn’t say AI was bad and I acknowledged the benefits of Electron and why it makes sense to choose it.
With 64 GB of RAM on my Mac Studio, Claude Desktop is still slow! Good Electron apps exist; it's just an interesting note given the recent spec-driven development discussion.
The use of "free" in the title is probably too much of a distraction from the content (even though the opening starts with an actual cost). The point of the article does not actually revolve around LLM code generation being $0, but that's what most of the responses will be about because of the title.
Some of the engineers working on the app worked on Electron back in the day, so preferred building non-natively. It’s also a nice way to share code so we’re guaranteed that features across web and desktop have the same look and feel. Finally, Claude is great at it.
That said, engineering is all about tradeoffs and this may change in the future!
Thanks for chiming in! My takeaways are that, as of today:
- Using a stack your team is familiar with still has value
- Migrating the codebase to another stack still isn’t free
- Ensuring feature and UX parity across platforms still isn’t free. In other words, maintaining different codebases per platform still isn’t free.
- Coding agents are better at certain stacks than others.
Like you said any of these can change.
It’s good to be aware of the nuance in the capabilities of today’s coding agents. I think some people have a hard time absorbing the fact that two things can be true simultaneously: 1) coding agents have made mind bending progress in a short span 2) code is in many ways still not free
I suppose because generating tokens is slow. It is a limitation of the technology. And when data is coming in slowly, you don't need a super high performance client.
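The "data is coming in slowly" point can be made with back-of-envelope numbers. These figures are assumed for illustration (not measured): an LLM streaming around 60 tokens/s into a chat UI rendering at 60 fps.

```python
# Back-of-envelope: LLM token streaming rate vs. UI frame budget.
# Assumed, illustrative numbers: ~60 tokens/s stream, 60 fps rendering.
tokens_per_second = 60
frames_per_second = 60

ms_per_token = 1000 / tokens_per_second      # ~16.7 ms between tokens
frame_budget_ms = 1000 / frames_per_second   # ~16.7 ms per frame

# Roughly one new token arrives per rendered frame, so the client only
# appends a token's worth of text each frame; raw rendering throughput
# is rarely the bottleneck for a streaming chat UI.
tokens_per_frame = frame_budget_ms / ms_per_token
print(tokens_per_frame)  # 1.0
```

Under these assumptions, even a slow client keeps up with the stream; the jank people report comes from elsewhere (layout thrash, re-rendering the whole transcript, etc.), not from the volume of incoming data.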
...I think a vibe-coded Cocoa app could absolutely be more performant than a run-of-the-mill Electron app. It probably wouldn't beat something heavily optimized like VS Code, but most Electron apps aren't like that.
As a user I would trade fewer features for a UI that doesn't jank and max out the CPU while output is streaming in. I would guess a moderate amount of performance engineering effort could solve the problem without switching stacks or a major rewrite. (edit: this applies to the mobile app as well)
While there are legitimate/measurable performance and resource issues to discuss regarding Electron, this kind of hyperbole just doesn't help.
I mean, look: the most complicated, stateful, and involved UIs most of the people commenting in this thread are going to use (are ever going to use, likely) are web-stack apps. I'll name some obvious ones, though there are other candidates. In order of increasing complexity:
1. Gmail
2. VSCode
3. www.amazon.com (this one is just shockingly big if you think about it)
If your client machine can handle those (and obviously all client machines can handle those), it's not going to sweat over a comparatively simple Electron app for talking to an LLM.
Basically: the war is over, folks. HTML won. And with the advent of AI and the sunsetting of complicated single-user apps, it's time to pack up the equipment and move on to the next fight.
> complex UI that isn't a frustratingly slow resource hog
Maybe you can give ones of competing ones of comparable complexity that are clearly better?
Again, I'm just making a point from existence proof. VSCode wiped the floor with competing IDEs. GMail pushed its whole industry to near extinction, and (again, just to call this out explicitly) Amazon has shipped what I genuinely believe to be the single most complicated unified user experience in human history and made it run on literally everything.
People can yell and downvote all they want, but I just don't see it changing anything. Native app development is just dead. There really are only two major exceptions:
1. Gaming. Because the platform vendors (NVIDIA and Microsoft) don't expose the needed hardware APIs in a portable sense, mostly deliberately.
2. iOS. Because the platform vendor expressly and explicitly disallows unapproved web technologies, very deliberately, in a transparent attempt to avoid exactly the extinction I'm citing above.
> Maybe you can give ones of competing ones of comparable complexity that are clearly better?
Thunderbird is a fully-featured mail app and much more performant than Gmail. Neovim has more or less the same feature set as VSCode and its performance is incomparably better.
> Thunderbird is a fully-featured mail app and much more performant than Gmail.
TB is great and I use it every day. An argument for it from a performance standpoint is ridiculous on its face. Put 10G of mail in the Inbox and come back to me with measurements. GMail laughs at mere gigabytes.
Verifiably false. Like, this is just trivial to disprove with the "Reload" button in the browser (about 1.5s for me, FWIW). Why would you even try to make that claim?
Using the terminal in VS Code will easily bring the UI to a dead stop. iTerm is smooth as butter with multiple tabs and 100k+ lines of scrollback buffer.
Try enabling 10k lines of scrollback buffer in VS Code and printing 20k lines.
I actually avoid using VSCode for a number of reasons, one of which is its performance. My performance issues with VSCode are I think not necessarily all related to the fact that it's an electron app, but probably some of them are.
In any case, what I personally find more problematic than just slowness is electron apps interacting weirdly with my Nvidia linux graphics drivers, in such a way that it causes the app to display nothing or display weird artifacts or crash with hard-to-debug error messages. It's possible that this is actually Nvidia's fault for having shitty drivers, I'm not sure; but in any case I definitely notice it more often with electron apps than native ones.
Anyway one of the things I hope that AI can do is make it easier for people to write apps that use the native graphics stack instead of electron.
VSCode isn't a regular Electron crap application; in fact, Microsoft has dozens of out-of-process plugins written in C++, Rust, and C# to work around Electron's crap issues, and the in-editor terminal uses WebGL instead of div-and-p soup.
Sigh. Beyond the deeply unserious hyperbole, this is a no-true-scotsman. Yes, you can use native APIs in Electron. They can even help. That's not remotely an argument for not using Electron.
> the in-editor terminal makes use of WebGL
Right, because clearly the Electron-provided browser environment was insufficient and needed to be escaped by using... a browser API instead?
Again, folks, the argument here is from existence. If the browser stack is insufficient for developing UIs in the modern world, then why is it winning so terrifyingly?
Gen X and Boomers somehow managed to write portable native code across multiple hardware architectures, operating systems, and language toolchains.
Yet mastering Web UI delivery from system services and daemons to the default browser, the way UNIX administration tooling does, is apparently an insurmountable challenge.
Yeah, I've got a 7950x and 64gb memory. My vibe coding setup for Bevy game development is eight Claude Code instances split across a single terminal window. It's magical.
I tried the desktop app and was shocked at the performance. Conversations would take a full second to load, making rapid switching intolerable. Kicking off a new task seems to hang for multiple seconds, I assume while the process spins up.
I wanted to try a disposable-conversation-per-feature workflow with git worktree integration for an hour to see how it contrasted, but couldn't even make it ten minutes without bailing back to the terminal.
Both Anthropic's and OpenAI's apps being this janky, with only basic history management (the search primarily goes by titles), tells me a lot. You'd think these apps would be a shining example of what's possible.
Explains why my laptop turns into a makeshift toaster when the Claude app automatically runs in the background. Even many games don't run that intensively in the background.
I would have expected the non-solved-cases to be the relatively unique ones, but considering the plethora of both A) non-Electron desktop apps, and B) coding agents (Copilot/Windsurf/Cursor/Codex/OpenCode/Qwen/Amazon Kiro/Devin/JetBrains AI/Gemini CLI/Gemini Code Assist/Antigravity/Warp/Kilocode/Cline/RooCode/Atlassian Rovo/Claude Code/etc), it seems like neither of the building blocks is very rare - perhaps Claude is just incapable of putting it together?
How so? If coding is largely solved and we are on the cusp of not even needing to learn to code, then the statement that they use electron because it’s what most of their engineers are familiar with seems a little contradictory.
What's wrong with taking existing skills into consideration when making technical decisions while coding skills still matter, just because you think coding skills won't matter "in a year or two"? Where's the contradiction?
I think that comment is interesting as well. My view is that there is a lot of Electron training code, and that helps in many ways, both in terms of the app architecture, and the specifics of dealing with common problems. Any new architecture would have unknown and unforeseen issues, even for an LLM. The AIs are exceptional at doing stuff that they have been trained on, and even abstracting some of the lessons. The further you deviate away from a standard app, perhaps even a standard CRUD web app, the less the AI knows about how to structure the app.
Claude isn't AGI, but this is a terrible argument. I'm better at Javascript than C, too. Does this mean I'm not a generalized intelligence? I'm just JS stack autocomplete?
I keep being told by Anthropic and others that these AI coding tools make it effortless to write in new languages and port code from one language to another.
This is an important lesson to watch what people do, not what they say.
Right, the biggest driver of global economic growth is not based on engineering at all, and these people (who've made massive amounts of money) clearly don't know how to describe the work they do.
As for others, Microsoft is saying they’re porting all C/C++ code to Rust with a goal of 1m LOC per engineer per month. This would largely be done with AI.
But if AI can maintain code bases so easily, why does it matter if there are 3? People use electron to quickly deploy non-native apps across different systems.
Surely, it would be a flex to show that your AI agents are so good they make electron redundant.
But they don’t. So it’s reasonable to ask why that is.
But this one time the resources clearly didn't solve the problem, since we're talking about it. And they claim that AI makes it trivial to port and do this sort of thing, so it would not be 3x the resources.
1. Anthropic has no problem burning tens of thousands of dollars of tokens on things that have zero real-world value, such as implementing a C compiler that as far as I can tell they don't intend to be used in the real world - for example, they announced it on Feb 5, promising "Over the coming days, I’ll continue having Claude push new changes if you want to follow along with Claude’s continued attempts at addressing these limitations" but there have been zero code commits since Feb 5 (the day they announced it). Wouldn't it make far more sense for a company to invest tokens into their own product than burning them for something that may be abandoned within hours of launching, with zero ongoing value to their company or their customers?
2. Why do you think it requires "three times the resources" - wouldn't it normally be an incremental amount of work to support additional targets, but not an additional 100% of work for each additional target?
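On point 2, the incremental-work view can be put in rough numbers. The split below is assumed purely for illustration: suppose 70% of the app is platform-independent business logic and 30% is per-platform UI and OS integration.

```python
# Rough cost model for the "3x the resources" claim, under an assumed
# (illustrative) 70/30 split between shared logic and per-platform UI.
shared_fraction = 0.7
ui_fraction = 0.3
platforms = 3

naive_cost = platforms * 1.0                              # three full rewrites
shared_core_cost = shared_fraction + platforms * ui_fraction

print(naive_cost)                  # 3.0
print(round(shared_core_cost, 2))  # 1.6 -- incremental, not 3x
```

The exact fractions are debatable, but as long as a meaningful share of the code is shared, each additional target costs a fraction of the first, not another 100%.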
> But if AI can maintain code bases so easily, why does it matter if there are 3? People use electron to quickly deploy non-native apps across different systems.
Because then their competition would work faster than they could and any amount of slop/issues/imperfections would be amplified threefold.
Also there would inevitably be some feature drift - I got SourceTree for my Mac and was surprised to discover that it's actually somewhat different from the Windows version, that was a bit jarring.
I hope that in the next decade we get something like lcl (https://en.wikipedia.org/wiki/Lazarus_Component_Library), but for all OSes and with bindings for all common languages - so we don't have to rely on the web platform for local software, until then developing native apps is a hard sell.
Yeah, I'm kind of disheartened by the number of people who still insist that LLMs are an expensive flop with no potential to meaningfully change software engineering.
In a few years, they'll be even better than they are now. They're already writing code that is perfectly decent, especially when someone is supervising and describing test cases.
Yea what? This is exactly why they should switch to native apps. Native apps are not harder to maintain than JavaScript especially with LLM guidance on APIs and such. I don't understand why your confidence in LLM code ability means you don't think it can succeed with native apps
One thing to consider is that "native" apps are considered the gold standard of desktop UIs, but an overwhelming share of users… don’t care. I, for one, don’t necessarily enjoy Qt apps. I think the only one I still use is KeepassXC and it’s trash to me, just slightly better than Keepass2. I much prefer the Bitwarden Electron app.
Given the choice, I often reach for Electron apps because they feel more feature rich, feel better designed in terms of polish (both UI and UX), and I rarely get resource hog issues (Slack is the only offender I can think of among the Electron apps I use)
Did you ever consider that perhaps for other people when something is unreasonably slow and consuming all of their battery, the "polish" is really not that high on the list of important characteristics?
Also, keep in mind that many people would like their applications to respect their preferences, so the "polish" that looks completely out of place on their screen is ugly (besides slow).
Ok but did you consider some people care about polish and prefer apps with an attention to design, and not so much about consistency with other apps? Why are your tastes more important?
Not my taste, the requirements of not crashing and not being horribly slow come before any polish. Any software engineering course will teach you that.
I'm guessing the first question will be "How are we going to keep the UI consistent?". The hard part is never writing the code; it's carefully releasing fast-changing features from product people. Their chat UX is the core product, replicated on the web and on other devices. That's almost always React or [JS framework] these days.
Migrating the system would be the easier part in that regard, but they'll still need a JS UI unless they develop multiple teams to spearhead various native GUIs (which is always an option).
Almost every AI chat framework/SDK I've seen is some React or JS stuff. Or even agent stuff like llamaindex.ts. I have a feeling AI is going to reinforce React more than ever.
Yep, I understand why "let's release this one feature everywhere" is a great lure, and I do get annoyed when desktop Spotify gets features later than mobile, or never. However, a phone is not a desktop capability-wise, and what we usually get is the power of the phone on a desktop, aka the lowest common denominator of capabilities.
This fetish we as an industry have for hiding platform specifics makes us blind to platform-specific capabilities. Some software would be better off if it leaned into the differences instead of fighting them.
But the question isn't really why Claude is Electron-based. It's that if, for some reason, it had to be native on 3 platforms, could a swarm of agents make and maintain the 3 apps while all the humans did was write the spec and tests?
With your context and understanding of the coding agents' capabilities and limitations, especially Opus 4.6, how do you see that going?
It is really confusing how we've been told for the last few years that all our programmers are obsolete, yet these billion-dollar companies can't be arsed to use these magical tools to substantially improve their #1 user-facing asset.
Somehow claude is only great at things that are surface level 80.9%
And for some reason I believe "may change in the future" will never come. We all know coding was never the problem in tech; hype was. Ride it while you can.
I always wonder how those established Electron codebases would map over to something that uses the system-specific WebViews, and how broken (or not) those would prove to be.
Really? Because the point is that when it comes to performance, just implementing your own "DOM" in C++ or some other low-level language is easily going to have 10x the performance of Electron, in addition to having more features (better, smoother file uploads would be welcome, btw).
If you can put in unlimited coding engineering effort, why isn't Claude Code the very best it can possibly be?
Why isn't the fact that it can work 10% better an excuse to get claude to work on it for however long it takes?
I mean, most people here have done development with claude code, and we suppose the answer is simply: because that doesn't work without a capable engineer constantly babysitting the changes it's making, guiding it, nudging it, reminding it about edge cases, occasionally telling it it's being stupid ... it's a great product, incredible even, but it doesn't work without senior engineers.
Same question: why doesn't it have more plugins, batch scripts, and modifications than the app store? Surely it can come up with 10,000 good ideas by itself and just implement them? Everything from little games to activating bedroom lights from Chinese vendor #123891791?
> Electron apps are bloated; each runs its own Chromium engine. The minimum app size is usually a couple hundred megabytes.
I only see these complaints on HN. Real users don't have this complaint. What kind of low-end machines are you running, that Chromium engine is too heavy for you?
> They are often laggy or unresponsive.
That's not due to Electron.
> They don’t integrate well with OS features.
If it is good enough for Microsoft Teams it is probably good enough for most apps. Teams can integrate with microphone, camera, clipboard, file system and so on. What else do you want to integrate with?
I agree with your counterpoint to OS integration, but Microsoft Teams is infamous for not being "good enough" otherwise. Laggy, buggy, unresponsive, a massive resource hog especially if it runs at startup. It's gotten a bit better, but not enough. These are not complaints on HN, they're in my workplace.
Not everyone is running the latest and greatest hardware, very few actually have the money for that. If you're running hardware from before this decade, or especially the early 2010s, the difference between an Electron app and a native app is unbelievably stark. Electron will often bring the device to its knees.
A single Electron app is usually not a problem. The problem is that the average user has a pile of Chrome tabs open in addition to a handful of Electron apps on top of increasingly bloated commercial OSes, which all compound to consume a large percentage of available resources.
This is particularly pertinent on bulk-purchased corporate and education machines which are loaded down with mandated spyware and antivirus garbage and often ship with CPUs that lag many years behind, and in the case of laptops might even have dog slow eMMC storage which makes the inevitable virtual memory paging miserable.
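The compounding effect described above is easy to put in rough numbers. All the per-process figures below are assumed for illustration; real numbers vary widely by app and workload:

```python
# Illustrative, assumed resident-memory figures (MB) for a typical
# corporate machine; none of these are measurements.
chrome_tabs   = 15 * 150   # a pile of browser tabs
electron_apps = 4 * 300    # chat, editor, music, video-call clients
os_and_agents = 4000       # OS, antivirus, corporate management agents

total_mb = chrome_tabs + electron_apps + os_and_agents
print(total_mb)  # 7450 -- most of an 8 GB machine before any real work
```

No single line item looks outrageous on its own, which is exactly why each vendor can shrug; the user pays the sum.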
Teams is a terrible app, although Electron isn't the only reason for that: It needs a Gig of RAM to do things that older chat apps could do in 4 Meg.
The free ride of ever increasing RAM on consumer devices is over because of the AI hyperscalers buying all fab capacity, leading to a real RAM shortage. I expect many new laptops to come with 8GB as standard and mid-range phones to have 4GB.
Software engineers need to start thinking about efficiency again.
"Real users" don't know what electron is, but real users definitely complain about laggy and slow programs. They just don't know why they are laggy and slow.
> Real users don't have this complaint. What kind of low-end machines are you running
Real users complain differently: "My machine is slow". Electron itself is not very heavyweight (though not featherweight), but JS and DOM can cost a lot of resources. Right now my GMail tab has allocated 529 MB.
> That's not due to Electron.
Of course, but it takes some careful thought. BTW e.g. Qt apps can be pretty memory-hungry, too.
> good enough for Microsoft Teams
It's not easy to pick a more "beloved" application.
What an Electron app usually would miss is things like global shortcuts managed by macOS control panel, programmability via Automation, and the L&F of native controls. I personally don't usually miss any of these, but users who actually like macOS would usually complain.
I personally prefer to run Electron-ish apps, like Slack, in their Web versions, in a browser.
I run IT for a nonprofit and have 120 "real users" doing "real work" on "low-end machines" providing "real mental health, foster care, and social services" to "real communities".
These workers complain about performance on the machines we can afford. 16GB RAM and 256GB SSDs are the standard, as is 500MB/sec. internet for offices with 40 people, and my plans to upgrade RAM this year were axed by the insane AI chip boondoggle.
People on HN need to understand that not everyone works for a well-funded startup, or big tech company that is in the process of destroying democracy and the environment in the name of this quarter's profits!
BTW Teams has moved away from Electron, before it did I had to advise people to use the browser app instead of the desktop for performance reasons.
Claude is an Electron app because this is a cultural issue, not a technological one. Electron could make sense if you are a startup with a limited development force. For big companies that want to make a difference, hiring N developers and maintaining N native apps is one of the best bets on quality and UX you can make, yet people don't do it even in large companies that, in theory, have the ability. Similarly, even if automatic programming made it easier, it still isn't done. It's a cultural issue, part of the fact that those who make software no longer try to make the best possible software.
Code is not and will never be free. You pay for it one way or another. It will take a couple of years for things to cool down before we realise that there is more to software than writing the code. But even if AI can write all the code, who is going to understand it? Don't tell me this is not needed. RTFM is what gives a hacker the edge. I doubt any company wants to be in a position where it simply has no clue how its main product actually works.
The quality of the ChatGPT Mac app is a major driver for me to keep a subscription. Hotkeys work, app feels slick and native. The Claude Mac app I found so poor that I'd never reach for it, and ended up uninstalling it — despite using the heck out of Claude Code on a Max plan — because it started blocking system restarts for updates.
I don't care whether it's Electron or not, but they now ship a full VM with Claude, which not only takes 15 GB of storage but also uses a lot of memory even though I just use chat. Why does that even need to be started?
Because it doesn’t matter. The biggest AI apps of last year were command line interfaces for cripes sake. Functionality and rapid iteration is more important.
Heh, I felt the same. I'm a web dev, but I do not want an Electron app. We can do better. I used to write Electron apps because I wasn't able to build a proper native app. Now I can!
I've been building a native macOS/iOS app that lets me manage my agents. Both the ability to actually control/chat fully from the app and to just monitor your existing CLI sessions (and/or take 'em over in the app).
Also has a Rust server that backs it, so I can throw it anywhere (container, Pi, etc.) and then connect to it. If anyone wants to see it (I have seen at least 4 other people doing something similar): https://github.com/Robdel12/OrbitDock
I have been getting Claude to use Free Pascal/Lazarus to write cross-platform (Linux Qt & GTK, Windows, and Cocoa) apps, as well as porting 30-year-old abandoned Windows Delphi apps to all three platforms, precisely because I can end up with a small, single binary for distribution after static linking.
I hope that prevalence of AI coding agents might lead to a bit of a revival of RAD tools like lazarus, which seem to me to have a good model for creating cross-platform apps.
The gotcha style "if AI is so good, why does $AI_COMPANY's software not meet my particular standard for quality" blog posts are already getting tedious.
Why is no one admitting that even though resources like RAM, CPU, etc. are plentiful nowadays, they should still be conserved?
Computers have gotten orders of magnitude faster since 2016, but using mainstream apps certainly doesn't feel any faster. Electron and similar frameworks do offer appealing engineering tradeoffs, but they are a main culprit of this problem.
Sure, the magnitude of RAM/compute "waste" may have grown from kB to MB, but inefficiency is still inefficiency - no matter how powerful the machine it's running on is.
I assume it's because LLMs are overrated and trash so they chose something that was easy for lazy developers, but I'm probably just cynical.
You would think with programming becoming completely automated by the end of 2026, there'd be a vibe coded native port for every platform, but they must be holding back to keep us from all getting jealous.
I am curious how much Claude Code is used to develop Anthropic's backend infrastructure, since that's a true feat of engineering where the real magic happens.
Hi, Felix here - I'm responsible for said Electron app, including Claude Code Desktop and Claude Cowork.
All technology choices are about trade-offs, and while our desktop app does actually include a decent amount of Rust, Swift, and Go, I understand the question - it comes up a lot. Why use web technologies at all? And why ship your own engine? I've written a long-form version of answers to those questions here: https://www.electronjs.org/docs/latest/why-electron
To us, Electron is just a tool. We co-maintain it with a bunch of excellent other people but we're not precious about it - we might choose something different in the future.
I mean, a software IDE should be pretty low on the totem pole of software complexity.
Edit: (1) because most of the complexity lies in the tool chains that are integrated, like compilers and linters, and (2) because there’s much more complex software out there, mostly at the intersection of engineering domains, to name a few: ballistic guidance systems, IoT and networking, predictive maintenance systems, closed-loop process optimization systems, SLAM robotics
Because JavaScript is the best for the application layer. We just have to accept that this is reality. AI training sets are just full of JS... Good JS, bad JS... But the good JS training is really good if you can tap into it.
You just have to be really careful because the agent can easily slip into JS hell; it has no shortage of that in its training.
As many have pointed out, code is not free. More than that, the ability to go fast only makes architectural mistakes WORSE. You'll drive further down the road before realizing you made the wrong turn.
A native app is the wrong abstraction for many desktop apps. The complexity of maintaining several separate codebases likely isn't worth the value add, especially for a company hemorrhaging money the way Anthropic is.
> The resulting compiler is impressive, given the time it took to deliver it and the number of people who worked on it, but it is largely unusable. That last mile is hard.
You're easy to impress, that explains the unrealistic expectations "on the surface".
That's a strange analogy, though: basic usability is the first mile, not the last. Coming back to frameworks and apps, the last mile would be respecting the Mac's unique keyboard bindings file for text editing. The first mile is reacting to any keyboard input in a text field. Same with the compiler: a basic hello-world failure isn't the last mile.
The real answer buried in Boris's comment is "Claude is great at it" - meaning LLMs produce better Electron/React code because that's what most of the training data looks like. This creates a self-reinforcing loop: teams use AI to write code, AI is best at web stack code, so teams choose web stacks, which produces more web stack training data. The practical implication is that "what stack should we use" increasingly has an implicit factor of "what stack does our AI tooling produce the most reliable output for" and right now that's overwhelmingly JS/TS/React.
Claude should have gone for native apps and demonstrated that it is possible to do anything with their AI.
I'm currently building a macOS AI chat app. Generally SwiftUI/AppKit is far better than the Web, but it performs badly in a few areas. One of them is the Markdown viewer. Swift Markdown libraries are slow and lack some features, like Mermaid diagrams. To work around this, some of my competitors use Tauri/Electron, and a few others use WKWebView inside a Swift app.
Initially I tried WKWebView. Performance was fine and the bridge between JS and Swift was not that hard to implement, but I started seeing a few problems, especially because the WebView runs as a separate process and usually a single WebView instance is reused across views.
After a few attempts to fix them, I gave up on the idea and was tempted to go fully with Web rendering via Tauri, but as a Mac developer I couldn't bring myself to build this app in React. So I started building my own Markdown library. After a month of work, I now have a high-performance Markdown library built with Rust and TextKit. It supports streaming and Markdown extensions like Mermaid.
Most of the code was written by Claude Opus, and some tricky parts were solved by Codex. The important lesson I learned is that I’m no longer going to accept limitations in tech and stop there. If something isn’t available in Swift but is available in JS, I’m going to port it. It’s surprisingly doable with Claude.
This is such a litmus test for a tool that "in few months will be writing 90% of all code". If a multi-billion dollar bleeding edge company can't use Claude Code to create and maintain a native GUI then no one else stands a chance.
mihaela | 23 hours ago
BoredPositron | 23 hours ago
Vaslo | 22 hours ago
BoredPositron | 21 hours ago
the__alchemist | 23 hours ago
jsiepkes | 23 hours ago
Retr0id | 23 hours ago
Edit: The title of the post originally started with "If code is free,"
ludicity | 23 hours ago
slopinthebag | 23 hours ago
tokenless | 23 hours ago
[OP] dbreunig | 23 hours ago
sebmellen | 23 hours ago
catgirlinspace | 23 hours ago
slopinthebag | 22 hours ago
it just means that it might be free for my owner to adopt me, but it sure as hell aint free for them to spoil me
catgirlinspace | 22 hours ago
3eb7988a1663 | 21 hours ago
doubled112 | 22 hours ago
CamperBob2 | 20 hours ago
doubled112 | 18 hours ago
forrestthewoods | 22 hours ago
rr808 | 23 hours ago
cedws | 23 hours ago
chambored | 23 hours ago
st3fan | 22 hours ago
hu3 | 23 hours ago
I use Opus 4.6 (for complex refactoring), Gemini 3.1 Pro (for html/css/web stuff) and GPT Codex 5.3 (workhorse, replaced Sonnet for me because in Copilot it has larger context) mostly.
For small tools. But also for large projects.
Current projects are:
1) .NET C#, Angular, Oracle database. Around 300k LoC.
2) Full stack TypeScript with Hono on backend, React on frontend glued by trpc, kysely and PostgreSQL. Around 120k LoC.
Works well in both. I'm using plan mode and agent mode.
What helps a ton are e2e Playwright tests, which the agent executes after each code change.
My only complaint is that it tends to stutter after many sessions/hours. A restart fixes it.
$39/mo plan.
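The "agent runs the e2e suite after every change" workflow above can be sketched as a small gate script. This is a hypothetical illustration, not the commenter's actual setup; the real command would be something like `npx playwright test`, and the stand-in command below is only there so the sketch runs anywhere.

```python
import subprocess
import sys

def run_e2e_gate(cmd: list[str]) -> bool:
    """Run the e2e suite; the agent only keeps a change if this returns True."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0

if __name__ == "__main__":
    # In a real setup this would be e.g. ["npx", "playwright", "test"].
    # Here we use a trivial stand-in command so the sketch is runnable anywhere.
    ok = run_e2e_gate([sys.executable, "-c", "print('suite passed')"])
    print("keep change" if ok else "revert change")
```

The point is that the test run is a deterministic checkpoint: the agent edits, the gate runs, and a failing exit code triggers a retry or a revert rather than a merge.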
not_kurt_godel | 23 hours ago
tiborsaas | 23 hours ago
al_borland | 21 hours ago
SuperHeavy256 | 23 hours ago
shimman | 23 hours ago
fragmede | 23 hours ago
jwoq9118 | 23 hours ago
https://www.businessinsider.com/anthropic-claude-code-founde...
slopinthebag | 23 hours ago
827a | 23 hours ago
AreShoesFeet000 | 23 hours ago
bdangubic | 23 hours ago
vsgherzi | 23 hours ago
The answer of course is that it can’t do it and maintain compatibility between all three well enough as it’s high effort and each has its own idiosyncrasies.
linsomniac | 23 hours ago
In Python it was very nearly a one-shot; there was an issue with one watermark not showing up on one API endpoint that I had to give it a couple of kicks at the can to fix. Go it was able to get, but it needed 5+ attempts at rework. Rust took ~10+, and Zig maybe 15+.
They were all given the same prompt, though they all likely would have done much better if I'd had them build a test suite, or at least a manual testing recipe to follow.
ozim | 23 hours ago
That is why everyone jumped to building in Electron: it is based on web standards, which are free, and it runs on Chromium, which is kind of tied to Google, but you are not tied to Google and don't have to pay them a fee. You can also easily provide roughly the same experience on mobile, skipping Android shenanigans.
mirsadm | 22 hours ago
diath | 22 hours ago
It's LGPL, all you have to do is link GTK dynamically instead of statically to comply.
> to build win32 you have to pay developer fee to Microsoft.
You don't.
FpUser | 21 hours ago
Not really; you can self-sign, but your native application will be met with a system prompt trying to scare users away. This is maddening, of course, and I wish MS, Apple, and whatever others would die just for this thing alone. You fuckers leveraged huge support from developers writing for your platform, but no, it is of course not enough for you vultures; now you rip money from the hands that fed you.
bigstrat2003 | 23 hours ago
hyperrail | 23 hours ago
At most, VS Code might say that it has disabled lexing, syntax coloring, etc. due to the file size. But I don't care about that for log files...
It still might be true that Visual Studio Code uses more memory for the same file than Sublime Text would. But for me, it's more important that the editor runs at all.
Capricorn2481 | 21 hours ago
overgard | 23 hours ago
awepofiwaop | 23 hours ago
overgard | 23 hours ago
Most users are forced to use the software that they use. That doesn't mean they don't care, just that they're stuck.
BTW, this is going to matter MORE now that RAM prices are skyrocketing.
awepofiwaop | 21 hours ago
overgard | 21 hours ago
al_borland | 21 hours ago
https://www.techradar.com/computing/windows/microsoft-has-fi...
It seems like enough people do care to make Microsoft move.
slopinthebag | 23 hours ago
We just don't know how bad it will get with AI coding though. Do you think the average consumer won't care about software quality when the bank software "loses" a big transaction they make? Or when their TV literally stops turning on? People will tolerate shitty software if they have to, when it's minor annoyances, but it makes them unhappy, and they won't tolerate big problems for long.
citizenpaul | 14 hours ago
I've seen every kind of wrong actively in production, and 99/100 times no one cared enough to fix it. Even when it was losing money.
thepancake | 23 hours ago
Austin_Conlon | 23 hours ago
selridge | 23 hours ago
We should refuse to accept coding agents until they have fully replaced chromium. By that point, the world will see that our reticence was wisdom.
zamadatix | 23 hours ago
selridge | 22 hours ago
I guess I don't understand how people don't see something like 20k + an engineer-month producing CCC as the actual flare being shot into the night that it is. Enough to make this penny ante shit about "hurr hurr they could've written a native app" asinine.
They took a solid crack at GCC, one of the most complex things *made by man*, armed with a bunch of compute, some engineers guiding a swarm, and some engineers writing tests. Does it fail at key parts? Yes. Is it a MIRACLE and a WARNING that it exists at all? YES. Do you know what you would have with an engineer-month and 20k in compute trying to write GCC from scratch in 2 weeks in 2024? A whole heck of a lot less than they got.
This notion that everything is the same just didn't survive contact with 2025, and we're in 2026 now. All of software is already changing, and HN is full of wanking about all the wrong stuff.
zamadatix | 12 hours ago
The title doesn't even seem to be intended as a shot in the night, despite that being how most of HN took it. I.e., the author isn't saying "don't use agents because Claude Code is written in Electron"; they are genuinely looking at why one would still have their agents write an Electron app rather than a native one when using coding agents.
selridge | 7 hours ago
Really truly what do we know about them based on that decision? I submit the answer is basically nothing.
Instead, we’re sort of coasting on priors and vibes about “native” tool kits being better. And that’s just catnip for people on the Internet who want to talk shit about code and don’t know what the fuck they’re talking about.
If native is a stand-in for better in your mind, and you conclude that they made a choice that was worse because it's not native, then you can conclude that they are bad somehow. But the connective tissue there is not whatever their design choice is (and of course we have no vision into the actual choices). It's the un-investigated prior: that native is good and cross-platform is bad. That's really what people are arguing in this thread.
And the only reason we don't see that it's completely fucking ridiculous is because we are also interested in talking about how AI is bad.
So everyone gets to have two bites of the cookie and nobody has to defend an actual argument. It's so silly that I don't think we can claim the piece is actually much more moderate and subtle than everyone is reading it to be. Because that's kind of a dastardly position too: it allows the main argument to be advanced, and whenever it is questioned, one can retreat to claims of nuance.
Instead, let’s just say that it’s silly.
al_borland | 21 hours ago
selridge | 7 hours ago
Really and truly, what do we KNOW about Anthropic because the Claude desktop app is an electron app?
I submit the answer is “very very little.” But the author and lots of people in this thread seem willing to infer quite a fucking lot!
jsheard | 23 hours ago
crims0n | 21 hours ago
no-name-here | 19 hours ago
nozzlegear | 23 hours ago
ivankra | 23 hours ago
A few years ago, maybe. Tauri makes better sense for this use case today: like Electron, but with system webviews, so at least it doesn't bloat your system with extra copies of Chrome. And it strongly encourages Rust for the application core over JS/Node.
bigstrat2003 | 23 hours ago
qudat | 23 hours ago
Jackevansevo | 23 hours ago
The fact that claude code is a still buggy mess is a testament to the quality of the dream they're trying to sell.
linsomniac | 23 hours ago
What bugs are you seeing? I use Claude Code a lot on an Ubuntu 22.04 system and I've had very few issues with it. I'm not sure really how to quantify the amount of use; maybe "ccusage" is a good metric? That says over the last month I've used $964, and I've got 6-8 months of use on it, though only the last ~3-5 at that level. And I've got fairly wide use as well: MCP, skills, agents, agent teams...
rileymichael | 17 hours ago
Maxious | 11 hours ago
maybe we don't have AGI to prevent all bugs. but surely some of these could have been caught with some good old fashioned elbow grease and code review.
hparadiz | 23 hours ago
CTDOCodebases | 23 hours ago
emporas | 23 hours ago
MoreQARespect | 23 hours ago
yodsanklai | 23 hours ago
I can see it in my team. We've all been using Claude a lot for the last 6 months. It's hard to measure the impact, but I can tell our systems are as buggy as ever. AI isn't a silver bullet.
OsrsNeedsf2P | 23 hours ago
slopinthebag | 23 hours ago
creddit | 23 hours ago
Spivak | 23 hours ago
When you merge them into one it's usually a cost saving measure accepting that quality control will take a hit.
meheleventyone | 23 hours ago
ryan_n | 23 hours ago
Not to say that you don't review your own work, but it's good practice for others (or at least one other person) to review it/QA it as well.
slopinthebag | 23 hours ago
rhubarbtree | 22 hours ago
sarchertech | 21 hours ago
But ignoring that, if humans are machines, they are sufficiently advanced machines that we have only a very modest understanding of and no way to replicate. Our understanding of ourselves is so limited that we might as well be magic.
jatari | 21 hours ago
Well, ignoring the whole literal replication thing humans do.
sarchertech | 21 hours ago
habinero | 22 hours ago
alistairSH | 22 hours ago
akdev1l | 21 hours ago
alistairSH | 20 hours ago
lioeters | 22 hours ago
iagooar | 23 hours ago
slopinthebag | 22 hours ago
koolba | 22 hours ago
charcircuit | 23 hours ago
Nition | 23 hours ago
latchkey | 23 hours ago
I've been coding an app with the help of AI. At first it created some pretty awful unit tests, and then over time, as more tests were created, it got better and better at creating them. What I noticed was that the AI would use the context from the tests to create valid output. When I'd find bugs it had created, and have the AI fix them (with more tests), it would then do things the right way. So it was actually catching its own invalid output, because it could rely on other behaviors in the tests to find its own issues.
The project is now at the point that I've pretty much stopped writing the tests myself. I'm sure it isn't perfect, but it feels pretty comprehensive at 693 tests. Feel free to look at the code yourself [0].
[0] https://github.com/OrangeJuiceExtension/OrangeJuice/actions/...
slopinthebag | 22 hours ago
latchkey | 22 hours ago
CamperBob2 | 22 hours ago
When it comes to code review, though, it can be a good idea to pit multiple models against each other. I've relied on that trick from day 1.
huslage | 22 hours ago
PunchyHamster | 22 hours ago
ffsm8 | 22 hours ago
They just have a lot of users doing QA too, and ignore any of their issues like true champs.
reconnecting | 23 hours ago
When devs outsource their thinking to AI, they lose the mental map, and without it, control over the entire system.
Dig1t | 22 hours ago
An engineer should be code reviewing every line written by an LLM, in the same way that every line is normally code reviewed when written by a human.
Maybe this changes the original argument from software being “free”, but we could just change that to mean “super cheap”.
collinvandyck76 | 22 hours ago
macintux | 22 hours ago
mapontosevenths | 22 hours ago
macintux | 9 hours ago
mapontosevenths | 22 hours ago
I disagree.
Instead, a human should be reviewing the LLM generated unit tests to ensure that they test for the right thing. Beyond that, YOLO.
If your architecture makes testing hard, build a better one. If your tests aren't good enough, make the AI write better ones.
jddj | 22 hours ago
Just read the code.
kavok | 21 hours ago
mapontosevenths | 20 hours ago
But once you figure that out, it's pretty effective.
sarchertech | 21 hours ago
If you did, the tests would be at least as complicated as the code (almost certainly much more so), so looking at the tests isn’t meaningfully easier than looking at the code.
If you didn’t, any functionality you didn’t test is subject to change every time the AI does any work at all.
As long as AIs are either non-deterministic or chaotic (suffer from prompt instability), the code is the spec. Non-determinism is probably solvable, but prompt instability is a much harder problem.
mapontosevenths | 20 hours ago
You just hit the nail on the head.
LLMs are stochastic. We want deterministic code. The way you do that is by bolting on deterministic linting, unit tests, AST pattern checks, etc. You can transform it into a deterministic system by validating and constraining output.
One day we will look back on the days before we validated output the same way we now look at ancient code that didn't validate input.
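The "deterministic gates around a stochastic generator" idea can be made concrete. Here's a minimal sketch of one such gate (my illustration, not a specific tool the commenter uses): before any tests run, check that the generated snippet parses at all, and reject it if its AST contains calls on a banned list.

```python
import ast

BANNED_CALLS = {"eval", "exec"}  # example policy, not a complete one

def passes_gate(source: str) -> bool:
    """Deterministic pre-check on LLM output: it must parse, and it must
    not call anything on the banned list. Unit tests would run after this."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                return False
    return True

print(passes_gate("x = 1 + 1"))    # clean snippet passes the gate
print(passes_gate("eval('1+1')"))  # banned call is rejected
print(passes_gate("def f(: pass")) # unparseable output is rejected
```

However many times the model regenerates the code, this check gives the same verdict for the same input, which is exactly the property the surrounding pipeline is meant to supply.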
sarchertech | 20 hours ago
You can have all the validation, linters, and unit tests you want and a one word change to your prompt will produce a program that is 90%+ different.
You could theoretically test every single possible thing that an outside observer could observe, and the code being different wouldn’t matter, but then your tests would be 100x longer than the code.
mapontosevenths | 19 hours ago
In the information-theoretical sense you're correct, of course. I mean, it's a variation on the halting problem, so there will never be any guarantee of bug-free code. Heck, the same is true of human code and its foibles. However, in the "does it work or not" sense, I'm not sure why we care?
If the gate only passes the digits 0-9 sent within 'x' seconds, and the code's job is to send a digit between 0 and 9, how is it non-deterministic?
Let's say the linter says it's good, it passes the regression tests, you've validated that it only outputs what it's supposed to and does it in a reasonable amount of time, and maybe you're even super paranoid so you ran it through some mutation tests just to be sure that invalid inputs didn't lead to unacceptable outputs. How can it really be non-deterministic after all that? I get that it could still be doing some 'other stuff' in the background, or doing it inefficiently, but if we care about that we just add more tests for that.
I suppose there's the impossible problem edge case. IE - You might never get an answer that works, and satisfies all constraints. It's happened to me with vibe-coding several times and once resulted in the agent tearing up my codebase, so I learned to include an escape hatch for when it's stuck between constraints ("email user123@corpo.com if stuck for 'x' turns then halt"). Now it just emails me and waits for further instruction.
To me, perfect is the enemy of good and good is mostly good enough.
sarchertech | 18 hours ago
If that’s all the code does, sure you could specify every observable behavior.
In reality, though, there are tens of thousands of "design decisions" that a programmer or LLM is going to make when translating a high-level spec into code. Many of those decisions aren't even things you'd care about, but users will notice the cumulative impact of them constantly flipping.
In a real world application where you have thousands of requirements and features interacting with each other, you can’t realistically specify enough of the observable behavior to keep it from turning into a sloshy mess of shifting jank without reviewing and understanding the actual spec, which is the code.
vips7L | 22 hours ago
collinvandyck76 | 22 hours ago
sixtyj | 22 hours ago
But I don’t get how they code at Anthropic when they say that almost all their new code is written by an LLM.
Do they have some internal much smarter model that they keep in secret and don’t sell it to customers? :)
Voultapher | 10 hours ago
XenophileJKO | 22 hours ago
whynotminot | 22 hours ago
When is the last time you had an on call blow up that was actually your code?
Not that I’m some savant of code writing — but for me, pretty much never. It’s always something I’ve never touched that blows up on my Saturday night when I’m on call. Turns out it doesn’t really change much if it’s Sam who wrote it … or Claude.
croes | 21 hours ago
There is a difference between a lector and an author
geetee | 21 hours ago
crazygringo | 21 hours ago
geetee | 20 hours ago
crazygringo | 19 hours ago
Remember, they're not just good for writing code. They're amazing at reading code and explaining to you how the architecture works, the main design decisions, how the files fit together, etc.
whynotminot | 20 hours ago
It means Sam is 7 beers deep on Saturday night since you’re the one on call. He’s not responding to your slack messages.
Claude actually is there though, so that’s kind of nice.
geetee | 20 hours ago
Claude is there as long as you're paying, and I hope he doesn't hallucinate an answer.
whynotminot | 20 hours ago
Emphasis mine.
> Claude is there as long as you're paying
If you’re at a company that doesn’t pay for AI in the year 2026, you should find a new company.
> and I hope he doesn't hallucinate an answer.
Unlike human coworkers with a 100% success rate, naturally.
Capricorn2481 | 22 hours ago
In sufficiently complicated systems, the 10xer who knows nothing about the edge cases of state could do a lot more damage than an okay developer who knows all the gotchas. That's why someone departing a project is such a huge blow.
pharrington | 22 hours ago
LordHumungous | 22 hours ago
croes | 21 hours ago
It’s different reading code when you’re also a writer of it rather than purely a reader.
It’s like only reading/listening to foreign language without ever writing/speaking it.
zarzavat | 22 hours ago
Use AI as a sanity check on your thinking. Use it to search for bugs. Use it to fill in the holes in your knowledge. Use it to automate grunt work, free your mind and increase your focus.
There are so many ways that AI can be beneficial while staying in full control.
I went through an experimental period of using Claude for everything. It's fun but ultimately the code it generates is garbage. I'm back to hand writing 90% of code (not including autocomplete).
You can still find effective ways to use this technology while keeping in mind its limitations.
jimmaswell | 22 hours ago
broast | 22 hours ago
qudat | 21 hours ago
It’s easy to see the immediate speed boost, it’s much harder to see how much worse maintaining this code will be over time.
What happens when everyone in a meeting about implementing a feature has to say “I don’t know we need to consult CC”. That has a negative impact on planning and coordination.
neal_jones | 22 hours ago
XenophileJKO | 22 hours ago
And now the comments are "If it is so great why isn't everything already written from scratch with it?"
hyperpape | 22 hours ago
Of course the answer is all the things that aren't free, refinement, testing, bug fixes, etc, like the parent post and the article suggested.
kavok | 21 hours ago
rubenflamshep | 20 hours ago
For my own work I've focused on using the agents to help clean up our CICD and make it more robust, specifically because the rest of the company is using agents more broadly. Seems like a way to leverage the technology in a non-slop oriented way
Sateeshm | 17 hours ago
LtWorf | 7 hours ago
AceJohnny2 | 21 hours ago
https://imgur.com/gallery/i-m-stupid-faster-u8crXcq
(sorry for Imgur link, but Shen's web presence is a mess and it's hard to find a canonical source)
I'm not saying this is completely the case for AI coding agents, whose capabilities and trustworthiness have seen a meteoric rise in the past year.
greyman | 23 hours ago
littlestymaar | 22 hours ago
Given how much they pay their developers, the Claude app probably cost at least two, and likely three, orders of magnitude more to build.
If their AI could do the same for $2M, they'd definitely do that any day.
dostick | 23 hours ago
alexfromapex | 23 hours ago
neodymiumphish | 23 hours ago
1: https://github.com/NeodymiumPhish/Pharos
nicoburns | 22 hours ago
lsaferite | 22 hours ago
ivankra | 21 hours ago
torginus | 23 hours ago
danpalmer | 22 hours ago
Also if you haven't heard, disk space is no longer as cheap, and RAM is becoming astoundingly expensive.
nicoburns | 22 hours ago
combyn8tor | 22 hours ago
cloverich | 19 hours ago
torginus | 8 hours ago
So the footprint of the whole browser might be heavy, but each individual tab (origin) adds only a little extra.
Unfortunately both Tauri and Electron suck in this regard - they replicate the entire browser infrastructure per app and per instance, with each running just a single 'tab'.
And I share your concern for both disk space and RAM - but the solution here is to move away from browser tech, not picking a slightly differently packaged browser.
tiborsaas | 23 hours ago
koolala | 23 hours ago
gobdovan | 22 hours ago
Klonoar | 21 hours ago
Tauri's story with regards to the webview engine on Linux is not great.
tpae | 23 hours ago
I've been building a native macOS AI client in Swift — it's 15MB, provider-agnostic, and open source: https://github.com/dinoki-ai/osaurus
Committing to one platform well beats a mediocre Electron wrapper on all three.
owenpalmer | 23 hours ago
tpae | 23 hours ago
slopinthebag | 22 hours ago
mtimmerm | 23 hours ago
It's a Node.js app, and there is no reason to have a problem with that. Node.js can wait for inference as fast as any native app can.
rlpb | 23 hours ago
Node apps typically have serious software supply chain problems. Their dependency trees are typically unauditable in practice.
slopinthebag | 23 hours ago
Also, I refuse to download and run Node.js programs due to the security risk. Unfortunately that keeps me away from opencode as well, but thankfully Codex and Vibe are not Node.js, and neither are Zed or JetBrains products.
owenpalmer | 23 hours ago
jdgoesmarching | 23 hours ago
BoiledCabbage | 23 hours ago
It's pretty easy to argue your point if you pick a strawman as your opponent.
They have said that you can be significantly more productive (which seems to be the case for many) and that most of their company primarily uses LLMs to write code and no longer writes it by hand. They also seem to be doing well w.r.t. the competition.
There are legitimate complaints to be made against LLMs, pick one of them - but don't make up things to argue against.
vasco | 23 hours ago
teemur | 22 hours ago
oytis | 22 hours ago
oytis | 23 hours ago
throwaw12 | 23 hours ago
You can use those expensive engineers to build more stuff, not rewrite old stuff
oytis | 22 hours ago
jachee | 22 hours ago
Why create Linux when UNIX exists?
Why create Firefox when Internet Explorer exists?
Why Create a Pontiac when Ford exists?
Why do anything you think can be done better when someone else has done it worse?
LordHumungous | 22 hours ago
topaz0 | 21 hours ago
xigoi | 13 hours ago
noah_buddy | 23 hours ago
LordHumungous | 22 hours ago
delduca | 23 hours ago
dcchambers | 23 hours ago
And despite what Anthropic and OpenAI want you to think, these LLMs are not AGI. They cannot invent something new. They are only as good as the training data.
wgbowley | 20 hours ago
tokenless | 23 hours ago
Also AI is better at beaten path coding. Spend more tokens on native or spend them on marketing?
condiment | 23 hours ago
Claude is going to help mostly with code, much less with design. It might help to accelerate integration, if the application is simple enough and the environment is good enough. The fact is, going cross-platform native trebles effort in areas that Claude does not yet have a useful impact.
atlgator | 23 hours ago
harel | 23 hours ago
Then what?
Not saying I'm not using AI - because I am. I'm using it in the IDE so I can stay close to every update and understand why it's there, and disagree with it if it shouldn't be there. I'm scared to be distanced from the code I'm supposed to be familiar with. So I use the AI to give me superpowers but not to completely do my job for me.
CamperBob2 | 20 hours ago
We'll see, I guess...
harel | 20 hours ago
CamperBob2 | 18 hours ago
harel | 14 hours ago
llbbdd | 23 hours ago
- AI bad
- JavaScript bad
- Developers not understanding why Electron has utility because they don't understand the browser as a fourth OS platform
- Electron eats my ram oh no (posted from my 2gb thinkpad)
ra0x03 | 22 hours ago
selridge | 22 hours ago
Should they have re-written Chromium too?
rustystump | 22 hours ago
selridge | 22 hours ago
We can all talk about how this or that app should be different, but the idea is "electron sux => ????? "
Why should I care that they didn't rebuild the desktop app I don't use. Their TUI is really nice.
rustystump | 19 hours ago
selridge | 18 hours ago
What's in that for me?
PacificSpecific | 22 hours ago
selridge | 20 hours ago
And therefore, what?
That’s what’s missing and I think we should just be clear on: it is a design choice to choose electron over writing a native app.
PacificSpecific | 20 hours ago
If it really is a design choice then it's a bad decision imo.
selridge | 18 hours ago
these are all also the results of bad design choices or a lack of resources?
PacificSpecific | 17 hours ago
When discord and slack started the company was not large so it definitely could have been a lack of resources. Could also have been a bad design choice.
I'm not alone on that either at all. This is a pretty common opinion.
Claude had a chance to really show something special here and they blew it.
selridge | 16 hours ago
Does this work on people usually?
PacificSpecific | 16 hours ago
Regardless the only thing keeping those millions of people at this point is lock-in. Even then people are actively looking for ways to move away from it. I'm witnessing the migration now and am looking forward to the day I don't have to hard restart the client 2-3 times a day.
xigoi | 13 hours ago
selridge | 7 hours ago
xigoi | 7 hours ago
misir | 22 hours ago
Projects with a much smaller budget than Anthropic's have achieved much better x-plat UI without relying on Electron [1]. There are more sensible options like Qt and whatnot for rendering UIs.
You can even engineer your app to have a single core with all the business logic in a single shared library, then write UI wrappers using SwiftUI, GTK, and whatever Microsoft feels like putting out as the current UI library (I think it's currently WinUI 3), consuming the core to do the interesting bits.
Heck, there are people who built GUI toolkits from scratch to support their own needs [2].
[1] - https://musescore.org/en [2] - https://www.gpui.rs
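The shared-core idea above can be sketched roughly like this (illustrative names only, not any real product's code): business logic lives in one Rust crate compiled as a C-ABI shared library (`crate-type = "cdylib"`), and thin SwiftUI/GTK/WinUI wrappers link against it.

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

/// Core business logic: plain Rust, no UI concerns.
fn summarize(input: &str) -> String {
    format!(
        "{} chars, {} words",
        input.chars().count(),
        input.split_whitespace().count()
    )
}

/// C-ABI entry point each native UI wrapper calls. (In the real cdylib
/// this would carry #[no_mangle] so the symbol is exported under a
/// stable name; the caller frees the result via core_free_string.)
pub extern "C" fn core_summarize(input: *const c_char) -> *mut c_char {
    let text = unsafe { CStr::from_ptr(input) }.to_string_lossy();
    CString::new(summarize(&text)).unwrap().into_raw()
}

/// Matching deallocator, so ownership crosses the FFI boundary safely.
pub extern "C" fn core_free_string(s: *mut c_char) {
    if !s.is_null() {
        unsafe { drop(CString::from_raw(s)) };
    }
}
```

Each platform wrapper then only contains view code: Swift calls `core_summarize` through a bridging header, GTK through plain C bindings, and so on.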
selridge | 20 hours ago
What really am I to conclude by the mere fact that they used electron? The AI was not so magical that it overcame sense?
Am I to imagine that the fact that they advertise AI coding means I therefore have a window into their development process and their design choices?
I just think the notion is much sillier than all of us seem to be treating it.
hackingonempty | 22 hours ago
Maybe their dog food isn't as tasty as they want you to believe.
LtWorf | 21 hours ago
selridge | 20 hours ago
LtWorf | 15 hours ago
no-name-here | 13 hours ago
1. Anthropic had no problem spending tens of thousands of dollars of tokens re-writing the C compiler a couple weeks ago, before abandoning it within hours of launch despite promising that fixes were coming in the following days.
2. Regardless, are you arguing that re-writing Chromium would have been a good solution to the original suggestion of native apps? Aren't there better existing approaches, from companies that don't claim to have the best coders and aren't worth hundreds of millions, billions, tens of billions, or hundreds of billions of dollars? I'm unsure why you made that specific suggestion; wouldn't pointing to an existing product's native approach be a better one?
999900000999 | 22 hours ago
>There are downsides though. Electron apps are bloated; each runs its own Chromium engine. The minimum app size is usually a couple hundred megabytes. They are often laggy or unresponsive. They don’t integrate well with OS features.
A few hundred megabytes to a few gb sounds like an end user problem. They can either make room or not use your application.
You can easily buy a laptop for around 400 USD that will run Claude code just fine, along with several other electron apps.
Don't get me wrong, native everything ( which would probably mean sacrificing Linux support) would be a bit better, but it's not a deal breaker.
lazypenguin | 21 hours ago
mvdtnz | 16 hours ago
rustystump | 22 hours ago
bromuro | 22 hours ago
rc1 | 21 hours ago
You mean incongruent styles? As in, incongruent to the host OS.
There is no doubt electron apps allow the style to be consistent across platforms.
bromuro | 21 hours ago
Compare to other software on the Mac such as Pages, Xcode, Tower, Transmission, Pixelmator, mp3tag, TablePlus, Postico, Paw, Handbrake, etc. (the others I use): those are a delight to work with and give me the computing experience I was looking for when buying a Mac.
bilalq | 20 hours ago
Xcode is usually the first example that comes to mind of a terrible native app in comparison to the much nicer VSCode.
rc1 | 21 hours ago
Code is not the cost. Engineers are. Bugs come from hindsight not foresight. Let’s divide resources between OSs. Let all diverge.
> They are often laggy or unresponsive. They don’t integrate well with OS features.
> (These last two issues can be addressed by smart development and OS-specific code, but they rarely are. The benefits of Electron (one codebase, many platforms, it’s just web!) don’t incentivize optimizations outside of HTML/JS/CSS land.)
Give stats. Often, rarely. What apps? I’d say rarely, often. People code bad native UIs too, or get constrained in features.
Claude offers a CLI tool. What product manager would say no to Electron in that situation?
This article makes no sense in context. The author surely gets that.
xgulfie | 21 hours ago
[OP] dbreunig | 21 hours ago
I didn’t say AI was bad and I acknowledged the benefits of Electron and why it makes sense to choose it.
With 64GB of RAM on my Mac Studio, Claude desktop is still slow! Good Electron apps exist; it’s just an interesting note given recent spec-driven development discussions.
goodquestions | 23 hours ago
cpeterso | 23 hours ago
robertoandred | 22 hours ago
PunchyHamster | 22 hours ago
zamadatix | 22 hours ago
khalic | 22 hours ago
bcherny | 22 hours ago
Some of the engineers working on the app worked on Electron back in the day, so preferred building non-natively. It’s also a nice way to share code so we’re guaranteed that features across web and desktop have the same look and feel. Finally, Claude is great at it.
That said, engineering is all about tradeoffs and this may change in the future!
blibble | 22 hours ago
if that's the case, why don't you just ask it to "make it not shit"?
kenonet | 21 hours ago
runlevel1 | 9 hours ago
afroboy | 7 hours ago
PKop | 22 hours ago
fourside | 22 hours ago
- Using a stack your team is familiar with still has value
- Migrating the codebase to another stack still isn’t free
- Ensuring feature and UX parity across platforms still isn’t free. In other words, maintaining different codebases per platform still isn’t free.
- Coding agents are better at certain stacks than others.
Like you said any of these can change.
It’s good to be aware of the nuance in the capabilities of today’s coding agents. I think some people have a hard time absorbing the fact that two things can be true simultaneously: 1) coding agents have made mind bending progress in a short span 2) code is in many ways still not free
ProAm | 22 hours ago
WJW | 22 hours ago
> more performant
I found the problem.
LtWorf | 22 hours ago
amelius | 21 hours ago
LtWorf | 14 hours ago
Wowfunhappy | 21 hours ago
endergen | 22 hours ago
You guys just did add it too, so yeah!
hedgehog | 22 hours ago
ajross | 22 hours ago
While there are legitimate/measurable performance and resource issues to discuss regarding Electron, this kind of hyperbole just doesn't help.
I mean, look: the most complicated, stateful, and involved UIs most of the people commenting in this thread use (or likely ever will use) are web-stack apps. I'll name some obvious ones, though there are other candidates. In order of increasing complexity:
1. Gmail
2. VSCode
3. www.amazon.com (this one is just shockingly big if you think about it)
If your client machine can handle those (and obviously all client machines can handle those), it's not going to sweat over a comparatively simple Electron app for talking to an LLM.
Basically: the war is over, folks. HTML won. And with the advent of AI and the sunsetting of complicated single-user apps, it's time to pack up the equipment and move on to the next fight.
kadoban | 21 hours ago
From the person you're responding to:
> I would guess a moderate amount of performance engineering effort could solve the problem without switching stacks or a major rewrite.
Pretty clearly they're not saying that this is a necessary property of Electron.
clipsy | 21 hours ago
ajross | 20 hours ago
Maybe you can name competing apps of comparable complexity that are clearly better?
Again, I'm just making a point from existence proof. VSCode wiped the floor with competing IDEs. GMail pushed its whole industry to near extinction, and (again, just to call this out explicitly) Amazon has shipped what I genuinely believe to be the single most complicated unified user experience in human history and made it run on literally everything.
People can yell and downvote all they want, but I just don't see it changing anything. Native app development is just dead. There really are only two major exceptions:
1. Gaming. Because the platform vendors (NVIDIA and Microsoft) don't expose the needed hardware APIs in a portable sense, mostly deliberately.
2. iOS. Because the platform vendor expressly and explicitly disallows unapproved web technologies, very deliberately, in a transparent attempt to avoid exactly the extinction I'm citing above.
It's over, sorry.
xigoi | 13 hours ago
Thunderbird is a fully-featured mail app and much more performant than Gmail. Neovim has more or less the same feature set as VSCode and its performance is incomparably better.
ajross | 9 hours ago
TB is great and I use it every day. An argument for it from a performance standpoint is ridiculous on its face. Put 10G of mail in the Inbox and come back to me with measurements. GMail laughs at mere gigabytes.
xigoi | 9 hours ago
ajross | 8 hours ago
xigoi | 8 hours ago
dontlaugh | 12 hours ago
Using market success to excuse poor UX is pointless.
cgh | 20 hours ago
SR2Z | 18 hours ago
ses1984 | 19 hours ago
Try enabling 10k lines of scrollback buffer in vscode and print 20k lines.
JuniperMesos | 19 hours ago
In any case, what I personally find more problematic than just slowness is electron apps interacting weirdly with my Nvidia linux graphics drivers, in such a way that it causes the app to display nothing or display weird artifacts or crash with hard-to-debug error messages. It's possible that this is actually Nvidia's fault for having shitty drivers, I'm not sure; but in any case I definitely notice it more often with electron apps than native ones.
Anyway one of the things I hope that AI can do is make it easier for people to write apps that use the native graphics stack instead of electron.
pjmlp | 13 hours ago
ajross | 9 hours ago
Sigh. Beyond the deeply unserious hyperbole, this is a no-true-scotsman. Yes, you can use native APIs in Electron. They can even help. That's not remotely an argument for not using Electron.
> the in-editor terminal makes use of WebGL
Right, because clearly the Electron-provided browser environment was insufficient and needed to be escaped by using... a browser API instead?
Again, folks, the argument here is from existence. If the browser stack is insufficient for developing UIs in the modern world, then why is it winning so terrifyingly?
pjmlp | 8 hours ago
Gen X and Boomers strangely enough managed to write portable native code across multiple hardware architectures, operating systems, and language toolchains.
Yet it is apparently an insurmountable challenge to master Web UI delivery from system services and daemons to the default browser, like UNIX administration tooling does.
reitzensteinm | 21 hours ago
I tried the desktop app and was shocked at the performance. Conversations would take a full second to load, making rapid switching intolerable. Kicking off a new task seems to hang for multiple seconds while, I'm assuming, the process spins up.
I wanted to spend an hour trying a disposable-conversation-per-feature workflow with git worktree integration to see how it contrasted, but couldn't even make it ten minutes without bailing back to the terminal.
xyzsparetimexyz | 21 hours ago
docmars | 20 hours ago
reitzensteinm | 19 hours ago
bsaul | 10 hours ago
dizhn | 20 hours ago
reitzensteinm | 19 hours ago
frizlab | 14 hours ago
> pretty solid
Huh?
reitzensteinm | 13 hours ago
cyanydeez | 21 hours ago
gukov | 21 hours ago
blibble | 20 hours ago
it is
fakedang | 20 hours ago
internet2000 | 19 hours ago
WD-42 | 22 hours ago
senordevnyc | 21 hours ago
greazy | 21 hours ago
https://www.lennysnewsletter.com/p/head-of-claude-code-what-...
no-name-here | 13 hours ago
WD-42 | 21 hours ago
furyofantares | 21 hours ago
Which is still quite the statement, and damn the video is intolerable. But the full quote still feels a little different than how you put it here.
WD-42 | 20 hours ago
furyofantares | 18 hours ago
ncb9094 | 21 hours ago
koakuma-chan | 21 hours ago
LtWorf | 22 hours ago
dude250711 | 21 hours ago
So the model is not a generalised AI then? It is just a JS stack autocomplete?
danjl | 20 hours ago
lkbm | 7 hours ago
exabrial | 21 hours ago
al_borland | 21 hours ago
This is an important lesson to watch what people do, not what they say.
softwaredoug | 21 hours ago
whattheheckheck | 21 hours ago
bdangubic | 21 hours ago
SR2Z | 18 hours ago
seanmcdirmid | 21 hours ago
senordevnyc | 21 hours ago
al_borland | 21 hours ago
https://www.lennysnewsletter.com/p/head-of-claude-code-what-...
As for others, Microsoft is saying they’re porting all C/C++ code to Rust with a goal of 1m LOC per engineer per month. This would largely be done with AI.
https://www.thurrott.com/dev/330980/microsoft-to-replace-all...
If coding is a solved problem and there is no need to write code, does the language really matter at that point?
If 1 engineer can handle 1m LOC per month, how big would these desktop apps be where maintaining native code becomes a problem?
NewsaHackO | 21 hours ago
legostormtroopr | 20 hours ago
Surely, it would be a flex to show that your AI agents are so good they make electron redundant.
But they don’t. So it’s reasonable to ask why that is.
NewsaHackO | 18 hours ago
Sateeshm | 17 hours ago
willmarch | 16 hours ago
Sateeshm | 14 hours ago
LtWorf | 14 hours ago
no-name-here | 13 hours ago
2. Why do you think it requires "three times the resources" - wouldn't it normally be an incremental amount of work to support additional targets, but not an additional 100% of work for each additional target?
KronisLV | 12 hours ago
Because then their competition would work faster than they could and any amount of slop/issues/imperfections would be amplified threefold.
Also there would inevitably be some feature drift - I got SourceTree for my Mac and was surprised to discover that it's actually somewhat different from the Windows version, which was a bit jarring.
I hope that in the next decade we get something like the LCL (https://en.wikipedia.org/wiki/Lazarus_Component_Library), but for all OSes and with bindings for all common languages, so we don't have to rely on the web platform for local software. Until then, developing native apps is a hard sell.
PKop | 19 hours ago
NewsaHackO | 18 hours ago
SR2Z | 18 hours ago
In a few years, they'll be even better than they are now. They're already writing code that is perfectly decent, especially when someone is supervising and describing test cases.
PKop | 14 hours ago
LtWorf | 14 hours ago
thiht | 12 hours ago
Given the choice, I often reach for Electron apps because they feel more feature rich, feel better designed in terms of polish (both UI and UX), and I rarely get resource hog issues (Slack is the only offender I can think of among the Electron apps I use)
LtWorf | 11 hours ago
Also, keep in mind that many people would like their applications to respect their preferences, so the "polish" that looks completely out of place on their screen is ugly (besides slow).
thiht | 11 hours ago
LtWorf | 10 hours ago
noosphr | 21 hours ago
The sheer speedup for all users will show everyone why vibe coding is the future. After all, coding is a solved problem.
Terr_ | 21 hours ago
dmix | 21 hours ago
Migrating the system would be the easier part in that regard, but they'll still need a JS UI unless they develop multiple teams to spearhead various native GUIs (which is always an option).
Almost every AI chat framework/SDK I've seen is some React or JS stuff. Or even agent stuff like llamaindex.ts. I have a feeling AI is going to reinforce React more than ever.
aembleton | 14 hours ago
Voultapher | 10 hours ago
This fetish we as an industry have to hide platform specifics makes us blind to the platform specific capabilities. Some software would be better off if it leaned into the differences instead of fighting them.
amelius | 21 hours ago
Could you visualize the user's usage? For example, like a glass of water that is getting emptier the more tokens are used, and gets refilled slowly.
Because right now I have no clue when I will run out of credits.
Thanks!
samrus | 21 hours ago
With your context and understanding of the coding agent's capabilities and limitations, especially Opus4.6, how do you see that going?
cyanydeez | 21 hours ago
NewsaHackO | 21 hours ago
Huh?
BiraIgnacio | 21 hours ago
I'm glad to see this coming from a company that is so popular these days.
Thanks!
ncb9094 | 21 hours ago
gozucito | 21 hours ago
It's the fastest way to iterate because Electron is the best cross platform option and because LLMs are likely trained on a lot of HTML/Javascript.
Which is why Claude is great at it.
sensanaty | 20 hours ago
mvdtnz | 18 hours ago
solarkraft | 17 hours ago
But it should be possible to make an Electron app that is more reliable and eats less resources.
bigtex | 12 hours ago
Sateeshm | 17 hours ago
Why does it matter what tech the engineers used in the past? I thought they didn't write code anymore.
jbverschoor | 14 hours ago
KronisLV | 12 hours ago
I always wonder how those established Electron codebases would map over to something that uses the system specific WebViews and how broken (or not) those would prove to be:
https://wails.io
https://tauri.app
But admittedly that would just decrease the bundle size while doing not much for the performance or resource usage: https://github.com/Elanis/web-to-desktop-framework-compariso... so maybe not super relevant to this particular discussion.
spwa4 | 11 hours ago
If you can put in unlimited coding engineering effort, why isn't Claude Code the very best it can possibly be?
Why isn't the fact that it can work 10% better an excuse to get claude to work on it for however long it takes?
I mean, most people here have done development with Claude Code, and we suppose the answer is simply: because it doesn't work without a capable engineer constantly babysitting the changes it's making, guiding it, nudging it, reminding it about edge cases, occasionally telling it it's being stupid... It's a great product, incredible even, but it doesn't work without senior engineers.
Same question: why doesn't it have more plugins, batch scripts, and modifications than the App Store? Surely it can come up with 10,000 good ideas by itself and just implement them? Everything from little games to how to activate bedroom lights from Chinese vendor #123891791?
lateforwork | 22 hours ago
I only see these complaints on HN. Real users don't have this complaint. What kind of low-end machines are you running, that the Chromium engine is too heavy for you?
> They are often laggy or unresponsive.
That's not due to Electron.
> They don’t integrate well with OS features.
If it is good enough for Microsoft Teams it is probably good enough for most apps. Teams can integrate with microphone, camera, clipboard, file system and so on. What else do you want to integrate with?
jmalicki | 22 hours ago
hodgehog11 | 22 hours ago
Not everyone is running the latest and greatest hardware, very few actually have the money for that. If you're running hardware from before this decade, or especially the early 2010s, the difference between an Electron app and a native app is unbelievably stark. Electron will often bring the device to its knees.
cosmic_cheese | 22 hours ago
This is particularly pertinent on bulk-purchased corporate and education machines which are loaded down with mandated spyware and antivirus garbage and often ship with CPUs that lag many years behind, and in the case of laptops might even have dog slow eMMC storage which makes the inevitable virtual memory paging miserable.
RachelF | 22 hours ago
The free ride of ever increasing RAM on consumer devices is over because of the AI hyperscalers buying all fab capacity, leading to a real RAM shortage. I expect many new laptops to come with 8GB as standard and mid-range phones to have 4GB.
Software engineers need to start thinking about efficiency again.
suddenlybananas | 22 hours ago
nine_k | 21 hours ago
Real users complain differently: "My machine is slow". Electron itself is not very heavyweight (though not featherweight), but JS and DOM can cost a lot of resources. Right now my GMail tab has allocated 529 MB.
> That's not due to Electron.
Of course, but it takes some careful thought. BTW e.g. Qt apps can be pretty memory-hungry, too.
> good enough for Microsoft Teams
It's not easy to pick a more "beloved" application.
What an Electron app usually would miss is things like global shortcuts managed by macOS control panel, programmability via Automation, and the L&F of native controls. I personally don't usually miss any of these, but users who actually like macOS would usually complain.
I personally prefer to run Electron-ish apps, like Slack, in their Web versions, in a browser.
SoleilAbsolu | 20 hours ago
These workers complain about performance on the machines we can afford. 16GB RAM and 256GB SSDs are the standard, as is 500MB/sec. internet for offices with 40 people, and my plans to upgrade RAM this year were axed by the insane AI chip boondoggle.
People on HN need to understand that not everyone works for a well-funded startup, or big tech company that is in the process of destroying democracy and the environment in the name of this quarter's profits!
BTW Teams has moved away from Electron; before it did, I had to advise people to use the browser app instead of the desktop one for performance reasons.
antirez | 22 hours ago
_pdp_ | 22 hours ago
astrostl | 22 hours ago
tom1337 | 22 hours ago
wtetzner | 19 hours ago
Kiro | 22 hours ago
dude250711 | 21 hours ago
Kiro | 13 hours ago
hacker_homie | 22 hours ago
- unlike Qt, it's free for commercial use.
- I don't know any other userland GUI toolkit/compositor that isn't a game engine (Unity/Unreal/etc.).
WD-42 | 22 hours ago
hacker_homie | 20 hours ago
clearly the code isn’t free and writing for raw win32 is painful.
mkl | 21 hours ago
hacker_homie | 20 hours ago
https://www.qt.io/development/open-source-lgpl-obligations
internet2000 | 22 hours ago
fassssst | 22 hours ago
If only AI had more Liquid Glass, lol
tigoo | 22 hours ago
Robdel12 | 21 hours ago
I've been building a native macOS/iOS app that lets me manage my agents. Both the ability to actually control/chat fully from the app and to just monitor your existing CLI sessions (and/or take 'em over in the app).
Terrible little demo as I work on it right now w/claude: https://i.imgur.com/ght1g3t.mp4
iOS app w/codex: https://i.imgur.com/YNhlu4q.mp4
It also has a Rust server that backs it, so I can throw it anywhere (container, Pi, etc.) and then connect to it. If anyone wants to see it (I have seen at least 4 other people doing something similar): https://github.com/Robdel12/OrbitDock
wycx | 21 hours ago
I hope that prevalence of AI coding agents might lead to a bit of a revival of RAD tools like lazarus, which seem to me to have a good model for creating cross-platform apps.
wzdd | 21 hours ago
waynesonfire | 21 hours ago
pipeline_peak | 21 hours ago
nailing down all the edge cases
Liftyee | 21 hours ago
Computers have gotten orders of magnitude faster since 2016, but using mainstream apps certainly doesn't feel any faster. Electron and similar frameworks do offer appealing engineering tradeoffs, but they are a main culprit of this problem.
Sure, the magnitude of RAM/compute "waste" may have grown from kB to MB, but inefficiency is still inefficiency - no matter how powerful the machine it's running on is.
namegulf | 21 hours ago
It is easy to crank out a one-off, flashy tool using Claude (to demo its capabilities), which may cover 80% of the development work.
But if you have to maintain it, improve it, and grow it for the long haul, good luck. That's the hard 20%.
They took the safe bet!
game_the0ry | 21 hours ago
Yes, feel free to downvote me.
pessimizer | 21 hours ago
You would think with programming becoming completely automated by the end of 2026, there'd be a vibe coded native port for every platform, but they must be holding back to keep us from all getting jealous.
MontyCarloHall | 21 hours ago
felixrieseberg | 20 hours ago
All technology choices are about trade-offs, and while our desktop app does actually include a decent amount of Rust, Swift, and Go, I understand the question - it comes up a lot. Why use web technologies at all? And why ship your own engine? I've written a long-form version of answers to those questions here: https://www.electronjs.org/docs/latest/why-electron
To us, Electron is just a tool. We co-maintain it with a bunch of excellent other people but we're not precious about it - we might choose something different in the future.
sarchertech | 20 hours ago
If as your CEO says “coding is largely solved”, why is this the case?
Or is your CEO wrong and coding is not largely solved?
phist_mcgee | 19 hours ago
sarchertech | 19 hours ago
If you’re gonna start speaking for and defending your company though and your company CEO has made asinine statements that are related, I’m gonna ask.
selridge | 15 hours ago
If coding SOLVED HOW COME APP BAD.
incognition | 15 hours ago
sarchertech | 9 hours ago
selridge | 7 hours ago
datsci_est_2015 | 8 hours ago
Edit: (1) because most of the complexity lies in the tool chains that are integrated, like compilers and linters, and (2) because there’s much more complex software out there, mostly at the intersection of engineering domains, to name a few: ballistic guidance systems, IoT and networking, predictive maintenance systems, closed-loop process optimization systems, SLAM robotics
selridge | 7 hours ago
HeavyStorm | 20 hours ago
yibers | 20 hours ago
jongjong | 20 hours ago
You just have to be really careful because the agent can easily slip into JS hell; it has no shortage of that in its training.
adamtaylor_13 | 20 hours ago
A native app is the wrong abstraction for many desktop apps. The complexity of maintaining several separate codebases likely isn't worth the value add, especially for a company hemorrhaging money the way Anthropic does.
eviks | 18 hours ago
You're easy to impress; that explains the unrealistic expectations "on the surface". That's a strange analogy, though: basic usability is the first mile, not the last. Coming back to the frameworks and apps, the last mile would be respecting the Mac's unique keyboard-bindings file for text editing; the first mile is reacting to any keyboard input in a text field. Same with the compiler: a basic hello-world failure isn't the last mile.
umairnadeem123 | 17 hours ago
sujee | 17 hours ago
I'm currently building a macOS AI chat app. Generally SwiftUI/AppKit is far better than the Web, but it performs badly in a few areas. One of them is the Markdown viewer: Swift Markdown libraries are slow and lack some features like Mermaid diagrams. To work around this, some of my competitors use Tauri/Electron and a few others use WKWebView inside a Swift app.
Initially I tried WKWebView. Performance was fine and the bridge between JS and Swift was not that hard to implement, but I started seeing a few problems, especially because the WebView runs as a separate process and usually a single WebView instance is reused across views.
After a few attempts to fix them, I gave up on the idea and was tempted to go fully with Web rendering via Tauri, but as a Mac developer I couldn't bring myself to build this app in React. So I started building my own Markdown library. After a month of work, I now have a high-performance Markdown library built with Rust and TextKit. It supports streaming and Markdown extensions like Mermaid.
Most of the code was written by Claude Opus, and some tricky parts were solved by Codex. The important lesson I learned is that I'm no longer going to accept limitations in tech and stop there. If something isn't available in Swift but is available in JS, I'm going to port it. It's surprisingly doable with Claude.
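(Not the commenter's actual library, but for readers curious about the streaming part: a minimal sketch of one way a streaming Markdown renderer can cope with partial chunks is to buffer input and only emit blocks once a block boundary, here a blank line, arrives. All names are illustrative.)

```rust
/// Accumulates streamed text and splits off Markdown blocks as soon as
/// they are terminated by a blank line, so the renderer never has to
/// re-parse a half-received block.
struct StreamingBuffer {
    pending: String,
    complete_blocks: Vec<String>,
}

impl StreamingBuffer {
    fn new() -> Self {
        Self { pending: String::new(), complete_blocks: Vec::new() }
    }

    /// Feed one streamed chunk; move any newly completed blocks
    /// (delimited by "\n\n") out of the pending buffer.
    fn push(&mut self, chunk: &str) {
        self.pending.push_str(chunk);
        while let Some(idx) = self.pending.find("\n\n") {
            let block = self.pending[..idx].to_string();
            self.pending = self.pending[idx + 2..].to_string();
            if !block.trim().is_empty() {
                self.complete_blocks.push(block);
            }
        }
    }
}
```

The real library would additionally re-render the still-pending tail on every chunk (so the user sees text as it streams) and handle fenced code blocks, where a blank line is not a boundary.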
wraptile | 13 hours ago