[OP] simonw | a day ago
I enjoyed this essay so much - I read it, then watched the video version because I wanted to absorb it a second time.
It's by far the best commentary I've seen on the growing AI backlash from the general public outside of the tech industry, helping explain why even the people who are using AI on a daily basis actively dislike it.
Student | a day ago
I do hate legibility and those who act like it’s a good thing. I suspect that the AI backlash is in part because people can no longer ignore what has already happened.
singpolyma | 22 hours ago
I think this might be true. Over the past decade I've increasingly found myself responding to incredulous statements of the form "did you know they're now doing X?!" with "... they've always been doing that."
k749gtnc9l3w | a day ago
Well, just like with some other things, there is the thing as promised, with its numerous ugly sides but not exclusively them, and then there is the next iteration that keeps the ugly sides, drops even the promise of the good sides, and becomes a pure tool of abuse.
A large part of the use of AI manages to keep all the drawbacks of legibility, then make them even worse. For things that are already done via a large organisation, legibility can at least somewhat increase transparency and predictability. One can go the detailed-rule path and get more alienness and more repeatability, or go the responsible-decision-maker-hierarchy path and reduce alienness at the cost of some arbitrariness.
But now one can go with AI and get arbitrariness and opaqueness and alienness and incomprehensible blunders all together, and faster.
matthiasportzel | a day ago
Thanks for sharing! It's encouraging to me to see your reasonable takes, simonw. We all know there are way too many unreasonable AI takes right now.
bjeanes | a day ago
Great read. Thanks for sharing
singpolyma | a day ago
"Regular people don't see the opportunity to write code as an opportunity at all."
This has been a big failure since the dawn of computing. So many people slog through terrible, mind-numbing tasks, or avoid entirely tasks they want to see done, just because the computer doesn't already do what they need and they don't know how to make it do that. Cutting and pasting hundreds of webpages from one CMS to another manually. Copying lines from one spreadsheet to another, then into an email, and back into a spreadsheet. The list is endless.
So many people have tried to solve this problem. Low code. No code. Teach everyone to code. Open source. There's an app for that. Coding agents. Nothing works. Because what we have most of all is a failure of imagination. Until people can imagine the computer being a force for good in their work they won't seek out or be able to take advantage of it, no matter what pathways we build.
txxnano | a day ago
This, a thousand times. There's no world where my dentist would think about, or even take the time to think about, how to build an app for their clinic.
fazalmajid | a day ago
I knew a guy at IBM who would unroll loops. Well, he didn't know the concept of loops to begin with, so they were never rolled in the first place. Incredibly nice person; personal worth is independent of the ability to code. I read somewhere that only 5-10% of the population has the neural wiring to think mathematically or computationally. I'm not sure I believe that, but it's not impossible that it's a mild form of neurodivergence.
hmaddocks | a day ago
I worked with a guy whose job was basically this. I automated it for him. He lost his job.
singpolyma | 22 hours ago
This is a separate, but important, problem. The one where we consider jobs essential to life and also consider workers unnecessary if they become too efficient.
Efficiency gains in a sane system would benefit the people doing the work. But we too often find the opposite. I don't think preferring inefficiency is the answer but societal change is hard and I don't pretend to know what to do.
matheusmoreira | 18 hours ago
I disagree. I think people's problem is their apathy towards computers. They don't care about the computer. They don't care about how it works or the interesting algorithms implemented on it. They just want their results. They want those results now, with minimal fuss, and they simply could not care less about how it works. To them the computer is a nuisance, to be interacted with only when it helps. It's a black box, something to be abstracted away.
singpolyma | 16 hours ago
If that were true, then abstracting it away would work, but it hasn't. I maintain we have failed to inspire in most people any idea of what results are possible. So they have no idea what they are missing.
cflewis | 14 hours ago
Most people I encounter who are not technically minded see computers the same way as any other appliance: necessary, but only insofar as they get to do the thing they need to do. They don't think about them as possibility machines the way coders do.
But that's ok!
My kitchen is a possibility machine. I could bake the most wondrous things in that oven! You know what I bake? Frozen pizza. That's ok!
I don't believe people yearn for automation. Let people be who they are.
I think Silicon Valley believes it can do anything to reshape society, because for the past three decades that was actually true. But I think perhaps AI and automation is a step too far, and it's not a failure of imagination or inspiration or encouragement. It's just people being people.
singpolyma | 13 hours ago
The kitchen analogy is a great example of what I'm trying to say. Most people are well aware of what kinds of food are possible. So when they go to a restaurant they don't order plain toast, even if that's what they choose to make at home. They can imagine the possibilities, and when we make it easy (and/or cheap) enough they will happily enjoy good food.
That's what computing still lacks. People often don't think to demand more because they just don't know that more is out there. The problem isn't that cooking is too hard or restaurants are too expensive (we've made computing easier and cheaper than ever); it's that the capabilities aren't well known.
stringy | a day ago
I yearn for automation, but not expensive, stochastic automation from rent-seeking megacorps.
I think people would be receptive to more automation if e.g. these software companies just made better software with meaningful interoperability and deterministic automation where it makes sense.
Instead the focus is generally on how to extract more dollars out of your users, and relatedly how to keep them in your silo and keep them using your software for as long as possible. The kind of automation being asked for here didn't make economic sense, and there was no mechanism to compel companies to provide it for the greater good anyway.
Now it does pay to provide a version of it, but only because it requires continuously boiling the ocean, and you can rent out access to ocean-boiling machines.
mattgreenrocks | a day ago
Great essay.
One idea I’ve been paying attention to lately is that you get what you give. If you drive aggressively in congested traffic, the default reaction from other drivers is to block you. I’ve watched myself on both sides of this. It’s a very reasonable default, actually.
And the subtext of AI seems to be: “you will be dominated.”
Nobody wants that. You can’t issue PR statements to counteract what is an almost primal fear.
stringy | 22 hours ago
I agree completely. I remember being told “don’t be left behind” some time ago and my visceral reaction was akin to a desire to play chicken with a jerk in traffic.
mattgreenrocks | 21 hours ago
What I'm trying to teach people is that it's completely okay to say, "no, fuck off already."
Because that's exactly what's being told to us.
nicoco | a day ago
"That's why [non-tech] people hate AI [especially gen Z]"
My non-tech acquaintances are mostly millennials, but they don't particularly seem to hate AI, i.e. LLM-based conversational agents. Or maybe they do, but then I don't get why they use them so much, rely on them so much, and believe so much in their potential. As the "tech guy", I am the one "hating AI" and most skeptical of these chatbots. I don't mention agents here, because I do not know a single non-tech person who has the remotest idea of what those could be.
Anyway, I found the article interesting, but this part reflected very little of my own experience. Maybe I should ask my friends whether they actually do like using ChatGPT, or whether they feel they have to, among the many other chores of their jobs.
[OP] simonw | a day ago
It's possible that they're happy to use ChatGPT for personal benefit but they still "hate AI" in that they dislike slop on Facebook, find AI art tacky, worry about the impact of AI on education and are nervous about AI job losses.
I'm beginning to suspect I might "hate AI" myself on those grounds! I've just never been able to admit it because I find so much personal utility in it.
mtset | 21 hours ago
Promoting the products of these enterprises is not morally neutral and never was. I hope you understand that, now.
[OP] simonw | 21 hours ago
Are there any products where promoting them is "morally neutral"?
mtset | 21 hours ago
No. Everything has a moral dimension. We all have to make lots of decisions about what we care about, and what we fight for. I don't ask for moral purity, nor do I assert my own.
But I think that responsibility goes up when we promote a product, more so if we do so relentlessly, for years, in spite of the very obvious dangers and problems about which we were publicly informed.
[OP] simonw | 21 hours ago
I try very hard to be a trustworthy, reliable source of information on these products. If you want to categorize that as "promotion" then I guess that's in the eye of the beholder.
I've written about their ethical implications 293 times. I collaborated on the earliest widely read report on the ethics of their training. I highlighted Anthropic's pricing misbehavior just a few days ago. I've been the most consistent voice about the security implications of building software on them, including coining the term for the category of attacks. I also helped boost the term slop into wider use.
I don't appreciate the implication that I'm mindlessly promoting them or ignoring their ethical problems. There are a million AI hype boosters out there. I aspire not to be one of them.
mtset | 21 hours ago
That was not the intended implication, and I apologize for being unclear.
[OP] simonw | 21 hours ago
Thanks. I'm a bit sensitive about this!
mtset | 21 hours ago
A genuine question: do you not see saying that you find something very useful as promotion? Despite all the hedging and critiques of specific business practices, it seems hard to come away from your blog and post history here with any impression other than that you approve of LLMs as a technology, at least for use in writing software, and AI hyperscalers as an industry. Is that wrong?
[OP] simonw | 20 hours ago
Based on my own experience I am confident that if you are a software developer LLMs can 2-5x the rate at which you produce high quality code - if you learn how to use them effectively. As such, I think it would be unethical for me not to share what I've learned with others.
Outside of software engineering I've had a significant improvement in my quality of life from learning how to use tools like ChatGPT and Claude. I want to share those benefits with others too.
I still don't think I would wave a wand to uninvent this technology if I could. I'd absolutely wave a wand that would reduce the environmental impact and discourage people from using it to do bad stuff.
Gaelan | 18 hours ago
I don’t think this necessarily follows - to take an extreme example, I don’t feel any obligation to tell all my friends about the financial benefits of bank robbery, however real they may be.
That’s not me saying individual LLM use exists on the same ethical order of magnitude as bank robbery! But you are making the decision to encourage people to use LLMs despite their harms, and I don’t think it’s unfair to say you bear some ethical responsibility as a result.
mtset | 20 hours ago
I hope you can see how criticizing that is different from saying you're a mindless shill or anything like that. I don't believe that about you.
This is the second time we've had a similar conversation, so I'll stop engaging you on this. All I'll say is that I really hope you consider being more active about combating the bad outcomes, even if it has an impact on the good ones.
gcupc | 13 hours ago
There is no such thing as ethical consumption under capitalism.
tome | 4 hours ago
Under which economic systems from human history has there been ethical consumption?
malxau | 21 hours ago
If you ask somebody to describe the most boring, tedious, repetitive task in their day, and listen to them, would they really be reluctant to embrace a tool to eliminate it?
Personally I think we've been missing the listening step. It's a very different dynamic to tell people to use a tool without understanding the problem that the tool exists to solve.
We also have some ugly social challenges. If a task can really be automated to mindless execution, did the task ever add value? Did the person asking for the task perceive it as important and requiring care, while the person performing it saw it as low-value and mindless? Automation highlights underlying issues and disagreements.
thesnarky1 | 20 hours ago
The issue with this is one of frame of reference. What you view as a boring repetitive task, another might view with joy and see importance in it. Whether or not you can automate it or do it more efficiently doesn't really matter to their value proposition.
I have frequently seen individuals performing functions that an organization views as important in ways that are incredibly inefficient (to me). They don't need AI, they don't even need "automation", they just need to be introduced to what "modern tools" like a "web GUI" are. The missing piece isn't automation, it is imagination.
In those cases, yes, the task was adding value to the organization and the individual performing it perceived it to be important. The mindless bits were just the "price of doing business" to produce the product.
Now we're going in the opposite direction. Where before people didn't know enough to try to automate, now they don't know enough to push back on using an LLM for what was previously a deterministic task. What the organization could automate with 100 lines of Python (enabling the worker to produce a better product, faster) will now become a chat bot dependent on an external provider. Error rates will increase, yet everyone will agree the role must not have been needed, because now there is a chatbot.
I'm more and more convinced of the importance of highlighting all the options to decision makers so that they understand the trade-offs and where automation might bring the gains they want, but in a deterministic and free-after-writing fashion.
[OP] simonw | 17 hours ago
My ideal here is that the chatbot helps them write that 100 lines of Python.
The challenge is helping people understand when they should ask for that script as opposed to settling for a solution that calls the model every time.
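For concreteness, here is a minimal sketch of the kind of deterministic, free-after-first-write script being discussed, applied to the spreadsheet-to-email copying mentioned upthread; the file name and column names are hypothetical, not from the thread:

    # Minimal sketch (hypothetical file and column names): a deterministic,
    # free-after-first-write replacement for copying spreadsheet rows into an email.
    import csv

    def build_email_body(csv_path: str) -> str:
        """Read the CSV once and format one summary line per row."""
        lines = []
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                lines.append(f"{row['customer']}: {row['item']} x{row['quantity']}")
        return "Daily summary:\n" + "\n".join(lines)

    if __name__ == "__main__":
        print(build_email_body("orders.csv"))

Once written, whether by hand or with a chatbot's help, a script like this runs the same way every time, with no per-call model cost and no nondeterminism.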
spc476 | 15 hours ago
The incentives for automation align more with the managerial class than the worker class---if I, as a worker, automate part of my job, I don't get more free time (unless I keep very quiet about it). No, I get more work or even no work if my automation is too good. What's the quote? Oh yes, "It is difficult to get a man to understand something, when his salary depends upon his not understanding it." (Upton Sinclair). Or perhaps this conversation will help:
"AI will improve your productivity by 10x!"
"So my salary is increasing 10x?"
"Ha! Of course not!"
"But I get to work only 4 hours a week, right?"
"No, still 40 hours a week!"
"Then things are going to be 10x cheaper?"
"No! What are you, a communist?"
If my salary increased 10x, or my work week shrank by 10x, then I might look into the use of LLMs. But right now, the incentives aren't there for me.
thesnarky1 | 16 hours ago
Which is exactly my point about ensuring decision makers are given ALL the information up front. They're already hearing from the AI company about why it should be a chat bot; what the company is not telling them is the long-term cost, the business-resiliency impact, and the introduced error rate. That's where we need to make sure we call out "this is a square peg being shoved into a round hole" and advocate for the deterministic solution that's free-after-first-write.
Yogthos | 15 hours ago
It's quite telling how different perceptions of this tech are in the West compared with China. The elephant in the room is that people living under capitalism feel threatened by this tech because they're afraid that it will eventually replace the need for their skills. That's really what it all comes down to. The problem isn't with AI, it's with how the oligarchs will apply it to further repress the working majority.
People living in a socialist society do not share these fears and hence their perception of this tech is overwhelmingly positive because their expectation is that it will help improve their lives the same way all other technological progress they've experienced up to now has.
https://www.aljazeera.com/economy/2025/11/19/trust-in-ai-far-higher-in-china-than-west-poll-shows
matheusmoreira | a day ago
"And so they are racing one another to fully integrate A.I. into their lives and into their companies. But that doesn't just mean using A.I. It means making themselves legible to the A.I."
"It operates continuously in the background, building a persistent memory of your preferences and patterns so it can better act on your behalf."
I experienced this for the first time only a few days ago. Shipped a major milestone in my project. Was feeling a little burned out. Boredom started taking hold. So I asked Claude to ingest my project's entire three-year commit history and tell me interesting facts about it and myself.
What followed was an utterly surreal experience where Claude managed to infer a truly absurd number of facts about me. Better yet: it started committing those facts to memory as context, and they actually started influencing Claude's behavior too.
It was entertaining, but in the end it made me feel like I was in Deus Ex.
"Morpheus was designed as a large-scale data-miner and intelligent pattern recogniser, and was used by Everett as a parlour trick, in order to treat guests by having it tell them about themselves or any other subject they asked it about."
antocuni | 13 hours ago
That's interesting! I tried to do the same for a big project I'm working on (also ~3 years old), but Claude couldn't find anything particularly interesting apart from telling me which periods / times of the day I'm most productive and things like that.
What kind of facts did it manage to infer about you?
matheusmoreira | 11 hours ago
Claude inferred that my marriage interrupted my project for quite some time. It correctly guessed a few of the books I've read from a few words I used. It guessed the sites and blogs too, such as LWN. It figured out I have the habit of running strace on my language to see what it's doing at the system-call level. It guessed the next topic branch I was planning to work on. It figured out that I use either vim or emacs.
It even provided a rather convincing psychological analysis of the way I think and approach problems. I did tell it that it could analyze anything it wanted, and even be psychological if it wanted to, though.
I can't even mention everything here because at some point it actually got too personal.
antocuni | 4 hours ago
wow, that's fascinating.
Are we talking about "claude code"? Do you know (or think) whether Claude Code has access to memories saved in Claude desktop / Claude web chats? How did it know that you got married? (Congratulations, btw! :))
I think the big difference in my result is that I use ChatGPT for my day-to-day random questions and mostly Claude Code via Bedrock for coding, so it cannot access shared memories (and at this point, that might be a good thing!)
grym | 23 hours ago
Nilay Patel seems like they're beginning to think thoughts that are part of a well-established body of work.
From the author of ELIZA, who (so far as i can tell) knew what was coming 50 years ago:
It may appear at first glance that this is an in-house debate of little consequence except to a small group of computer technicians. But at bottom, no matter how it may be disguised by technological jargon, the question is whether or not every aspect of human thought is reducible to a logical formalism, or, to put it into the modern idiom, whether or not human thought is entirely computable. That question has, in one form or another, engaged thinkers in all ages. Man has always striven for principles that could organize and give sense and meaning to his existence. But before modern science fathered the technologies that reified and concretized its otherwise abstract systems, the systems of thought that defined man's place in the universe were fundamentally juridical. They served to define man's obligations to his fellow men and to nature. The Judaic tradition, for example, rests on the idea of a contractual relationship between God and man. This relationship must and does leave room for autonomy for both God and man, for a contract is an agreement willingly entered into by parties who are free not to agree. Man's autonomy and his corresponding responsibility is a central issue of all religious systems. The spiritual cosmologies engendered by modern science, on the other hand, are infected with the germ of logical necessity. They, except in the hands of the wisest scientists and philosophers, no longer content themselves with explanations of appearances, but claim to say how things actually are and must necessarily be. In short, they convert truth to provability.
As one consequence of this drive of modern science, the question, "What aspects of life are formalizable?" has been transformed from the moral question, "How and in what form may man's obligations and responsibilities be known?" to the question, "Of what technological genus is man a species?" Even some philosophers whose every instinct rebels against the idea that man is entirely comprehensible as a machine have succumbed to this spirit of the times. Hubert Dreyfus, for example, trains the heavy guns of phenomenology on the computer model of man. But he limits his argument to the technical question of what computers can and cannot do. I would argue that if computers could imitate man in every respect---which in fact they cannot---even then it would be appropriate, nay, urgent, to examine the computer in the light of man's perennial need to find his place in the world. The outcomes of practical matters that are of vital importance to everyone hinge on how and in what terms the discussion is carried out.
One position I mean to argue appears deceptively obvious: it is simply that there are important differences between men and machines as thinkers. I would argue that, however intelligent machines may be made to be, there are some acts of thought that ought to be attempted only by humans. One socially significant question I thus intend to raise is over the proper place of computers in the social order. But, as we shall see, the issue transcends computers in that it must ultimately deal with logicality itself---quite apart from whether logicality is encoded in computer programs or not.
The lay reader may be forgiven for being more than slightly incredulous that anyone should maintain that human thought is entirely computable. But his very incredulity may itself be a sign of how marvelously subtly and seductively modern science has come to influence man's imaginative construction of reality.
Weizenbaum, introduction to "Computer Power and Human Reason", 1976
adam_d_ruppe | 19 hours ago
There's a couple scenes in Star Trek V I like a lot. Kinda silly movie overall but some of these things stick with me. One line relevant here: "I need my pain!"
So much of this talk brings that to mind. Read another article last week (I think it was a link from a link... on twitter? maybe here? i don't recall. i really should implement history in my browser eventually lol but anyway) that argued that what techies call "friction" and corporations call "inefficiencies", humanists can call "community". The example he brought up was going to a stand to buy some syrup and it taking an hour because the guy there wanted to chat, then imagining when that guy bought the stuff needed to tap the trees at the hardware store, he probably took an hour gabbing with the guy there too. And that also resonated with me - especially since I really like going places in person and yapping with the people there.
I avoid so much of this "smart" stuff. I want to get up and press a light switch. I want to not just have all the answers in my pocket. I want to go gab with the workers at the store. I want to go through some repetitive rituals. Even when they aren't paying money, I just kinda like doing it myself.
And then, of course, when it comes to the jobs... to circle back to Star Trek V, you just know you're gonna be on the receiving end of eye-lightning eventually, sigh.
twotwotwo | 13 hours ago
There's a spectrum where on one end you have:
At the other end you have:
I think you can guess which end I like.
groctel | 5 hours ago
"If you want to meaningfully oppose AI in a way that lasts, you should speak loudly with your dollars in the market and your attention on the internet, and you should speak loudly with your votes."
It's fascinating to see how people still believe that "voting with your dollar", or substitutes thereof, is the panacea of economic democracy, as if reality weren't a match between millions of distributed, unconnected households with expenses they need to cover weekly lest they die of hunger, and a few billionaires and a network of investment funds that flat out surpass them in buying power. Patel, it's simple arithmetic! You learnt cross-multiplication in elementary school!
alper | 15 hours ago
This is a wildly flawed and reductionist analysis. It's disheartening to see it gain so much currency, which shows how limited the thinking around these things is.
David Crespo has a thread here:
to spell it out: Patel understands that with law, the proper unit of analysis is the "legal system," which includes the law itself as well as admin procedures, software, and people like judges, lawyers, etc.
but when he turns to software, all he can see is the software. this is software brain!
my immediate reaction is that Patel has it exactly backwards. he is inadvertently making the case for LLMs against traditional software. finally we can have programs that can handle ambiguity! that adapt themselves to people rather than the other way around!
owent | 2 hours ago
Interesting thread, thanks.
I don't read it the same way. In my reading, Patel is arguing that AI is "software brain" without the human/social component in the loop. To use an example from the article, it's trying to apply "the law" without involving a human judge.
EvanHahn | a day ago
If you liked the parts about "software brain", you might like "Falsehoods Programmers Believe About Reality".
Corbin | 20 hours ago
Discussed previously, on Lobsters.
Student | 21 hours ago
Is there a written version?
Update: YouTube has a moderately hidden way to reveal a transcript.
gnyeki | 20 hours ago
There's this GitHub repo, although it's not a direct counterpart to the talk:
https://github.com/kdeldycke/awesome-falsehood