My five stages of AI grief

20 points by mijustin 13 hours ago on Hacker News | 33 comments

aurareturn | 13 hours ago

Many HN commenters went through the same thing over the last 3 years. You'd find plenty of skeptics in 2023 and 2024 comments. The first half of 2025 was the anger stage. The latter half of 2025 was the full-on bargaining stage, when models like GPT-5.2 and Opus 4.5 were released. In 2026, people are in the depression stage.

I don't think most devs will reach the acceptance stage until later this year, when Blackwell-class models come online and AI undeniably writes better code than the vast majority of humans. I'm pretty sure GPT-5.2 and Opus 4.5 were only trained on H200-class chips.

Edit: Based on comments here, it seems like HN is still mostly at the anger stage.

rootnod3 | 13 hours ago

I must be using it wrong then, or using the wrong languages. Everything I have seen it produce so far has been mediocre at best and painfully wrong 2-3 prompts in.

That's not even getting into how it just "fixes" a bug by introducing a wholly new one, and then re-introduces the old one when I point that out.

aurareturn | 13 hours ago

You most likely are.

Or maybe the LLM just hasn't been trained enough on the language you're using.

EForEndeavour | 10 hours ago

Obligatory but important: which model, application, and programming language was this in?

yomismoaqui | 5 hours ago

Let me guess... GPT-3.5, ChatGPT, and Brainfuck?

visarga | 13 hours ago

I went through it twice: once for classical ML engineering work (I used to build bespoke models, not just prompt), and a second time for coding.

anonymous908213 | 13 hours ago

Edit: The comment I am replying to was rewritten completely, and originally asserted that the quality of LLMs was now undeniable.

"Undeniably"? I will deny that they are good. I try to use LLMs on a near-daily basis and find them unbearably frustrating to use. They cannot even adequately complete instructions like "following the pattern of A, B, and C in the existing code, create X Y and Z functions but with one change" reliably. This is a given; the work I do is outside the training dataset in any meaningful sense, so their next-token-prediction is statistically going to lean away from predicting whatever I'm doing, even if RL training to "follow instructions" is marginally effective.

The conclusion I've come to is that the 10x hypebots fall into two categories. The first is hobbyists who could barely code at all, and are now 10x more productive at producing very bad software that is not worth sharing with the world. The other is people who use LLMs to launder code from the training dataset, washing it free of its licenses. If your use case is reproducing code the model has already been trained on, it can do that quickly.

These claims of "holding it wrong", one of which I already see in the replies, are fundamentally preposterous. This is the revolution that is democratizing software engineering for anyone who can write natural language, yet competent software engineers are using it wrong? No, the reality is that it simply doesn't have that level of utility. If it did, we would be seeing an influx of excellent software worthy of widespread usage, replacing much of the existing flawed software in the world, if not pushing new boundaries altogether. Instead we get flooded with Show HNs fit for the pig trough.

That's not to say LLMs have zero utility. They can obviously generate a proof of concept quickly, and if the task is trivial enough, save a couple of minutes writing a throwaway script that you actually use day to day. I find them somewhat useful for retrieving information from documentation, although some of that gain is offset by the time wasted on hallucinated APIs. But I would estimate the productivity gains at maybe 5%. That gain is hardly worth the accelerating AI psychosis gripping society and flooding the internet with garbage that drowns out the worthwhile content.

Addendum: Now that your post has been rewritten to assert that no, LLMs aren't there yet, but surely in the next 6 months, this time for sure, it'll be AGI... welcome to the bubble. I've been told that AGI is coming in a couple of months every month for the past two years. We are no closer to it than we were two years ago. The improvements have been modest, there are clearly diminishing returns on investment in exponential scaling, and more scaling can never solve the fundamental architectural flaws of LLMs.

aurareturn | 12 hours ago

What programming language and what LLM model did you use?

anonymous908213 | 12 hours ago

I write code in C, C#, TypeScript, and Python for various use cases, as well as my own language in development. I have used every frontier model, including Opus 4.5, which people won't stop proclaiming is a paradigm shift. They have consistently disappointed me at every turn.

aurareturn | 12 hours ago

Got a concrete example of where Opus 4.5 disappointed you?

Maybe a GitHub repo for me to try?

anonymous908213 | 12 hours ago

As you can see from the username, this is an anonymous account where I can speak freely without concern about it being associated with me or my projects in perpetuity. I will extend the same question to you, though, as I have to every person I engage with on this subject on HN: what is your 10x project? Have you produced any software that other people would consider using[1]? I have yet to be shown a single primarily LLM-developed project that would indicate to me that LLMs are changing the future of software engineering.

[1] AI psychosis projects like Gas Town, which are only used by other psychosis victims to create more psychosis projects, and which in the end never result in a real project that solves a real-world problem for real people, do not count.

NitpickLawyer | 12 hours ago

The problems with your take (and others like it) are manifold.

First, there are some "smells" that I noticed. You say that LLMs hallucinate APIs, and in another comment (a brief skim of your history, to make sure it's worth replying) you say something about chatting with an LLM. If you're "using" them in a chat interface, that's already year-old tech, and you should know that no one here is talking about that. We're talking about LLM-assisted coding using harnesses that make it possible and worth your time. Another smell is that you assert that LLMs only work for popular languages. While it's true they work best in those cases, as of about a year ago it's also true that they can work even on invented languages. So nowadays I take every "I work in this very niche field" with a grain of salt.

Second, the overall problem with "it doesn't work for me" is that it's a useless signal, both in general and in particular. If I see a "positive post", I can immediately test it. If it works, great, I can include it in my toolbox. If it doesn't, I can skip it. But with posts like yours, I can't do anything. You haven't provided any details, and even if you did, the result would still be so dependent on your particular problem, language, environment, etc. that the signal would be very weak for anyone who doesn't share that exact problem.

I am actually curious, if you can share, what your setup is, and perhaps an example of something you couldn't do. Perhaps we can help.

The third problem I see is that you are "fighting" other demons instead of working with people who want to contribute. You bring up hypebots, you bring up AGI, unkept promises, and so on. But we, the people here, haven't promised you anything. We're not the ones hyping up AGI, ASI, MGI, and so on. If you want to learn something, it would be more productive to keep those discussions separate. If your fight is with the hypebots, fight them on those topics, not here. Or, honestly, don't waste your time. But you do you.

Having said that, here's my take: with small provisions made for extreme niche fields (so extreme that they would place you in the 0.0x% of coders, making the overall point moot anyway), I think people reporting zero success are either wrong or using it wrong. It's impossible for me to believe that everything I can achieve is so out of phase with whatever you are trying to achieve that you get literally zero success. And I'm sick and tired of hearing "oh, it works for trivial tasks". No. It works reliably and unattended mostly for trivial tasks, but it can also work in very advanced niches. There are plenty of public examples of this already - things like kernel optimization, tensor libraries, CUDA code, and so on. These are not "amateur" topics by any stretch. And no, juniors can't one-shot them either. I say this after 25+ years of doing this: there are plenty of times when I'm dumbstruck by something working on the first try. And I can't believe I'm the only one.

anonymous908213 | 12 hours ago

I use the chat interface by default because it is the only way I have felt I am gaining any productivity at all. Letting LLMs waste time probing for files and executing their atrocities on my codebase has only resulted in lost time. Not for lack of trying; I have set up Codex and Claude Code environments multiple times. Three times last year I wasted entire days trying to configure a setup that provides value to me - once with an early release of Claude Code, once when Codex was released, and once again to retry them both with GPT-5.2 and Opus 4.5. Every attempt ended in a complete failure to justify the time invested.

> The third problem I see is that you are "fighting" other demons instead of working with people who want to contribute. You bring up hypebots, you bring up AGI, unkept promises, and so on. But we, the people here, haven't promised you anything. We're not the ones hyping up AGI, ASI, MGI, and so on. If you want to learn something, it would be more productive to keep those discussions separate. If your fight is with the hypebots, fight them on those topics, not here. Or, honestly, don't waste your time. But you do you.

This very thread is about hype. The post I originally replied to suggests that developers are in stages of grief about LLMs - that we are traversing denial, anger, and depression before our inevitable acceptance. It is utterly tiring to be subjected to this day in, day out, in every avenue of public discourse about the field. Of course I have grievances with the hype. Of course I don't appreciate being told I'm in denial and that everything has changed. The only thing that has changed is that LLM-generated articles are all over HN, and Show HN is polluted with a very high quantity of very low-quality content.

> Second, the overall problem with "it doesn't work for me" is that it's a useless signal.

The signal is not for the true believers. People who have not succumbed to the hype may find value in knowing that they are not alone. If one person can't make use of LLMs while everyone around them is hyping them up, that person may feel like they are doing something wrong and being left behind. But if people push back against the hype, they will know they are not alone, and that maybe it isn't actually worth investing entire workdays into finding the magical configuration of .md files that turns Claude Code from 0.5x productivity into 10x productivity.

To be clear, I'm not really in the market for advice on "holding it right". If I find myself genuinely being left behind, I will keep giving the tooling another shot until I get it right. I spend most of my life coding, and I have more ambitious projects I wish to bring into the world than time to do them all; I will relentlessly pursue a productivity increase if and when one becomes available. As it is, though, I have seen zero evidence that I am actually being left behind, and I am not interested in trying again at the present time.

aurareturn | 14 minutes ago

> This very thread is about hype.
Hype doesn't explain why my dev team no longer hand-writes 95% of the code we push to production.

thedevilslawyer | 3 hours ago

Still in Anger. Got it.

palmotea | 12 hours ago

> I don't think most devs will reach the acceptance stage until later this year, when Blackwell-class models come online and AI undeniably writes better code than the vast majority of humans. I'm pretty sure GPT-5.2 and Opus 4.5 were only trained on H200-class chips.

We can only hope! It's about time all those pompous developers embraced the economic rug-pull and adopted a lifestyle more in line with their true economic value. It's capitalism, people, the best system there is. Deal with it and quit whining.

happytoexplain | 10 hours ago

I'd really rather not see posts on HN that are essentially "u mad". You can simply disagree with people.

Kapura | 13 hours ago

If the only way to advance my career were to talk into a chatbox that makes shit up and encourages people to kill themselves, i would stop using computers and spend my days picking oranges. i guess some people feel differently.

weeznerps | 13 hours ago

Anger stage

Kapura | 9 hours ago

no, i simply have a job that requires me to be good at it. tenths of a millisecond matter. every bit in a structure is carefully considered.

weeznerps | 8 hours ago

Makes sense. Sounds like HFT or embedded/RTOS stuff? I don't know for sure, but I have to imagine coding agents aren't terribly helpful in those domains.

archagon | 7 hours ago

Do you think this is a useful comment?

visarga | 13 hours ago

People do those two bad things too. We did them first, and we did them more. Slop too: we invented slop and SEO.

ChipopLeMoral | 13 hours ago

I tweeted the exact same thought just a few days ago; I guess we're all going through the same journey right now.

When GPT-3 was opened to researchers 4-5 years ago, a friend of mine had access and we tried some stuff together. I was blown away that it could translate code it hadn't seen between programming languages, even though it seemed pretty bad at it at the time. I did not expect coding to be the killer app of LLMs, but here we are.

catigula | 13 hours ago

> What I came to realize as I began using these tools more is that I was entirely wrong about feeling like my skills would become useless. They don't replace all the experience and knowledge I've accumulated in over two decades as a developer, and instead they enhance what I could do.

FYI this is the denial stage.

aurareturn | 13 hours ago

haha, you might be right

tavavex | 8 hours ago

What you determine to be denial depends only on what you think is inevitable. OP said "my value is in performing more advanced functions that aren't just writing code", and to you it's denial because (from the implication) you think the complete elimination of software engineers as a job is inevitable. If OP said "my value is that I am multifunctional and can pivot to a completely different industry of mental labor" some people would call it denial because they think all those jobs are next in line on the chopping block. If OP said "my value is in being able to perform physical labor for cheap" some people would call it denial because robotics is progressing rapidly. And so on.

julienchastang | 12 hours ago

> Writing code isn't where I bring the most value. Understanding business problems, analyzing trade-offs, and making sure we're building the right things is where I can put all those years to good use. It might sound like an obvious thing, but it took me a while to get to this point.

Reaching this epiphany was a major milestone in the career of an SE even before the days of LLMs. That's basically the crux of it.

aurareturn | 12 hours ago

Based on my 20 years of experience, the vast majority of developers do not possess those skills.

I'd guess that only 10% of them actually do. To have those skills, you need good user sense, good business sense, good negotiation skills, and good communication skills. Frankly, these skills align more with the product manager role.

Of course, the best people are still going to be those who have both the technical chops and the business sense. They'll be amplified even more in this era.

happytoexplain | 12 hours ago

The vast majority of developers are not in roles where decisions at that level are being made (except occasionally, on a smaller scope), so their ability in that context is irrelevant. You're describing project leads and department leads.

XenophileJKO | 12 hours ago

Every engineer has this opportunity; whether they use it or not is usually the issue. Almost every decision you make can make the current and future success of the business more or less likely.

I've said before: "There are no 'staff' projects, only 'staff' execution."

Yossarrian22 | 12 hours ago

Is there any non-pro-AI position that couldn’t be construed as being part of a stage of grief?

tavavex | 8 hours ago

No. Bringing up the stages of grief in a debate (rather than in an account of personal experience, as in the post) is an argument-killer, because any negative response from the allegedly grieving side is instantly taken down by smugly categorizing their negativity as a stage of grief. This isn't reserved for LLM arguments, either; it's a common wrapper for the less dignified "you disagree with me, which proves I'm right" position.