Multitasking has been found to increase the production of the stress hormone cortisol as well as the fight-or-flight hormone adrenaline, which can overstimulate your brain and cause mental fog or scrambled thinking. Multitasking creates a dopamine-addiction feedback loop, effectively rewarding the brain for losing focus and for constantly searching for external stimulation.
AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.
fleebee | a month ago
Who would've thought that bouncing from task to task faster than ever before using tools that constantly break your state of flow and rarely produce what you want them to would lead to burnout? The cognitive burden isn't any lighter with agentic coding if you do your due diligence.
P.S. Another real thing is the fatigue of reading LLM generated blog posts.
ap | a month ago
If I notice that a blog post, or really any text, is LLM-generated, I just don’t read it. If for some reason I have to (it’s the boss’s email or something), I might ask an LLM to give me a summary. Because if the text doesn’t contain any thoughts, why am I reading it? Reading takes effort; my brain has limited cycles. What is the point of expending my thought effort on something that took no thought effort to produce?
(Come to think of it, what I really want is not a summary but a good guess as to what prompt was used to generate the article – basically, a reverse engineering of the part of its production where thinking was still involved. But I’m rather dubious that this could usefully be asked of an LLM, so a summary it is.)
sjamaan | a month ago
I don't get this post at all. The author describes all these problems with AI: how it causes burnout, stress and anxiety, and doesn't even produce good code. What the author doesn't explain is why one should keep using these shitty systems, which also have so many additional societal problems. It's baffling, really.
dysoco | a month ago
Because sometimes it's not up to you to use these tools. If the rest of your team is working on 4 PRs at the same time and increasing their (immediate) productivity using Claude Code then you are expected to do the same.
Many companies even monitor AI tool usage and push you to increase it.
ocramz | a month ago
An LLM-generated blog on AI fatigue is a bit too on the nose I think
Student | a month ago
It’s clear that a lot of people find LLMs helpful in expressing themselves.
gamache | a month ago
It's also clear that a lot of people loathe reading the output of LLMs, because human expression is absent.
ceph | a month ago
Maybe helpful in producing a communication that meets an objective, but not really much more an expression of self than ghostwriting from a sticky note.
k749gtnc9l3w | a month ago
I have been told, more than once, that the text is better after I add senseless redundancy to my original sticky note. Although nowadays I can often say «damn those people» and send the sticky-note contents anyway, most likely an LLM will get those contents (which are the actual expression) across better than I can.
(And I have been in a situation where my post got the reply «is this LLM rewrite true to the original intent?», and after my approval a third person said the rewrite was more understandable, so this is not an unfounded «most likely».)
In the other direction, this way of adding purple prose produces a more comfortably skimmable result than adding it by hand.
ksynwa | a month ago
Student | a month ago
Oh good. I’d say that one of the smells that you’re using AI in a way that can harm you is that you’re waiting for it to complete, and watching it. Either switch to some other work or do something for yourself or someone else in your life. Text a friend.
ocramz | a month ago
Instead, I find it interesting (mostly) to see what the AI comes up with. I like to review everything afterwards anyway, so I might as well do it live.
jrgtt | a month ago
Source
PestoDiRucola | a month ago
Interesting that the author found it useful to accept ~70% of what the AI writes and then write the rest himself. I more or less came to the same conclusion myself. It also has the added benefit of forcing you to actually review and understand the code that the AI wrote.
MaskRay | a month ago
I ended up reading the article even though the title did not grab me. Many parts really resonated with me:
"They treat AI output like a first draft from a smart but unreliable intern. They expect to rewrite 30% of it. They budget time for that rewriting. They don't get frustrated when the output is wrong because they never expected it to be right. They expected it to be useful. There's a difference."
"I call this the prompt spiral. ... You're optimizing your instructions to a language model instead of solving the actual problem."
"AI rewards a different skill: the ability to extract value from imperfect output quickly, without getting emotionally invested in making it perfect."
"It's like GPS and navigation. Before GPS, you built mental maps. You knew your city. You could reason about routes. After years of GPS, you can't navigate without it. The skill atrophied because you stopped using it."
"The tech industry has a burnout problem that predates AI. AI is making it worse, not better. Not because AI is bad, but because AI removes the natural speed limits that used to protect us."
kevinc | a month ago
There are healthy ways to use these things, but you do have to cultivate them. I've got my agent windows reminding me now and then to check my posture, or to eat lunch if it's about that time. I completely miss notifications, sticky notes, Apple Watch stand-up signals, etc., but these reminders are effective for me. Now I have half a mind to mix in checking my stress level, getting up for a minute, stopping work for the day…
rau | a month ago
I think that’s true of technology as a whole. In the old days a professional might have started his day reading a newspaper as he ate the breakfast prepared for him, then commuted to work on a train, then physically walked to meetings with those above him and met with those below him, dictated memos to his secretary, read typed-up memos and so forth, eaten a formal lunch and maybe even had time for a round of golf or a game of tennis at the club. Nowadays he will nuke a burrito, eat it while reading his phone or in the car, drive to the office (or walk to another room in his house), take every meeting sitting in the same chair, write all his own memos (whether they are emails or Slack messages), nuke another meal (or order something delivered) and eat lunch alone at his computer until late in the day. He will not spend any downtime with anyone else, because everyone is doing the same thing, filling every last minute with ‘news’ and text messages on his phone, until he collapses into a restless sleep, then awakes to do it all over again.
All this efficiency has had wonderful effects. The modern economy feeds and clothes orders of magnitude more men, women and children than any earlier one managed. It’s enabled truly amazing things (like this very site). But it does have a cost, and even if that cost is worth it, as I believe it is, it’s still a cost.