fleebee | a day ago
Who would've thought that bouncing from task to task faster than ever before, using tools that constantly break your state of flow and rarely produce what you want them to, would lead to burnout? The cognitive burden isn't any lighter with agentic coding if you do your due diligence.
P.S. Another real thing is the fatigue of reading LLM generated blog posts.
ap | a day ago
If I notice that a blog post or really any text is LLM, I just don’t read it. If for some reason I have to (it’s the boss’s email or something), I might ask an LLM to give me a summary. Because if text doesn’t contain any thoughts, why am I reading it? Reading takes effort; my brain has limited cycles. What is the point of expending my thought effort on something that didn’t take any thought effort to produce?
(Come to think of it, what I really want is not a summary but a good guess as to what prompt was used to generate the article – basically, a reverse engineering of the part of its production where thinking was still involved. But I’m rather dubious that this could usefully be asked of an LLM, so a summary it is.)
ocramz | 14 hours ago
An LLM-generated blog on AI fatigue is a bit too on the nose, I think
Student | 9 hours ago
It’s clear that a lot of people find LLMs helpful in expressing themselves.
gamache | 7 hours ago
It's also clear that a lot of people loathe reading the output of LLMs, because human expression is absent.
ceph | 7 hours ago
Maybe helpful in producing a communication that meets an objective, but not really much more an expression of self than ghostwriting from a sticky note.
k749gtnc9l3w | 5 hours ago
I have been told all my life that the text is better after I add senseless redundancy to my original sticky note. Although nowadays I can often say «damn those people» and send the sticky note contents anyway, an LLM will most likely get those contents (which are the actual expression) across better than I would myself.
(And I have been in a situation where my post got the reply «is this LLM rewrite true to the original intent?», and after my approval a third person said the rewrite was more understandable, so that «most likely» is not unfounded.)
In the other direction, letting the LLM add the purple prose produces a more comfortably skimmable result than adding it by hand.
sjamaan | 11 hours ago
I don't get this post at all. The author writes about all these problems with AI: how it causes burnout, stress, and anxiety, and doesn't even produce good code. What the author doesn't explain is why one should keep using these shitty systems, which also have so many additional societal problems. It's baffling, really.
dysoco | 2 hours ago
Because sometimes it's not up to you whether to use these tools. If the rest of your team is working on 4 PRs at the same time and increasing their (immediate) productivity using Claude Code, then you are expected to do the same.
Many companies are even monitoring AI tool usage and pushing you to increase it.
Student | a day ago
Oh good. I’d say that one of the smells that you’re using AI in a way that can harm you is that you’re waiting for it to complete, and watching it. Either switch to some other work or do something for yourself or someone else in your life. Text a friend.
ocramz | 11 hours ago
Instead, I find it interesting (mostly) to see what the AI comes up with. Anyway, I like to review everything afterwards, so I might as well do it live.
jrgtt | 2 hours ago
Multitasking has been found to increase the production of the stress hormone cortisol as well as the fight-or-flight hormone adrenaline, which can overstimulate your brain and cause mental fog or scrambled thinking. Multitasking creates a dopamine-addiction feedback loop, effectively rewarding the brain for losing focus and for constantly searching for external stimulation.
Source
kevinc | 12 hours ago
There are healthy ways to use these things, but you do have to cultivate them. I've got my agent windows reminding me now and then to check my posture, or to eat lunch if it's about that time. I completely miss notifications, sticky notes, Apple Watch stand-up signals, etc. but these reminders are effective for me. Now I have half a mind to mix in checking my stress level, getting up for a minute, stopping work for the day…
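For anyone who wants to try this, a minimal sketch of the idea (hypothetical, not my exact setup): a plain Python timer you run in a terminal next to the agent window, since the nudges don't need to depend on any particular agent's API.

    # remind.py: gentle nudges while you babysit an agent session (hypothetical sketch)
    import time
    from datetime import datetime

    # (interval in seconds, message) pairs; each fires once per elapsed interval
    REMINDERS = [
        (30 * 60, "Check your posture."),
        (60 * 60, "Stand up and move for a minute."),
    ]

    def main() -> None:
        start = time.monotonic()
        fired = set()          # (interval, tick) pairs already printed
        lunch_nagged = False   # only nag about lunch once
        while True:
            elapsed = time.monotonic() - start
            for interval, message in REMINDERS:
                tick = int(elapsed // interval)
                if tick > 0 and (interval, tick) not in fired:
                    fired.add((interval, tick))
                    print(f"[reminder] {message}")
            if not lunch_nagged and datetime.now().hour == 12:
                print("[reminder] It's about lunchtime. Eat something.")
                lunch_nagged = True
            time.sleep(60)

    if __name__ == "__main__":
        main()

You could swap the print for a desktop notification, but the point is just that the nudge arrives where your eyes already are.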
PestoDiRucola | 22 hours ago
Interesting that the author found it useful to accept ~70% of what the AI writes and then write the rest himself. I more or less came to the same conclusion myself. It also has the added benefit of forcing you to actually review and understand the code that the AI wrote.
MaskRay | 19 hours ago
I ended up reading the article even though the title did not grab me. Many parts really resonated with me:
"They treat AI output like a first draft from a smart but unreliable intern. They expect to rewrite 30% of it. They budget time for that rewriting. They don't get frustrated when the output is wrong because they never expected it to be right. They expected it to be useful. There's a difference."
"I call this the prompt spiral. ... You're optimizing your instructions to a language model instead of solving the actual problem."
"AI rewards a different skill: the ability to extract value from imperfect output quickly, without getting emotionally invested in making it perfect."
"It's like GPS and navigation. Before GPS, you built mental maps. You knew your city. You could reason about routes. After years of GPS, you can't navigate without it. The skill atrophied because you stopped using it."
"The tech industry has a burnout problem that predates AI. AI is making it worse, not better. Not because AI is bad, but because AI removes the natural speed limits that used to protect us."