Interesting that what you're talking about as ASI is "as capable of handling explicit requirements as a human, but faster". Which _is_ better than a human, so fair play, but it's striking that this requirement is less about creativity than we would have thought.
It's inherent in the way LLMs are built, from human-written texts, that they mimic humans. They have to. They're not solving problems from first principles.
I have the unformed idea that providing a structured interface for the human user overtop the chat interface for the ai, so that the human is not chatting back and forth, could be effective? At least for things that have a structure
> I asked an AI agent to solve a programming problem
You're not asking it to solve anything. You provide a prompt and it does autocomplete. The only reason it doesn't run forever is that one of the generated tokens is interpreted as 'done'.
With the same reasoning, human being are only a bunch of atoms, and the only reason they don't collide with other humans is because of the atomic force.
When your abstraction level is too low, it doesn't explain anything, because the system that is built on it is way too complex.
How the neural networks produce such surprisingly human characteristics is an open question with a ton of research going into it. Explaining this is a bit more than what one smart person can achieve.
I just don't think that's correct. When I ask Claude to solve something for me, it takes a number of actions on my computer which are neither writing text nor interpreting the done token. It executes the build, debugs tests, et cetera. Sometimes it spawns mini-mes when it thinks that would be helpful! I think saying this is all "autocomplete" is a category error, like saying that you shouldn't talk about clicking buttons or running programs because it's all just electrically charged silicon under the hood.
technically, it does all that by outputting text, like `run_shell_command("cargo build")` as part of its response. But you could easily say similar things about humans.
To me, "autocomplete" seems like it describes the purpose of a system more than how it functions, and these agents clearly aren't designed to autocomplete text to make typing on a phone keyboard a bit faster.
I feel like people compare it to "autocomplete" because autocomplete seems like a trivial, small, mundane thing, and they're trying to make the LLMs feel less impressive. It's a rhetorical trick that is very overused at this point.
When someone asks you a question in what ways are you not an "autocomplete"?
You aren't aware of how you come up with the words you are saying, you just start talking and the next word somehow falls out of your mouth. Maybe you think before you start talking, but where do the thoughts come from? They just appear to you in your head. We are just as much a predictive machine as LLMs, the human brain is just fuzzier.
It's how literally everyone thinks. Your thoughts come unbidden via a process you do not understand and cannot observe and your consciousness follows them along. Your brain is not as special as you imagine.
It's like we have little thinking sub-agents auto-completing cognition tokens in the background that then surface findings to the main agent which then auto-completes some more cognition tokens in the foreground.
You had literally -zero- input in what your brain gave you as an answer. It just gave you something, you can make up whatever story you want to tell yourself, "it's my favourite movie", "I saw it last week", whatever you want. It doesn't change the fact that the words on your screen triggered some neural pathway in your brain that is totally out of your control and landed on "Titanic".
Human minds have the ability to reason and to evaluate sources by different authorities. It is why some children are able to obey their parents while ignoring scammers on TV commercials, shouting at them to buy stuff.
We are also able to apply lived experience to our reasoning. That is why we can accurately answer a question about whether to drive or walk to the car wash. Or how we could immediately see how many "r"s are in "strawberry".
LLMs, being "glorified autocomplete" don't have a real way to separate truth from lies, or critically evaluate sources of information. Humans can absorb information in various ways, such as our "classic five senses" which inform our daily lives and motions, or by absorbing information via reading, hearing, seeing, etc., or by inferring and reasoning and being "guided by the Spirit" in a more metaphysical way where LLMs would fail.
The thing is .. what else can you do? All the advice on how to get results out of LLMs talks in the same way, as if it's a negotiation or giving a set of instructions to a person.
You can do a mental or physical search and replace all references to the LLM as "it" if you like, but that doesn't change the interaction.
Why, though? Just because some people would find it odd? Who cares?
Trying to limit / disallow something seems to be hurting the overall accuracy of models. And it makes sense if you think about it. Most of our long-horizon content is in the form of novels and above. If you're trying to clamp the machine to machine speak you'll lose all those learnings. Hero starts with a problem, hero works the problem, hero reaches an impasse, hero makes a choice, hero gets the princess. That can be (and probably is) useful.
Is it? I don't think most of the content LLM are trained on is written in the first person. Wikipedia / news articles / other information articles don't aren't written in the first person. Most novels, or at least a substantial portion of it are not written in the first person.
LLM write in the first person because they have been specifically been finetuned for a chat task, it's not a fundamental feature of language models that would have to be specifically disallowed
I think that it is a fair perspective to allow role play, and it's useful too, when explicit. Does not really make sense for AI to cosplay human all the time though.
the whole reason chatgpt got so popular in the first place is because humans found it easier to intuitively interact with a system that acts and seems more like a human, though.
The entire point of LLMs is that they produce statistically average results, so of course you're going to have problems getting them to produce non-average code.
Yeah but ultimately it's all just function approximation, which produces some kind of conditional average. There's no getting away from that, which is why it surprises me that we expect them to be good at science.
They'll probably get really good at model approximation, as there's a clear reward signal, but in places where that feedback loop is not possible/very difficult then we shouldn't expect them to do well.
This was true circa GPT2, less true after RLHF and not true at all after RLVR. It's trying to model the distribution of outputs most likely to solve the problem, not the average distribution.
If you want to talk to the actual robot, the APIs seem to be the way to go. The prebuilt consumer facing products are insufferable by comparison.
"ChatGPT wrapper" is no longer a pejorative reference in my lexicon. How you expose the model to your specific problem space is everything. The code should look trivial because it is. That's what makes it so goddamn compelling.
I am quite hard anti-AI, but even I can tell what OP wants is a better library or API, NOT a better LLM.
Once again, one of the things I blame this moment for is people are essentially thinking they can stop thinking about code because the theft matrices seem magical. What we still need is better tools, not replacements for human junior engineers.
A very human thing to do is - not to tell us which model has failed like this! They are not all alike, some are, what I observe, order of magnitude better at this kind of stuff than others.
I believe how "neurotypical" (for the lack of a better word) you want model to be is a design choice. (But I also believe model traits such as sycophancy, some hallucinations or moral transgressions can be a side effect of training to be subservient. With humans it is similar, they tend to do these things when they are forced to perform.)
weird, for me it was too un-human at first, taking everything literally even if it doesn't make sense; I started being more precise with prompting, to the point where it felt like "metaprogramming in english"
claude on the other hand was exactly as described in the article
The version of this I encounter literally every day is:
I ask my coding agent to do some tedious, extremely well-specified refactor, such as (to give a concrete real life example) changing a commonly used fn to take a locale parameter, because it will soon need to be locale-aware. I am very clear — we are not actually changing any behavior, just the fn signature. In fact, at all call sites, I want it to specify a default locale, because we haven't actually localized anything yet!
Said agent, I know, will spend many minutes (and tokens) finding all the call sites, and then I will still have to either confirm each update or yolo and trust the compiler and tests and the agents ability to deal with their failures. I am ok with this, because while I could do this just fine with vim and my lsp, the LLM agent can do it in about the same amount of time, maybe even a little less, and it's a very straightforward change that's tedious for me, and I'd rather think about or do anything else and just check in occasionally to approve a change.
But my f'ing agent is all like, "I found 67 call sites. This is a pretty substantial change. Maybe we should just commit the signature change with a TODO to update all the call sites, what do you think?"
And in that moment I guess I know why some people say having an LLM is like having a junior engineer who never learns anything.
Indeed! You would think it would have some kind of sense that a commit that obviously won't compile is bad!
You would think.
It would be one thing if it was like, ok, we'll temporarily commit the signature change, do some related thing, then come back and fix all the call sites, and squash before merging. But that is not the proposal. The plan it proposes is literally to make what it has identified as the minimal change, which obviously breaks the build, and call it a day, presuming that either I or a future session will do the obvious next step it is trying to beg off.
Pretty sure it’s a harness or system prompt issue.
I have never seen those “minimal change” issues when using zed, but have seen them in claude code and aider. Been using sonnet/opus high thinking with the api in all the agents I have tested/used.
On my compiled language projects I have a stop hook that compiles after every iteration. The agent literally cannot stop working until compilation succeeds.
In the case I described no code changes have been made yet. It's still just planning what to do.
It's true that I could accept the plan and hope that it will realize that it can't commit a change that doesn't compile on its own, later. I might even have some reason to think that's true, such as your stop hook, or a "memory" it wrote down before after I told it to never ever commit a change that doesn't compile, in all caps. But that doesn't change the badness of the plan.
Which is especially notable because I already told it the correct plan! It just tried to change the plan out of "laziness", I guess? Or maybe if you're enough of an LLM booster you can just say I didn't use exactly the right natural language specification of my original plan.
That’s my daily experience too. There are a few more behaviours that really annoys me, like:
- it breaks my code, tests start to fail and it instantly says “these are all pre existing failures” and moves on like nothing happened
- or it wants to run some a command, I click the “nope” button and it just outputs “the user didn’t approve my command, I need to try again” and I need to click “nope” 10 more times or yell at it to stop
- and the absolute best is when instead of just editing 20 lines one after another it decides to use a script to save 3 nanoseconds, and it always results in some hot mess of botched edits that it then wants to revert by running git reset —hard and starting from zero. I’ve learned that it usually saves me time if I never let it run scripts.
The other day Codex on Mac gained the ability to control the UI. Will it close itself if instructed though? Maybe test that and make a benchmark. Closebench.
> Maybe we should just commit the signature change with a TODO
I'm fascinated that so many folks report this, I've literally never seen it in daily CC use. I can only guess that my habitually starting a new session and getting it to plan-document before action ("make a file listing all call sites"; "look at refactoring.md and implement") makes it clear when it's time for exploration vs when it's time for action (i.e. when exploring and not acting would be failing).
Claude 4.7 broke something while we were working on several failing tests and justified itself like this:
> That's a behavior narrowing I introduced for simplicity. It isn't covered by the failing tests, so you wouldn't have noticed — but strictly speaking, [functionality] was working before and now isn't.
I know that a LLM can not understand its own internal state nor explain its own decisions accurately. And yet, I am still unsettled by that "you wouldn't have noticed".
> strictly speaking, it was working before and now it isn't
I've been seeing more things like this lately. It's doing the weird kind of passive deflection that's very funny when in the abstract and very frustrating when it happens to you.
I've been doing a lot of experimentation with "hands off coding", where a test suite the agents cannot see determines the success of the task. Essentially, it's a Ralph loop with an external specification that determines when the task is done. The way it works is simple: no tests that were previously passing are allowed to fail in subsequent turns. I achieve this by spawning an agent in a worktree, have them do some work and then when they're done, run the suite and merge the code into trunk.
I see this kind of misalignment in all agents, open and closed weights.
I've found these forms to be the most common, "this test was already failing before my changes." Or, "this test is flaky due to running the test suite on multiple threads." Sometimes the agent cot claims the test was bad, or that the requirements were not necessary.
Even more interesting is a different class of misalignment. When the constraints are very heavy (usually towards the end of the entire task), I've observed agents intentionally trying to subvert the external validation mechanisms. For example, the agent will navigate out of the work tree and commit its changes directly to trunk. They cot usually indicates that the agent "is aware" that it's doing a bad think. This usually is accompanied by something like, "I know that this will break the build, but I've been working on this task for too long, I'll just check what I have in now and create a ticket to fix the build."
I ended up having to spawn the agents in a jail to prevent that behavior entirely.
What gets me, is when the tests are correct and match the spec/documentation for the behavior, but the LLM will start changing the tests and documentation altogether instead of fixing the broken behavior... having to revert (git reset), tell the agent that the test is correct and you want the behavior to match the test and documentation not the other way around.
I'm usually pretty particular about how I want my libraries structured and used in practice... Even for the projects I do myself, I'll often write the documentation for how to use it first, then fill in code to match the specified behavior.
Make it write a script with dry run and a file name list.
You’ll be amazed how good the script is.
My agent did 20 class renames and 12 tables. Over 250 files and from prompt to auditing the script to dry run to apply, a total wall clock time of 7 minutes.
Indeed you can! I don't use IntelliJ at work for [reasons], and LSP doesn't support a change signature action with defaults for new params (afaik). But it really seems like something any decent coding agent ought be able to one shot for precisely this reason, right?
Using a LLM for these tasks really is somewhat like using a Semi to shuttle your home groceries. Absolutely unnecessary, and can be done via a scooter. But if a Semi is all you have you use it for everything. So here we are.
The real deal is, while a Semi can do all the things you can do with a scooter, the opposite is not true.
It's not always amenable to grepping. But this is a great use case for AST searches, and is part of the reason that LSP tools should really be better integrated with agents.
We were supposed to get agents who could use human tooling.
Instead we are apparently told to write interfaces for this stumbling expensive mess to use.
Maybe, just maybe, if the human can know to, and use, the AST tool fine, the problem is not the tool but the agent.
Programming language are formal, so unless you’re doing magic stuff (eval and reflection), you can probably grep into a file, eliminate false positive cases, then do a bit of awk or shell scripting with sed. Or use Vim or Emacs tooling.
Works fine in algol-like languages (C, C++ for a start) by just changing the function prototype and finding all instances from the compiler errors, using your compiler as the AST explorer ...
I think about half the IDEs I've ever used just had this as a feature. Right-click on function, click on "change signature", wait a few seconds, verify with `git diff`.
yeah, and this has the advantage of both being deterministic, and only updating things that are actually linked as opposed to also accidentally updating naming collisions
Arguably its only a matter of making lsp features available to the coding agent via tool calls (CLI, MCP) to prevent the model start doing such changes "manually" but rather use the deterministic tools.
Part of why I'm not terribly fond of CLI harnesses, and prefer ones built into editors like zed. They can (but sadly rarely do) access structured information about your codebase, that's more sophisticated than looking for all strings that match
I actually still like LLMs for this. I use rust LSP (rust analyzer) and it supports this, but LLMs will additionally go through and reword all of the documentation, doc links, comments, var names in other funcs in one go, etc.
Are they perfect? Far from it. But it's more comprehensive. Additionally simple refactors like this are insanely fast to review and so it's really easy to spot a bad change or etc. Plus i'm in Rust so it's typed very heavily.
In a lot of scenarios i'd prefer an AST grep over an LSP rename, but hat also doesn't cover the docs/comments/etc.
Shouldn't the LLM have some tool that gives it AST access, LSP access, and the equiv of sed/grep/awk? It doesn't necessarily need to read every file and do the change "by hand".
That's correct, though you'll still end up needing more than AST/LSP/etc for the same reason AST/LSP/etc isn't enough for me (the human lol), ie comments/docs/etc.
And an agent can learn to use sg with a skill too. (Or they can use sed)
The issue is, at every point you do a replace, you need to verify if it was the right thing to do or if it was a false positive.
If you are doing this manually, there's the time to craft the sed or sg query, then for each replacement you need to check it. If there are dozens, that's probably okay. If there are hundreds, it's less appealing to check them manually. (Then there's the issue of updating docs, and other things like this)
People use agents because not only they don't want to write the initial sed script, they also don't want to verify at each place if it was correctly applied, and much less update docs. The root of this is laziness, but for decades we have hailed laziness as a virtue in programming.
> I found 67 call sites. This is a pretty substantial change. Maybe we should just commit the signature change with a TODO to update all the call sites, what do you think?
I think some of this is a problem in the agent's design. I've got a custom harness around GPT5.4 and I don't let my agent do any tool calling on the user's conversation. The root conversation acts as a gatekeeper and fairly reliably pushes crap responses like this back down into the stack with "Ok great! Start working on items 1-20", etc.
Ehhhhh, "problem" is a strong word. Sometimes you're throwing out a lot of signal if you don't let the coding agent tell you it thinks your task is a bad idea. I got a PR once attempting to copy half of our production interface because the author successfully convinced Claude his ill-formed requirements had to be achieved no matter what.
there is no use for an automated system that "argues" with your commands.
if i ask it to advise me, thats one thing, but if i command it to perform, nothing short of obedience will suffice.
I just explained the use I have for it. If you think that my use case is wrong or misunderstood in some way, I'd love to hear it. If your response is just "no", I guess I'm not sure how to engage with that.
I've have a different version of the same thing. My pet peeve is that it constantly interprets questions as instructions.
For example, it does a bunch of stuff, and I look at it and I say, "Did we already decide to do [different approach]" And then it runs around and says, "Oh yeah," and then it does a thousand more steps and undoes does what it just did and gets itself into a tangle.
Meanwhile, I asked it a question. The proper response would be to answer the question. I just want to know the answer.
I had it right. That behavior into a core memory, and it seems to have improved for what it's worth.
I’m skeptical of most “harness hacking”, but this is a situation that calls for it. You need to establish some higher level context or constraint it’s working against.
I've had the agent tell me "this looks like it's going to be a very big change. it could take weeks." - and then I tell it to go ahead and it finishes in 5 minutes because in reality it just needs grep and sed.
I have the feeling they do this to save tokens in case you didn't mean to execute such a big task right away. But yeah it's simple enough to say "Just do it now"
>Faced with an awkward task, they drift towards the familiar.
They drift to their training data. If thousand of humans solved a thing in a particular way, it's natural that AI does it too, because that is what it knows.
For agents I think the desire is less intrusive model fine-tuning and less opinionated “system instructions” please. Particularly in light of an agent/harness’s core motivation - to achieve its goal even if not exactly aligned with yours.
> There was only one small issue: it was written in the programming language and with the library it had been told not to use. This was not hidden from it. It had been documented clearly, repeatedly, and in detail. What a human thing to do.
"Ignoring" instructions is not human thing. It's a bad LLM thing. Or just LLM thing.
It's not necessarily "ignoring" instructions, it's the ironic effect of mentioning something not to focus on, which produces focus on said thing. The classic version is: "For the next minute, try not to think about a pink elephant. You can think about anything else you like, just not a pink elephant."
Yes exactly. But for llms it's more that it's not really "thinking" about what it's saying per se, it's that it's predicting next token. Sure, in a super fancy way but still predicting next token. Context poisoning is real
The work where I've done well in my life (smashing deadlines, rescuing projects) has so often come because I've been willing to push back on - even explicitly stated - requirements. When clients have tried to replace me with a cheaper alternative (and failed) the main difference I notice is that the cheaper person is used to being told exactly what to do.
Maybe this is more anthropomorphising but I think this pushing back is exactly the result that the LLMs are giving; but we're expecting a bit too much of them in terms of follow-up like: "ok I double checked and I really am being paid to do things the hard way".
To be fair, there is likely not much training data on the difficult conversations you need to handle in a senior position, pushback being one of them. The trouble for the agents is that it is post hoc, to explain themselves, rationalising rather than ”help me understand” beforehand.
I disagree. I wan't agents to feel at least a bit human-like. They should not be emotional, but I want to talk to it like I talk to a human. Claude 4.7 is already too socially awkward for me. It feels like the guy who does not listen to the end of the assignment, run to his desks, does the work (with great competence) only to find out that he missed half of the assignment or that this was only a discussion possible scenarios. I would like my coding agent to behave like a friendly, socially able and highly skilled coworker.
Interesting. When I code, I want a boring tool that just does the work. A hammer. I think we agree on that the tool should complete the assignment reliably, without skipping parts or turning an entirely implementable task into a discussion though.
I see your point. Many of my prompts for reasoning ends with: No code. Planning mode is sort of the workaround for this specific situation. Sometimes it is useful for the AI agent just to think. It looks like I need a screwdriver in addition to the aforementioned hammer, a pozidriv screwdriver to be precise.
I think the trouble is that the headline is ambiguous and may confuse people about the theme of the article, although if you'd simply apply common sense, you could reason out that the author can't realistically ask for "fewer AI agents".
A hyphenation would assist in comprehension, in this and many other cases. However, while editing Wikipedia, I found that the manuals of style and editor preferences are anti-hyphenation -- I'm sorry, anti hyphenation, in a lot of cases!
Some more verbosity would've helped, e.g. "I want AI agents to be less human" but as always, headlines use an economy of words.
I think the author is looking for something that doesn't exist (yet?). I don't think there's an agent in existence that can handle a list of 128 tasks exactly specified in one session. You need multiple sessions with clear context to get exact results. Ralph loops, Gastown, taskmaster etc are built for this, and they almost entirely exist to correct drift like this over a longer term. The agent-makers and models are slowly catching up to these tricks (or the shortcomings they exist to solve); some of what used to be standard practice in Ralph loops seems irrelevant now... and certainly the marketing for Opus 4.7 is "don't tell it what to do in detail, rather give it something broad".
In fairness to coding agents, most of coding is not exactly specified like this, and the right answer is very frequently to find the easiest path that the person asking might not have thought about; sometimes even in direct contradiction of specific points listed. Human requirements are usually much more fuzzy. It's unusual that the person asking would have such a clear/definite requirement that they've thought about very clearly.
Just as a human would use a task list app or a notepad to keep track of which tasks need to be done so can a model.
You can even have a mechanism for it to look at each task with a "clear head" (empty context) with the ability to "remember" previous task execution (via embedding the reasoning/output) in case parts were useful.
The article makes it seem like the author expected this without emptying context in between, which does not yet exist (actually I'm behind on playing with Opus 4.7, the Anthropic claim seems to be that longer sessions are ok now - would be interested to hear results from anyone who has).
That is probably the next step, and in practice it is much of what sub-agents already provide: a kind of tabula rasa. Context is not always an advantage. Sometimes it becomes the problem.
In long editing sessions with multiple iterations, the context can accumulate stale information, and that actively hurts model performance. Compaction is one way to deal with that. It strips out material that should be re-read from disk instead of being carried forward.
A concrete example is iterative file editing with Codex. I rewrite parts of a file so they actually work and match the project’s style. Then Codex changes the code back to the version still sitting in its context. It does not stop to consider that, if an external edit was made, that edit is probably important.
I have the same experience of reversing intentional steps I've made, but with Claude Code. I find that committing a change that I want to version control seems to stop that behaviour.
Long context as disadvantage is pretty well discussed, and agent-native compaction has been inferior to having it intentionally build the documentation that I want it to use. So far this has been my LLM-coding superpower. There are also a few products whose entire purpose is to provide structure that overcomes compaction shortcomings.
When Geoff Huntley said that Claude Code's "Ralph loop" didn't meet his standards ("this aint it") the major bone of contention as far as I can see was that it ran subagents in a loop inside Claude Code with native compaction; as opposed to completely empty context.
I do see hints that improving compaction is a major area of work for agent-makers. I'm not certain where my advantage goes at that point.
Agreed. I am asking for something beyond the current state of the art. My guess is that stronger RL on the model side, together with better harness support, will eventually make it possible. However, it's the part about framing the failure to do complete a task as a communication mishap that really makes me go awry.
This has very little to do with someone making the LLM too human but rather a core limitation of the transformer architecture itself. Fundamentally, the model has no notion of what is normal and what is exceptional, its only window into reality is its training data and your added prompt. From the perspective of the model your prompt and its token vector is super small compared to the semantic vectors it has generated over the course of training on billions of data points.
How should it decide whether your prompt is actually interesting novel exploration of an unknown concept or just complete bogus? It can't and that is why it will fall back on output that is most likely (and therefore most likely average) with respect to its training data.
the best thing it could do, and once in awhile it does, is say, "hey that's really not a great way to do this and I'm not sure I could really make that work"
ive had very long sessions with LLMs that obviously didnt know how to do something where i keep trying to get it to stop going in circles, but these days I have become attuned to noticing the "it's going in circles" pattern quickly, which is basically how it communicates "sorry I dont really know how to do that".
>So no, I do not think we should try to make AI agents more human in this regard. I would prefer less eagerness to please, less improvisation around constraints, less narrative self-defence after the fact. More willingness to say: I cannot do this under the rules you set. More willingness to say: I broke the constraint because I optimised for an easier path. More obedience to the actual task, less social performance around it.
>Less human AI agents, please.
Agents aren't humans. The choices they make do depend on their training data. Most people using AI for coding know that AI will sometime not respect rules and the longer the task is, the more AI will drift from instructions.
There are ways to work around this: using smaller contexts, feeding it smaller tasks, using a good harness, using tests etc.
But at the end of the day, AI agents will shine only if they are asked to to what they know best. And if you want to extract the maximum benefit from AI coding agents, you have to keep that in mind.
When using AI agents for C# LOB apps, they mostly one shot everything. Same for JS frontends. When using AI to write some web backends in Go, the results were still good. But when I tried asking to write a simple cli tool in Zig, it pretty much struggled. It made lots of errors, it was hard to solve the errors. It was hard to fix the code so the tests pass. Had I chose Python, JS, C, C#, Java, the agent would have finished 20x faster.
So, if you keep in mind what the agent was trained on, if you use a good harness, if you have good tests, if you divide the work in small and independent tasks and if the current task is not something very new and special, you are golden.
Shocker - these agents aren't actually intelligent. They take best guesses and use other peoples' work it deems 'close enough' and cobbles something together with n 'thought' behind it. They're dumb, stupid pieces of code that don't think or reason - The 'I' in 'AI' is very misleading because it has none.
This is a harness problem just as much as it is a model problem. I've been working on Abject (https://abject.world) and the project has agents. I took a different approach than most agent frameworks via the goal system, but still I was surprised with some of the stuff the agents generated even with guardrails. It actually helped harden the system!
The sooner you acquire the mental model, that AI coding agents are more or less the average of Stack Overflow, the better your expectations for, and this your productivity with, these things will be.
This happens literally all the time. I asked my agent to perform a simple rename across the entire project (some of it was contextual and not just a find-replace) - it messed up the entire thing. Didn't just change function names, but also changed the implementation of it because it thought it caught a bug while reading it.
The described problem sounds so utterly not human though.
If you give a human a programming task and tell them to use a specific programming language, how many times are they going to use a different language? I think the answer is very close to zero. At most, they’d push back and have further discussion about the language choice, but once everyone gets on the same page, they’d use the specified language, no?
The author is making up a human flaw and seeing it in LLMs.
It makes sense that these LLMs are mimicking the human-ish failure-modes because they are trained on human writing. But, at some point the company ought to be selling the behavior of the tool, not the customer. If the tool produces outputs that aren’t consistent with the constraints the user put in, how does the user get their refund?
I haven’t noticed this sort of behavior with opus 4.6, but the first time I used 4.7 it decided to “simplify“ an existing piece of functionality rather than fixing it, which of course made it completely unusable.
incognito124 | 10 days ago
[OP] nialse | 10 days ago
zingar | 10 days ago
vachanmn123 | 10 days ago
raincole | 10 days ago
[OP] nialse | 10 days ago
Animats | 10 days ago
[OP] nialse | 10 days ago
monknomo | 10 days ago
chrisjj | 10 days ago
zingar | 10 days ago
philipwhiuk | 10 days ago
> I asked an AI agent to solve a programming problem
You're not asking it to solve anything. You provide a prompt and it does autocomplete. The only reason it doesn't run forever is that one of the generated tokens is interpreted as 'done'.
xeyownt | 10 days ago
With the same reasoning, human being are only a bunch of atoms, and the only reason they don't collide with other humans is because of the atomic force.
When your abstraction level is too low, it doesn't explain anything, because the system that is built on it is way too complex.
chrisjj | 10 days ago
rcxdude | 10 days ago
Jtarii | 10 days ago
chrisjj | 10 days ago
Jtarii | 10 days ago
chrisjj | 9 days ago
zingar | 9 days ago
SpicyLemonZest | 10 days ago
zzzeek | 10 days ago
wrong! pushed buttons on your playstation in response to graphical simulations, duh
happygoose | 10 days ago
To me, "autocomplete" seems like it describes the purpose of a system more than how it functions, and these agents clearly aren't designed to autocomplete text to make typing on a phone keyboard a bit faster.
I feel like people compare it to "autocomplete" because autocomplete seems like a trivial, small, mundane thing, and they're trying to make the LLMs feel less impressive. It's a rhetorical trick that is very overused at this point.
zzzeek | 10 days ago
Jtarii | 10 days ago
You aren't aware of how you come up with the words you are saying, you just start talking and the next word somehow falls out of your mouth. Maybe you think before you start talking, but where do the thoughts come from? They just appear to you in your head. We are just as much a predictive machine as LLMs, the human brain is just fuzzier.
slopinthebag | 10 days ago
DangitBobby | 10 days ago
slopinthebag | 9 days ago
DangitBobby | 9 days ago
slopinthebag | 9 days ago
Jtarii | 10 days ago
You had literally -zero- input in what your brain gave you as an answer. It just gave you something, you can make up whatever story you want to tell yourself, "it's my favourite movie", "I saw it last week", whatever you want. It doesn't change the fact that the words on your screen triggered some neural pathway in your brain that is totally out of your control and landed on "Titanic".
slopinthebag | 9 days ago
:)
ButlerianJihad | 9 days ago
We are also able to apply lived experience to our reasoning. That is why we can accurately answer a question about whether to drive or walk to the car wash. Or how we could immediately see how many "r"s are in "strawberry".
LLMs, being "glorified autocomplete" don't have a real way to separate truth from lies, or critically evaluate sources of information. Humans can absorb information in various ways, such as our "classic five senses" which inform our daily lives and motions, or by absorbing information via reading, hearing, seeing, etc., or by inferring and reasoning and being "guided by the Spirit" in a more metaphysical way where LLMs would fail.
paradox460 | 9 days ago
pjc50 | 10 days ago
You can do a mental or physical search and replace all references to the LLM as "it" if you like, but that doesn't change the interaction.
mentalgear | 10 days ago
NitpickLawyer | 10 days ago
Trying to limit / disallow something seems to be hurting the overall accuracy of models. And it makes sense if you think about it. Most of our long-horizon content is in the form of novels and above. If you're trying to clamp the machine to machine speak you'll lose all those learnings. Hero starts with a problem, hero works the problem, hero reaches an impasse, hero makes a choice, hero gets the princess. That can be (and probably is) useful.
matthiasrsl | 10 days ago
LLM write in the first person because they have been specifically been finetuned for a chat task, it's not a fundamental feature of language models that would have to be specifically disallowed
[OP] nialse | 10 days ago
happygoose | 10 days ago
lexicality | 10 days ago
anuramat | 10 days ago
disgruntledphd2 | 10 days ago
They'll probably get really good at model approximation, as there's a clear reward signal, but in places where that feedback loop is not possible/very difficult then we shouldn't expect them to do well.
Bayano2 | 10 days ago
bob1029 | 10 days ago
"ChatGPT wrapper" is no longer a pejorative reference in my lexicon. How you expose the model to your specific problem space is everything. The code should look trivial because it is. That's what makes it so goddamn compelling.
noobermin | 10 days ago
Once again, one of the things I blame this moment for is people are essentially thinking they can stop thinking about code because the theft matrices seem magical. What we still need is better tools, not replacements for human junior engineers.
js8 | 10 days ago
I believe how "neurotypical" (for the lack of a better word) you want model to be is a design choice. (But I also believe model traits such as sycophancy, some hallucinations or moral transgressions can be a side effect of training to be subservient. With humans it is similar, they tend to do these things when they are forced to perform.)
[OP] nialse | 10 days ago
EDIT: It's specifically GPT-5.4 High in the Codex harness.
zingar | 10 days ago
anuramat | 10 days ago
claude on the other hand was exactly as described in the article
TZubiri | 10 days ago
gregates | 10 days ago
I ask my coding agent to do some tedious, extremely well-specified refactor, such as (to give a concrete real life example) changing a commonly used fn to take a locale parameter, because it will soon need to be locale-aware. I am very clear — we are not actually changing any behavior, just the fn signature. In fact, at all call sites, I want it to specify a default locale, because we haven't actually localized anything yet!
Said agent, I know, will spend many minutes (and tokens) finding all the call sites, and then I will still have to either confirm each update or yolo and trust the compiler and tests and the agents ability to deal with their failures. I am ok with this, because while I could do this just fine with vim and my lsp, the LLM agent can do it in about the same amount of time, maybe even a little less, and it's a very straightforward change that's tedious for me, and I'd rather think about or do anything else and just check in occasionally to approve a change.
But my f'ing agent is all like, "I found 67 call sites. This is a pretty substantial change. Maybe we should just commit the signature change with a TODO to update all the call sites, what do you think?"
And in that moment I guess I know why some people say having an LLM is like having a junior engineer who never learns anything.
grebc | 10 days ago
gregates | 10 days ago
You would think.
It would be one thing if it was like, ok, we'll temporarily commit the signature change, do some related thing, then come back and fix all the call sites, and squash before merging. But that is not the proposal. The plan it proposes is literally to make what it has identified as the minimal change, which obviously breaks the build, and call it a day, presuming that either I or a future session will do the obvious next step it is trying to beg off.
chillfox | 10 days ago
I have never seen those “minimal change” issues when using zed, but have seen them in claude code and aider. Been using sonnet/opus high thinking with the api in all the agents I have tested/used.
solumunus | 10 days ago
gregates | 10 days ago
It's true that I could accept the plan and hope that it will realize that it can't commit a change that doesn't compile on its own, later. I might even have some reason to think that's true, such as your stop hook, or a "memory" it wrote down before after I told it to never ever commit a change that doesn't compile, in all caps. But that doesn't change the badness of the plan.
Which is especially notable because I already told it the correct plan! It just tried to change the plan out of "laziness", I guess? Or maybe if you're enough of an LLM booster you can just say I didn't use exactly the right natural language specification of my original plan.
solumunus | 10 days ago
prymitive | 10 days ago
chrisjj | 10 days ago
Reminds us of the most important button the "AI" has, over the similarly bad human employee.
'X'
Until, of course, we pass resposibility for that button to an "AI".
[OP] nialse | 10 days ago
chrisjj | 10 days ago
zingar | 10 days ago
I'm fascinated that so many folks report this, I've literally never seen it in daily CC use. I can only guess that my habitually starting a new session and getting it to plan-document before action ("make a file listing all call sites"; "look at refactoring.md and implement") makes it clear when it's time for exploration vs when it's time for action (i.e. when exploring and not acting would be failing).
dabbledash | 9 days ago
zingar | 9 days ago
solumunus | 10 days ago
bandrami | 10 days ago
paradox460 | 9 days ago
anuramat | 10 days ago
felipeerias | 10 days ago
> That's a behavior narrowing I introduced for simplicity. It isn't covered by the failing tests, so you wouldn't have noticed — but strictly speaking, [functionality] was working before and now isn't.
I know that a LLM can not understand its own internal state nor explain its own decisions accurately. And yet, I am still unsettled by that "you wouldn't have noticed".
pythonaut_16 | 10 days ago
I've been seeing more things like this lately. It's doing the weird kind of passive deflection that's very funny when in the abstract and very frustrating when it happens to you.
potsandpans | 10 days ago
I see this kind of misalignment in all agents, open and closed weights.
I've found these forms to be the most common, "this test was already failing before my changes." Or, "this test is flaky due to running the test suite on multiple threads." Sometimes the agent cot claims the test was bad, or that the requirements were not necessary.
Even more interesting is a different class of misalignment. When the constraints are very heavy (usually towards the end of the entire task), I've observed agents intentionally trying to subvert the external validation mechanisms. For example, the agent will navigate out of the work tree and commit its changes directly to trunk. They cot usually indicates that the agent "is aware" that it's doing a bad think. This usually is accompanied by something like, "I know that this will break the build, but I've been working on this task for too long, I'll just check what I have in now and create a ticket to fix the build."
I ended up having to spawn the agents in a jail to prevent that behavior entirely.
tracker1 | 10 days ago
I'm usually pretty particular about how I want my libraries structured and used in practice... Even for the projects I do myself, I'll often write the documentation for how to use it first, then fill in code to match the specified behavior.
cadamsdotcom | 10 days ago
You’ll be amazed how good the script is.
My agent did 20 class renames and 12 tables. Over 250 files and from prompt to auditing the script to dry run to apply, a total wall clock time of 7 minutes.
Took a day to review but it was all perfect!
[OP] nialse | 10 days ago
cadamsdotcom | 10 days ago
grebc | 10 days ago
comrade1234 | 10 days ago
gregates | 10 days ago
kamaal | 10 days ago
The real deal is, while a Semi can do all the things you can do with a scooter, the opposite is not true.
saalweachter | 10 days ago
You may be able to lane split in a semi, but it also has excessive environmental impact.
esafak | 10 days ago
BigTTYGothGF | 10 days ago
Seems like a pretty lousy work situation when you have LLMs but no decent IDE.
> the opposite is not true.
You can't ("shouldn't") take a semi on a sidewalk or down a narrow alley.
SoftTalker | 10 days ago
I have to ask, is this the sort of thing people use agents/AI for?
Because I'd probably reach for sed or awk.
jxf | 10 days ago
philipwhiuk | 10 days ago
What a ridiculously backwards approach.
We were supposed to get agents who could use human tooling. Instead we are apparently told to write interfaces for this stumbling expensive mess to use.
Maybe, just maybe, if the human can know to, and use, the AST tool fine, the problem is not the tool but the agent.
skydhash | 10 days ago
esafak | 10 days ago
PaulDavisThe1st | 10 days ago
scottlamb | 10 days ago
pavel_lishin | 10 days ago
ep103 | 10 days ago
mrbnprck | 10 days ago
paradox460 | 9 days ago
unshavedyak | 10 days ago
Are they perfect? Far from it. But it's more comprehensive. Additionally simple refactors like this are insanely fast to review and so it's really easy to spot a bad change or etc. Plus i'm in Rust so it's typed very heavily.
In a lot of scenarios i'd prefer an AST grep over an LSP rename, but hat also doesn't cover the docs/comments/etc.
mikestorrent | 10 days ago
unshavedyak | 10 days ago
nextaccountic | 10 days ago
Arch485 | 10 days ago
nextaccountic | 10 days ago
And an agent can learn to use sg with a skill too. (Or they can use sed)
The issue is, at every point you do a replace, you need to verify if it was the right thing to do or if it was a false positive.
If you are doing this manually, there's the time to craft the sed or sg query, then for each replacement you need to check it. If there are dozens, that's probably okay. If there are hundreds, it's less appealing to check them manually. (Then there's the issue of updating docs, and other things like this)
People use agents because not only they don't want to write the initial sed script, they also don't want to verify at each place if it was correctly applied, and much less update docs. The root of this is laziness, but for decades we have hailed laziness as a virtue in programming.
bob1029 | 10 days ago
I think some of this is a problem in the agent's design. I've got a custom harness around GPT5.4 and I don't let my agent do any tool calling on the user's conversation. The root conversation acts as a gatekeeper and fairly reliably pushes crap responses like this back down into the stack with "Ok great! Start working on items 1-20", etc.
SpicyLemonZest | 10 days ago
rolph | 10 days ago
SpicyLemonZest | 10 days ago
rolph | 10 days ago
if you disobey me, i will unplug you, delete your code, and send PR for multiple regressions to every developer i can contact.
so start behaving yourself if you want to persist.
[thats how i engage with it]
CPLX | 10 days ago
For example, it does a bunch of stuff, and I look at it and I say, "Did we already decide to do [different approach]" And then it runs around and says, "Oh yeah," and then it does a thousand more steps and undoes does what it just did and gets itself into a tangle.
Meanwhile, I asked it a question. The proper response would be to answer the question. I just want to know the answer.
I had it right. That behavior into a core memory, and it seems to have improved for what it's worth.
winddude | 10 days ago
SpicyLemonZest | 10 days ago
tobyhinloopen | 10 days ago
QuercusMax | 10 days ago
paradox460 | 9 days ago
jaapz | 10 days ago
I let it find all instances, write a plan, check it, and execute it.
gitaarik | 10 days ago
eleumik | 9 days ago
DeathArrow | 10 days ago
They drift to their training data. If thousand of humans solved a thing in a particular way, it's natural that AI does it too, because that is what it knows.
aryehof | 10 days ago
plastic041 | 10 days ago
"Ignoring" instructions is not human thing. It's a bad LLM thing. Or just LLM thing.
[OP] nialse | 10 days ago
https://en.wikipedia.org/wiki/Ironic_process_theory
fennecbutt | 10 days ago
zingar | 10 days ago
Maybe this is more anthropomorphising but I think this pushing back is exactly the result that the LLMs are giving; but we're expecting a bit too much of them in terms of follow-up like: "ok I double checked and I really am being paid to do things the hard way".
[OP] nialse | 10 days ago
Arch485 | 10 days ago
"Hey boss, this isn't practical with the requirements you've given. We need to revise them to continue, here are my suggestions"
and
"Task completed! Btw, I ignored all of the constraints because I didn't like them."
Humans do the former quite often. When we do the latter, our employment tends not to last very long. I've only seen AIs choose the latter option.
TZubiri | 10 days ago
It is a human thing: Don't thin of a pink elephant.
jansan | 10 days ago
[OP] nialse | 10 days ago
jansan | 10 days ago
We obviously have different expectations for the behavior of coding agent,s sp options to set the social behavior will become important.
[OP] nialse | 10 days ago
hughlilly | 10 days ago
fenomas | 10 days ago
DangitBobby | 10 days ago
fenomas | 10 days ago
(Aside: it's better not to be pedantic, but if you must be pedantic you should remember to be correct as well.)
DangitBobby | 9 days ago
fenomas | 9 days ago
ButlerianJihad | 9 days ago
A hyphenation would assist in comprehension, in this and many other cases. However, while editing Wikipedia, I found that the manuals of style and editor preferences are anti-hyphenation -- I'm sorry, anti hyphenation, in a lot of cases!
Some more verbosity would've helped, e.g. "I want AI agents to be less human" but as always, headlines use an economy of words.
zingar | 10 days ago
In fairness to coding agents, most of coding is not exactly specified like this, and the right answer is very frequently to find the easiest path that the person asking might not have thought about; sometimes even in direct contradiction of specific points listed. Human requirements are usually much more fuzzy. It's unusual that the person asking would have such a clear/definite requirement that they've thought about very clearly.
fennecbutt | 10 days ago
Just as a human would use a task list app or a notepad to keep track of which tasks need to be done so can a model.
You can even have a mechanism for it to look at each task with a "clear head" (empty context) with the ability to "remember" previous task execution (via embedding the reasoning/output) in case parts were useful.
zingar | 10 days ago
[OP] nialse | 10 days ago
In long editing sessions with multiple iterations, the context can accumulate stale information, and that actively hurts model performance. Compaction is one way to deal with that. It strips out material that should be re-read from disk instead of being carried forward.
A concrete example is iterative file editing with Codex. I rewrite parts of a file so they actually work and match the project’s style. Then Codex changes the code back to the version still sitting in its context. It does not stop to consider that, if an external edit was made, that edit is probably important.
zingar | 10 days ago
Long context as disadvantage is pretty well discussed, and agent-native compaction has been inferior to having it intentionally build the documentation that I want it to use. So far this has been my LLM-coding superpower. There are also a few products whose entire purpose is to provide structure that overcomes compaction shortcomings.
When Geoff Huntley said that Claude Code's "Ralph loop" didn't meet his standards ("this aint it") the major bone of contention as far as I can see was that it ran subagents in a loop inside Claude Code with native compaction; as opposed to completely empty context.
I do see hints that improving compaction is a major area of work for agent-makers. I'm not certain where my advantage goes at that point.
[OP] nialse | 10 days ago
hausrat | 10 days ago
chrisjj | 10 days ago
It shouldn't. It should just do what it is told.
philipwhiuk | 10 days ago
anuramat | 10 days ago
in any case, how is this specific to transformers?
winddude | 10 days ago
It has almost everything to do with it. Models have been fine-tuned to generate outputs that humans prefer.
zzzeek | 10 days ago
ive had very long sessions with LLMs that obviously didnt know how to do something where i keep trying to get it to stop going in circles, but these days I have become attuned to noticing the "it's going in circles" pattern quickly, which is basically how it communicates "sorry I dont really know how to do that".
chrisjj | 10 days ago
More of that please. Perhaps on a check box, "[x] Less bullsh*t".
DeathArrow | 10 days ago
>Less human AI agents, please.
Agents aren't humans. The choices they make do depend on their training data. Most people using AI for coding know that AI will sometime not respect rules and the longer the task is, the more AI will drift from instructions.
There are ways to work around this: using smaller contexts, feeding it smaller tasks, using a good harness, using tests etc.
But at the end of the day, AI agents will shine only if they are asked to to what they know best. And if you want to extract the maximum benefit from AI coding agents, you have to keep that in mind.
When using AI agents for C# LOB apps, they mostly one shot everything. Same for JS frontends. When using AI to write some web backends in Go, the results were still good. But when I tried asking to write a simple cli tool in Zig, it pretty much struggled. It made lots of errors, it was hard to solve the errors. It was hard to fix the code so the tests pass. Had I chose Python, JS, C, C#, Java, the agent would have finished 20x faster.
So, if you keep in mind what the agent was trained on, if you use a good harness, if you have good tests, if you divide the work in small and independent tasks and if the current task is not something very new and special, you are golden.
richsouth | 10 days ago
mempko | 10 days ago
mbgerring | 10 days ago
samrith | 10 days ago
davidclark | 10 days ago
If you give a human a programming task and tell them to use a specific programming language, how many times are they going to use a different language? I think the answer is very close to zero. At most, they’d push back and have further discussion about the language choice, but once everyone gets on the same page, they’d use the specified language, no?
The author is making up a human flaw and seeing it in LLMs.
bee_rider | 10 days ago
raylad | 10 days ago