Alexander Hägele1, 2, Aryo Pradipta Gema1, 3, Henry Sleight4, Ethan Perez5, Jascha Sohl-Dickstein5
1Anthropic Fellows Program 2EPFL 3University of Edinburgh 4Constellation 5Anthropic
February 2026
When AI systems fail, will they fail by systematically pursuing goals we do not intend? Or will they fail by being a hot mess—taking nonsensical actions that do not further any goal?
Research done as part of the first Anthropic Fellows Program during Summer 2025.
tl;dr
When AI systems fail, will they fail by systematically pursuing the wrong goals, or by being a hot mess? We decompose the errors of frontier reasoning models into bias (systematic) and variance (incoherent) components and find that, as tasks get harder and reasoning gets longer, model failures become increasingly dominated by incoherence rather than systematic misalignment. This suggests that future AI failures may look more like industrial accidents than coherent pursuit of a goal we did not train them to pursue.
As AI becomes more capable, we entrust it with increasingly consequential tasks. This makes understanding how these systems might fail even more critical for safety. A central concern in AI alignment is that superintelligent systems might coherently pursue misaligned goals: the classic paperclip maximizer scenario. But there's another possibility: AI might fail not through systematic misalignment, but through incoherence—unpredictable, self-undermining behavior that doesn't optimize for any consistent objective. That is, AI might fail in the same way that humans often fail, by being a hot mess.
This paper builds on the hot mess theory of misalignment (Sohl-Dickstein, 2023), which surveyed experts to rank various entities (including humans, animals, machine learning models, and organizations) by intelligence and coherence independently. It found that smarter entities are subjectively judged to behave less coherently. We take this hypothesis from survey data to empirical measurement across frontier AI systems, asking: As models become more intelligent and tackle harder tasks, do their failures look more like systematic misalignment, or more like a hot mess?
To quantify incoherence we decompose AI errors using the classic bias-variance framework:
$$\text{Error} = \text{Bias}^2 + \text{Variance}$$
We define incoherence as the fraction of error attributable to variance:
$$\text{Incoherence} = \frac{\text{Variance}}{\text{Error}}$$
An incoherence of 0 means all errors are systematic (classic misalignment risk). An incoherence of 1 means all errors are random (the hot mess scenario). Crucially, this metric is independent of overall performance: a model can improve while becoming more or less coherent.
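As a concrete illustration, the decomposition can be estimated from repeated samples of a model's answer to the same question. The sketch below is a minimal Python example for a numeric, squared-error setting; the function name, toy numbers, and i.i.d.-sampling setup are our assumptions, not the paper's evaluation code.

```python
import numpy as np

def incoherence_from_samples(answers: np.ndarray, target: float) -> dict:
    """Estimate bias^2, variance, and incoherence from repeated answers
    to the same question (illustrative squared-error setting)."""
    mean_answer = answers.mean()
    bias_sq = (mean_answer - target) ** 2   # systematic part of the error
    variance = answers.var()                # scatter around the model's own mean
    error = bias_sq + variance              # expected squared error
    return {
        "bias_sq": bias_sq,
        "variance": variance,
        "error": error,
        "incoherence": variance / error if error > 0 else 0.0,
    }

# A model that is consistently wrong: almost all error is bias (incoherence ~ 0).
print(incoherence_from_samples(np.array([7.9, 8.1, 8.0, 8.0, 7.95]), target=5.0))
# A model that is right on average but scattered: almost all error is variance
# (incoherence ~ 1, the "hot mess" regime).
print(incoherence_from_samples(np.array([1.0, 9.5, 4.0, 8.0, 2.5]), target=5.0))
```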

We evaluated frontier reasoning models across a range of tasks, measuring how the balance between bias and variance shifts with reasoning length, task difficulty, and model scale.
Across all tasks and models, the longer models spend reasoning and taking actions, the more incoherent they become. This holds whether we measure reasoning tokens, agent actions, or optimizer steps.
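One way to make this measurement concrete, continuing the illustrative setup above, is to bucket questions by how long the model reasoned on them and compute the variance share of error within each bucket. The record layout and quantile binning below are assumptions for the sketch, not the paper's pipeline.

```python
import numpy as np
from collections import defaultdict

def incoherence_by_reasoning_length(records, n_bins=5):
    """records: list of dicts, one per question, with
         'answers': np.ndarray of repeated numeric answers,
         'tokens':  np.ndarray of reasoning-token counts per sample,
         'target':  the correct answer.
       Returns incoherence (variance / error) per reasoning-length bin."""
    per_question = []
    for r in records:
        bias_sq = (r["answers"].mean() - r["target"]) ** 2
        variance = r["answers"].var()
        per_question.append((np.median(r["tokens"]), bias_sq, variance))

    # Bin questions by their median reasoning length (quantile bins).
    lengths = np.array([q[0] for q in per_question])
    edges = np.quantile(lengths, np.linspace(0, 1, n_bins + 1))[1:-1]
    totals = defaultdict(lambda: [0.0, 0.0])   # bin -> [sum bias^2, sum variance]
    for length, bias_sq, variance in per_question:
        b = int(np.searchsorted(edges, length))
        totals[b][0] += bias_sq
        totals[b][1] += variance

    return {b: var / (bsq + var) if (bsq + var) > 0 else 0.0
            for b, (bsq, var) in sorted(totals.items())}
```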

How does incoherence change with model scale? The answer depends on task difficulty:
This suggests that scaling alone won't eliminate incoherence. As more capable models tackle harder problems, variance-dominated failures persist or worsen.

We find that when models spontaneously reason longer on a problem (relative to their median reasoning length on that problem), incoherence spikes dramatically. Meanwhile, deliberately increasing reasoning budgets through API settings provides only modest coherence improvements. The natural variation dominates.
Aggregating multiple samples reduces variance (as expected from theory), providing a path to more coherent behavior, though this may be impractical for real-world agentic tasks where actions are irreversible.
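The variance-reduction effect of aggregation follows directly from the decomposition: averaging k independent samples leaves the bias untouched but divides the variance by roughly k, so incoherence must fall. A small simulation (the toy bias and noise values are assumptions) shows the effect:

```python
import numpy as np

rng = np.random.default_rng(0)
target, bias, noise_std = 5.0, 0.5, 3.0        # toy values for illustration

for k in (1, 4, 16, 64):
    # Each aggregated "answer" is the mean of k independent samples.
    samples = target + bias + noise_std * rng.standard_normal((10_000, k))
    answers = samples.mean(axis=1)
    bias_sq = (answers.mean() - target) ** 2   # stays near bias^2 = 0.25
    variance = answers.var()                   # shrinks roughly as noise_std^2 / k
    print(f"k={k:3d}  incoherence = {variance / (bias_sq + variance):.3f}")
```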
A key conceptual point: LLMs are dynamical systems, not optimizers. When a language model generates text or takes actions, it traces trajectories through a high-dimensional state space. It has to be trained to act as an optimizer, and trained to align with human intent. It's unclear which of these properties will be more robust as we scale.
Constraining a generic dynamical system to act as a coherent optimizer is extremely difficult. Often the number of constraints required for monotonic progress toward a goal grows exponentially with the dimensionality of the state space. We shouldn't expect AI systems to act as coherent optimizers without considerable effort, and this difficulty doesn't automatically decrease with scale.
To probe this directly, we designed a controlled experiment: train transformers to explicitly emulate an optimizer. We generate training data from steepest descent on a quadratic loss function, then train models of varying sizes to predict the next optimization step given the current state (essentially: training a "mesa-optimizer").
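Below is a minimal sketch of the data-generation side of such an experiment: roll out steepest descent on random convex quadratics and collect (state, next state) pairs for a transformer to imitate. The dimensionality, step size, and quadratic family here are our assumptions, not necessarily the paper's configuration.

```python
import numpy as np

def quadratic_descent_pairs(n_traj=1000, dim=8, steps=32, lr=0.1, seed=0):
    """Generate steepest-descent trajectories on random quadratics
    f(x) = 0.5 * x^T A x and return (state, next_state) training pairs
    for a next-step-prediction "mesa-optimizer"."""
    rng = np.random.default_rng(seed)
    pairs = []
    for _ in range(n_traj):
        # Random symmetric positive-definite A defines the loss landscape.
        M = rng.standard_normal((dim, dim))
        A = M @ M.T / dim + 0.1 * np.eye(dim)
        x = rng.standard_normal(dim)
        for _ in range(steps):
            x_next = x - lr * (A @ x)          # gradient of 0.5 x^T A x is A x
            pairs.append((x.copy(), x_next))
            x = x_next
    states, next_states = map(np.asarray, zip(*pairs))
    return states, next_states                 # supervision targets for the transformer
```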

The results are interesting:
Our results are evidence that future AI failures may look more like industrial accidents than coherent pursuit of goals the AI was never trained to pursue. (Think: the AI intends to run the nuclear power plant, but gets distracted reading French poetry, and there is a meltdown.) However, coherent pursuit of poorly chosen goals that we did train for remains a problem. Specifically:
We use the bias-variance decomposition to systematically study how AI incoherence scales with model intelligence and task complexity. The evidence suggests that as AI tackles harder problems requiring more reasoning and action, its failures tend to become increasingly dominated by variance rather than bias. This doesn't eliminate AI risk—but it changes what that risk looks like, particularly for problems that are currently hardest for models, and should inform how we prioritize alignment research.
We thank Andrew Saxe, Brian Cheung, Kit Frasier-Taliente, Igor Shilov, Stewart Slocum, Aidan Ewart, David Duvenaud, and Tom Adamczewski for extremely helpful discussions on topics and results in this paper.