Excessive token usage in Claude Code

59 points by behnamoh 20 hours ago on Hacker News | 21 comments

selridge | 18 hours ago

This is just a GitHub issue with a vague complaint.

Vaslo | 18 hours ago

A lot of comments and evidence - where there’s smoke…

selridge | 18 hours ago

...there's...evidence of excessive token usage by Claude Code.

Yeah.

What's the 'news'?

stingraycharles | 18 hours ago

It’s silly, it’s a six-week-old issue, and there aren’t even any actual facts in there, just screenshots.

I’ll believe it when I see actual facts, e.g. actual token counts (which is relatively easy to capture if you use mitmproxy or something like that).
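If you do go the mitmproxy route, the Anthropic Messages API already reports exact counts in each response's `usage` object, so tallying captured traffic is only a few lines. A minimal sketch, assuming you've dumped response bodies as JSON strings (the `usage` field names are from the real API; the capture list here is made up for illustration):

```python
import json

def total_tokens(response_bodies):
    """Sum input/output token counts from captured Anthropic API
    response bodies, each a JSON string with a top-level "usage" object."""
    totals = {"input_tokens": 0, "output_tokens": 0}
    for body in response_bodies:
        usage = json.loads(body).get("usage", {})
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals

# Hypothetical captured responses with made-up numbers:
captured = [
    '{"usage": {"input_tokens": 11200, "output_tokens": 350}}',
    '{"usage": {"input_tokens": 9800, "output_tokens": 410}}',
]
print(total_tokens(captured))  # {'input_tokens': 21000, 'output_tokens': 760}
```

That would turn "my quota drains faster" into a concrete per-request number you could actually compare across days.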

For all I know this guy has a 5000 line CLAUDE.md

SoftTalker | 18 hours ago

I would have submitted a better report but I ran out of tokens.

hinkley | 17 hours ago

Wow. Six weeks is old news and non-issues now?

I should update my notes.

[OP] behnamoh | 17 hours ago

What kind of evidence would you expect? It's not like everyone has a Claude Code monitor with detailed logs.

selridge | 7 hours ago

Why is it on hacker news?

That’s the implied question. It’s an issue on a piece of software. There are more than 10,000 of them.

Why is it news?

sparin9 | 18 hours ago

The recent VS Code extension update has noticeably degraded the experience.

The agent now becomes unresponsive at times and needs a reload, which really breaks flow. More frustratingly, the context limit seems to fill up much faster on the same project that was working fine just days ago. Nothing major changed on my side, so this feels like a backend or token allocation shift.

visarga | 16 hours ago

The agent has a bug where it quotes from open files all the time, consuming tokens.

thurn | 17 hours ago

There's a real hobbyist vs professional distinction with Claude Code. For professionals, including when I use it at work, we're generally super happy to have Claude spawn as many subagents as possible and burn more tokens to get a better result. Hobbyist users on a $20/month plan, though, generally want more conservative behavior.

It's hard for Anthropic to cater to both sets of users with one model.

[OP] behnamoh | 17 hours ago

I don't think that's what this issue is talking about. I have the Max $200/mo plan and have noticed starting yesterday that my quota drains much much faster, to the point I'm about to use the $50 credit Anthropic gave away to everyone.

xwowsersx | 17 hours ago

True enough. But to be clear, that's a separate issue from what users are reporting here.

Both hobbyists and professionals are understandably frustrated that tokens are being consumed quickly without justification, or at least in ways that seem entirely avoidable.

pdntspa | 16 hours ago

I have MAX and have been using Opus 4.6 heavily for my day job which is 100% agentic programming, and my usage numbers have not changed meaningfully since Opus 4.6 came out

nurettin | 16 hours ago

Same here. Both the $20 and $100 plans finished fast. Never hit a limit after dishing out the $200. Explore() sometimes prints 90k token usage, which scares me, but so far it is consequence-free.

pdntspa | 5 hours ago

At 8 hours a day 5 days a week I never hit the limit on my $100 MAX plan. People must be running crazy autonomous workflows or something because I'm nowhere near hitting my limit, ever.

Each time I've dug into this for someone, it's because they're filling up their context window with a bunch of tokens before any real work even starts.

Highly encourage people having issues to do /context and start removing heavy things. It's usually some sprawling MCPs they rarely use, or huge CLAUDE.md files they generated or cargo-culted from someone else.

I'm not suggesting these are the only ways to hit the limits, it's just (so far) almost always the answer when someone hits the limits doing something that I wouldn't expect to be problematic.
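On the CLAUDE.md side, you can sanity-check the damage before even opening Claude: a common rough rule of thumb for English text is ~4 characters per token. A hedged sketch (the 4-chars-per-token ratio and the 200k window size are approximations, not real tokenizer output):

```python
from pathlib import Path

def estimate_context_share(path, chars_per_token=4, window=200_000):
    """Rough estimate of how much of the context window a file eats.
    chars_per_token=4 is an approximation for English prose; the real
    tokenizer will differ. Returns (approx_tokens, percent_of_window)."""
    text = Path(path).read_text(encoding="utf-8")
    tokens = len(text) / chars_per_token
    return round(tokens), 100 * tokens / window

# Example: a 5000-line CLAUDE.md at ~60 chars/line is ~300k chars,
# i.e. roughly 75k tokens -- over a third of a 200k window gone
# before any actual work starts.
```

`/context` gives you the real numbers, but this kind of back-of-the-envelope check makes it obvious when a cargo-culted CLAUDE.md is the culprit.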

nickvec | 16 hours ago

I find Sonnet 4.6 usage to be pretty reasonable with the 5x Max plan.

rognjen | 16 hours ago

This issue has the same vibes as the World of Warcraft forum on patch day.

bastard_op | 15 hours ago

My quality of usage with Claude has degraded so heavily since the last week of December that I've stopped using Claude entirely now and mostly found Codex a suitable replacement that goes much further. I was maxing out usage on Claude Max 5x in absurd ways once I started using MCP features heavily, and even when not, found myself constantly hitting limits through January.

The final nail was them offering a $50 credit toward overage use that maxed out within half an hour of my enabling it. It's become almost predatory now, and I have no way to quantify the actual usage I'm getting from it other than that it burns at an alarming rate.

Since stopping, I ultimately landed on Codex, where for the same period of heavy use I'm easily getting 4x less quota usage than with Claude. I keep Claude as a backup now in case Codex gets stuck on something, but I'm annoyed enough to stop paying altogether.

Update Claude, turn on all of the MCPs you've been using, start a new Claude session from scratch in an empty folder.

Run /context.

Observe that your MCPs are killing a sizable chunk of the usable context window.

The utility here is that it'll break down exactly which MCPs are consuming how many tokens, just in tool descriptions alone. Then you can decide whether that's worth it to you, even if you continue using Codex or OpenCode, etc.