> Update - We're currently investigating issues with Claude Code and Claude.ai. Some users may be unable to log in, and others may experience slower than usual performance. The Claude API is not affected.
The issues have been described as login/logout problems, but I'm not sure that's all that's happening. In today's outage and the last one, the API stopped responding and my session was kicked out.
I only mention this in case someone from Anthropic perhaps isn't aware that it seems to be a wider issue than login/logout (although I'm sure they are!)
Experienced the same... it logged me out of Claude Code a few minutes ago. And when I log in, it makes me wait >15000ms for the auth (which exceeds their cutoff time), so auth fails!
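The 15000ms cutoff described above is a client-side timeout; a generic way to cope with a temporarily slow auth endpoint is to retry with a growing timeout and backoff between attempts. A minimal sketch — the `fetch` interface, attempt count, and timings are illustrative assumptions, not Claude Code's actual internals:

```python
import time

def fetch_with_backoff(fetch, attempts=4, base_timeout=15.0, sleep=time.sleep):
    """Retry a slow call, doubling the timeout each attempt.

    `fetch` is any callable taking a `timeout` keyword and raising
    TimeoutError when the deadline is exceeded (hypothetical interface).
    """
    timeout = base_timeout
    for attempt in range(attempts):
        try:
            return fetch(timeout=timeout)
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the failure
            sleep(2 ** attempt)  # 1s, 2s, 4s between tries
            timeout *= 2         # give the slow server more headroom
```

A fixed 15s hard cutoff with an immediate retry prompt, as in the error above, pushes this loop onto the user instead.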
I feel ashamed writing code while Claude does it way better than me... I've found my forte is designing systems and making project-level decisions, something that Claude still doesn't do very well.
Not sure about LocalLlama, but have you tried LM Studio? If you use Zed, it will automatically pick up whatever model you enable in LM Studio. I keep meaning to write a blog post about this for people unaware that you can pair the two pretty easily on a Mac. I mostly use CC but like to test offline models now and then to see how far they've come.
I use Zed with its Claude Code integration, and if I want to use any other LLM I use LM Studio, which is nice on the Mac and hosts an API for whatever it runs; Zed knows which models are available, which is a plus.
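The pairing works because LM Studio's local server speaks an OpenAI-compatible API (by default on port 1234), so clients like Zed can discover loaded models from the standard `/v1/models` endpoint. A small sketch of that discovery call — the port is LM Studio's default but an assumption about your setup, and it presumes the server is running with a model loaded:

```python
import json
import urllib.request

# LM Studio's default local server address; check its server settings
# if you changed the port (assumption: server running, model loaded).
BASE_URL = "http://localhost:1234/v1"

def model_ids(payload: dict) -> list:
    """Pull model ids out of an OpenAI-style /v1/models response."""
    return [m["id"] for m in payload.get("data", [])]

def list_models(base_url: str = BASE_URL) -> list:
    """Ask the local server which models it serves (what Zed 'sees')."""
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        return model_ids(json.load(resp))
```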
I use Big-AGI [1] as a self-hosted open-source LLM workspace, and it's quite telling that when adding API keys for Anthropic, it presents a note reading "Experiencing issues? Check Anthropic status" that it doesn't show for any other model provider.
Slightly OT, but I've been using OpenAI's GPT 5.4 in Codex and so far I'm finding it more convincing than Claude with Opus 4.6 at maximum thinking for my use cases.
I'm more interested in it helping with design and architecture than in having it author tons of code.
Keep in mind that OpenAI has a way more generous $20 tier than Anthropic's, and I think you can even use Codex for free with the latest models, so give it a shot; you may find it better than you expected, and a solid backup to Claude.
I agree it seems better at complex work. However, I find that it often tries to make ALL work complex. I had a simple bug fix where I knew exactly what the 1-2 line fix was. GPT 5.4 added like 200 LOC and started refactoring the entire function of the app. Was the refactor possibly an improvement? Maybe, but I needed the fix quick so I stopped it and switched to Claude, which did exactly what I was expecting.
Are they going to extend my subscription time as a result? It ends today, but I was locked out an hour or so ago, and I'm not sure if that was actually due to this outage.
All the vibe coding is clearly not working out too well.
I don't know about down but I use the VS Code extension on a Pro plan (that I'm considering upgrading from) and it's been slower than molasses flowing uphill in winter for me this afternoon. I'm (a) feeling unwell, and (b) up against a deadline, so this is starting to damage my calm.
Can someone that's worked at one of these big companies honestly explain how it happens that when these guys are down, it's never for like 10-15 mins ... it's always 1-2+ hours? Do they not have mechanisms in place to revert their migrations and deployments? What goes on behind the scenes during these "outages"?
Quick fixes have a tendency to break other things and make matters worse. Better to leave it offline a little longer, fix the root issue definitively, and make sure it comes back online cleanly. If the issue were just a quirk in a recent deployment, it could probably be reverted easily on the endpoints where it had just been deployed (I'm sure they use staggered rollouts). These long-downtime incidents are probably not caused by a recent release.
Part of it is observability bias: longer, more widespread outages are more likely to draw significant attention. This doesn't mean there aren't also shorter, smaller-scope outages; it's just that we're much less likely to know about them.
For example, if there's a problem that gets caught at the 1% stage of a staged rollout, we're probably not going to find ourselves discussing it on HN.
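Staged rollouts like the 1% stage mentioned above are commonly implemented by hashing a stable user id into a bucket, so a user's assignment is deterministic across sessions. A minimal sketch of that gating — the bucketing scheme is a generic illustration, not any particular vendor's implementation:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically place a user in the first `percent` of 10,000 buckets.

    Hashing feature+user keeps assignment stable across sessions and
    independent between features.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10_000
    return bucket < percent * 100  # percent=1.0 -> buckets 0..99, i.e. 1%
```

If the 1% cohort reports errors, the flag is flipped off and almost nobody outside that cohort ever notices.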
You will run into thundering-herd, hotspotting, and cold-cache issues when you have to restart. There's generally not an easy way to switch these sorts of systems on and off, especially a relatively new system that isn't battle-hardened.
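The thundering-herd problem is why reconnect logic usually adds random jitter, so huge numbers of clients don't retry in lockstep the moment a service comes back. A minimal sketch of "full jitter" exponential backoff — the base and cap values are illustrative:

```python
import random

def retry_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Full-jitter backoff: uniform in [0, min(cap, base * 2**attempt)].

    Spreading retries across the whole window prevents synchronized
    reconnect spikes after a restart.
    """
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

Without the jitter, every client that failed at the same moment retries at the same moment, re-creating the spike that took the service down.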
I got nothing for the github outages this year though, that seems like incompetence.
because if it is, god, I'm done with this industry. Just done. I'd rather sell my toenail clippings for scraps of food than deal with this shitty insanity.
rethab | 5 hours ago
Official status is still green: https://status.claude.com/
But Downdetector clearly shows a problem: https://downdetector.com/status/claude-ai/
Edit: there's an official incident now: https://status.claude.com/incidents/jm3b4jjy2jrt
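Status sites hosted on Atlassian Statuspage (which the `/incidents/<id>` URL format above suggests this is) conventionally expose a machine-readable summary at `/api/v2/status.json`, which is handier than refreshing the page. A small sketch — treat the exact URL for status.claude.com as an assumption:

```python
import json
import urllib.request

# Statuspage convention; assumes status.claude.com is a standard instance.
STATUS_URL = "https://status.claude.com/api/v2/status.json"

def indicator(payload: dict) -> str:
    """Overall indicator from a Statuspage summary: 'none', 'minor', 'major', or 'critical'."""
    return payload["status"]["indicator"]

def fetch_indicator(url: str = STATUS_URL) -> str:
    """Fetch the status JSON and return the current overall indicator."""
    with urllib.request.urlopen(url) as resp:
        return indicator(json.load(resp))
```

Note the gap people are complaining about: the page can still report "none" (all green) while users are already seeing failures.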
erksa | 3 hours ago
They are dogfooding their own tools and causing so much downtime, all in the spirit of "staying ahead".
behnamoh | 4 hours ago
claudown
throwaway89201 | 3 hours ago
[1] https://github.com/enricoros/big-AGI (no affiliation)
Dead_Last | 3 hours ago
/login
OAuth error: timeout of 15000ms exceeded
Press Enter to retry.
mrguyorama | 12 minutes ago
They should probably buy subscriptions to those Chinese agents.
iamsaitam | 2 hours ago
Good times...