Yes, for personal projects I just self-host an instance of forgejo with dokploy. Everything else I deploy on codeberg, which is also an instance of forgejo.
Of course they're down while I'm trying to address a "High severity" security bug in Caddy but all I'm getting is a unicorn when loading the report.
(Actually there are three I'm currently working on, but two are patched already; still closing the feedback loop, though.)
I have a 2-hour window right now that is toddler free. I'm worried that the outage will delay the feedback loop with the reporter(s) into tomorrow and ultimately delay the patches.
I can't complain though -- GitHub sustains most of my livelihood so I can provide for my family through its Sponsors program, and I'm not a paying customer. (And yet, paying would not prevent the outage.) Overall I'm very grateful for GitHub.
Have you considered moving, or at least having an alternative? Asking as someone who uses Caddy for personal hosting and likes to keep their website secure. :)
> have you considered moving or having at least an alternative
Not who you're responding to, but my 2 cents: for a popular open-source project reliant on community contributions there is really no alternative. It's similar to social media - we all know it's trash and noxious, but if you're any kind of public figure you have to be there.
We can of course host our code elsewhere, the problem is the community is kind of locked-in. It would be very "expensive" to move, and would have to be very worthwhile. So far the math doesn't support that kind of change.
Usually an outage is not a big deal, I can still work locally. Today I just happen to be in a very GH-centric workflow with the security reports and such.
I'm curious how other maintainers maintain productivity during GH outages.
As for an alternative, I was thinking mainly of a secondary repo and CI in case GitHub stops being reliable, not just during the current instability but as a provider overall. I'm from the EU and recently catch myself evaluating every US company I interact with, and I'm starting to realize that reliability might not be the only risk vector to consider. Wondering how other people think about it.
If you'd asked me a few years ago whether anything could be an existential threat to GitHub's dominance in the tech community, I'd have quickly said no.
If they don't get their ops house in order, this will go down as an all-time own goal in our industry.
That's probably partly why things have got increasingly flaky - until they finish there'll be constant background cognitive load and surface area for bugs from the fact everything (especially the data) is half-migrated
You'd think so, and we don't know about today's incident yet, but recent Github incidents have been attributed specifically to Azure, and Azure itself has had a lot of downtime recently that lasts for many hours.
I'm sympathetic to ops issues, and particularly sympathetic to ops issues that are caused by brain-dead corporate mandates, but you don't get to be an infrastructure company and have this uptime record.
It's extra galling that they advertise all the new buzzword laden AI pipeline features while the regular website and actions fail constantly. Academically I know that it's not the same people building those as fixing bugs and running infra, but the leadership is just clearly failing to properly steer the ship here.
Lots of teams embraced actions to run their CI/CD, and GitHub reviews as part of their merge process. And copilot. Basically their SOC2 (or whatever) says they have to use GitHub.
Does SOC2 itself require that, or just yours? I'm not too familiar with SOC2, but I know ISO 27001 quite well, and there are no PR-specific "requirements" to speak of. But it is something that could be included in your secure development policy.
And it's pretty common to write into the policy, because it's pretty much a gimme, and it lets you avoid writing a whole bunch of other equivalent quality measures into the policy.
Every product vendor, especially those that are even within shouting distance of security, has a wet dream: to have their product explicitly named in corporate policies.
Are you kidding? I need my code to pass CI and get reviewed so I can move on; otherwise the PRs just keep piling up. You might as well say the lights could go out, you can do paperwork.
Many teams work exclusively in GitHub (ticketing, boards, workflows, dev builds). People also have entire production build systems on GitHub. There's a lot more than git repo hosting.
It's especially painful for anyone who uses Github actions for CI/CD - maybe the release you just cut never actually got deployed to prod because their internal trigger didn't fire... you need to watch it like a hawk.
Any module that is properly tagged and contains an OSS license gets stored in Google's module cache indefinitely. As long as it was go-get-ed once before, you can pull it again without going to GitHub (or any other VCS host).
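If you want to lean on that deliberately, a minimal sketch (proxy.golang.org is Go's public default proxy; dropping the `,direct` fallback is the part that keeps a VCS outage from mattering):

```sh
# Resolve modules only through the public Go module proxy. The default
# GOPROXY is "https://proxy.golang.org,direct"; removing "direct" means
# a GitHub outage can't block modules the proxy has already cached.
export GOPROXY=https://proxy.golang.org
go mod download   # run inside your module; everything comes from the proxy
```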
I think this is being downvoted unfairly. I mean, sure, as a company accepting payment for services, being down for a few hours every few months is notably bad by modern standards.
But the inward-looking point is correct: git itself is a distributed technology, and development using it is distributed and almost always latency-tolerant. To the extent that github's customers have processes that are dependent on services like bug tracking and reporting and CI to keep their teams productive, that's a bug with the customer's processes. It doesn't have to be that way and we as a community can recognize that even if the service provider kinda sucks.
There are still some processes that require a waterfall method for development, though. One example would be if you have a designer, and also have a front-end developer that is waiting for a feature to be complete to come in and start their development. I know on HN it's common for people to be full-stack developers, or for front-end developers to be able to work with a mockup and write the code before a designer gets involved, but there are plenty of companies that don't work that way. Even if a company is working in an agile manner, there still may come a time where work stalls until some part of a system is finished by another team/team-member, especially in a monorepo. Of course they could change the organization of their project, but the time suck of doing that (like going with microservices) is probably going to waste quite a bit more time than how often GitHub is down.
> There are still some processes that require a waterfall method for development, though
Not on the 2-4 hour latency scale of a GitHub outage though. I mean, sure, if you have a process that requires the engineering talent to work completely independently on day-plus timescales and/or do all their coordination offline, then you're going to have a ton of trouble staffing[1] that team.
But if your folks can't handle talking with the designers over chat or whatnot to backfill the loss of the issue tracker for an afternoon, then that's on you.
[1] It can obviously be done! But it's isomorphic to "put together a Linux-style development culture", very non-trivial.
Being snapshot-based, Git has some issues being distributed in practice, since patch order matters. That means with more than two people you basically need some centralized authoritative server to resolve the order of patches for meaningful use, because the commit hash is used in so many contexts.
That's... literally what a merge collision is. The tooling for that predates git by decades. The solutions are all varying levels of non-trivial and involve tradeoffs, but none of them require 24/7 cloud service availability.
I'm a firm believer that almost nothing except public services needs that kind of uptime...
We've introduced ridiculous amounts of complexity to our infra to achieve this, and we've contributed to the increasing costs of both services and development itself (the barrier to entry for current juniors is insane compared to what I had to deal with in my early 20s).
All kinds of companies lose millions of dollars of revenue per day, if not per hour, if their sites are not stable: Apple, Amazon, Google, Shopify, Uber, etc.
Those companies have decided the extra complexity is worth the reliability.
Even if you're operating a tech company that doesn't need to have that kind of uptime, your developers probably need those services to be productive, and you don't want them just sitting there either.
By public services I mean only important things like healthcare, law enforcement, fire department. Definitely not stores and food delivery. You can wait an hour or even a couple of hours for that.
> Those companies have decided the extra complexity is worth the reliability.
Companies always want more money and yes it makes sense economically. I'm not disagreeing with that. I'm just saying that nobody needs this.
I grew up in a world where this wasn't a thing and no, life wasn't worse at all.
Eh, if I'm paying someone to host my git webui, and they are as shitty about it as github has been recently, I'd rather pay someone else to host it or go back to hosting it myself. It is not absolutely required, but it's a differentiating feature I'm happy to pay for
I'm pretty sure they don't GAF about GH uptime as long as they can keep training models on it (0.5 /s), but Azure is revenue friction so might be a real problem.
I viscerally dislike GitHub so much at this point. I don't know how they come back from this. Major opportunity for a competitor to come around with AI-native features like context versioning.
Yeah, I'm literally looking at GitLab's "Migrate from GitHub" page on their docs site right now. If there's a way to import issues and projects I could be sold.
Maybe it'd be reasonable to script it using the glab and gh CLIs? I've never tried anything like that, but I regularly use the glab CLI and it's pretty comprehensive.
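Something in this direction might work as a starting point. An untested sketch, with OWNER/REPO and GROUP/PROJECT as placeholders; it assumes jq is installed, and ignores labels, comments, and assignees entirely:

```sh
# Copy open GitHub issues into a GitLab project using the official CLIs.
gh issue list --repo OWNER/REPO --state open --limit 500 --json title,body \
  | jq -c '.[]' \
  | while read -r issue; do
      glab issue create --repo GROUP/PROJECT \
        --title "$(jq -r .title <<<"$issue")" \
        --description "$(jq -r .body <<<"$issue")"
    done
```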
This is Microsoft. They forced a move to Azure, and then prioritized AI workloads higher. I'm sure the training read workloads on GH are nontrivial.
They literally have the golden goose, the training stream of all software development, dependencies, trending tool usage.
In an age of model providers trying to train their models and keep them current, the value of GitHub should easily be in the high tens of billions or more. The CEO of Microsoft should be directly involved at this point; their franchise is at risk on multiple fronts now. Windows 11 is extremely bad. GitHub is going to lose its foundational role in modern development shortly, and early indications are that they hitched their wagon to the wrong foundational model provider.
The more stable/secure a monopoly is in its position the less incentive it has to deliver high quality services.
If a company can build a monopoly (or oligopoly) in multiple markets, it can then use these monopolies to build stability for them all. For example, Google uses ads on the Google Search homepage to build a browser near-monopoly and uses Chrome to push people to use Google Search homepage. Both markets have to be attacked simultaneously by competitors to have a fighting chance.
Not sure how having downtime is an anti-competition issue. I'm also not sure how you think you can take things away from people? Do you think someone just gave them GitHub and then take it away? Who are you expecting to take it away? Also, does your system have 100% uptime?
Companies used to be forced to sell parts of their business when antitrust was involved. The issue isn't the downtime, they should never have been allowed to own this in the first place.
There was just a recent case with Google to decide if they would have to sell Chrome. Of course the Judge ruled no. Nowadays you can have a monopoly in 20 adjacent industries and the courts will say it's fine.
You've been banging on about this for a while, I think this is my third time responding to one of your accounts. There is no antitrust issue, how are they messing with other competitors? You never back up your reasoning. How many accounts do you have active since I bet all the downvotes are from you?
I've had two accounts. I changed because I don't like the history (maybe one other person has the same opinion I did?). Anyways it's pretty obvious why this is an issue. Microsoft has a historical issue with being brutal to competition. There is no oversight as to what they do with the private data on GitHub. It's absolutely an antitrust issue. Do you need more reasoning?
Didn't you just privately tell me it was 4 accounts? Maybe that was someone else hating on Windows 95. But you need an active reason not what they did 20 years ago.
At their core, antitrust cases are about monopolies and how companies use anti-competitive conduct to maintain them.
Github isn't the only source control software in the market. Unless they're doing something obvious and nefarious, it's doubtful the justice department will step in when you can simply choose one of many others like Bitbucket, Sourcetree, Gitlab, SVN, CVS, Fossil, DARCS, or Bazaar.
There's just too much competition in the market right now for the govt to do anything.
Not really. It's a network effect, like Facebook. Value scales quadratically with the number of users, because nobody wants to "have to check two apps".
We should buy out monopolies like the Chinese government does. If you corner the market, then you get a little payout and a "You beat capitalism! Play again?" prize. Other companies can still compete but the customers will get a nice state-funded high-quality option forever.
Minimal changes have occurred to the concept of “antitrust” since its inception as a form of societal justice against corporations, at least per my understanding.
I doubt policymakers in the early 1900s could have predicted the impact of technology and globalization on the corporate landscape, especially vis a vis “vertical integration”.
Personally, I think vertical integration is a pretty big blind spot in laws and policies that are meant to ensure that consumers are not negatively impacted by anticompetitive corporate practices. Sure, "competition" may exist, but market activity often shifts meaningfully in a direction that is harmful to consumers once the biggest players swallow another piece of the supply chain (or product concept), and not just their competitors.
There was a change in the enforcement of antitrust law in the 1970s. Consumer welfare, which came to mean lower prices, became the standard. Effectively, normal competition is fine, and it takes egregious behavior to be a violation. It's even assumed that big companies are more efficient, which makes up for the lack of competition.
The other change is a reluctance to break up companies. The AT&T breakup was a big deal. Microsoft survived being broken up in its antitrust trial. Tech companies can only be broken up vertically, but maybe the forced competition would be enough.
They should have just scaled a proper Rails monolith instead of this React, Java whatever mixed mess.
But hey probably Microslop is vibecoding everything to Rust now!
I can help you restore from backups if you will tell me where you backed it up.
You did back it up, right? Right before you ran me with `--allow-dangerously-skip-permissions` and gave me full access to your databases and S3 buckets?
More like Tay.ai and Zoe.ai still arguing amongst themselves, unable to keep the service online for Microsoft after it replaced their human counterparts.
It's really pathetic for however many trillions MSFT is valued.
If we had a government worth anything, they ought to pass a law that other competitors be provided mirror APIs so that the entire world isn't shut off from source code for a day. We're just asking for a world wide disaster.
Pretty clear that companies like Microsoft are actually terrible at engineering; their core products were built 30 years ago. Any changes now are generally extremely incremental and quickly rolled back when issues appear. Trying to innovate at GitHub shows just how bad they are.
It's not just MSFT, it's all of big tech. They basically run as a cartel, destroy competition through illegal means, engage in regulatory capture, and ensure their fiefdoms reign supreme.
All the more reason why they should be sliced and diced into oblivion.
Yeah, I have worked at a few FAANGs; honestly, it's stunning how entrenched and bad some of the products are. Internally, they are completely incapable of making any meaningful product changes; the whole thing will break.
It's a general curse of anything that becomes successful at a BigCorp.
The engineers who built the early versions were folks at the top of their field, and compensated accordingly. Those folks have long since moved on, and the whole thing is maintained by a mix of newcomers and whichever old hands didn't manage to promote out, while the PMs shuffle the UX to justify everyone's salary...
I'm not even sure I'd say they were "top"; I'd more just say it's a different type of engineer, one that either doesn't get promoted into a big-impact role at a place like Microsoft, or leaves on their own.
Screw GitHub, seriously. This unreliability is not acceptable. If I’m in a position where I can influence what code forge we use in future I will do everything in my power to steer away from GitHub.
Every company I’ve worked in the last 10 years used GH for the internal codebase hosting , PRs and sometimes CI. Discoverability doesn’t really come into picture for those users and you can still fork things from GitHub even if you don’t host your core code infra on it
It's a funny coincidence - I pushed a commit adding a link to an image in the README.md, opened the repo page, clicked on the said image, and got the unicorn page. The site did not load anymore after that.
Can we please demand that Github provide mirror APIs to competitors? We're just asking for an extinction-level event. "Oops, our AI deleted the world's open source."
Any public source code hosting service should be able to subscribe to public repo changes. It belongs to the authors, not to Microsoft.
The history of tickets and PRs would be a major loss - but a beauty of git is that if at least one dev has the repo checked out then you can easily rehost the code history.
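Concretely, re-seeding a new host from any full clone is two commands; the remote name and URL here are placeholders:

```sh
# Push every ref (branches, tags, notes) from a local clone to an empty
# repo on another host; the full code history survives the old host.
git remote add rescue git@gitlab.com:example/project.git
git push --mirror rescue
```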
It would be nice to have some sort of widespread standard for doing issue tracking, reviews, and CI in the repo, synced with the repo to all its clones (and fully from version-managed text-files and scripts) rather than in external, centralized, web tools.
I think it's more likely the introduction of the ability to say "fix this for me" to your LLM + "lgtm" PR reviews. That or MS doing their usual thing to acquired products.
Definitely. The devil is in the details though since it's so damn hard to quantify the $$$ lost when you have a large opinionated customer base that holds tremendous grudges. Doubly so when it's a subscription service with effectively unlimited lifetime for happy accounts.
Business by spreadsheet is super hard for this reason - if you try to charge the maximum you can before people get angry and leave then you're a tiny outage/issue/controversy/breach from tipping over the wrong side of that line.
Yeah, but who cares about the long term? In the long term we are all dead. A CEO only needs to look good for 5-10 years max, pump up the stock price, get applause everywhere, and be called the smartest guy in the world.
They're in the process of moving from "legacy" infra to Azure, so there's a ton of churn happening behind the scenes. That's probably why things keep exploding.
I don't know jack about shit here, but genuinely: why migrate a live production system piecewise? Wouldn't it be far more sane to start building a shadow copy on Azure and let that blow up in isolation while real users keep using the real service on """legacy""" systems that still work?
That’s a safer approach but will cause teams to need to test in two infrastructures (old world and new) til the entire new environment is ready for prime time. They’re hopefully moving fast and definitely breaking things.
1. Stateful systems (databases, message brokers) are hard to switch back-and-forth; you often want to migrate each one as few times as possible.
2. If something goes sideways -- especially performance-wise -- it can be hard to tell the reason if everything changed.
3. It takes a long time (months/years) to complete the migration. By doing it incrementally, you can reap the advantages of the new infra, and avoid maintaining two things.

---

All that said, GitHub is doing something wrong.
Because it's significantly harder to isolate problems and you'll end up in this loop
* Deploy everything
* It explodes
* Rollback everything
* Spend two weeks finding problem in one system and then fix it
* Deploy everything
* It explodes
* Rollback everything
* Spend two weeks finding a new problem that was created while you were fixing the last problem
* Repeat ad nauseum
Migrating iteratively gives you a foundation to build upon with each component
Of course, you need some way of producing test loads similar to those found in production. One way would be to take a snapshot of production, tap incoming requests for a few weeks, log everything, then replay it at "as fast as we can" speed for testing; another way would be to just mirror production live, running the same operations in test as run in production.
Alternatively, you could take the "chaos monkey" approach (https://www.folklore.org/Monkey_Lives.html), do away with all notions of realism, and just fuzz the heck out of your test system. I'd go with that, first, because it's easy, and tends to catch the more obvious bugs.
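As a toy version of the replay idea, assuming requests were logged one URL per line to a file (requests.log is a hypothetical name):

```sh
# Replay logged GET requests against the test system as fast as possible,
# printing the status code for each. A crude stand-in for real capture/replay.
while read -r url; do
  curl -s -o /dev/null -w "%{http_code} %{url_effective}\n" "$url" &
done < requests.log
wait
```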
If you make it work, migrating piecewise should be less change/risk at each junction than a big jump between here and there of everything at once.
But you need to have pieces that are independent enough to run some here and some there, and ideally pieces that can fail without taking down the whole system.
Rumors I've heard were that GitHub is mostly run by contractors? That might explain the chaos more than simple vibe coding (which probably aggravates it).
I think the last major outage wasn't even two weeks ago. We've got about another 2 weeks to finish our MVP and get it launched, and... this really isn't helpful. I'm getting pretty fed up with the unreliability.
Copilot is shown as having policy issues in the latest reports. Oh my, the irony. Satya is like "look ma, our stock is dropping...", Gee I wonder why Mr!!
I know you are joking, but I'm sure there is at least one director or VP inside GitHub pushing a new salvation project that must use AI to solve all the problems, when the most likely reason is that engineers are drowning in tech debt.
Honestly, AI management would probably be better. "You're a competent manager, you're not allowed to break or circumvent workers' rights laws, you must comply with our CSR and HR policies, provide realistic estimates, and deliver stable and reliable products to our customers." Then just watch half the tech sector break down due to a lack of resources, or watch as profit is cut in half.
Upper management in Microsoft has been bragging about their high percentage of AI generated code lately - and in the meantime we've had several disastrous Windows 11 updates with the potential to brick your machine and a slew of outages at github. I'm sure it might be something else but it's clear part of their current technical approach is utterly broken.
When I first typed up my comment I said "their current business approach" and then corrected it to technical since - yea, in the short term it probably isn't hurting their pocket books too much. The issue is that it seems like a lot more folks are seriously considering switching off Windows - we'll see if this actually is the year of the linux desktop (it never seems to be in the end) but it certainly seems to be souring their brand reputation in a major way.
All the cool kids move fast and break things. Why not the same for core infrastructure providers? Let's replace our engineers with markdown files named after them.
14 incidents in February! It's February 9th! Glad to see the latest great savior phase of the AI industrial complex [1] is going just as well as all the others!
An interesting thing I notice is that people don't like companies that only post about outages when half the world is hit by them... and they also don't like companies that post about "minor issues", e.g.:
> During this time, workflows experienced an average delay of 49 seconds, and 4.7% of workflow runs failed to start within 5 minutes.
That's for sure not perfect, but it also means there was a better than 95% chance that if you re-ran the job, it would run and not fail to start. Another one is about notifications being late. I'm sure all the others have similar issues that people notice, but nobody writes about them. So a simple "too many incidents" does not make the stats bad; only an unstable service does.
I'm happy that they're being transparent about it. There's no good way to take downtime, but at least they don't try to cover it up. We can adjust, and they'll make it better. I'm sure a retro is on its way; it's been quite the bumpy month.
I wonder what the value is of having a dedicated X (formerly Twitter) status account post-2023, when people without an account will see a mix of entries from 2018, 2024, and 2020 in no particular order upon opening it.
Is it just there so everyone can quickly share their post announcing they're back?
Good thing we have LLM agents now. Before, this kind of behavior was tolerable. Now it's pretty easy to switch over to other providers. The threat of "but it will take them a lot of effort to switch to someone else" is getting smaller every day.
I think this is an indicator of a broader trend where tech companies put less value on quality and stability and more value on shipping new features. It’s basically the enshittification of tech
Issues, CI, and downloads for built binaries aren't part of vanilla Git. CI in particular can be hard if you make a multi-platform project and don't want to have to buy a new mac every few years.
Not really. About average in terms of infrastructure maintenance. I have been running our org's instance for 5 years or so, half that time with premium and half with just the open source version, running on Kubernetes... ran it in AWS at first, then migrated to our own infrastructure.
Nah at a small scale it's totally fine, and IME pretty pain-free after you've got it running. The biggest pain points are A) It's slow, B) between auth, storage, and CI runners, you have a lot of unavoidable configuration to do, and C) it has a lot of different features so the docs are MASSIVE.
For me it is their history of high-impact easily avoidable security bugs. I have no idea why "send a reset password link to an address from an unauthenticated source" was possible at all.
I would imagine that's what everyone is doing instead of sitting on their hands. Set up a different remote and have your team push/pull to/from it until GitHub comes back up. I mean, you could probably use ngrok and set up a remote on your laptop in a pinch. You shouldn't be totally blocked except for things like automated deployments or builds tied specifically to github.com
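The stopgap really is only a couple of commands; host and path below are placeholders:

```sh
# Stand up a bare repo on any box the team can SSH into, then use it as a
# temporary remote until github.com comes back.
ssh dev.example.com 'git init --bare /srv/git/project.git'
git remote add fallback dev.example.com:/srv/git/project.git
git push fallback main
```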
Ad hominem isn't a very convincing argument, and as someone who also enjoys Forgejo, it doesn't make me feel good to see it as the justification from another recommender.
From [1] "Forgejo was created in October 2022 after a for profit company took over the Gitea project."
Forgejo became a hard fork in 2024, with both projects diverging. If you're using it for local hosting I don't personally see much of a difference between them, although that may change as the two projects evolve.
I'm not OP, but: Forgejo is much lighter weight than GitLab for my use case, and it was cited as a more maintained version of Gitea. That's just anecdote from my brain and I don't have sources, though, so take it with a truckload of salt.
I'd had a Gitea instance before, and it was appealing insofar as it had the ability to mirror from or to a public repo, docker container registry capability, OAuth integration, etc. I'm sure GitLab has much/all of that too, but Forgejo's tiny, tiny footprint was very appealing for my resource-constrained self-hosted environment.
At my last job I ran a GitLab instance on a tiny AWS server and ran workers on old desktop PCs in the corner of the office.
It's pretty nice if you don't mind it being some of the heaviest software you've ever seen.
I also tried gitea, but uninstalled it when I encountered nonsense restrictions with the rationale "that's how GitHub does it". It was okay, pretty lightweight, but locking out features purely because "that's what GitHub does" was just utterly unacceptable to me.
One thing that always bothered me about gitea is they wouldn't even dog food for a long time. GitLab has been developing on GitLab since forever, basically.
It probably depends on your scale, but I'd suggest self-hosting a Forgejo instance, if it's within your domain expertise to run a service like that. It's not hard to operate, it will be blazing fast, it provides most of the same capabilities, and you'll be in complete control over the costs and reliability.
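For a quick trial before committing to it, something like this is enough to poke around (the image lives on Codeberg's registry; the tag is an example, check the current releases):

```sh
# Run a throwaway Forgejo instance: web UI on :3000, SSH on :2222,
# state kept in a named Docker volume so it survives restarts.
docker run -d --name forgejo \
  -p 3000:3000 -p 2222:22 \
  -v forgejo-data:/data \
  codeberg.org/forgejo/forgejo:9
```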
A few people have replied to you mentioning Codeberg, but that service is intended for open source projects, not private commercial work.
I've been using https://radicle.xyz/ + https://radicle-ci.liw.fi/ (in combination with my own ci adapter for nix flakes) for about half a year now for (almost) all my public and private repos and so far I really like it.
> What are good alternatives to GitHub for private repos + actions? I'm considering moving my company off of it because of reliability issues.
Dunno about actions[1], but I've been using a $5/m DO droplet for the last 5 years for my private repo. If it ever runs out of disk space, an additional 100GB of mounted storage is an extra $10/m
I've put something on it (Gitea, I think) that has the web interface for submitting PRs, reviewing them, merging them, etc.
I don't think there is any extra value in paying a git-hosting SaaS more for a single user than I pay for a DO droplet serving (at peak) 20 users.
----------------------
[1] Tried using Jenkins, but alas, a $5/m DO droplet is insufficient to run Jenkins. I mashed up shell scripts + Makefiles in a loop, with a `sleep 60` between iterations.
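That loop is roughly the following shape, if anyone wants the sketch; the branch and make targets are illustrative:

```sh
# Poor man's CI: poll origin for new commits and rebuild when main moves.
while true; do
  git fetch origin
  if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/main)" ]; then
    git reset --hard origin/main
    make test && make deploy
  fi
  sleep 60
done
```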
That pink "Unicorn!" joke is something that should be reconsidered. When your services are down you're probably causing a lot of people a lot of stress ; I don't think it's the time to be cute and funny about it.
One of Reddit's cutesy error pages (presumably for Internal Server Error or similar) is an illustration that says "You broke reddit". I know it's a joke, but I have wondered what effect that might have on a particularly anxiety-prone person who takes it literally and thinks they've done something that's taken the site down and inconvenienced millions of other people. It seems a bit dodgy for a mainstream site to assume all of its users have the dev knowledge to identify a joking accusation.
Even if it is their server name, I completely agree with your point. The image is not appropriate when your multi-billion revenue service is yet again failing to meet even a basic level of reliability, preventing people from doing their jobs and generally causing stress and bad feeling all round.
I am personally totally fine with it, but I see your point. GitHub is a bit too big to be breaking this often with a cutesy error message, even if it is a reference to their web server.
Github's two biggest selling points were its feature set (Pull Requests, Actions) and its reliability.
With the latter no longer a thing, and with so many other people building on Github's innovations, I'm starting to seriously consider alternatives. Not something I would have said in the past, but when Github's outages start to seriously affect my ability to do my own work, I can no longer justify continuing to use them.
Github needs to get its shit together. You can draw a pretty clear line between Microsoft deciding it was all in on AI and the decline in Github's service quality. So I would argue that for Github to get its shit back together, it needs to ditch the AI and focus on high quality engineering.
The saddest part to me is that their status update page and twitter are both out of date. I get a full 500 on github.com and yet all I see on their status page is an "incident with pull requests" and "copilot policy propagation delays."
I get the feeling that most of these GitHub downtimes are during US working hours, since I don't remember being impacted by them during work. I only noticed this one because I was looking up a repo in my free time.
GitHub has a long history of being extremely unstable. Several years ago they were down all the time, much like recently. They seemed to stabilize quite a bit around the MS acquisition era, and now seem to be returning to their old instability patterns.
Anyone have alternatives to recommend? We will be switching after this. Already moved to self-hosted action runners and we are early-stage so switching cost is fairly low.
List of company-friendly managed-host alternatives? SSO, auditing, user management, billing controls, etc?
I would love to pay Codeberg for managed hosting + support. GitLab is an ugly overcomplicated behemoth... Gitea offers "enterprise" plans but do they have all the needed corporate features? Bitbucket is a joke, never going back to that.
GitHub is the new Internet Explorer 6. A Microsoft product so dominant in its category that it's going to hold everyone back for years to come.
Just when open source development has to deal with the biggest shift in years and maintainers need a tool that will help them fight the AI slop and maintain the software quality, GitHub not only can't keep up with the new requirements, they struggle to keep their product running reliably.
Paying customers will start moving off to GitLab and other alternatives, but GitHub is so dominant in open source that maintainers won't move anywhere, they'll just keep burning out more than before.
It feels like GitHub's shift to these "AI writes code for you while you sleep!" features will appeal to a less technical crowd who lack awareness of the overall source code hosting and CI ecosystem. Combined with their operational incompetence of late (calling it how I see it), that will see their dominance as the default source code solution for folks maintaining production software projects fade away.
Hopefully the hobbyists are willing to shell out for tokens as much as they expect.
So what's the moneyline on all these outages being the result of vibe-coded LLM-as-software-engineer/LLM-as-platform-engineer executive cost cutting mandates?
GitHub has had customer visible incidents large enough to warrant status page updates almost every day this year (https://www.githubstatus.com/history).
This should not be normal for any service, even at GitHub's size. There's a joke that your workday usually stops around 4pm, because that's when GitHub Actions goes down every day.
I wish someone inside the house cared to comment on why the services barely stay up and what actions they're planning to take to fix this issue, which has been going on for years but has definitely accelerated in the past year or so.
It's 100% because the number of operations happening on Github has likely 100x'd since the introduction of coding agents. They built Github for one kind of scale, and the problem is that they've all of a sudden found themselves with a new kind of scale.
That doesn't normally happen to platforms of this size.
ISTR that the lift-n-shift started like ... 3 years ago? That much of it was already shifted to Azure ... 2 years ago?
The only thing that changed in the last 1 year (if my above two assertions are correct (which they may not be)) is a much-publicised switch to AI-assisted coding.
The incident has now expanded to include webhooks, git operations, actions, general page loads + API requests, issues, and pull requests. They're effectively down hard.
Hopefully it's down all day. We need more incidents like this to happen for people to get a glimpse of the future.
The biggest thing tying my team to GitHub right now is that we use Graphite to manage stacked diffs, and as far as I can tell, Graphite doesn't support anything but GitHub. What other tools are people using for stacked-diff workflows (especially code review)?
Gerrit is the other option I'm aware of but it seems like it might require significant work to administer.
On the plus side, it's git, so developers can at least get back to work without too much hassle as long as they don't need the CI/CD side of things immediately.
We've migrated to Forgejo over the last couple of weeks. We position ourselves[0] as an alternative to the big cloud providers, so it seemed very silly that a critical piece of our own infrastructure could be taken out by a GitHub or Azure outage.
It has been a pretty smooth process. Although we have done a couple of pieces of custom development:
1) We've created a Firecracker-based runner, which will run CI jobs in Firecracker VMs. This brings the Forgejo Actions experience much more closely into line with GitHub's environment (VM, rather than container). We hope to contribute this back shortly, but also drop me a message if this is of interest.
2) We're working up a proposal[1] to add environments and variable groups to Forgejo Actions. This is something we expect to need for some upcoming compliance requirements.
I really like Forgejo as a project, and I've found the community to be very welcoming. I'm really hoping to see it grow and flourish :D
I made this joke 10 hours ago:
"I wonder if you opened https://github.com/claude in like 1000's of browsers / unique ips would it bring down github since it does seem to try until timeout"
Do I misunderstand, or does your page count today's downtime as minor? I would not count the web UI being mostly unusable as minor. Does this mean GitHub understates how bad incidents are? Or has your page just not yet been updated to include it?
It's interesting to see that copilot has the worst overall. I use copilot completions constantly and rarely notice issues with it. I suspect incidents aren't added until after they resolve.
Completions run using a much simpler model that Github hosts and runs themselves. I think most of the issues with Copilot that I've seen are upstream issues with one or two individual models (e.g. the Twitter API goes down so the Grok model is unavailable, etc)
Nice! I still remember the old GitHub status page which showed and reported on their uptime. No surprises they took it offline and replaced it with the current one when it started reporting the truth.
> It looks like they are down to a single 9 at this point across all services
That's not at all how you measure uptime. The per area measures are cool but the top bar measuring across all services is silly.
I'm unsure what they are targeting; it seems across the board it's mostly 99.5+, with the exception of Copilot. Just doing the math, three (independent, which I'm aware they aren't fully) 99.5% services bring you down to an overall "single 9" 98.5% healthy status, but it's not meaningful to anyone.
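The arithmetic, for what it's worth:

```sh
# Three independent 99.5%-available services composed in series.
python3 -c 'print(0.995 ** 3)'   # 0.98507..., i.e. roughly 98.5% "all healthy"
```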
It depends whether the outages are overlapped or not. If the outages are not overlapped then that is indeed how you do it since some of your services being unavailable means your service is not fully available.
I mean, there's a big difference between primary Git operations being down and Copilot being down. Any SLAs are probably per-service, not as a whole, and I highly doubt that someone just using a subset of services cares that one of the other services is down.
Copilot seems to be the worst offender, and 99% of people using Github likely couldn't care less.
It looks like one of my employees got her whole account deleted or banned without warning during this outage. Hopefully this is resolved as service returns.
In the age of Claude Code et al, my honest biggest bottleneck is GH downtime.
I've got a dozen PRs I'm working on, but it's all frozen up, daily, with GH outages.
Are the other providers (GitLab, CircleCI, Harness) offering much better uptime?
Saying this as someone who's been GH exclusive since 2010.
Github stability is worse than ever. Windows 11 and Office stability is worse than ever. Features that were useful for decades on computers with low resources are now "too hard" to implement.
[OP] MattIPv4 | 6 hours ago
Edit: Looks like they've got a status page up now for PRs, separate from the earlier notifications one: https://www.githubstatus.com/incidents/smf24rvl67v9
Edit: Now acknowledging issues across GitHub as a whole, not just PRs.
priteau | 6 hours ago
> Investigating - We are investigating reports of impacted performance for some GitHub services. Feb 09, 2026 - 15:54 UTC
But I saw it appear just a few minutes ago, it wasn't there at 16:10 UTC.
priteau | 6 hours ago
> Investigating - We are investigating reports of degraded performance for Pull Requests Feb 09, 2026 - 16:19 UTC
dsagent | 6 hours ago
Hosting .git is not that complicated of a problem in isolation.
rvz | 5 hours ago
"A better way is to self host". [0]
[0] https://news.ycombinator.com/item?id=22867803
indigodaddy | 5 hours ago
Edit- oh you probably meant an alternative to GitHub perhaps..
malfist | 5 hours ago
Edit: Nevermind, looks like they migrated to github since the last time I contributed
panarky | 5 hours ago
Pages and Packages completed in 2025.
Core platform and databases began in October 2025 and are in progress, with traffic split between the legacy Github data center and Azure.
badgersnake | 5 hours ago
I’m guessing they’re regretting it.
swiftcoder | 5 hours ago
Our SOC2 doesn't specify GitHub by name, but it does require we maintain a record of each PR having been reviewed.
I guess in extremis we could email each other patch diffs, and CC the guy responsible for the audit process with the approval...
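Plain git already covers the mechanics of that fallback, for what it's worth; paths here are illustrative:

```sh
# Export local commits as mail-ready patch files, one per commit...
git format-patch origin/main..HEAD -o outgoing/
# ...which the reviewer can apply with authorship and messages intact.
git am outgoing/*.patch
```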
bostik | 3 hours ago
I have cleaned up more than enough of them.
koreth1 | 5 hours ago
Good news! You can't create new PRs right now anyway, so they won't pile.
jamiecurle | an hour ago
I'm grateful it arrived, but two and a half hours feels less than ideal.
imglorp | 4 hours ago
Something this week about "oops we need a quality czar": https://news.ycombinator.com/item?id=46903802
joshstrange | 3 hours ago
Does this mean you are only half-sarcastic/half-joking? Or did I interpret that wrong?
gslepak | 3 hours ago
That is what that feature does. It imports issues and code and more (not sure about "projects", don't use that feature on Github).
palata | 4 hours ago
Simple: the US stopped caring about antitrust decades ago.
gpmcadam | 6 hours ago
Beyond a meme at this point
lelanthran | 3 hours ago
"Whoops, now that one is nuked too. You have any more backups I can practice my shell commands on?"
CodingJeebus | 6 hours ago
The new-fangled copilot/agentic stuff I do read about on HN is meaningless to me if the core competency is lost here.
thewhitetulip | 5 hours ago
But I don't understand: if they're that good, why are we getting an outage every other week? AWS had an outage that went unsolved for about 9+ hours!
jpalawaga | 6 hours ago
Just add a new git remote and push. Less so for issues and pulls, but at least your dev team/CI doesn't end up blocked.
ecshafer | 6 hours ago
GitHub is down so often now, especially Actions, that I am not sure how so many companies are still relying on them.
edoceo | 6 hours ago
One solution I see is (eg) internal forge (Gitlab/gitea/etc) and then mirrored to GH for those secondary features.
Which is funny. If GH was better we'd just buy their better plan. But as it stands we buy from elsewhere and just use GH free plans.
regularfry | 5 hours ago
Mirroring is probably the way forward.
paulddraper | an hour ago
Does it handle queries, trigger CI actions, run jobs?
mrshu | 4 hours ago
https://mrshu.github.io/github-statuses/
dreadnip | 4 hours ago
matt_kantor | 3 hours ago
munk-a | 6 hours ago
akulbe | 6 hours ago
slyzmud | 5 hours ago
OtomotO | 5 hours ago
Computers can produce spreadsheets even better and they can warm the air around you even faster.
chrisjj | 5 hours ago
mrweasel | 5 hours ago
Sharlin | 4 hours ago
* writing endless reports and executive summaries
* pretending to know things that they don't
* not complaining if you present their ideas as yours
* sycophancy and fawning behavior towards superiors
munk-a | 5 hours ago
latchkey | 5 hours ago
LeifCarrotson | 5 hours ago
Aperocky | 4 hours ago
The inertia is not permanent.
moffkalast | 4 hours ago
munk-a | 3 hours ago
riddlemethat | 4 hours ago
risyachka | 5 hours ago
latexr | 4 hours ago
GitHub is under Microsoft’s CoreAI division, so that’s a pretty sure bet.
https://www.geekwire.com/2025/github-will-join-microsofts-co...
alansaber | 5 hours ago
re-thc | 5 hours ago
brookst | 5 hours ago
melodyogonna | 5 hours ago
re-thc | 5 hours ago
ghostly_s | 5 hours ago
bob1029 | 5 hours ago
ludwigvan | 4 hours ago
ygouzerh | 4 hours ago
12_throw_away | 4 hours ago
[1] https://www.theverge.com/tech/865689/microsoft-claude-code-a...
jeffrallen | 2 hours ago
chrisandchris | 2 hours ago
> During this time, workflows experienced an average delay of 49 seconds, and 4.7% of workflow runs failed to start within 5 minutes.
That's for sure not perfect, but it also means there was a better than 95% chance that a re-run job would start rather than fail to start. Another incident is about notifications being late. I'm sure all the others have similar issues that people notice, but nobody writes about them. So a simple "too many incidents" doesn't make the stats bad - only an actually unstable service does.
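Back-of-envelope on that re-run claim, assuming failures to start are independent between attempts (they probably aren't during a shared-infrastructure incident):

    fail_to_start = 0.047  # 4.7% of runs failed to start within 5 minutes

    one_attempt_ok = 1 - fail_to_start
    both_attempts_fail = fail_to_start ** 2

    print(f"one attempt starts on time: {one_attempt_ok:.1%}")     # 95.3%
    print(f"two attempts both fail:     {both_attempts_fail:.2%}")  # 0.22%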
BrouteMinou | 35 minutes ago
elondemirock | 4 hours ago
krrishd | 3 hours ago
danelski | 6 hours ago
iamleppert | 6 hours ago
camdenreslink | 5 hours ago
an0malous | 6 hours ago
DetroitThrow | 6 hours ago
tigerlily | 6 hours ago
throw_m239339 | 6 hours ago
swiftcoder | 6 hours ago
esafak | 5 hours ago
arcologies1985 | 6 hours ago
swiftcoder | 5 hours ago
arcologies1985 | 4 hours ago
swiftcoder | 4 hours ago
nostrapollo | 6 hours ago
It's definitely some extra devops time, but claude code makes it easy to get over the config hurdles.
akshitgaur2005 | 5 hours ago
peab | 6 hours ago
huntertwo | 6 hours ago
JamesTRexx | 6 hours ago
alexeiz | 3 hours ago
nusaru | 6 hours ago
pimpl | 6 hours ago
theredbeard | 6 hours ago
Kelteseth | 6 hours ago
MYEUHD | 5 hours ago
cortesoft | 5 hours ago
Kelteseth | 5 hours ago
throwuxiytayq | 5 hours ago
12_throw_away | 4 hours ago
misnome | 5 hours ago
plagiarist | 4 hours ago
ai-christianson | 6 hours ago
chasd00 | 5 hours ago
Distributed source control is distributable.
peartickle | 5 hours ago
mrweasel | 4 hours ago
ewuhic | 5 hours ago
tenacious_tuna | 5 hours ago
Ad hominem isn't a very convincing argument, and as someone who also enjoys forgejo, it doesn't make me feel good to see it used as the justification by another recommender.
Zetaphor | 5 hours ago
I personally use Gitea, so I'd appreciate some additional information.
rhdunn | 4 hours ago
Forgejo became a hard fork in 2024, with both projects diverging. If you're using it for local hosting I don't personally see much of a difference between them, although that may change as the two projects evolve.
[1] https://forgejo.org/compare-to-gitea/
xigoi | 4 hours ago
tenacious_tuna | an hour ago
I'd had a gitea instance before, and it was appealing insofar as it could mirror from or to a public repo, had docker container registry capability, tied into oauth, etc; I'm sure gitlab has much/all of that too, but forgejo's tiny, tiny footprint was very appealing for my resource-constrained self-hosted environment.
fishgoesblub | 5 hours ago
jruz | 5 hours ago
estimator7292 | 5 hours ago
It's pretty nice if you don't mind it being some of the heaviest software you've ever seen.
I also tried gitea, but uninstalled it when I encountered nonsense restrictions with the rationale "that's how GitHub does it". It was okay, pretty lightweight, but locking out features purely because "that's what GitHub does" was just utterly unacceptable to me.
NewJazz | 5 hours ago
yoyohello13 | 5 hours ago
ramon156 | 5 hours ago
mfenniak | 5 hours ago
A few people have replied to you mentioning Codeberg, but that service is intended for open source projects, not private commercial work.
palata | 4 hours ago
Also very happy with SourceHut, though it is quite different (Forgejo looks like a clone of GitHub, really). The SourceHut CI is really cool, too.
guluarte | 5 hours ago
xigoi | 4 hours ago
Defelo | 4 hours ago
rirze | 2 hours ago
lelanthran | 3 hours ago
Dunno about actions[1], but I've been using a $5/m DO droplet for the last 5 years for my private repo. If it ever runs out of disk space, an additional 100GB of mounted storage is an extra $10/m
I've put something on it (Gitea, I think) that has the web interface for submitting PRs, reviewing them, merging them, etc.
I don't think there's any extra value in paying a git hosting SaaS more for a single user than I pay for a DO droplet serving (at peak) 20 users.
----------------------
[1] Tried using Jenkins, but alas, a $5/m DO droplet is insufficient to run Jenkins. I mashed up shell scripts + Makefiles in a loop, with a `sleep 60` between iterations.
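For the curious, here's a minimal sketch of that poor man's CI loop; the repo path and `make ci` target are placeholders:

    #!/usr/bin/env python3
    import subprocess
    import time

    REPO = "/srv/ci/myproject"  # hypothetical working checkout

    def head(ref: str) -> str:
        out = subprocess.run(
            ["git", "-C", REPO, "rev-parse", ref],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    while True:
        subprocess.run(["git", "-C", REPO, "fetch", "origin"], check=True)
        if head("HEAD") != head("origin/HEAD"):
            # New commits landed: fast-forward and rebuild.
            subprocess.run(
                ["git", "-C", REPO, "merge", "--ff-only", "origin/HEAD"],
                check=True,
            )
            result = subprocess.run(["make", "-C", REPO, "ci"])
            print("build", "passed" if result.returncode == 0 else "FAILED")
        time.sleep(60)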
charles_f | 6 hours ago
EDIT: my bad, seems to be their server's name.
aaronbrethorst | 6 hours ago
https://github.blog/news-insights/unicorn/
https://news.ycombinator.com/item?id=4957986
ihumanable | 6 hours ago
https://en.wikipedia.org/wiki/Unicorn_(web_server)
frou_dh | 5 hours ago
Brian_K_White | 5 hours ago
demothrowaway | 5 hours ago
jeltz | 5 hours ago
dbingham | 6 hours ago
With the latter no longer a thing, and with so many other people building on Github's innovations, I'm starting to seriously consider alternatives. Not something I would have said in the past, but when Github's outages start to seriously affect my ability to do my own work, I can no longer justify continuing to use them.
Github needs to get its shit together. You can draw a pretty clear line between Microsoft deciding it was all in on AI and the decline in Github's service quality. So I would argue that for Github to get its shit back together, it needs to ditch the AI and focus on high quality engineering.
Tade0 | 6 hours ago
Today, when I was trying to see the contribution timeline of one project, it didn't render.
dmix | 6 hours ago
feverzsj | 6 hours ago
behnamoh | 5 hours ago
oxag3n | 4 hours ago
gtowey | 3 hours ago
aqme28 | 6 hours ago
albelfio | 5 hours ago
Hamuko | 6 hours ago
seneca | 5 hours ago
byte_surgeon | 5 hours ago
parvardegr | 5 hours ago
rvz | 5 hours ago
Self hosting would be a better alternative, as I said 5 years ago. [0]
[0] https://news.ycombinator.com/item?id=22867803
edverma2 | 5 hours ago
akshitgaur2005 | 5 hours ago
Radicle is the most exciting out of these, imo!
EToS | 5 hours ago
0xbadcafebee | 5 hours ago
I would love to pay Codeberg for managed hosting + support. GitLab is an ugly overcomplicated behemoth... Gitea offers "enterprise" plans but do they have all the needed corporate features? Bitbucket is a joke, never going back to that.
arnvald | 5 hours ago
Just when open source development has to deal with the biggest shift in years and maintainers need a tool that will help them fight the AI slop and maintain the software quality, GitHub not only can't keep up with the new requirements, they struggle to keep their product running reliably.
Paying customers will start moving off to GitLab and other alternatives, but GitHub is so dominant in open source that maintainers won't move anywhere, they'll just keep burning out more than before.
CamT | 5 hours ago
Hopefully the hobbyists are willing to shell out for tokens as much as they expect.
ascendantlogic | 5 hours ago
unboxingelf | 5 hours ago
petetnt | 5 hours ago
This should not be normal for any service, even at GitHub's size. There's a joke that your workday usually stops around 4pm, because that's when GitHub Actions goes down every day.
I wish someone inside the house cared to comment on why the services barely stay up, and what actions they're planning to take to fix this issue that's been going on for years but has definitely accelerated in the past year or so.
huntaub | 5 hours ago
That doesn't normally happen to platforms of this size.
data-ottawa | 5 hours ago
There are probably tons of baked in URLs or platform assumptions that are very easy to break during their core migration to Azure.
lelanthran | 3 hours ago
ISTR that the lift-n-shift started like ... 3 years ago? That much of it was already shifted to Azure ... 2 years ago?
The only thing that changed in the last 1 year (if my above two assertions are correct (which they may not be)) is a much-publicised switch to AI-assisted coding.
BhavdeepSethi | 5 hours ago
rileymichael | 5 hours ago
hopefully it's down all day. we need more incidents like this to happen for people to get a glimpse of the future.
swiftcoder | 5 hours ago
jcdcflo | 5 hours ago
Maybe they need to get more humans involved, because GitHub has been down at least once a week for a while now.
koreth1 | 5 hours ago
Gerrit is the other option I'm aware of but it seems like it might require significant work to administer.
satya71 | 5 hours ago
yoyohello13 | 5 hours ago
whalesalad | 5 hours ago
run-run-forever | 5 hours ago
run-run-forever | 5 hours ago
ChrisArchitect | 5 hours ago
Incident with Pull Requests https://www.githubstatus.com/incidents/smf24rvl67v9
Copilot Policy Propagation Delays https://www.githubstatus.com/incidents/t5qmhtg29933
Incident with Actions https://www.githubstatus.com/incidents/tkz0ptx49rl0
Degraded performance for Copilot Coding Agent https://www.githubstatus.com/incidents/qrlc0jjgw517
Degraded Performance in Webhooks API and UI, Pull Requests https://www.githubstatus.com/incidents/ffz2k716tlhx
ChrisArchitect | 42 minutes ago
Notifications are delayed https://www.githubstatus.com/incidents/54hndjxft5bx
Incident with Issues, Actions and Git Operations https://www.githubstatus.com/incidents/lcw3tg2f6zsd
patrick4urcloud | 5 hours ago
QuiDortDine | 4 hours ago
zingerlio | 5 hours ago
semiinfinitely | 5 hours ago
romshark | 5 hours ago
bigbuppo | 5 hours ago
guluarte | 5 hours ago
davidfekke | 4 hours ago
petterroea | 4 hours ago
1vuio0pswjnm7 | 4 hours ago
I am able to access api.github.com at 20.205.243.168 no problem
No problem with githubusercontent.com either
bovermyer | 4 hours ago
Codeberg gets hit by a fair few attacks every year, but they're doing pretty well, given their resources.
I am _really_ enjoying Worktree so far.
rmunn | 4 hours ago
bovermyer | 4 hours ago
simianwords | 4 hours ago
canterburry | 4 hours ago
adamcharnock | 4 hours ago
It has been a pretty smooth process, although we have done a couple of pieces of custom development:
1) We've created a Firecracker-based runner, which will run CI jobs in Firecracker VMs. This brings the Forgejo Actions running experience much more closely into line with GitHub's environment (VM rather than container). We hope to contribute this back shortly, but also drop me a message if this is of interest.
2) We're working up a proposal[1] to add environments and variable groups to Forgejo Actions. This is something we expect to need for some upcoming compliance requirements.
I really like Forgejo as a project, and I've found the community to be very welcoming. I'm really hoping to see it grow and flourish :D
[0]: https://lithus.eu, adam@
[1]: https://codeberg.org/forgejo/discussions/issues/440
PS. We are also looking at offering this as a managed service to our clients.
cyberpunk | an hour ago
kachapopopow | 4 hours ago
coincidence? I think not!
mrshu | 4 hours ago
https://mrshu.github.io/github-statuses/
gordonhart | 4 hours ago
jeltz | 3 hours ago
onionisafruit | 3 hours ago
onionisafruit | 3 hours ago
nightpool | 3 hours ago
everfrustrated | 3 hours ago
EDIT: You mention this with archive.org links! Love it! https://mrshu.github.io/github-statuses/#about
Anon1096 | 3 hours ago
That's not at all how you measure uptime. The per-area measures are cool, but the top bar measuring across all services is silly.
I'm unsure what they're targeting; it seems across the board it's mostly 99.5%+, with the exception of Copilot. Just doing the math, three (independent, which I'm aware they aren't fully) 99.5% services bring you down to an overall "single nine" 98.5% healthy status, but that number isn't meaningful to anyone.
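The arithmetic, for anyone checking (independence assumed, per the caveat above):

    per_service = 0.995  # each service individually at 99.5%

    for n in (1, 3, 10):
        joint = per_service ** n
        print(f"{n:>2} services all healthy: {joint:.3%}")
    #  1 services all healthy: 99.500%
    #  3 services all healthy: 98.507%  <- the "single nine" figure
    # 10 services all healthy: 95.111%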
munk-a | 3 hours ago
reed1234 | 3 hours ago
mynameisvlad | 3 hours ago
Copilot seems to be the worst offender, and 99% of people using Github likely couldn't care less.
jablongo | 3 hours ago
GeneralGrevous | 3 hours ago
properbrew | 3 hours ago
guluarte | 3 hours ago
properbrew | 3 hours ago
GeneralGrevous | 3 hours ago
twistedpair | 3 hours ago
Are the other providers (GitLab, CircleCI, Harness) offering much better uptime? Saying this as someone who's been GH-exclusive since 2010.
ilovefrog | 3 hours ago
trollbridge | an hour ago
gamblor956 | an hour ago
Github stability worse than ever. Windows 11 and Office stability worse than ever. Features that were useful for decades on computers with low resources are now "too hard" to implement.
Coincidence?
altern8 | an hour ago
Just saying.
jnhbgvjkb | 48 minutes ago
hkt | 31 minutes ago