I vaguely remember reading about it when this occurred. It was very recent, no? Last few years for sure.
> The Linux kernel began transitioning to EEVDF in version 6.6 (as a new option in 2024), moving away from the earlier Completely Fair Scheduler (CFS) in favor of a version of EEVDF proposed by Peter Zijlstra in 2023 [2-4]. More information regarding CFS can be found in CFS Scheduler.
Ultimately, CPU schedulers are about choosing which attributes to weigh more heavily. See this[0] diagram from GitHub. EEVDF isn't a straight upgrade over CFS. Nor is LAVD over either.
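For anyone who wants a feel for what EEVDF actually computes, here is a toy sketch of the core idea in C (illustrative only, with made-up task names and numbers, nothing like the kernel's actual implementation): every runnable task gets a virtual deadline equal to its eligible time plus its requested slice divided by its weight, and the scheduler runs the eligible task with the earliest virtual deadline.

```c
#include <stdio.h>

/* Toy EEVDF illustration: not kernel code, just the selection rule. */
struct task {
    const char *name;
    double eligible;  /* virtual time at which the task may run again */
    double request;   /* how much service it is asking for            */
    double weight;    /* nice-derived weight: higher = bigger share   */
};

static const struct task *pick_next(const struct task *t, int n, double vnow)
{
    const struct task *best = NULL;
    double best_deadline = 0.0;

    for (int i = 0; i < n; i++) {
        if (t[i].eligible > vnow)   /* not yet eligible: skip it */
            continue;
        /* Virtual deadline: eligible time + request scaled by weight. */
        double deadline = t[i].eligible + t[i].request / t[i].weight;
        if (!best || deadline < best_deadline) {
            best = &t[i];
            best_deadline = deadline;
        }
    }
    return best;
}

int main(void)
{
    /* Hypothetical workloads: a latency-sensitive task asking for a short
     * slice gets an earlier virtual deadline than a batchy one. */
    struct task tasks[] = {
        { "game-render", 0.0, 1.0, 4.0 },
        { "log-rotate",  0.0, 4.0, 1.0 },
    };
    const struct task *next = pick_next(tasks, 2, 0.0);
    printf("run %s first\n", next ? next->name : "(none)");
    return 0;
}
```

That "earliest eligible virtual deadline" rule is roughly why EEVDF gives the latency side of the trade-off more weight than CFS did, without abandoning fairness.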
It's just that, traditionally, Linux schedulers have been rather esoteric to tune, and by default they've been optimized for throughput and fairness over everything else. Good for workstations and servers, bad for everyone else.
Part of that is the assumption that Amazon/Meta/Google all have dedicated engineers who should be doing nothing but tuning performance for 0.0001% efficiency gains. At the scale of millions of servers, those tweaks add up to real dollar savings, and I suspect little of how they run is stock.
This is really just an example of survivorship bias and the power of Valve's good brand value. Big tech does in fact employ plenty of people working on the kernel to make 0.1% efficiency gains (for the reason you state); it's just not posted on HN. Someone would have found this eventually if not Valve.
And the people at FB who worked to integrate Valve's work into the backend and test it and measure the gains are the same people who go looking for these kernel perf improvements all day.
Latency-aware scheduling is important in a lot of domains. Getting video frames or controller input delivered on a deadline is a similar problem to getting voice or video packets delivered on a deadline. Meanwhile housecleaning processes like log rotation can sort of happen whenever.
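To make that concrete, the kernel already has a deadline class (SCHED_DEADLINE) for exactly this kind of work. A minimal sketch below assumes a ~60 Hz frame loop; the runtime/deadline/period numbers are made up for illustration, and sched_setattr is invoked via syscall() since glibc historically shipped no wrapper for it.

```c
#define _GNU_SOURCE
#include <linux/sched.h>     /* SCHED_DEADLINE */
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

/* Mirrors the kernel's struct sched_attr layout. */
struct sched_attr {
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;
    uint32_t sched_priority;
    uint64_t sched_runtime;   /* ns of CPU needed per period        */
    uint64_t sched_deadline;  /* ns by which that work must be done */
    uint64_t sched_period;    /* ns between activations             */
};

int main(void)
{
    struct sched_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size           = sizeof(attr);
    attr.sched_policy   = SCHED_DEADLINE;
    attr.sched_runtime  =  3 * 1000 * 1000;  /* 3 ms of CPU...           */
    attr.sched_deadline = 16 * 1000 * 1000;  /* ...finished within 16 ms */
    attr.sched_period   = 16 * 1000 * 1000;  /* ...every 16 ms (~60 Hz)  */

    /* Needs CAP_SYS_NICE (typically root). */
    if (syscall(SYS_sched_setattr, 0, &attr, 0) != 0) {
        perror("sched_setattr");
        return 1;
    }

    /* Frame/audio work here runs with a deadline guarantee;
     * housecleaning like log rotation stays on the default policy. */
    return 0;
}
```

The point isn't that games should use SCHED_DEADLINE specifically, just that "this must happen by time T, that can happen whenever" is something the scheduler can be told explicitly rather than left to infer from nice values.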
I mean... many SteamOS flavors (and Linux distros in general) have switched to Meta's Kyber I/O scheduler to fix microstutter issues... the knife cuts both ways :)
The comment was perfectly valid and topical and applicable. It doesn't matter what kind of improvement Meta supplied that everyone else took up. It could have been better cache invalidation or better usb mouse support.
In this case yes, but on the other hand Red Hat won't give you the RHEL source code unless you have the binaries. The GPLv2 license requires you to provide the source code only if you distribute the compiled binaries. In theory, Meta could apply its own proprietary patches to Linux and not publish the source code, as long as it runs that patched Linux only on its own servers.
Can't anyone get a RHEL instance on their favorite cloud, dnf install whatever packages they want sources of, email Red Hat to demand the sources, and shut down the instance?
But that would be silly, because all of the code and binaries are already available via CentOS Stream. There's nothing in RHEL that isn't already public at some point via CentOS Stream.
There's nothing special or proprietary about the RHEL code. Access to the code isn't an issue, it's reconstructing an exact replica of RHEL from all of the different package versions that are available to you, which is a huge temporal superset of what is specifically in RHEL.
This violates the GPL, which explicitly states that recipients are entitled to the source tree in a form suitable for modification -- which a web view is not.
RHEL source code is easily available to the public - via CentOS Stream.
For any individual RHEL package, you can find the source code with barely any effort. If you have a list of the exact versions of every package used in RHEL, you could compose it without that much effort by finding those packages in Stream. It's just not served up to you on a silver platter unless you're a paying customer. You have M package versions for N packages - all open source - and you have to figure out the correct construction for yourself.
> SCX-LAVD has been worked on by Linux consulting firm Igalia under contract for Valve
It seems like every time I read about this kind of stuff, it's being done by contractors. I think Proton is similar. Of course that makes it no less awesome, but it makes me wonder about the contractor to employee ratio at Valve. Do they pretty much stick to Steam/game development and contract out most of the rest?
They seem to be doing it through Igalia, which is a company based on specialized consulting for the Linux ecosystem, as opposed to hiring individual contractors. Your point still stands, but from my perspective this arrangement makes a lot of sense, and the Igalia employees have better job security than they would as individual contractors.
Valve is actually extremely small, I've heard estimates at around 350-400 people.
They're also a flat organization, with all the good and bad that brings, so scaling with contractors is easier than bringing on employees that might want to work on something else instead.
Of course smaller companies exist — there are one-person companies! But in a world where many tech companies have 50,000+ employees, 300 is much closer to 100 or 10, and they can all be considered small.
And then you consider it in context: a company with huge impact, brand recognition, and revenue (about $50M/employee in 2025). They’ve remained extremely small compared to how big they could grow.
There are not many tech companies with 50k+ employees, as a point of fact.
I’m not arguing just to argue - 300 people isn’t small by any measure. It’s absolutely not “extremely small” as was claimed. It’s not relatively small, it’s not “small for what they are doing”, it’s just not small at all.
300 people is a large company. The fact that a very small number of ultrahuge companies exist doesn’t change that.
For context, 300 people is substantially larger than the median company headcount in Germany, which is the largest economy in the EU.
You certainly seem to be arguing just to argue. You’re comparing Valve to the companies you choose to work for or the median German company, but those are irrelevant. You’re using the wrong reference set.
Valve is a global, revenue-dominant, platform-level technology company. In its category, 300 employees is extremely small.
Valve is not a German company, so that’s an odd context, but if you want to use Germany for reference, here are the five German companies with the closest revenue to Valve’s:
Would you say a country of 300 people isn't small?
Big, small, etc. are relative terms. There is no way to decide whether or not 300 is small without implicitly saying what it's small relative to. In context, it was obvious that the point being made was "valve is too small to have direct employees working on things other than the core business"
300 is extremely small for a company of their size in terms of revenue and impact. Linus Media Group and their other companies, for instance, are over 100 people, and are much smaller in impact and revenue than a company like Valve, despite not being far off in the number of employees (within an order of magnitude)...
Igalia is a bit unique as it serves as a single corporate entity for organizing a lot of sponsored work on the Linux kernel and open source projects. You'll notice in their blog posts they have collaborations with a number of other large companies seeking to sponsor very specific development work. For example, Google works with them a lot. I think it really just simplifies a lot of logistics for paying folks to do this kind of work, plus the Igalia employees can get shared efficiencies and savings for things like benefits etc.
This seems to be a win-win where developers benefit from more work in niche areas, companies benefit by getting better developers for the things they want done, and Igalia gets paid (effectively) for matching the two together, sourcing sufficient work/developers, etc.
I don't know much about Igalia but they are worker owned and I always see them work on high skill requirement tasks. Makes me wish I was good enough to work for them.
Valve is known to keep their employee count as low as possible. I would guess anything that can reasonably be contracted out is.
That said, something like this which is a fixed project, highly technical and requires a lot of domain expertise would make sense for _anybody_ to contract out.
Proton is mainly a co-effort between in-house developers at Valve (with support on specific parts from contractors like Igalia), developers at CodeWeavers and the wider community.
For contextual, super specific, super specialized work (e.g. SCX-LAVD, the DirectX-to-Vulkan and OpenGL-to-Vulkan translation layers in Proton, and most of the graphics driver work required to make games run on the upcoming ARM based Steam Frame) they like to subcontract work to orgs like Igalia but that's about it.
It would be a large effort to stand up a department that solely focuses on Linux development just like it would be to shift game developers to writing Linux code. Much easier to just pay a company to do the hard stuff for you. I'm sure the steam deck hardware was the same, Valve did the overall design and requirements but another company did the actual hardware development.
This isn’t explicitly called out in any of the other comments in my opinion so I’ll state this. Valve as a company is incredibly focused internally on its business. Its business is games, game hardware, and game delivery. For anything outside of that purview instead of trying to build a huge internal team they contract out. I’m genuinely curious why other companies don’t do this style more often because it seems incredibly cost effective. They hire top level contractors to do top tier work on hyper specific areas and everyone benefits. I think this kind of work is why Valve gets a free pass to do some real heinous shit (all the gambling stuff) and maintain incredible good will. They’re a true “take the good with the bad” kind of company. I certainly don’t condone all the bad they’ve put out, and I also have to recognize all the good they’ve done at the same time.
Back to the root point. Small company focused on core business competencies, extremely effective at contracting non-core business functions. I wish more businesses functioned this way.
Yeah, I'm sorry. Valve is the last company people should be focusing on for this type of behavior. All the other AAA game companies use these mechanics to deliberately manipulate players. IMHO Valve doesn't use predatory practices to keep this stuff going.
Just because they weren’t the first mover into predatory practices doesn’t mean they can’t say no to said practices. Each actor has agency to make their own operating and business decisions. Is Valve the worst of the lot? Absolutely not. But it was still their choice to implement.
What makes Valve special is that they were the first mover on those practices like lootboxes, gamepasses... but they never pushed it as far as the competition where it became predatory.
They have a track record of not engaging in these practices. It might be true that someday we will get the wrong people in leadership positions at Valve who would entertain this behavior, but so far I don't think it's going to happen. Valve has been, time and time again, on the side of sane thinking around these topics. So IMHO your concern isn't really warranted as of yet.
How much of the video did you watch? I'm not aware of other game companies that enable 3rd party integrations into their item systems. This isn't just "lootboxes bad" - it's Valve profiting from actual gambling happening on external sites.
If you want to see how bad this really is, take a look at AAA games like call of duty where they dynamically alter in game loot mechanics to get people to purchase in game items.
Valve is chump change in this department. They allow the practice of purchasing loot boxes and items but don't analyze and manipulate behaviors. Valve is the least bad actor in this department.
I watched half the video and found it pretty biased compared to what's happening in the industry right now.
I feel this argument of Valve deliberately profiting off of gambling is not really the whole story. I certainly don't think that Valve designed their systems to encourage gambling. More like they wanted a way to bring in money to develop other areas of their platform so they could make it better, which they did. And in many cases they are putting players first. Players developed bad behaviors around purchasing in-game and trading items and have chosen to indulge in the behavior. Third parties have risen up around an unhealthy need that IMHO is not Valve's doing. And most importantly, since I was around when these systems went into place, allowing me to see what was happening, this kind of player behavior developed over time. I don't think Valve deliberately encouraged it.
The entire gaming industry is burning down before our eyes because of AAA greed and you guys are choosing to focus on the one company that's fighting against it. I'm not getting it.
> call of duty where they dynamically alter in game loot mechanics to get people to purchase in game items.
[Citation needed]
> I certainly don't think that Valve designed their systems to encourage gambling
Cases are literally slot machines.
> [section about third-party websites] I don't think Valve deliberately encouraged it.
OK, but they continue to allow it (through poor enforcement of their own ToS), and it continues to generate them obscene amounts of money?
> you guys are choosing to focus on the one company that's fighting against it.
Yes, we should let the billion dollar company get away with shovelling gambling to children.
Also, frankly speaking, other AAAs are less predatory with gambling. Fortnite, CoD, and VALORANT to pick some examples, are all just simple purchases from a store. Yes, they have issues with FOMO, and bullying for not buying skins [0], but oh my god, it isn't allowing children to literally do sports gambling (and I should know, I've actively gambled on esports while underage via CS, and I know people that have lost $600+ while underage on CS gambling).
I'm choosing not to place the blame on them as I don't see it as something they can control. And I trust Valve to do the right thing over most any large game studio out there. The history of reputation and actions matters. I think you want to try and skew the narrative based on your own particular bias. The situation is much bigger than what you are making it out to be.
If you have competent people on both sides who care, I don't see why it wouldn't work.
The problem seems, at least from a distance, to be that bosses treat it as a fire-and-forget solution.
We haven't had any software done by outsiders yet, but we have hired consultants to help us on specifics, like changing our infra and helping move local servers to the cloud. They've been very effective and helped us a lot.
We had talks first, though, so we found someone we could trust to have the knowledge, and we were knowledgeable enough ourselves to determine that. We then followed up closely.
Most companies that hire a ton of contractors are doing it for business/financial reporting reasons. Contractors don't show up as employees, so investors don't see employee count rise and the revenue-per-employee metric doesn't get dragged down, and contractors can be cut immediately with no further expenses. Laid-off employees take about a quarter to be truly shed from the books between severance, vacation payouts and unemployment insurance.
Nope. Plenty of top-tier contractors work quietly with their clientele and let the companies take the credit (so long as they reference the contractor to others, keeping the gravy train going.)
If you don't see it happening, the game is being played as intended.
The .308 footgun with software contracting stems from a misunderstanding of what we pay software developers for. The model under which contracting seems like the right move is "we pay software developers because we want a unit of software", like how you pay a carpenter to build you some custom cabinets. If the union of "things you have a very particular opinion about, and can specify coherently" and "things you don't care about" completely covers a project, contracting works great for that purpose.
But most of the time you don't want "a unit of software", you want some amorphous blob of product and business wants and needs, continuously changing at the whims of business, businessmen, and customers. In this context, sure, you're paying your developers to solve problems, but moreover you're paying them to store the institutional knowledge of how your particular system is built. Code is much easier to write than to read, because writing code involves applying a mental model that fits your understanding of the world onto the application, whereas reading code requires you to try and recreate someone else's alien mental model. In the situation of in-house products and business automation, at some point your senior developers become more valuable for their understanding of your codebase than their code output productivity.
In the context of "I want this particular thing fixed in a popular open source codebase that there are existing people with expertise in", contracting makes a ton of sense, because you aren't the sole buyer of that expertise.
I've seen both good and bad contractors in multiple industries.
When I worked in the HFC/Fiber plant design industry, the simple act of "don't use the same boilerplate MSA for every type of vendor", plus being more specific about project requirements in the RFP, made it very clear what was expected. Suddenly we'd get better bids, and we would carefully review the bids to make sure that the response indicated they understood the work.
We also had our own 'internal' cost estimates (i.e. if we had the in house capacity, how long would it take to do and how much would it cost) which made it clear when a vendor was in over their head under-bidding just to get the work, which was never a good thing.
And, I've seen that done in the software industry as well, and it worked.
That said, the main 'extra' challenge in IT is that many of the good players aren't going to be the ones beating down your door like the Big 4 or a WITCH consultancy will.
But really, at the end of the day, the problem is that business people who don't really know (or necessarily -care-) enough about the specifics are unfortunately the ones picking things like vendors.
And worse, sometimes they're the ones writing the spec and not letting engineers review it. [0]
[0] - This once led to an off-shore body shop getting a requirement along the lines of 'the stored procedures and SQL called should be configurable' and sure enough the web.config had ALL the SQL and stored procedures as XML elements, loaded from config just before the DB call, thing was a bitch to debug and their testing alone wreaked havoc on our dev DB.
Igalia isn’t your typical contractor. It’s made up of competent developers that actually want to be there and care to see open source succeed. Completely different ball game.
This is mostly because the title of contractor has come to mean many things. In the original form, of outsourcing temporary work to experts in the field, it still works very, very well. Where it fails is when a business contracts out business-critical work, or contracts to a general company rather than experts.
Yeah, I suppose this workflow is not for everyone. I can only imagine Valve has very specific issues or requirements in mind when they hire contractors like this. When you hire like this, I suspect what one really pays for is a well-known name that will be able to push something important to you into upstream Linux. It's the right way to do it if you want it resolved quickly. If you come in as a fresh contributor, landing features upstream could take years.
I don't know what you're trying to suggest or question. If there is a question here, what is it exactly, and why is that question interesting? Do they employ contractors? Yes. Why was that a question?
Speaking for myself, Valve has been great to work with - chill, and they bring real technical focus. It's still engineers running the show there, and they're good at what they do. A real breath of fresh air from much of the tech world.
Valve has a weird obsession with maximizing their profit-per-employee ratio. There are stories from ex-employees out on the web about how this creates a hostile environment, and perverse incentives to sabotage those below you to protect your own job.
I don't remember all the details, but it doesn't seem like a great place to work, at least based on the horror stories I've read.
Valve does a lot of awesome things, but they also do a lot of shitty things, and I think their productivity is abysmal based on what you'd expect from a company with their market share. They have very successful products, but it's obvious that basically all of their income comes from rent-seeking from developers who want to (well, need to) publish on Steam.
Valve is practically singlehandedly dragging the Linux ecosystem forward in areas that nobody else wanted to touch.
They needed Windows games to run on Linux so we got massive Proton/Wine advancements. They needed better display output for the Deck and we got HDR and VRR support in Wayland. They also needed smoother frame pacing and we got a scheduler that Zuck is now using to run data centers.
It's funny to think that Meta's server efficiency is being improved because Valve paid Igalia to make Elden Ring stutter less on a portable Linux PC. This is the best kind of open source trickledown.
Over time they're going to touch things that people were waiting for Microsoft to do for years. I don't have an example in mind at the moment, but it's a lot better to make the changes yourself than wait for OS or console manufacturer to take action.
Sleep and hibernate don't just work on Windows unless Microsoft works with laptop and board manufacturers to make Windows play nice with all those drivers. It's inevitable that it's hit and miss on any other OS that manufacturers don't care much about. Apple does nearly everything inside their walls; that's why it just works.
Regardless of how it must be implemented, if this is a desirable feature then this explanation isn’t an absolution of Linux but rather an indictment: its development model cannot consistently provide this product feature.
(And same for Windows to the degree it is more inconsistent on Windows than Mac)
It's not the development model at fault here. It's the simple fact that Windows makes up nearly the entire user base for PCs. Companies make sure their hardware works with Windows, but many don't bother with Linux because it's such a tiny percentage of their sales.
Except when it doesn't. I can't upgrade my Intel graphics drivers to any newer version than what came with the laptop or else my laptop will silently die while asleep. Internet is full of similar reports from other laptop and graphics manufacturers and none have any solutions that work. The only thing that reliably worked is to restore the original driver version. Doesn't matter if I use the WHQL version(s) or something else.
> Regardless of how it must be implemented, if this is a desirable feature then this explanation isn’t an absolution of Linux but rather an indictment: its development model cannot consistently provide this product feature.
The problem is: the specifications of ACPI are complex, Windows' behavior tends to be pretty much trash and most hardware tends to be trash too (AMD GPUs for example were infamous for not being resettable for years [1]), which means that BIOSes have to work around quirks on both the hardware and software. Usually, as soon as it is reasonably working with Windows (for a varying definition of "reasonably", that is), the ACPI code is shipped and that's it.
Unfortunately, Linux follows standards (or at least, it tries to) and cannot fully emulate the numerous Windows quirks... and on top of that, GPUs tend to be hot piles of dung requiring proprietary blobs that make life even worse.
> its development model cannot consistently provide this product feature.
The real problem is that the hardware vendors aren't using its development model. To make this work you either need a) the hardware vendor to write good drivers/firmware, or b) the hardware vendor to publish the source code or sufficient documentation so that someone else can reasonably fix their bugs.
The Linux model is the second one. Which isn't what's happening when a hardware vendor doesn't do either of them. But some of them are better than others, and it's the sort of thing you can look up before you buy something, so this is a situation where you can vote with your wallet.
A lot of this is also the direct fault of Microsoft for pressuring hardware vendors to support "Modern Standby" instead of, rather than in addition to, S3 suspend, presumably because they're organizationally incapable of making Windows Update work efficiently, so they need Modern Standby to paper over it by having it run when the laptop is "asleep", and then they can't have people noticing that S3 is more efficient. But Microsoft's current mission to get everyone to switch to Linux appears to be in full swing now, so we'll see if their efforts on that front manage to improve the situation over time.
I should have said 'product development' model versus just 'development' to be more clear. To state it another way: Linux has no way, no function, no pathway to providing this. This is not really surprising, because it isn't the work software developers find fun and self-rewarding, but rather more the relatively mundane business-as-usual scope of product managers and business development folks.
… And that’s all fine, because this is a super niche need: effectively nobody needs Linux laptops and even fewer depend on sleep to work. If ‘Linux’ convinced itself it really really needed to solve this problem for whatever reason, it would do something that doesn’t look like its current development model, something outside that.
Regardless, the net result in the world today is that Linux sleep doesn’t work in general.
Sleep has always worked on my desktop with a random Asus board from the early 2020s with no issues aside from one Nvidia driver bug earlier this year (which was their fault not MS's). Am I just really lucky?
From what I read, it was a lot of the prosumer/gamer brands (MSI, Gigabyte, ASUS) implementing their part of sleep/hibernate badly on their motherboards. Which honestly lines up with my experience with them and other chips they use (in my case, USB controllers). Lots of RGB and maybe overclocking tech, but the cheapest power management and connectivity chips they can get (arguably what usually gets used the most by people).
It never really worked in games even with S3 sleep. The new connected standby stuff created new issues but sleeping a laptop while gaming was a roulette wheel. SteamOS and the like actually work, like maybe 1/100 times I've run into an issue. Windows was 50/50.
Power management is a really hard problem. It's the stickiest of programming problems, a multi-threaded sequence where timing matters across threads (sometimes down to the ns). I'm convinced only devices that have hardware and software made by the same company (Apple, Android phones, Steam Deck, maybe Surface laptops) have a shot in hell at getting it perfect. The long-tail/corner cases and testing are a nightmare.
As an example, if you have a mac, run "ioreg -w0 -p IOPower" and see all the drivers that have to interact with each other to do power management.
On my Framework 13 AMD: sleep just works on Fedora. Sleep is unreliable on Windows; if my fans are all running at full speed while running a game and I close the lid to begin sleeping, it will start sleeping and eventually wake up with all fans blaring.
I don't understand this comment in this context. Both of these features work on my Steam Deck. Neither of them have worked on any Windows laptop my employers have foisted upon me.
That requires driver support. What you're seeing is Microsoft's hardware certification forcing device vendors to care about their products. You're right that this is lacking on Linux, but it's not a slight on the kernel itself.
I was at Microsoft during the Windows 8 cycle. I remember hearing about a kernel feature I found interesting. Then I found Linux had already had it for a few years at that point.
I think the reality is that Linux is ahead on a lot of kernel stuff. More experimentation is happening.
io_uring does more than IOCP. It's more like an asynchronous syscall interface that avoids the overhead of directly trapping into the kernel. This avoids some overheads IOCP cannot. I'm rusty on the details but the NT kernel has since introduced an imitation: https://learn.microsoft.com/en-us/windows/win32/api/ioringap...
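For anyone who hasn't touched it, here's a minimal liburing sketch of the model (the file path is an arbitrary example; link with -luring): a submission queue entry describes the read, one submit call batches it into the kernel, and the completion entry comes back carrying the result/byte count.

```c
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct io_uring ring;
    if (io_uring_queue_init(8, &ring, 0) < 0) {
        perror("io_uring_queue_init");
        return 1;
    }

    int fd = open("/etc/hostname", O_RDONLY);   /* arbitrary example file */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[256];
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf) - 1, 0);

    io_uring_submit(&ring);              /* one syscall submits the whole batch */

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);      /* completion carries the result */
    if (cqe->res >= 0) {
        buf[cqe->res] = '\0';            /* res is the byte count read */
        printf("read %d bytes: %s", cqe->res, buf);
    }
    io_uring_cqe_seen(&ring, cqe);

    close(fd);
    io_uring_queue_exit(&ring);
    return 0;
}
```

With SQPOLL and registered buffers/files, even the submit can avoid a syscall entirely, which is the overhead point being made above.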
I think they are a bit different - in the Windows kernel, all IO is asynchronous on the driver level, on Linux, it's not.
io_uring didn't change that, it only got rid of the syscall overhead (which is still present on Windows), so in actuality they are two different technical solutions that affect different levels of the stack.
In practice, Linux I/O is much faster, owing in part to the fact that Windows file I/O requires locking the file, while Linux does not.
That argument holds no water. IOUring is essential for the performance of some modern POSIX programs.
You can see shims for fork() to stop it tanking performance so hard too. IOUring doesn't map at all onto IOCP; at least the Windows substitute for fork has "ZwCreateProcess" to work from. IOUring had nothing.
IOCP is much nicer from a dev point of view because your program can be signalled when a buffer has data on it but also with the information of how much data, everything else seems to fail at doing this properly.
And behind on a lot of stuff. Microsoft's ACLs are nothing short of one of the best-designed permission systems there are.
On the surface, they are as simple as the Linux UGO/rwx stuff if you want them to be, but you can really, REALLY dive into the technology and apply super specific permissions.
ACLs in Linux were tacked on later; not everything supports them properly. They were built into Windows NT from the start and are used consistently across kernel and userspace, making them far more useful in practice.
Also, as far as I know Linux doesn't support DENY ACLs, which Windows does.
Some of us can! I certainly enjoy doing it, and according to "man 5 acl" what you assert is completely false. Unless you have a particular commit or document from kernel.org you had in mind?
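For anyone curious what that side looks like in practice, here is a minimal sketch using the POSIX-draft ACL API from libacl (compile with -lacl; the path is whatever file you point it at):

```c
#include <sys/types.h>
#include <sys/acl.h>    /* POSIX 1003.1e draft ACLs, via libacl */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 1;
    }

    /* Read the access ACL attached to the file. */
    acl_t acl = acl_get_file(argv[1], ACL_TYPE_ACCESS);
    if (!acl) {
        perror("acl_get_file");
        return 1;
    }

    /* Render it in the same text form getfacl/setfacl use. */
    char *text = acl_to_text(acl, NULL);
    if (text) {
        printf("%s", text);
        acl_free(text);
    }
    acl_free(acl);
    return 0;
}
```

These entries are allow-style user/group/mask permissions as described in acl(5); the explicit DENY semantics being debated downthread belong to the NFSv4/RichACL model, which is the part that never made it into mainline.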
See 6.2.1 of RFC8881, where NFSv4 ACLs are described. They are quite similar to Windows ACLs.
Here is a kernel dev saying they are against adding an NFSv4 ACL implementation. The relevant RichACLs patch never got merged: https://lkml.org/lkml/2016/3/15/52
I see what I misunderstood: even in the presence of an ALLOW entry, a DENY entry would prohibit access. I am familiar with that on the Windows side but haven't really dug into Linux ACLs. The ACCESS CHECK ALGORITHM[1] section of the acl(5) man page was pretty clear, I think.
Haha, sure. Sorry, it's not you, it's the ACLs (and my nerves). Have you tried configuring NFSv4 ACLs on Linux? Because kernel devs are against supporting them, you either use some other OS or have all sorts of "fun". Also, not to be confused with all sorts of LSM-based ACLs... Linux has ACLs in the most ridiculous way imaginable...
Do you have any favorite docs or blogs on these? Reading about one of the best designed permissions systems sounds like a fun way to spend an afternoon ;)
Oh yeah for sure. Linux is amazing in a computer science sense, but it still can't beat Windows' vertically integrated registry/GPO based permissions system. Group/Local Policy especially, since it's effectively a zero coding required system.
Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.
> Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.
1. cloud-init support was in RHEL 7.2 which released November 19, 2015. A decade ago.
2. Checking on Ubuntu, it looks like it was supported in Ubuntu 18.04 LTS in April 2018.
3. For admining tens of thousands of servers, if you're in the RHEL ecosystem you use Satellite and its Ansible integration. That's also been going on for... about a decade. You don't need much integration though, other than a host list of names and IPs.
There are a lot of people on this list handling tens of thousands or hundreds of thousands of linux servers a day (probably a few in the millions).
Not an implementer of group policy, more of a consumer. There are 2 things that I find extremely problematic about them in practice.
- There does not seem to be a way to determine which machines in the fleet have successfully applied a policy. If you need a policy to be active before deploying something (via a different method), or things break, what do you do?
- I’ve had far too many major incidents that were the result of unexpected interactions between group policy and production deployments.
That's not a problem with group policy. You're just complaining that GPO is not omnipotent. That's out of scope for group policies mate. You win, yeah yeah.... Bye
Debian (and thus Ubuntu) has had full support for automated installs since the '90s. It's been built into `dpkg` since forever. That includes saving or generating answers to install-time questions, PXE deployment, ghosting, cloud-init and everything. Then stuff like Ansible/Puppet has been automating deployment for a long time too. They might have added yet another way of doing it, but full-stack deployment automation has been there for as long as Ubuntu has existed.
> Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.
What?! I was doing kickstart on Red Hat (it wasn't called Enterprise Linux back then) at my job 25 years ago; I believe we were using floppies for that.
Yeah, I have been working on the RHEL and Fedora installer since 2013, and already back then it had a long history almost lost to time - the git history goes all the way back to 1999 (the history was imported from CVS, as it predates Git) and that actually only covers the first graphical interface - it had automated installation support via kickstart and a text interface long before that, but the commit history has apparently been lost. And there seems to have been even some earlier distinct installer before Anaconda, which likely also supported some sort of automated install.
BTW, we managed to get the earliest history of the project written down here by one of the earliest contributors, for anyone who might be interested:
Note how some commands were introduced way back in the single digit Fedora/Fedora Core age - that was from about 2003 to 2008. Latest Fedora is Fedora 43. :)
> Microsoft's ACLs are nothing short of one of the best-designed permission systems there are.
You have a hardened Windows 11 system. A critical application was brought forward from a Windows 10 box but it failed, probably a permissions issue somewhere. Debug it and get it working. You can not try to pass this off to the vendor, it is on you to fix it. Go.
Procmon.exe. Give me 2 minutes. You make it sound like it's such a difficult thing to do. It literally will not take me more than 2 minutes to tell you exactly where the permission issue is and how to fix it.
Procmon won't show you every type of resource access. Even when it does, it won't tell you which entity in the resource chain caused the issue.
And then you get security products that have the fun idea of removing privileges when a program creates a handle (I'm not joking, that's a thing some products do). So when you open a file with write access, and then try to write to the file, you end up with permission errors during the write (and not the open) and end up debugging for hours on end only to discover that some shitty security product is doing stupid stuff...
Granted, that's not related to ACLs. But for every OK idea Microsoft had, they have a dozen terrible ideas that make the whole system horrible.
Especially when the permission issue is up the chain from the application. Sure it is allowed to access that subkey, but not the great great grandparent key.
At this point you're just arguing for the sake of bashing on Microsoft. You said it yourself, that's not related to ACLs, so what are you doing, mate? This is not a healthy foundation for a constructive discussion.
The file permission system on Windows allows for super granular permissions, yes; administrating those permissions was a massive pain, especially on Windows file servers.
And yet, it requires kernel extension anti-cheat to stop a game mod from reading and writing memory locations in a running process. It’s a toy operating system if it can’t even prevent that. It’s why corporate machines are so locked down. Then there is the fact video drivers run in ring 0 and are allowed to phone home… but hey you can prevent notepad++ from running FTW.
I was surprised to hear that Windows just added native NVMe which Linux has had for many years. I wonder if Azure has been paying the SCSI emulation tax this whole time.
It was always wild to me that their installer was just not able to detect an NVMe drive out of the box in certain situations. I saw it a few times with customers when I was doing support for a Linux company.
Yeah and Linux is waaay behind in other areas. Windows had a secure attention sequence (ctrl-alt-del to login) for several decades now. Linux still doesn't.
Linux (well, more accurately, X11), has had a SAK for ages now, in the form of the CTRL+ALT+BACKSPACE that immediately kills X11, booting you back to the login screen.
I personally doubt SAK/SAS is a good security measure anyways. If you've got untrusted programs running on your machine, you're probably already pwn'd.
This setup came from the era of Windows running basically everything as administrator or something close to it.
The whole windows ecosystem had us trained to right click on any Windows 9X/XP program that wasn’t working right and “run as administrator” to get it to work in Vista/7.
The "threat model" (if anyone even called it that) of applications back then was bugs resulting in unintended spin-locks, and the user not realizing they're critically short on RAM or disk space.
There are many ways to disable CTRL+ALT+DEL on Windows too, from registry tricks to group policy options. Overall, SAK seems to be a relic of the past that should be kept far away from any security consideration.
It made a lot more sense in the bygone years of users casually downloading and running exe's to get more AIM "smilies", or putting in a floppy disk or CD and having the system autoexec whatever malware the last user of that disk had. It was the expected norm for everybody's computer to be an absolute mess.
These days, things have gotten far more reasonable, and I think we can generally expect a linux desktop user to only run software from trusted sources. In this context, such a feature makes much less sense.
The more powerful form is the UAC full privilege escalation dance that Win 7+(?) does, which is a surprisingly elegant UX solution.
1. Snapshot the desktop
2. Switch to a separate secure UI session
3. Display the snapshot in the background, greyed out, with the UAC prompt running in the current session and topmost
It avoids any chance of a user-space program faking or interacting with a UAC window.
Clever way of dealing with the train wreck of legacy Windows user/program permissioning.
One of the things Windows did right, IMO. I hate that elevation prompts on macOS and most Linux desktops are indistinguishable from any other window.
It's not just visual either. The secure desktop is in protected memory, and no other process can access it. Only NTAUTHORITY\System can initiate showing it and interact with it in any way; no other process can.
You can also configure it to require you to press CTRL+ALT+DEL on the UAC prompt to be able to interact with it and enter credentials as another safeguard against spoofing.
I'm not even sure if Wayland supports doing something like that.
My only experience with non-UAC endpoint privilege management was BeyondTrust and it seemed to try to do what UAC did but with a worse user experience. It looks like the Intune EPM offering also doesn't present as clear a delineation as UAC, which seems like a missed opportunity.
It's useful for shared spaces like schools, universities and internet cafes. The point is that without it you can display a fake login screen and gather people's passwords.
I actually wrote a fake version of RMNet login when I was in school (before Windows added ctrl-alt-del to login).
Linux is behind Windows wrt the (hybrid) microkernel vs monolith question, which helps with having drivers and subsystems in user mode and with supporting multiple personalities (the Win32, POSIX, OS/2 and WSL subsystems). Linux can hot-patch the kernel, but replacing core components is risky, and drivers and filesystems cannot be restarted independently.
I do, MIDI 2.0. It's not because they're not doing it, just that they're doing it at a glacial pace compared to everyone else. They have reasons for this (a complete rewrite of the windows media services APIs and internals) but it's taken years and delays to do something that shipped on Linux over two years ago and on Apple more like 5 (although there were some protocol changes over that time).
Kernel level anti-cheat with trusted execution / signed kernels is probably a reasonable new frontier for online games, but it requires a certain level of adoption from game makers.
This is a part of Secure Boot, which Linux people have raged against for a long time. Mostly because the main key signing authority was Microsoft.
But here's my rub: no one else bothered to step up to be a key signer. Everyone has instead whined for 15 years and told people to disable Secure Boot and the loads of trusted compute tech that depends on it, instead of actually building and running the necessary infra for everyone to have a Secure Boot authority outside of big tech. Not even Red Hat/IBM even though they have the infra to do it.
Secure Boot and signed kernels are proven tech. But the Linux world absolutely needs to pull their heads out of their butts on this.
There are plenty of locked down computers in my life already. I don't need or want another system that only runs crap signed by someone, and it doesn't really matter whether that someone is Microsoft or Redhat. A computer is truly "general purpose" only if it will run exactly the executable code I choose to place there, and Secure Boot is designed to prevent that.
I don't know overall in the ecosystem but Fedora has been working for me with secureboot enabled for a long time.
Having the option to disable secureboot was probably due to backlash at the time and antitrust concerns.
Aside from providing protection against "evil maid" attacks (right?), secureboot is in the interest of software companies. Just like platform "integrity" checks.
The goals of the people mandating Secure Boot are completely opposed to the goals of people who want to decide what software they run on the computer they own. Literally the entire point of remote attestation is to take that choice away from you (e.g. because they don't want you to choose to run cheating software). It's not a matter of "no one stepped up"; it's that Epic Games isn't going to trust my secure boot key for my kernel I built.
The only thing Secure Boot provides is the ability for someone else to measure what I'm running and therefore the ability to tell me what I can run on the device I own (most likely leading to them demanding I run malware like the adware/spyware bundled into Windows). I don't have a maid to protect against; such attacks are a completely non-serious argument for most people.
> anti-cheat far precedes the casinoification of modern games.
> nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.
You are correct, but I think I did a bad job of communicating what I meant. It's true that anti-cheat has been around since forever. However, what's changed relatively recently is anti-cheat integrated into the kernel alongside requirements for signed kernels and secure boot. This dates back to 2012, right as games like Battlefield started introducing gambling mechanics into their games.
There were certainly other games that had some gambly aspects to them, but the 2010s are pretty close to when esports, along with in-game gambling, were starting to bud.
I'm not giving game ownership of my kernel, that's fucking insane. That will lead to nothing but other companies using the same tech to enforce other things, like the software you can run on your own stuff.
Tbh I'm starting to think that I do not see Microsoft being able to keep its position in the OS market. With Steam doing all the hard work and having a great market to play with, the vast range of distributions to choose from, and most importantly how easy it has become to create an operating system from scratch, they not only lost all possible appeal, they seem stuck on a really weird fetishism with their taskbar and just didn't provide me any kind of reason to be excited about Windows.
Their research department rocks, however, so it's not a full bash on Microsoft at all - I just feel like they are focusing on other, way more interesting stuff.
Kernel improvements are interesting to geeks and data centers, but open source is fundamentally incompatible with great user experience.
Great UX requires a lot of work that is hard but not algorithmically challenging. It requires consistency and getting many stakeholders to buy in. It requires spending lots of time on things that will never be used by more than 10-20% of people.
Windows got a proper graphics compositor (DWM) in 2006 and made it mandatory in 2012. macOS had one even earlier. Linux fought against Compiz, and while Wayland feels inevitable, vocal forces still complain about and argue against it. Linux has a dozen incompatible UI toolkits.
Screen readers on Linux are a mess. High contrast is a mess. Setting font size in a way that most programs respect is a mess. Consistent keyboard shortcuts are a mess.
I could go on, but these are problems that open source is not set up to solve. These are problems that are hard, annoying, not particularly fun. People generally only solve them when they are paid to, and often only when governments or large customers pass laws requiring the work to be done and threaten to not buy your product if you don't do it. But they are crucially important things to building a great, widely adopted experience.
If you have anything less than perfect vision and need any accessibility features, yes. If you have a High DPI screen, yes. In many important areas (window management, keyboard shortcuts, etc.), yes.
Linux DEs still can't match the accessibility features alone.
yeah, there's layers and layers of progressively older UIs layered around the OS, but most of it makes sense, is laid out sanely, and is relatively consistent with other dialogs.
macOS beats it, but it's still better in a lot of ways than the big Linux DEs.
A Start menu in the middle of the screen that takes a couple of seconds to even load (because it is implemented in React horribly enough to be this slow), only to show ads next to everything, is perfect user experience.
Every other button triggering Copilot assures even better UX goodness.
Your comment gives the impression that you think open source software is only developed by unpaid hobbyists. This is not true; it is quite an outdated view. Many things are worked on by developers paid full time.
And that people are mostly interested in algorithmically challenging stuff, which I don't think is the case.
Accessibility does need improvement. It seems severely lacking. Although your link makes it look like it's not that bad actually, I would have expected worse.
> Tbh I'm starting to think that I do not see Microsoft being able to keep its position in the OS market
It's a big space. Traditionally, Microsoft has held the multimedia and gaming segments plus lots of professional segments, but with Valve doing a large push into the first two and Microsoft not even giving it a half-hearted try, it might just be that corporate computers continue using Microsoft, people's home media equipment is all Valve, and hipsters (and others...) keep on using Apple.
Windows will remain as the default "enterprise desktop." It'll effectively become just another piece of business software, like an ERP.
Gamers, devs, enthusiasts will end up on Linux and/or SteamOS via Valve hardware, creatives and personal users that still use a computer instead of their phone or tablet will land in Apple land.
With the massive adoption of web apps in enterprise I have seen, I would expect Windows to become irrelevant or even a liability in business use as well.
Still, some sort of OS is required to run that browser that renders the websites, and some team needs to manage a fleet of those computers running that OS. And that's where Microsoft will sit, since they're unable to build good consumer products, they'll eventually start focusing exclusively on businesses and enterprises.
If you just need something that runs a browser, can't you do that with something like Chrome OS/MacOS/RHEL Workstation/whatever SUSE has for workstation users ? :)
Add to that all the bullshit they have been pushing on their customers lately:
* OS-level ads
* invasive AI integration
* dropping support for 40% of their installed base (Windows 10)
* forcing useless DRM/trusted computing hardware - TPM - as a requirement to install the new and objectively worse Windows version, with even more spying and worse performance (Windows 11)
With that, I think their prospects are bleak & I have no idea who would install anything other than SteamOS or Bazzite in the future with this kind of Microsoft behavior.
First Valve has to actually start pushing for proper Linux games, until then Windows can keep enjoying its 70% market share, with game studios using Windows business as usual.
Also, Raspberry Pis are the only GNU/Linux devices most people can find at retail stores.
The Linux kernel and Windows userspace are not very well matched on a fundamental level. I’m not sure we should be looking forward to that, other than for running games and other insular apps.
It kinda looked like this was the future around the time they introduced WSL, released .NET for Linux and started contributing to the Linux kernel - all the while making bank with Azure, mostly thanks to running Linux workloads.
But then they decided it was better to show ads at the OS level, rewrite the OS UI as a web app, force hardware DRM for their new OS version (the TPM requirement), and automatically capture the content of your screen and feed it to AI.
I’ve heard from several people who game on Windows that Gamescope side panel with OS-wide tweakables for overlays, performance, power, frame limiters and scaling is something that they miss after playing on Steam Deck. There are separate utilities for each, but not anything so simple and accessible as in Gamescope.
A good one is shader pre-caching with Fossilize; Microsoft is only now getting around to it and it still pales in comparison to Valve's solution for Linux.
I do agree. It's also thanks to gaming that the GPU industry was in such a good state to be consumed by AI now. Game development used to always be the frontier of software optimisation techniques and ingenious approaches to the constraints.
> Valve is practically singlehandedly dragging the Linux ecosystem forward in areas that nobody else wanted to touch.
I'm loving what Valve has been doing, and their willingness to shove money into projects that have long been under-invested in, BUT. Please don't forget all the volunteers that developed these systems for years before Valve decided to step up. All of this is only possible because a ton of different people spent decades slowly building a project that for most of its lifetime seemed like a dead-end idea.
Wine as a software package is nothing short of miraculous. It has been monumentally expensive to build, but is provided to everyone to freely use as they wish.
Nobody, and I do mean NOBODY, would have funded a project that spent 20 years struggling to run Office and Photoshop. Valve took it across the finish line into a commercially useful project, but they could not have done that without the decade+ of work before that.
I low key hope the current DDR5 prices push them to drag the Linux memory and swap management into the 21st century, too, because hard locking on low memory got old a while ago
I feel like all of the elements are there: zram, zswap, various packages that improve on default oom handling... maybe it's more about creating sane defaults that "just work" at this point?
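On the OOM handling piece specifically: a lot of the "do something smarter than hard locking" behavior already hangs off the kernel's pressure stall information (PSI), which is what userspace handlers such as systemd-oomd watch before deciding to kill something. A rough sketch, assuming a PSI-enabled kernel (most modern distro kernels are):

```c
#include <stdio.h>

/*
 * Dump /proc/pressure/memory, the signal userspace OOM daemons
 * (e.g. systemd-oomd) act on instead of waiting for the kernel
 * OOM killer to fire.
 */
int main(void)
{
    FILE *f = fopen("/proc/pressure/memory", "r");
    if (!f) {
        perror("fopen /proc/pressure/memory");  /* needs CONFIG_PSI */
        return 1;
    }

    char line[256];
    /* Two lines: "some ..." (any task stalled on memory) and
     * "full ..." (all non-idle tasks stalled at once). */
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);

    fclose(f);
    return 0;
}
```

So the plumbing exists; the missing part is, as you say, distros wiring it up with sane defaults out of the box.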
It takes a solid 45 seconds for me to enable zram (compressed RAM as swap) on a fresh Arch install. I know that doesn't solve the issue for 99% of people who don't even know what zram is / have no idea how to do it / are trying to do it for the first time, but it would be pretty easy for someone to enable that in a distro. I wouldn't be shocked if it is already enabled by default in Ubuntu or Fedora.
That just pushes the problem away, it doesn't solve it. I still hit that limit when I ran a big compile while some other programs were using a lot of memory.
Zswap is arguably better. It confers most of the benefits of zram swap, plus being able to evict to non-RAM if cache becomes more important or if the situation is dire. The only times I use zram are when all I have to work with for storage is MMC, which is too slow and fragile to be written to unless absolutely necessary.
See mac or windows: grow swap automatically up to some sane limit, show a warning, give user an option to kill stuff; on headless systems, kill stuff. Do not page out critical system processes like sshd or the compositor.
A hard lock which requires a reboot or god forbid power cycling is the worst possible outcome, literally anything else which doesn’t start a fire is an improvement TBH.
I feel I just need to run a slightly too large LLM with too much context on a MBP, and it's enough to slow it down irreparably until it suddenly hard resets. Maybe the memory pressure it does that at is much higher though compared to Linux?
> This is the best kind of open source trickledown.
We shouldn't be depending on trickledown anything. It's nice to see Valve contributing back, but we all need to remember that they can totally evaporate/vanish behind proprietary licensing at any time.
They have to abide by the Wine license, which is the LGPL, so unless they're going to make their own from scratch, they can't make the bread and butter of their compat layer proprietary.
There is absolutely nothing harmful about permissive licenses. Let's say that Wine was under the MIT license, and Valve started publishing a proprietary fork. The original is still there! Nobody is harmed by some proprietary fork existing, because nothing was taken away from them.
A decade or two ago Wine was under a permissive license (MIT, I think). When proprietary forks started appearing, CodeWeavers (which employs all the major Wine contributors) relicensed it under the LGPL.
If I'm not mistaken this has been greatly facilitated by the recent BPF-based extension mechanism (sched_ext) that allows developers to go crazy creating schedulers and other functionality through a protected virtual machine mechanism provided by the kernel.
Man, if only meta would give back, oh and also stop letting scammers use their AI to scam our parents, but hey, that accounted for 10% of their revenue this last year, that's $16 BILLION.
Valve seemingly has no concerns with using the same tactics casinos perfected to hook people (and their demographics are young). They are not Meta level of societal harm, but they are happy to be a gateway for kids into gambling. Not that this is unusual in gaming unfortunately.
One would've expected one of the many desktop-oriented distros (some with considerable funding, even) to have tackled these things already, but somehow desktop Linux has been stuck in the awkward midway of "it technically works, just learn to live with the rough edges" until finally Valve took initiative. Go figure.
There's far more of that, starting with the lack of a stable ABI in gnu/linux distros. Eventually Valve or Google (with Android) are gonna swoop in with a user-friendly, targetable by devs OS that's actually a single platform
I don't have a whole lot of faith in Google, based on considerable experience with developing for Android. Put plainly, it's a mess, and even with improvements in recent years there's enough low-hanging fruit for improving its developer story that much of it has fallen off the tree and stands a foot thick on the ground.
Mobile in general is a disappointment. iOS is better but not great. It was a real chance to get a lot of things right that sucked on desktop, and that chance was mostly squandered.
At least iOS made the deep and robust AppKit/Cocoa the foundation of its primary kit and then over the years made sensible QoL changes, resulting in something reasonably pleasant to write for. That, and it doesn’t fight you and make you jump through hoops if you’d rather use some flavor of C, C++, or something else LLVM can handle instead of a JVM-something. That goes a long way.
Mobile is completely hamstrung, all of the effort went into creating as much vendor lock-in as possible rather than into creating a useful pocket computer. There's all this cool tech on and adjacent to mobile that you can't actually use in any meaningful way because every aspect of it is someone's money patch and they don't want to work together.
Except that Android doesn't have a fixed ABI either. Google Play requires apps to rebuild targeting the latest Android ABI all the time. They have one year after each release to update or be removed.
Ubuntu LTS is currently on track to be that. Both in the server and desktop space, in my personal experience it feels like a rising number of commercial apps are targeting that distro specifically.
It’s not my distribution of choice, but it’s currently doing exactly what you suggest.
The problem with any LTS release is lack of support for newer hardware. Not as much of an issue for an enthusiast or sysadmin who's likely to be using well-supported hardware, but can be a huge one for a more typical end user hoping to run Linux on their recently purchased laptop.
That may have been from a generation that’d been out for many months or a year, or was built on a CPU and chipset that’d been out for quite some time already.
The problem is that Linux can’t handle hardware it doesn’t have drivers for (or can only run it in an extremely basic mode), and LTS kernels only have drivers for hardware that existed prior to their release.
I just installed Ubuntu again after a few years, and it’s striking how familiar the pain points are—especially around graphics. If Ubuntu LTS is positioning itself as the standard commercial Linux target, it has to clearly outperform Windows on fundamentals, not just ideology. Linux feels perpetually one breakthrough release away from actually displacing it.
That's why, RHEL for example, has such a long support lifecycle. It's so you can develop software targeting RHEL specifically, and know you have a stable environment for 10+ years. RHEL sells a stable (as in unchanging) OS for x number of years to target.
And if you want to follow the RHEL shaped bleeding edge you can develop on latest Fedora. I'll often do this, develop/package and Fedora and then build on RHEL as well.
Please don't erase all the groundwork they've done over the years to make it possible for these later enhancements to happen. It wasn't like they were twiddling their thumbs this whole time!
That's not my intention at all. It's just frustrating how little of it translates to impact that's readily felt by end users, including those of us without technical inclination.
That isn't it. Generally, whatever the majority of users tend to use is where the majority of focus goes.
The vast majority of people that were using Linux on the desktop before 2015 were either hobbyists, developers or people that didn't want to run proprietary software for whatever reason.
These people generally didn't care about a lot of fancy tech mentioned. So this stuff didn't get fixed.
There’s some truth to that, but a lot of (maybe most) Linux desktop users are on laptops and yet there are many aspects of the Linux laptop experience that skew poor.
I think the bigger problem is that commercial use cases suck much of the air out of the room, leaving little for end user desktop use cases.
It's not just Valve taking the initiative. It's mostly because Windows has become increasingly hostile and just plain horrible over the years. They'll be writing textbooks on how badly Microsoft screwed up their operating system.
I'm a Mac user, but I recently played around with a beefy laptop at work to see how games ran on it, and I was shocked at how bad and user-hostile Windows 11 is. I had previously used Windows 98, 2000, XP, Vista, and 7, but 11 is just so janky. It's festooned with Copilot/AI jank, and seems to be filled with ads and spyware.
If I didn't know better, I'd assume Windows was a free, ad-supported product. If I ever pick up a dedicated PC for gaming, it's going to be a Steam Machine and/or Steam Deck. Microsoft is basically lighting Xbox and Windows on fire to chase AI clanker slop.
(I've been a cross platform numerical developer in GIS and geophysics for decades)
Serious Windows power users, and current and former Windows developers and engineers, swear by Chris Titus Tech's Windows Utility (https://github.com/ChrisTitusTech/winutil).
It's an open PowerShell suite, a collaboration by hundreds of contributors maintained by an opinionated coordinator, that allows easy installation of common tools, easy setting of update behaviours, easy tweaking of telemetry and AI addons, and easy creation of custom ISO installs and images for VM use (a dedicated stripped-down Windows OS for games or a Qubes shard).
It's got a lot of hover tooltips to assist in choices and avoid surprises, and you can always look at the scripts that are run if you're suspicious.
" Windows isn't that bad if you clean it out with a stiff enough broom "
That said, I'm setting my grandkids up with Bazzite decks and forcing them to work in CLIs for a lot of things, to get them used to seeing what's under the hood.
Bazzite is nice but it's not very CLI-centric, I think because of the immutability. It's a great OS, but I found CachyOS a lot better if you want to work from the CLI in normal ways.
Linux (and its ecosystem) sucks at having focus and direction.
They might get something right here and there, especially related to servers, but they are awful at not spinning wheels
See how wayland progress is slow. See how some distros moved to it only after a lot of kicking and screaming.
See how a lot of peripherals in "newer" (sometimes a model that's 2 or 3 yrs on the market) only barely works in a newer distro. Or has weird bugs
"but the manufacturers..." "but the hw producers..." "but open source..." whine
Because Linux lacks a good hierarchy for isolating responsibility, instead going for "every kernel driver can do all it wants" together with "interfaces that keep flipping and flopping at every new kernel release" - notable (good) exception: USB userspace drivers. And don't even get me started on the whole mess that is xorg drivers.
And then you have a Rube Goldberg machine in the form of udev, dbus and what not, or whatever newer solution that solves half the problems and creates a new collection of bugs.
Honestly I can't see it remaining tenable to keep things like drivers in the kernel for too much longer… both due to the sheer speed at which the industry moves and due to the security implications involved.
I have a feeling this will also drag Linux mobile forwards.
Currently almost no one is using Linux for mobile because of the lack of apps (banking for example) and bad hardware support.
When developing for Linux becomes more and more attractive this might change.
> When developing for Linux becomes more and more attractive this might change.
If one (or maybe two) OSes win, then sure. The problem is there is no "develop for Linux" unless you are writing for the kernel.
Each distro is a standalone OS. It can have any variety of userland. You don't develop "for Linux" so much as you develop "for Ubuntu" or "for Fedora" or "for Android" etc.
There's always appimages or flatpaks that could fill that cross-distro gap, though I suspect a lot of development work would need to be done to get that to a point where either of those are streamlined enough to work in the phone ecosystem.
If anything it's crazy that a company as large as Meta is doing such a shitty job that it has to pull in solutions from entirely different industries… but that's just my opinion.
Game development is STILL a highly underrated field. Plenty of advancements/optimizations (both in software and hardware) can be directly traced back to game development. Hopefully, with RAM prices shooting up the way they are, we go back to keeping optimizations front and center and reduce all the bloat that has accumulated industry-wide.
The large file sizes are not because of bloat per se...
It's a technique which supposedly helped at one point in time to reduce loading times, Helldivers being the most notable example of removing this "optimization".
However, this is by design - specifically as an optimization. Can't really call that bloat in the parent's context of inefficient resource usage.
We aren't talking about the initial downloads though. We are talking about updates. I am like 80% sure you should be able to send what changed without sending the whole game as if you were downloading it for the first time.
From my understanding of the technique you're wrong, despite being 80% sure ;)
Any changes to the code or textures will need the same preprocessing done. A large patch is basically 1% changes + 99% of all the preprocessed data for this optimization.
Helldiver's engine does have that capability, where bundle patches only include modified files and markers for deleted files.
However, the problem with that, and likely the reason Arrowhead doesn't use it, is the lack of a process on the target device to stitch them together. Instead, patch files just sit next to the original file.
So the trade-off for smaller downloads is a continuously increasing size on disk.
Generally "small patches" and "well-compressed assets" are on either end of a trade-off spectrum.
More compression means large change amplification and less delta-friendly changes.
More delta-friendly asset storage means storing assets in smaller units with less compression potential.
In theory, you could have the devs ship unpacked assets, then make the Steam client be responsible for packing after install, unpacking pre-patch, and then repacking game assets post-patch, but this basically gets you the worst of all worlds in terms of actual wall clock time to patch, and it'd be heavily constraining for developers.
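The tradeoff is easy to demonstrate with ordinary tools. A toy sketch (file names are made up) comparing a binary delta taken before and after compression:

    # Two successive builds of the same (uncompressed) asset bundle
    xdelta3 -e -s bundle_v1.bin bundle_v2.bin patch_raw.vcdiff

    # The same bundles compressed first, then diffed
    zstd -19 -k bundle_v1.bin -o bundle_v1.zst
    zstd -19 -k bundle_v2.bin -o bundle_v2.zst
    xdelta3 -e -s bundle_v1.zst bundle_v2.zst patch_compressed.vcdiff

    # patch_compressed.vcdiff is usually dramatically larger: compression
    # turns a small local edit into widespread byte-level churn
    ls -l patch_raw.vcdiff patch_compressed.vcdiff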
It goes all the way back to tapes, was still important for CDs, and still thought relevant for HDDs.
Basically you can get much better read performance if you can read everything sequentially and you want to avoid random access at all costs. So you can basically "hydrate" the loading patterns for each state, storing the bytes in order as they're loaded from the game. The only point it makes things slower is once, on download/install.
Of course the whole exercise is pointless if the game is installed to an HDD only because of its bigger size and would otherwise be on an NVMe SSD... And with still-affordable 2TB NVMe drives it doesn't make as much sense anymore.
It's also a valid consideration in the context of streaming games -- making sure that all resources for the first scene/chapter are downloaded first allows the player to begin playing while the rest of the resources are still downloading.
So this basically leads to duplicating data for each state it's needed in? If that's the case I wonder why this isn't solvable by compressing the update download data (potentially with the knowledge of the data already installed, in case the update really only reshuffles it around)
This was the reason in Helldivers; other games have different reasons - like uncompressed audio (which IIRC was the reason for the CoD install-size drama a couple of years back) - but the underlying reason is always the same: the dev team not caring about asset size (or more likely: they would like to take care of it but are drowned in higher-priority tasks).
A number of my tricks are stolen from game devs and applied to boring software. Most notably, resource budgets for each task. You can’t make a whole system fast if you’re spending 20% of your reasonable execution time on one moderately useful aspect of the overall operation.
I think one could even say gaming as a sector single-handedly moved most of the personal computing platform forward since the '80s and '90s. Before that it was probably military and corporate. From the DOS era, overclocking CPUs to push benchmarks, DOOM, 3D graphics APIs from 3dfx Glide to DirectX. Faster HDDs for faster game load times. And for 10-15 years it was gaming that carried CUDA forward.
I wish Valve didn't abandon the Mac as a platform, honestly. As nice as these improvements are for Linux and Deck users, they have effectively abandoned their Mac ports: they never updated them to 64-bit like the Linux and Windows builds, so they can't run on new Macs at all. You can coax them into running with Wine on the Mac, but it is a very tricky experience. My Kegworks Wine wrapper for TF2 is currently broken as of last month because the game update download from Wine Steam keeps corrupting, and I'm at a bit of a loss at this point how to work around it. Even when it was working, performance was not great and subject to regular lag spikes whenever too many explosions went off.
I totally get why they did, having had to support Mac for an in-house engine. Apple is by far the most painful platform to support out of the big 3 if you're not using turnkey tools, and they don't make up for it with sales outside of iOS. The extra labor is hard to justify already, and then we get to technical deficiencies like MoltenVK, plus social deficiencies like terrible support. It's just a really hard sell all around.
It was likely about control. Valve saw that Microsoft was becoming more controlling about the Windows platform and that's what pushed them towards developing SteamOS on Linux as that means that Valve can put resources into fixing anything that they want to. The Apple platform is also under control of a single entity, so it doesn't make too much sense for Valve to care about that (as well as Apple not being known as a gaming platform).
What you should do is just buy a SteamDeck for gaming.
Yeah, that's kinda harsh. Phoronix is a good OSS news aggregator at the very least, and the PTS is a huge boon for the "what's the best bang-for-buck LLVM build box" type of question (which is very useful!).
It is certainly not, and I'm not sure where the commenter gets that view. Most likely because alongside their primary journalistic content they also produce secondary reporting like these short pieces, disseminating niche content to a wider audience. I can see how it might be easy to read that as blogspam given the latter is almost entirely their purview, but it should not be misconstrued as such in this case.
https://lpc.events/event/19/contributions/2099/ is a much better reference in my view. It is the original conference website, it contains all the material in text format as well, and it does not force you to watch a video (and maybe an ad or two before that, idk, I use adblock). I call this link "primary" and the Youtube video "secondary" (as well as Phoronix).
Blogspam is very disingenuous. Phoronix covers a lot of content in the open source world that isn't well tracked elsewhere, and does some of the best and most comprehensive benchmarking of hardware and software you'll find anywhere on the internet.
It's worth mentioning that sched_ext was developed at Meta. The schedulers are developed by several collaborating companies, not just Meta or Valve or Igalia, and the development is done in a shared GitHub repo - https://github.com/sched-ext/scx.
If you have 50,000 servers for your service and you can reduce that by 1 percent, you save 500 servers. Multiply that by maybe $8k per server and you have saved $4M - you've paid for yourself many times over. With Meta the numbers are probably a bit bigger.
That's not how it works though. Budgets are annual. A 1% savings of cpu cycles doesn't show up anywhere, it's a rounding error. They don't have a guy that pulls the servers and sells them ahead of the projection. You bought them for 5 years and they're staying. 5 years from now, that 1% got eaten up by other shit.
You don't buy servers once every 5 years. I've done purchasing every quarter and forecasted a year out. You reduce your services budget for hardware by the amount saved for that year.
5 years is the lifecycle. You're not going to get rid of a 4 year old server because you're using fewer cycles than you thought you would. You already bought it. You find something else for it to do or you have a little extra redundancy. If I increase the mpg of my semi fleet, that doesn't mean I can sell some of my semis off just because the cost per trip goes down.
You're wrong about how services that cost 9+ figures to run annually are budgeted. 1% CPU is absolutely massive and well measured and accounted for in these systems.
What you're missing is that for these massive systems there's never enough capacity. You can go look at datacenter buildouts YOY if you'd like. Any and all compute power that can be used is being used.
For individual services what that means is that for something like Google Search there will be dozens of projects in the hopper that aren't being worked on because there's just not enough hardware to supply the feature (for example something may have been tested already at small scale and found to be good SEO ranking wise but compute expensive). So a team that is able to save 1% CPU can directly repurpose that saved capacity and fund another project. There's whole systems in place for formally claiming CPU savings and clawing back those savings to fund new efforts.
> Meta has found that the scheduler can actually adapt and work very well on the hyperscaler's large servers.
I'm not at all in the know about this, so it would not even occur to me to test it. Is it the case that if you're optimizing Linux performance you'd just try whatever is available?
almost certainly bottom-up: some eng somewhere read about it, ran a test, saw positive results, and it bubbles up from there. this is still how lots of cool things happen at big companies like Meta.
I've been using Bazzite Desktop for 4 months now and it has been my everything. Windows is just abandonware now even with every update they push. It is clunky and hard to manage.
Bazzite is advertised for gamers, but from my understanding it's just Fedora Atomic wrapped up to work well on Steam Deck-adjacent hardware, with gaming as a top priority. You'd still be getting the same level of quality you would expect from Fedora/RHEL (I would think).
Precisely. I like its commitment to Fedora Atomic. Fedora is in my opinion the best user-experience Linux out there, and not just because Linus Torvalds said it was his favorite. Probably not the best server OS or the best base for a console OS, but as a daily driver consistency is more important. Keeping things in Flatpaks makes it easy to manage what is installed too.
Gaming or not, stability is important. An OS that focuses on gaming will typically focus on stability, neither bleeding edge nor lagging behind in support. It has to update enough to work with certain games and stay behind enough to not have weird support issues.
So Bazzite in my opinion is probably one of the best user experience flavors of Fedora around.
I think you’ve forgotten or aren’t aware that before 3d graphics cards took over, people would buy new video cards to ostensibly make excel faster but then use them to play video games. It was an interesting time with interesting justifications for buying upgrades.
How well does Linux handle game streaming? I'm just now getting into it, and now that Windows 10 is dead, I want to move my desktop PC over to Linux and end my relationship with Microsoft, formally.
It works well. I've used Sunshine as the streaming server and Moonlight as the client to play games on my Steam Deck, with my PC running openSUSE Tumbleweed and KDE Plasma. There may be some key-binding issues, but they can be solved with a little setup.
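If it helps, the setup is mostly two packages. A sketch, assuming you go the Flatpak route (search first rather than trusting any app ID typed from memory):

    # On the gaming PC: the host/streaming server
    flatpak search sunshine        # then: flatpak install flathub <app-id>

    # On the client (laptop, Deck, phone): the player
    flatpak search moonlight       # then: flatpak install flathub <app-id>

    # Pair the client against the host's web UI, then open whatever firewall
    # ports Sunshine lists in its settings page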
Looks like open source helped create Silicon Valley, while no IP laws made Shenzhen. Sharing seems to really drive industry growth, so maybe the US and EU should rethink their IP laws?
I keep being puzzled by the unwillingness of developers to deal with scheduling issues. Many developers avoid optimization, almost all avoid scheduling. There are some pretty interesting algorithms and data structures in that space, and doing it well almost always improves user experience. Often it even decreases total wall-clock time for a given set of tasks.
Something built to shave off latency on a handheld gaming device ends up scaling to hyperscale servers, not because anyone planned it that way, but because the abstraction was done right
phdelightful | a day ago
> Starting from version 6.6 of the Linux kernel, [CFS] was replaced by the EEVDF scheduler.[citation needed]
[0]https://tinyurl.com/mw6uw9vh
carlwgeorge | 20 hours ago
https://gitlab.com/redhat/centos-stream/rpms
mikkupikku | a day ago
It seems like every time I read about this kind of stuff, it's being done by contractors. I think Proton is similar. Of course that makes it no less awesome, but it makes me wonder about the contractor to employee ratio at Valve. Do they pretty much stick to Steam/game development and contract out most of the rest?
tapoxi | a day ago
They're also a flat organization, with all the good and bad that brings, so scaling with contractors is easier than bringing on employees that might want to work on something else instead.
PlanksVariable | 20 hours ago
And then you consider it in context: a company with huge impact, brand recognition, and revenue (about $50M/employee in 2025). They’ve remained extremely small compared to how big they could grow.
sneak | 14 hours ago
There are not many tech companies with 50k+ employees, as a point of fact.
I’m not arguing just to argue - 300 people isn’t small by any measure. It’s absolutely not “extremely small” as was claimed. It’s not relatively small, it’s not “small for what they are doing”, it’s just not small at all.
300 people is a large company. The fact that a very small number of ultrahuge companies exist doesn’t change that.
For context, 300 people is substantially larger than the median company headcount in Germany, which is the largest economy in the EU.
PlanksVariable | 12 hours ago
Valve is a global, revenue-dominant, platform-level technology company. In its category, 300 employees is extremely small.
Valve is not a German company, so that’s an odd context, but if you want to use Germany for reference, here are the five German companies with the closest revenue to Valve’s:
- Infineon Technologies, $16.4B revenue, 57,000 employees
- Evonik Industries, $16B, 31,930 employees
- Covestro, $15.2B, 17,520 employees
- Commerzbank, $14.6B, 39,000 employees
- Zalando, $12.9B, 15,793 employees
hatthew | 12 hours ago
Big, small, etc. are relative terms. There is no way to decide whether or not 300 is small without implicitly saying what it's small relative to. In context, it was obvious that the point being made was "valve is too small to have direct employees working on things other than the core business"
tester756 | 19 hours ago
Yes, 300 is quite small.
everfrustrated | a day ago
That said, something like this which is a fixed project, highly technical and requires a lot of domain expertise would make sense for _anybody_ to contract out.
mindcrash | a day ago
For contextual, super specific, super specialized work (e.g. SCX-LAVD, the DirectX-to-Vulkan and OpenGL-to-Vulkan translation layers in Proton, and most of the graphics driver work required to make games run on the upcoming ARM based Steam Frame) they like to subcontract work to orgs like Igalia but that's about it.
izacus | a day ago
There have been demands on HN lately to do more of that. This is what it looks like when it happens - a company paying for OSS development.
chucky_z | a day ago
Back to the root point. Small company focused on core business competencies, extremely effective at contracting non-core business functions. I wish more businesses functioned this way.
crtasm | a day ago
If you have 30mins for a video I recommend People Make Games' documentary on it https://www.youtube.com/watch?v=eMmNy11Mn7g
trinsic2 | 17 hours ago
Valve is chump change in this department. They allow the practice of purchasing loot boxes and items but don't analyze and manipulate behaviors. Valve is the least bad actor in this department.
I watched half the video and found it pretty biased compared to what's happening in the industry right now.
I feel this argument that Valve deliberately profits off of gambling is not really the whole story. I certainly don't think that Valve designed their systems to encourage gambling. More likely they wanted a way to bring in money to develop other areas of their platform so they could make it better, which they did. And in many cases they are putting players first. Players developed bad behaviors around purchasing in-game and trading items and have chosen to indulge in the behavior. Third parties have risen up around an unhealthy need that IMHO is not Valve's doing. And most importantly, since I was around when these systems went into place, allowing me to see what was happening, this kind of player behavior developed over time. I don't think Valve deliberately encouraged it.
The entire gaming industry is burning down before our eyes because of AAA greed and you guys are choosing to focus on the one company that's fighting against it. I'm not getting it.
pityJuke | 16 hours ago
[Citation needed]
> I certainly don't think that Valve designed their systems to encourage gambling
Cases are literally slot machines.
> [section about third-party websites] I don't think Valve deliberately encouraged it.
OK, but they continue to allow it (through poor enforcement of their own ToS), and it continues to generate them obscene amounts of money?
> you guys are choosing to focus on the one company that's fighting against it.
Yes, we should let the billion dollar company get away with shovelling gambling to children.
Also, frankly speaking, other AAAs are less predatory with gambling. Fortnite, CoD, and VALORANT to pick some examples, are all just simple purchases from a store. Yes, they have issues with FOMO, and bullying for not buying skins [0], but oh my god, it isn't allowing children to literally do sports gambling (and I should know, I've actively gambled on esports while underage via CS, and I know people that have lost $600+ while underage on CS gambling).
[0]: https://www.polygon.com/2019/5/7/18534431/fortnite-rare-defa...
magicalhippo | a day ago
The problem seems, at least from a distance, to be that bosses treat it as a fire-and-forget solution.
We haven't had any software built by outsiders yet, but we have hired consultants to help us on specifics, like changing our infra and helping move local servers to the cloud. They've been very effective and helped us a lot.
We had talks though so we found someone who we could trust had the knowledge, and we were knowledgeable enough ourselves that we could determine that. We then followed up closely.
abnercoimbre | a day ago
If you don't see it happening, the game is being played as intended.
OkayPhysicist | a day ago
But most of the time you don't want "a unit of software", you want some amorphous blob of product and business wants and needs, continuously changing at the whims of business, businessmen, and customers. In this context, sure, you're paying your developers to solve problems, but moreover you're paying them to store the institutional knowledge of how your particular system is built. Code is much easier to write than to read, because writing code involves applying a mental model that fits your understanding of the world onto the application, whereas reading code requires you to try and recreate someone else's alien mental model. In the situation of in-house products and business automation, at some point your senior developers become more valuable for their understanding of your codebase than their code output productivity.
The context of "I want this particular thing fixed in a popular open source codebase that there are existing people with expertise in", contracting makes a ton of sense, because you aren't the sole buyer of that expertise.
to11mtm | a day ago
When I worked in the HFC/Fiber plant design industry, the simple act of "Don't use the same boilerplate MSA for every type of vendor" and being more specific about project requirements in the RFP makes it very clear what is expected, and suddenly we'd get better bids, and would carefully review the bids to make sure that the response indicated they understood the work.
We also had our own 'internal' cost estimates (i.e. if we had the in house capacity, how long would it take to do and how much would it cost) which made it clear when a vendor was in over their head under-bidding just to get the work, which was never a good thing.
And, I've seen that done in the software industry as well, and it worked.
That said, the main 'extra' challenge in IT is that key is that many of the good players aren't going to be the ones beating down your door like the big 4 or a WITCH consultancy will.
But really at the end of the day, the problem is what often happens is that business-people who don't really know (or necessarily -care-) about specifics enough unfortunately are the people picking things like vendors.
And worse, sometimes they're the ones writing the spec and not letting engineers review it. [0]
[0] - This once led to an off-shore body shop getting a requirement along the lines of 'the stored procedures and SQL called should be configurable' and sure enough the web.config had ALL the SQL and stored procedures as XML elements, loaded from config just before the DB call, thing was a bitch to debug and their testing alone wreaked havoc on our dev DB.
bogwog | a day ago
I don't remember all the details, but it doesn't seem like a great place to work, at least based on the horror stories I've read.
Valve does a lot of awesome things, but they also do a lot of shitty things, and I think their productivity is abysmal based on what you'd expect from a company with their market share. They have very successful products, but it's obvious that basically all of their income comes from rent-seeking from developers who want to (well, need to) publish on Steam.
Fiveplus | a day ago
They needed Windows games to run on Linux so we got massive Proton/Wine advancements. They needed better display output for the deck and we got HDR and VRR support in wayland. They also needed smoother frame pacing and we got a scheduler that Zuck is now using to run data centers.
It's funny to think that Meta's server efficiency is being improved because Valve paid Igalia to make Elden Ring stutter less on a portable Linux PC. This is the best kind of open source trickledown.
benoau | a day ago
"Slide left or right" CPU and GPU underclocking.
Insanity | a day ago
Liquid Glass ruined multitasking UX on my iPad. :(
Also my macbook (m4 pro) has random freezes where finder becomes entirely unresponsive. Not sure yet why this happens but thankfully it’s pretty rare.
pbh101 | a day ago
(And same for Windows to the degree it is more inconsistent on Windows than Mac)
mschuster91 | a day ago
The problem is: the specifications of ACPI are complex, Windows' behavior tends to be pretty much trash and most hardware tends to be trash too (AMD GPUs for example were infamous for not being resettable for years [1]), which means that BIOSes have to work around quirks on both the hardware and software. Usually, as soon as it is reasonably working with Windows (for a varying definition of "reasonably", that is), the ACPI code is shipped and that's it.
Unfortunately, Linux follows standards (or at least, it tries to) and cannot fully emulate the numerous Windows quirks... and on top of that, GPUs tend to be hot piles of dung requiring proprietary blobs that make life even worse.
[1] https://www.nicksherlock.com/2020/11/working-around-the-amd-...
AnthonyMouse | 22 hours ago
The real problem is that the hardware vendors aren't using its development model. To make this work you either need a) the hardware vendor to write good drivers/firmware, or b) the hardware vendor to publish the source code or sufficient documentation so that someone else can reasonably fix their bugs.
The Linux model is the second one. Which isn't what's happening when a hardware vendor doesn't do either of them. But some of them are better than others, and it's the sort of thing you can look up before you buy something, so this is a situation where you can vote with your wallet.
A lot of this is also the direct fault of Microsoft for pressuring hardware vendors to support "Modern Standby" instead of rather than in addition to S3 suspend, presumably because they're organizationally incapable of making Windows Update work efficiently so they need Modern Standby to paper over it by having it run when the laptop is "asleep" and then they can't have people noticing that S3 is more efficient. But Microsoft's current mission to get everyone to switch to Linux appears to be in full swing now, so we'll see if their efforts on that front manage to improve the situation over time.
gf000 | 22 hours ago
That's a vastly different statement.
pbh101 | 19 hours ago
… And that’s all fine, because this is a super niche need: effectively nobody needs Linux laptops and even fewer depend on sleep to work. If ‘Linux’ convinced itself it really really needed to solve this problem for whatever reason, it would do something that doesn’t look like its current development model, something outside that.
Regardless, the net result in the world today is that Linux sleep doesn’t work in general.
dijit | a day ago
until the new s2idle stuff that Microsoft and Intel have foisted on the world (to update your laptop while sleeping… I guess?)
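You can at least see which variant your machine is using, and sometimes force the old behaviour. A sketch, assuming the firmware still exposes S3 at all:

    # Which suspend modes the kernel offers; the bracketed one is active
    cat /sys/power/mem_sleep        # e.g. "s2idle [deep]"

    # Prefer old-style S3 for this boot (only works if the firmware supports it)
    echo deep | sudo tee /sys/power/mem_sleep

    # Or persist it via the kernel command line:
    # mem_sleep_default=deep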
QuiEgo | 16 hours ago
As an example, if you have a mac, run "ioreg -w0 -p IOPower" and see all the drivers that have to interact with each other to do power management.
asveikau | a day ago
I think the reality is that Linux is ahead on a lot of kernel stuff. More experimentation is happening.
dijit | a day ago
IO_Uring is still a pale imitation :(
torginus | a day ago
io_uring didn't change that, it only got rid of the syscall overhead (which is still present on Windows), so in actuality they are two different technical solutions that affect different levels of the stack.
In practice, Linux I/O is much faster, owing in part to the fact that Windows file I/O requires locking the file, while Linux does not.
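The difference is easy to poke at from userspace with fio, which can drive the same workload through either interface. A rough sketch; numbers are obviously hardware-dependent:

    # Classic synchronous I/O path
    fio --name=sync --ioengine=psync --rw=randread --bs=4k \
        --size=1G --direct=1 --numjobs=1 --runtime=30 --time_based

    # Same workload through io_uring with a deeper queue
    fio --name=uring --ioengine=io_uring --iodepth=64 --rw=randread --bs=4k \
        --size=1G --direct=1 --numjobs=1 --runtime=30 --time_based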
senderista | 21 hours ago
https://learn.microsoft.com/en-us/windows/win32/api/ioringap...
Although Windows registered network I/O (RIO) came before io_uring and for all I know might have been an inspiration:
https://learn.microsoft.com/en-us/previous-versions/windows/...
dijit | 21 hours ago
You can see shims for fork() to stop it tanking performance so hard too. io_uring doesn't map at all onto IOCP; at least the Windows substitute for fork had "ZwCreateProcess" to work from. io_uring had nothing.
IOCP is much nicer from a dev point of view because your program can be signalled when a buffer has data on it, along with how much data; everything else seems to fail at doing this properly.
7bit | a day ago
On the surface, they are as simple as Linux UGO/rwx stuff if you want them to be, but you can really, REALLY dive into the technology and apply super-specific permissions.
Arainach | a day ago
Also, as far as I know Linux doesn't support DENY ACLs, which Windows does.
opello | 21 hours ago
Wouldn't the o::--- default ACL, like mode o-rwx, deny others access in the way you're describing?
112233 | 11 hours ago
Here is kernel dev telling they are against adding NFSv4 ACL implementation. The relevant RichAcls patch never got merged: https://lkml.org/lkml/2016/3/15/52
opello | 3 hours ago
I see what I misunderstood, even in the presence of an ALLOW entry, a DENY entry would prohibit access. I am familiar with that on the Windows side but haven't really dug into Linux ACLs. The ACCESS CHECK ALGORITHM[1] section of the acl(5) man page was pretty clear, I think.
[1] https://man7.org/linux/man-pages/man5/acl.5.html#ACCESS_CHEC...
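For anyone following along on the Linux side, the POSIX ACL tooling looks like this. A small sketch with made-up user names; the listed getfacl output follows the access-check rules from the man page above:

    # Start from a restrictive base mode, then grant one extra user
    touch report.txt && chmod 640 report.txt
    setfacl -m u:alice:rw report.txt     # alice gets read/write via the ACL
    setfacl -m u:bob:--- report.txt      # bob gets an explicit empty entry

    getfacl report.txt
    # user::rw-
    # user:alice:rw-
    # user:bob:---        <- matched first, so bob is denied even if group allows
    # group::r--
    # mask::rw-
    # other::---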
dabockster | a day ago
Ubuntu just recently got a way to automate its installer (recently being during covid). I think you can do the same on RHEL too. But that's largely it on Linux right now. If you need to admin 10,000+ computers, Windows is still the king.
esseph | a day ago
1. cloud-init support was in RHEL 7.2 which released November 19, 2015. A decade ago.
2. Checking on Ubuntu, it looks like it was supported in Ubuntu 18.04 LTS in April 2018.
3. For admining tens of thousands of servers, if you're in the RHEL ecosystem you use Satellite and its Ansible integration. That's also been going on for... about a decade. You don't need much integration though other than a host list of names and IPs.
There are a lot of people on this list handling tens of thousands or hundreds of thousands of linux servers a day (probably a few in the millions).
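For reference, both mechanisms mentioned above are just small text files. A minimal, illustrative cloud-init user-data sketch (hostname, key, and packages are placeholders):

    #cloud-config
    hostname: web01
    users:
      - name: admin
        groups: wheel
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... admin@example.com
    packages:
      - httpd
      - chrony
    runcmd:
      - systemctl enable --now httpd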
lll-o-lll | 22 hours ago
- There does not seem to be a way to determine which machines in the fleet have successfully applied. If you need a policy to be active before doing deployment of something (via a different method), or things break, what do you do?
- I’ve had far too many major incidents that were the result of unexpected interactions between group policy and production deployments.
benterix | 23 hours ago
What?! I was doing kickstart on Red Hat (it wasn't called Enterprise Linux back then) at my job 25 years ago; I believe we were using floppies for that.
m4rtink | 21 hours ago
BTW, we managed to get the earliest history of the project written down here by one of the earliest contributors, for anyone who might be interested:
https://anaconda-installer.readthedocs.io/en/latest/intro.ht...
As for how the automated installation on RHEL, Fedora and related distros works - it is indeed via kickstart:
https://pykickstart.readthedocs.io/en/latest/
Note how some commands were introduced way back in the single digit Fedora/Fedora Core age - that was from about 2003 to 2008. Latest Fedora is Fedora 43. :)
cactacea | 23 hours ago
Preseed is not new at all:
https://wiki.debian.org/DebianInstaller/Preseed
RH has also had kickstart since basically forever now.
I've been using both preseeds and kickstart professionally for over a decade. Maybe you're thinking of the graphical installer?
jandrese | 23 hours ago
You have a hardened Windows 11 system. A critical application was brought forward from a Windows 10 box but it failed, probably a permissions issue somewhere. Debug it and get it working. You can not try to pass this off to the vendor, it is on you to fix it. Go.
roblabla | 22 hours ago
And then you get security products that have the fun idea of removing privileges when a program creates a handle (I'm not joking, that's a thing some products do). So when you open a file with write access and then try to write to it, you end up with permission errors during the write (and not the open) and end up debugging for hours on end, only to discover that some shitty security product is doing stupid stuff...
Granted, that's not related to ACLs. But for every OK idea Microsoft had, they have a dozen terrible ideas that make the whole system horrible.
ethbr1 | 22 hours ago
"Now that's curious..."
roblabla | 22 hours ago
I personally doubt SAK/SAS is a good security measure anyways. If you've got untrusted programs running on your machine, you're probably already pwn'd.
dangus | 22 hours ago
The whole windows ecosystem had us trained to right click on any Windows 9X/XP program that wasn’t working right and “run as administrator” to get it to work in Vista/7.
eqvinox | 18 hours ago
Unfortunately it doesn't take any display server into consideration, both X11 and Wayland will just get killed.
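You can nudge the kernel's OOM killer away from the display server by hand, which is roughly what a desktop-aware policy would do automatically. A sketch, assuming a GNOME Wayland session (swap in your compositor's process name):

    # Strongly bias the OOM killer away from the compositor
    # (-1000 would exempt it entirely)
    sudo choom -n -900 -p "$(pidof gnome-shell)"

    # Equivalent without util-linux's choom
    echo -900 | sudo tee /proc/"$(pidof gnome-shell)"/oom_score_adj

    # Check the score the OOM killer would currently use
    cat /proc/"$(pidof gnome-shell)"/oom_score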
mikkupikku | 22 hours ago
These days, things have gotten far more reasonable, and I think we can generally expect a linux desktop user to only run software from trusted sources. In this context, such a feature makes much less sense.
ethbr1 | 22 hours ago
Clever way of dealing with the train wreck of legacy Windows user/program permissioning.
thewebguyd | 21 hours ago
It's not just visual either. The secure desktop is in protected memory, and no other process can access it. Only NTAUTHORITY\System can initiate showing it and interact with it in any way; no other process can.
You can also configure it to require you to press CTRL+ALT+DEL on the UAC prompt to be able to interact with it and enter credentials as another safeguard against spoofing.
I'm not even sure if Wayland supports doing something like that.
lawlessone | 20 hours ago
Is there an offset? I could have sworn things always seemed offset to the side a little.
IshKebab | 21 hours ago
I actually wrote a fake version of RMNet login when I was in school (before Windows added ctrl-alt-del to login).
https://www.rmusergroup.net/rm-networks/
I got the teacher's password and then got scared and deleted all trace of it.
ttctciyf | 19 hours ago
> Example output of the SysRq+h command:
> sysrq: HELP : loglevel(0-9) reboot(b) crash(c) terminate-all-tasks(e) memory-full-oom-kill(f) kill-all-tasks(i) thaw-filesystems(j) sak(k) show-backtrace-all-active-cpus(l) show-memory-usage(m) nice-all-RT-tasks(n) poweroff(o) show-registers(p) show-all-timers(q) unraw(r) sync(s) show-task-states(t) unmount(u) force-fb(v) show-blocked-tasks(w) dump-ftrace-buffer(z) dump-sched-ext(D) replay-kernel-logs(R) reset-sched-ext(S)
But note "sak (k)".
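For completeness, SysRq is often partially masked by default; enabling it and firing the OOM killer by hand looks like this. A sketch, and obviously something to try on a box you don't mind hurting:

    # 1 enables all SysRq functions (distros often ship a restrictive bitmask)
    echo kernel.sysrq=1 | sudo tee /etc/sysctl.d/90-sysrq.conf
    sudo sysctl --system

    # Keyboard: Alt+SysRq+f invokes the OOM killer, Alt+SysRq+k is SAK.
    # Same thing without a keyboard, via procfs:
    echo f | sudo tee /proc/sysrq-trigger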
pjmlp | 5 hours ago
Without Proton there would be no "Linux" games.
It would be great if Valve actually continued Loki Entertainment's work.
duped | a day ago
I do, MIDI 2.0. It's not because they're not doing it, just that they're doing it at a glacial pace compared to everyone else. They have reasons for this (a complete rewrite of the windows media services APIs and internals) but it's taken years and delays to do something that shipped on Linux over two years ago and on Apple more like 5 (although there were some protocol changes over that time).
dabockster | a day ago
But here's my rub: no one else bothered to step up to be a key signer. Everyone has instead whined for 15 years and told people to disable Secure Boot and the loads of trusted compute tech that depends on it, instead of actually building and running the necessary infra for everyone to have a Secure Boot authority outside of big tech. Not even Red Hat/IBM even though they have the infra to do it.
Secure Boot and signed kernels are proven tech. But the Linux world absolutely needs to pull their heads out of their butts on this.
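To be fair, self-signing is possible today, it's just left as an exercise. A sketch of the two common routes, assuming your firmware lets you reach setup mode or the MOK manager (file paths are illustrative):

    # Route 1: enroll your own platform keys and sign what you boot (sbctl)
    sudo sbctl create-keys
    sudo sbctl enroll-keys --microsoft      # keep Microsoft's certs for option ROMs
    sudo sbctl sign -s /boot/vmlinuz-linux
    sudo sbctl verify

    # Route 2: shim + MOK, as used for out-of-tree modules on Fedora/Ubuntu
    sudo mokutil --import my_signing_key.der   # confirm at next boot in the MOK manager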
mhitza | a day ago
Having the option to disable Secure Boot was probably due to backlash at the time and antitrust concerns.
Aside from providing protection against "evil maid" attacks (right?), Secure Boot is in the interest of software companies. Just like platform "integrity" checks.
ndriscoll | a day ago
The only thing Secure Boot provides is the ability for someone else to measure what I'm running and therefore the ability to tell me what I can run on the device I own (mostly likely leading to them demanding I run malware like like the adware/spyware bundled into Windows). I don't have a maid to protect against; such attacks are a completely non-serious argument for most people.
jpalawaga | 23 hours ago
nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.
anti-cheat is essentially existential for studios/publishers that rely on multiplayer gaming.
So yes, the second half of your statement is true. The first half--not so much.
cogman10 | 23 hours ago
> nobody wants to play games that are full of bots. cheaters will destroy your game and value proposition.
You are correct, but I think I did a bad job of communicating what I meant. It's true that anti-cheat has been around since forever. However, what's changed relatively recently is anti-cheat integrated into the kernel alongside requirements for signed kernels and secure boot. This dates back to 2012, right as games like Battlefield started introducing gambling mechanics into their games.
There were certainly other games that had some gambly aspects to them, but 2010s is pretty close to where esports along with in game gambling was starting to bud.
esseph | a day ago
No thanks.
harrisoned | a day ago
https://developer.valvesoftware.com/wiki/Using_Source_Contro...
6r17 | a day ago
Their research department rocks, however, so it's not a full bash on Microsoft at all - I just feel like they are focusing on other, way more interesting stuff.
Arainach | 23 hours ago
Great UX requires a lot of work that is hard but not algorithmically challenging. It requires consistency and getting many stakeholders to buy in. It requires spending lots of time on things that will never be used by more than 10-20% of people.
Windows got a proper graphics compositor (DWM) in 2006 and made it mandatory in 2012. macOS had one even earlier. Linux fought against Compiz and while Wayland feels inevitable vocal forces still complain about/argue against it. Linux has a dozen incompatible UI toolkits.
Screen readers on Linux are a mess. High contrast is a mess. Setting font size in a way that most programs respect is a mess. Consistent keyboard shortcuts are a mess.
I could go on, but these are problems that open source is not set up to solve. These are problems that are hard, annoying, not particularly fun. People generally only solve them when they are paid to, and often only when governments or large customers pass laws requiring the work to be done and threaten to not buy your product if you don't do it. But they are crucially important things to building a great, widely adopted experience.
Arainach | 22 hours ago
Here's one top search result that goes into far more detail: https://www.reddit.com/r/linux/comments/1ed0j10/the_state_of...
thewebguyd | 21 hours ago
Linux DEs still can't match the accessibility features alone.
yeah, there's layers and layers of progressively older UIs layered around the OS, but most of it makes sense, is laid out sanely, and is relatively consistent with other dialogs.
macOS beats it, but it's still better in a lot of ways than the big Linux DEs.
m4rtink | 21 hours ago
Every other button triggering Copilots assures even better UX goodness.
strtok | 18 hours ago
Of course that is minus all the recent AI/ad stuff on Windows…
jraph | 20 hours ago
Accessibility does need improvement. It seems severely lacking. Although your link makes it look like it's not that bad actually, I would have expected worse.
embedding-shape | 23 hours ago
It's a big space. Traditionally, Microsoft has held both the multimedia, gaming and lots of professional segments, but with Valve doing a large push into the two first and Microsoft not even giving it a half-hearted try, it might just be that corporate computers continue using Microsoft, people's home media equipment is all Valve and hipsters (and others...) keep on using Apple.
thewebguyd | 21 hours ago
Windows will remain as the default "enterprise desktop." It'll effectively become just another piece of business software, like an ERP.
Gamers, devs, enthusiasts will end up on Linux and/or SteamOS via Valve hardware, creatives and personal users that still use a computer instead of their phone or tablet will land in Apple land.
m4rtink | 21 hours ago
* invasive AI integration
* dropping support for 40% of their installed base (Windows 10)
* forcing useless DRM/trusted computing hardware - TPM - as a requirement to install the new and objectively worse Windows version, with even more spying and worse performance (Windows 11)
With that I think their prospects are bleak, and I have no idea who would install anything other than SteamOS or Bazzite in the future with this kind of Microsoft behavior.
pjmlp | 4 hours ago
Also Raspberry Pis are the only GNU/Linux devices most people can find at retail stores.
m4rtink | 21 hours ago
But then they decided it is better to show ads at the OS level, rewrite the OS UI as a web app, force hardware DRM for their new OS version (the TPM requirement), as well as automatically capture the contents of your screen and feed it to AI.
ls612 | a day ago
Gaben does something: Wins Harder
delusional | a day ago
I'm loving what Valve has been doing, and their willingness to shove money into projects that have long been underinvested in, BUT. Please don't forget all the volunteers that have developed these systems for years before Valve decided to step up. All of this is only possible because a ton of different people spent decades slowly building a project that for most of its lifetime seemed like a dead-end idea.
Wine as a software package is nothing short of miraculous. It has been monumentally expensive to build, but is provided to everyone to freely use as they wish.
Nobody, and I do mean NOBODY, would have funded a project that spent 20 years struggling to run Office and Photoshop. Valve took it across the finish line into a commercially useful project, but they could not have done that without the decade-plus of work before that.
aeyes | 23 hours ago
I'm sure there have been more commercial contributors to Wine other than Valve and CodeWeavers.
m4rtink | 21 hours ago
https://fedoraproject.org/wiki/Changes/SwapOnZRAM
jpc0 | 21 hours ago
Hilariously, this happens on Windows too.
Actually, everything you said Windows and Mac don't do, they do: if you put on a ton of memory pressure the system becomes unresponsive and locks up...
thewebguyd | 21 hours ago
You get an OOM dialog with a list of apps that you can have it kill.
PartiallyTyped | 23 hours ago
The guy is Philip Rebohle.
foresto | 19 hours ago
https://www.gamingonlinux.com/2018/09/an-interview-with-the-...
downrightmike | 23 hours ago
justapassenger | 21 hours ago
phatfish | 19 hours ago
GZGavinZhao | 23 hours ago
cosmic_cheese | 22 hours ago
iknowstuff | 22 hours ago
cosmic_cheese | 21 hours ago
api | 20 hours ago
cosmic_cheese | 19 hours ago
likeclockwork | 19 hours ago
MarsIronPI | 18 hours ago
MarsIronPI | 18 hours ago
ninth_ant | 21 hours ago
It’s not my distribution of choice, but it’s currently doing exactly what you suggest.
cosmic_cheese | 21 hours ago
kristianp | 20 hours ago
cosmic_cheese | 20 hours ago
The problem is that Linux can’t handle hardware it doesn’t have drivers for (or can only run it in an extremely basic mode), and LTS kernels only have drivers for hardware that existed prior to their release.
mips_avatar | 20 hours ago
thewebguyd | 21 hours ago
That's why RHEL, for example, has such a long support lifecycle. It's so you can develop software targeting RHEL specifically and know you have a stable environment for 10+ years. RHEL sells a stable (as in unchanging) OS for X number of years to target.
nineteen999 | 21 hours ago
singron | 21 hours ago
LeFantome | 17 hours ago
johnny22 | 21 hours ago
cosmic_cheese | 20 hours ago
WackyFighter | 20 hours ago
The vast majority of people who were using Linux on the desktop before 2015 were either hobbyists, developers, or people who didn't want to run proprietary software for whatever reason.
These people generally didn't care about a lot of the fancy tech mentioned, so this stuff didn't get fixed.
cosmic_cheese | 20 hours ago
I think the bigger problem is that commercial use cases suck much of the air out of the room, leaving little for end user desktop use cases.
WackyFighter | 45 minutes ago
Most people learn that using some crap top will leave you with stuff on the laptop not working, e.g. volume buttons, wifi buttons, etc.
All of these just work with Linux.
rapind | 19 hours ago
pwthornton | 19 hours ago
If I didn't know better, I'd assume Windows was a free, ad-supported product. If I ever pick up a dedicated PC for gaming, it's going to be a Steam Machine and/or Steam Deck. Microsoft is basically lighting Xbox and Windows on fire to chase AI clanker slop.
defrost | 18 hours ago
(I've been a cross platform numerical developer in GIS and geophysics for decades)
Serious Windows power users, current and former Windows developers and engineers, swear by Chris Titus Tech's Windows Utility.
It's an open PowerShell suite, a collaboration by hundreds maintained by an opinionated coordinator, that allows easy installation of common tools, easy setting of update behaviours, easy tweaking of telemetry and AI add-ons, and easy creation of custom ISO installs and images for VM use (a dedicated stripped-down Windows OS for games or a Qubes shard).
https://github.com/ChrisTitusTech/winutil
It's got a lot of help hover tooltips to assist in choices and avoid surprises, and you can always look at the scripts that are run if you're suspicious.
" Windows isn't that bad if you clean it out with a stiff enough broom "
That said, I'm setting my grandkids up with Bazzite decks and forcing them to work in CLI's for a lot of things to get them used to seeing things under the hood.
fylo | 11 hours ago
LeFantome | 17 hours ago
MrDrMcCoy | 15 hours ago
raverbashing | 22 hours ago
Linux (and its ecosystem) sucks at having focus and direction.
They might get something right here and there, especially related to servers, but they are awful at not spinning their wheels.
See how Wayland progress is slow. See how some distros moved to it only after a lot of kicking and screaming.
See how a lot of peripherals in "newer" machines (sometimes a model that's been 2 or 3 years on the market) only barely work in a newer distro. Or have weird bugs.
"but the manufacturers..." "but the hw producers..." "but open source..." whine
Because Linux lacks a good hierarchy for isolating responsibility, instead going for "every kernel driver can do all it wants" together with "interfaces that keep flipping and flopping at every new kernel release" - notable (good) exception: USB userspace drivers. And don't even get me started on the whole mess that is xorg drivers.
And then you have a Rube Goldberg machine in the form of udev, dbus and what not, or whatever newer solution that solves half the problems and creates another new collection of bugs.
cosmic_cheese | 22 hours ago
thdrtol | 22 hours ago
Currently almost no one is using Linux for mobile because of the lack of apps (banking, for example) and bad hardware support. When developing for Linux becomes more and more attractive, this might change.
thewebguyd | 21 hours ago
If one (or maybe two) OSes win, then sure. The problem is there is no "develop for Linux" unless you are writing for the kernel.
Each distro is a standalone OS. It can have any variety of userland. You don't develop "for Linux" so much as you develop "for Ubuntu" or "for Fedora" or "for Android" etc.
Zetaphor | 10 hours ago
Root_Denied | 8 hours ago
znpy | 20 hours ago
znpy | 20 hours ago
teekert | 20 hours ago
kshri24 | 19 hours ago
abustamam | 15 hours ago
ffsm8 | 15 hours ago
It's a technique which supposedly helped at one point in time to reduce loading times, Helldivers being the most notable example of removing this "optimization".
However, this is by design - specifically as an optimization. Can't really call that bloat in the parent's context of inefficient resource usage.
thanksgiving | 14 hours ago
ffsm8 | 13 hours ago
Any changes to the code or textures will need the same preprocessing done. The large patch size is basically 1% actual changes + 99% of all the preprocessed data for this optimization.
laggyluke | 12 hours ago
SirAiedail | 10 hours ago
thunderfork | an hour ago
More compression means large change amplification and less delta-friendly changes.
More delta-friendly asset storage means storing assets in smaller units with less compression potential.
In theory, you could have the devs ship unpacked assets, then make the Steam client be responsible for packing after install, unpacking pre-patch, and then repacking game assets post-patch, but this basically gets you the worst of all worlds in terms of actual wall clock time to patch, and it'd be heavily constraining for developers.
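To make that tradeoff concrete, here's a toy Python sketch (my own illustration, with made-up asset data and zlib as the codec; it has nothing to do with Steam's actual depot or patch format). It compares one "solid" compressed stream over a whole archive against one stream per asset, both for total size and for the size of a naive patch when a single asset changes:

    import random
    import zlib

    # Toy sketch of the tradeoff described above (not Steam's real format):
    # one "solid" compressed stream over the whole archive vs. one compressed
    # stream per asset, comparing total size and the size of an update patch
    # when a single asset changes.

    random.seed(42)

    # Assets share building blocks, so a solid archive can exploit
    # cross-asset redundancy that per-asset compression never sees.
    POOL = [bytes(random.getrandbits(8) for _ in range(256)) for _ in range(32)]

    def make_asset(n_blocks=8):
        return b"".join(random.choice(POOL) for _ in range(n_blocks))

    assets_v1 = [make_asset() for _ in range(16)]
    assets_v2 = list(assets_v1)
    assets_v2[3] = make_asset()  # one asset edited between versions

    # Option A: solid archive. Better compression, but any change invalidates
    # the whole stream, so the naive patch is the entire archive.
    solid_v2 = zlib.compress(b"".join(assets_v2), 9)

    # Option B: per-asset streams. Worse compression, but only the streams
    # that actually differ need to be shipped in a patch.
    per_v1 = [zlib.compress(a, 9) for a in assets_v1]
    per_v2 = [zlib.compress(a, 9) for a in assets_v2]
    per_patch = sum(len(new) for old, new in zip(per_v1, per_v2) if old != new)

    print(f"solid archive: {len(solid_v2):6d} bytes total, {len(solid_v2):6d} bytes to patch")
    print(f"per-asset:     {sum(map(len, per_v2)):6d} bytes total, {per_patch:6d} bytes to patch")

On this toy data the solid archive should compress much better because the assets share building blocks, but editing one asset forces re-shipping the whole stream, while the per-asset layout only ships the single changed stream at the cost of a larger install.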
SkiFire13 | 11 hours ago
baobun | 10 hours ago
Basically you can get much better read performance if you can read everything sequentially, and you want to avoid random access at all costs. So you can basically "hydrate" the loading patterns for each state, storing the bytes in the order they're loaded by the game. The only point where it makes things slower is once, on download/install.
Of course the whole exercise is pointless if the game's bigger size is the only reason it ends up installed on an HDD when it would otherwise be on an NVMe SSD... And with 2TB NVMe drives still affordable, it doesn't make as much sense anymore.
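Here's a minimal Python sketch of that "hydrate the loading pattern" idea, assuming you can log which assets a play session touches and in what order; the trace format, file names, and .pak layout are invented for illustration and aren't what Steam or any particular engine actually uses:

    import json
    import os

    # Minimal sketch, assuming we can log which assets a session touches and
    # in what order. File names, trace format, and pack layout are made up;
    # real engines do this inside their own packaging pipelines.

    def record_trace(loaded_assets, trace_path="load_trace.json"):
        """Persist the order in which assets were requested during a session."""
        with open(trace_path, "w") as f:
            json.dump(loaded_assets, f)

    def pack_in_load_order(asset_dir, trace_path, pack_path="assets.pak"):
        """Write assets into one file in the recorded order, so loading a
        level becomes one long sequential read instead of many seeks
        (which is what helps on HDDs; on an NVMe SSD the benefit fades)."""
        with open(trace_path) as f:
            order = json.load(f)

        index = {}  # asset name -> (offset, size), for lookups at runtime
        with open(pack_path, "wb") as pak:
            for name in order:
                with open(os.path.join(asset_dir, name), "rb") as src:
                    data = src.read()
                index[name] = (pak.tell(), len(data))
                pak.write(data)
        return index

If the same asset shows up in several levels' traces it simply gets written again, which is exactly the duplication-for-sequential-reads trade (and the extra install size) discussed in the comments above.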
tremon | 7 hours ago
baobun | 6 hours ago
SkiFire13 | 35 minutes ago
flohofwoe | 9 hours ago
abustamam | 3 hours ago
hinkley | 10 hours ago
ksec | 4 hours ago
foresto | 19 hours ago
https://steamcommunity.com/games/221410/announcements/detail...
rcbdev | 19 hours ago
asdff | 19 hours ago
ux266478 | 18 hours ago
ndsipa_pomu | 3 hours ago
What you should do is just buy a SteamDeck for gaming.
HexPhantom | 8 hours ago
loeg | a day ago
hobobaggins | a day ago
webdevver | 23 hours ago
zipy124 | 8 hours ago
fph | 23 hours ago
loeg | 22 hours ago
a456463 | 19 hours ago
fph | 10 hours ago
MrDrMcCoy | 14 hours ago
redleader55 | a day ago
tayo42 | a day ago
esseph | 23 hours ago
tayo42 | 22 hours ago
jraph | 21 hours ago
tayo42 | 3 hours ago
binary132 | a day ago
Pr0Ger | a day ago
tayo42 | a day ago
pixelbeat__ | a day ago
bongodongobob | 23 hours ago
tayo42 | 22 hours ago
bongodongobob | 18 hours ago
Anon1096 | 22 hours ago
bongodongobob | 18 hours ago
Anon1096 | 17 hours ago
For individual services what that means is that for something like Google Search there will be dozens of projects in the hopper that aren't being worked on because there's just not enough hardware to supply the feature (for example something may have been tested already at small scale and found to be good SEO ranking wise but compute expensive). So a team that is able to save 1% CPU can directly repurpose that saved capacity and fund another project. There's whole systems in place for formally claiming CPU savings and clawing back those savings to fund new efforts.
binary132 | 7 hours ago
stuxnet79 | a day ago
commandersaki | a day ago
dabockster | a day ago
tra3 | a day ago
> Meta has found that the scheduler can actually adapt and work very well on the hyperscaler's large servers.
I'm not at all in the know about this, so it would not even occur to me to test it. Is it the case that if you're optimizing Linux performance you'd just try whatever is available?
laweijfmvo | a day ago
erichocean | 23 hours ago
Sparkyte | 23 hours ago
aucisson_masque | 21 hours ago
I wouldn't make an Excel spreadsheet on the Steam Deck, for instance.
pawelduda | 21 hours ago
0x1ch | 21 hours ago
Sparkyte | 18 hours ago
Sparkyte | 18 hours ago
So Bazzite in my opinion is probably one of the best user experience flavors of Fedora around.
Yes you can do more than gaming on Bazzite.
hinkley | 10 hours ago
balls187 | 19 hours ago
Kholin | 18 hours ago
DebugDruid | 19 hours ago
shmerl | 19 hours ago
ahartmetz | 12 hours ago
HexPhantom | 8 hours ago