I'll just go ahead and make it salty because I know the RISC-V boosters will be here any moment. Can you name me, today, a RISC-V chip I can buy and run in a workstation that's at all comparable with current aarch64 or x86_64 offerings? Don't say "it's just around the corner" - I'm tired of hearing it.
Meanwhile, I'm still using my POWER9, so it's not as if I wouldn't spend the money if there were a reasonable alternative. I would like to believe in RISC-V but so far the vast majority look like a bunch of cheap cores trying to compete with RPi.
I genuinely don't think I've seen anyone claim that RISC-V hardware with performance comparable to x86 or AArch64 is right around the corner. That would be a borderline insane claim to make at this stage.
That would be a borderline insane claim to make at this stage.
It was made to me, to my face, by a Si5 representative who had just come off stage at the Ubuntu Summit in Rīga. This is absolutely categorically the sort of thing RISC-V promoters have been claiming for years, yes.
Many RISC-V evangelists have spouted that. I've seen it myself. Granted, those are sometimes accompanied by claims like "x86/ARM is doomed", so I ignore them.
Can you name me, today, a RISC-V chip I can buy and run in a workstation that's at all comparable with current aarch64 or x86_64 offerings?
Of course not. RISC-V is extremely new. It's less than five years since the first $100 RISC-V SBC equivalent to the first 2012 Raspberry Pi came out: specifically the AWOL Nezha in June 2021. So 9 years behind, at that point.
The Milk-V Megrez that has been shipping since January 2025 is equivalent to a mid 2019 Raspberry Pi 4 (other than not having SIMD), so that's 5 1/2 years behind.
The half dozen SpacemiT K3 boards that are expected to ship late next month (and I've already been using a pre-production one for two months — they are real) are equivalent to Pi 5 which shipped in October 2023, 2 1/2 years ago.
Don't say "it's just around the corner" - I'm tired of hearing it.
It's irrelevant whether you're tired of hearing it, it's still true, at least with respect to non-Apple Arm.
The Tenstorrent Ascalon has apparently already taped out, and they're saying a dev board will be available in Q3. Let's say Q4 :-) It's got Apple M1-equivalent µarch — done by the actual Apple M1 architect. The first dev board is only going to clock at half the speed of an M1 but that still puts it at something like Zen 2 performance.
I use both an M1 Mini and a six core Zen 2 laptop every day. Both are just fine for everyday tasks that everyone does: web browsing, playing video etc.
Huh? "It's irrelevant whether you're tired of hearing it, it's still true" ... after you give a whole bunch of currently shipping examples that are indeed behind by years, and then move the goal posts with "at least with respect to non-Apple Arm." Then you give me another example of yet another chip in tapeout that at least right now may or may not ship, and then tell me it's only going to be half the speed of M1.
This is exactly the kind of nonsense I'm referring to. And I'm typing this on an M1 MBA!
Not that new, no. I bought my Acorn Archimedes A310 2nd hand in 1989, with its 8 MHz ARM2 chip -- the second ever ARM chip, released 1986, 1 year after the ARM1.
My £800 desktop ARM computer came with a free PC emulator and it could boot DR-DOS and run full-fledged PC apps at nearly the speed of an IBM PC-XT. I ran QuickBASIC 4 on mine and it was usable for production work.
That was 2nd gen silicon of a brand new architecture, and it was that much quicker.
RISC-V -- and I call it "risk vee", because RISC-5 has precedence -- is not that impressive and it's had a whole decade.
I understand the point you are making, but I think this comparison is disingenuous. There are immense differences in the process of chip-making between 1989 and today; the capital intensity is orders of magnitude higher. You need to take the diminishing returns in speed of the best-in-class chips into consideration here.
There is an even more important difference: Acorn Computers was a private business investing capital to make profit. RISC-V is a totally different animal here, isn't it? An open standard created in academia and controlled by a nonprofit is never going to go toe-to-toe with major capital players on its own. So what is the purpose of even making this comparison? People who are interested in RISC-V are interested for different reasons. A totally open architecture is an interesting thing to have, even if it isn't competitive in speed.
There is another reason I think this might be interesting to keep an eye on. As China's chip-making ramps up, they're going to want architectures that don't make them dependent on US intellectual property, since that has been weaponized against them. So there's a good chance that RISC-V may fill this role, at least to some degree. As with DeepSeek, using open tech to break monopoly power could be a strategic choice.
Isn't that also the point of LoongArch? Anyone have a good analysis about what the various Chinese players are doing with the different architectures (RISC-V, LoongArch, others?)?
I don't really get the point of LoongArch and would love to read an explanation of what role it fills that RV does not.
I know that Espressif, the Chinese microcontroller company behind the ESP32, has bet heavily on RISC-V. I imagine that one reason they went with RV instead of ARM is exactly the fear of western control over their IP (that, and it's not cheap to license the ARM ISA).
It's irrelevant whether you're tired of hearing it, it's still true, at least with respect to non-Apple Arm.
This seems like too broad a statement to me; I quite doubt that any current or upcoming RISC-V hardware meaningfully competes with any of Ampere's products either, and those are actually well within reach of consumers price-wise and can be put in a workstation.
"Architecture" performance isn't about the architecture at all, it's entirely about who's making the CPUs. ARM got decently fast because the performance of your ARM core became a significant distinguishing feature in the hundreds-of-billions-of-dollars smartphone market. And ARM only got really, really fast, as in desktop class, because Apple invested a ton.
It's not that interesting to discuss how long an architecture has existed, neither as an "excuse" for poor performance nor as grounds for the attack that it "ought to have gotten faster by now". The Xtensa architecture is 16 years older than aarch64, so why is it still so slow? Because it's used for microcontrollers, so nobody has invested the hundreds of billions it would take to make a fast Xtensa chip; that's it.
RISC-V is kicking ass in the microcontroller space. It's a huge success. It is being used as a platform for experimentation by the academic community and the HPC space; another huge success. Will it make its way into a competitive laptop? Who knows; someone would need to invest a few more hundreds of billions. It's not up to the architecture.
I don't know who this is directed at, probably partially you and partially the "RV will eat the desktop computer market any day now" crowd. I like RV, I like that we have a royalty free architecture that scales down to something that can easily be implemented in Verilog on an FPGA and up to a full fat desktop class ISA that has great support in LLVM and GCC. It doesn't have to eat the desktop market in order to be a success.
I actually agree with this, and posted as much on the orange site that the truly successful architectures poured money into their performance.
My objection is to the RISC-V hype machine. If it were positioned as a simple core for modest tasks and an alternative in that space, fine. It certainly seems to be the product line that has emerged. But we also get wild promises about performance and RISC-V will be everywhere and scale to all kinds of levels, this core or that core are just around the corner, and all those things have been well-intentioned lies. No company to date, either because they can't (no funding, no fab, no volume) or won't (happy to sell low powered cores, no business interest in desktop or HPC), has issued anything in the desktop space, let alone bigger. And Linus is right that it has to be on your desktop to care about it.
I would be less irritated by that if it weren't taking the wind out of the sails of other possibilities. ppc64 is in the performance ballpark, which is why I use it, and it's also an open ISA, maybe even SPARC could make a comeback, but everyone has jumped on the RISC-V bandwagon and it's not justifying itself.
I think most of that is fair, but I really really don't like POWER. We're so close to a world where little endian is ubiquitous. Please don't let a big-endian-first architecture gain popularity again. If RV is the thing responsible for saving us from a world where POWER gains relevance, I am grateful. Plus, I don't trust IBM.
While I like big endian, my desktop is ppc64le. It swings both ways and most Linux distros run little. Even on Power (no more acronym) big endian is firmly considered legacy.
I obviously don't share your antipathy to IBM, but I won't argue it here.
"Swinging both ways" is worse than being just big endian frankly. They have managed to take one ISA and create two incompatible ABIs just through something as stupid as endianness. While I hate big endian chips*, I have more respect for an architecture which dares to stick with an endianness than I have for one which decides to make their own indecision everyone else's problem.
* when I say I hate big endian chips, understand that I have nothing bad to say about big endian in isolation. If the roles were switched and we were in a primarily big endian world, I would be exactly as negative towards a double-endian primarily-little-endian architecture as I am towards POWER today. If you asked me at gunpoint to declare a preference I'd probably say I prefer LE on the abstract just because it's cute how you can truncate a number by reinterpreting it from a longer integer type to a shorter integer type, but it's really not a strong opinion. What I hate is that POWER is still being used as an excuse to pester developers to support big endian systems.
What I hate is that POWER is still being used as an excuse to pester developers to support big endian systems.
You don't need POWER to keep pestering developers to support big endian systems---the Internet itself is big endian in TCP/IP. Fixing that makes the transition to IPv6 look trivial.
Disclaimer: I prefer big endian, but that's because I first learned assembly on big endian systems (6809/68000) and then hit the horror of the 8086 architecture ...
Thanks! That reminds me that I need to update the OS on it. I still like the form factor but it has yet to be anything more than a portable terminal to me so far.
Forget RISC-V .. I want a powerful ARM64 laptop, workstation or mini pc. You would think that is something solved in 2026 but it is the same story .. there is barely anything out there. (That is not a Mac)
Can someone explain the technical reasons for why they don’t cross compile? It seems like an obvious solution to avoid the long build times, but the author of the post seems to know what they’re talking about so I wonder what the reason is.
Cross compilation of entire distributions requires those distributions to be prepared for it. That is the case when you use OpenEmbedded/Yocto or Buildroot, but it gets complicated with distributions which are built natively.
Fedora does not have a way to cross compile packages. The only cross compiler available in the repositories is a bare-metal one. You can use it to build firmware (EDK2, U-Boot) or the Linux kernel. But nothing more.
Then there is the other problem: testing. What is the point of a successful build if it does not work on target systems? Part of each Fedora build is running the test suite (if the packaged software has any). You should not run it in QEMU, so each cross-build would need to connect to a target system, upload build artifacts and run tests. Overcomplicated.
Native builds let you test whether the distribution is ready for any kind of use. I have been using an AArch64 desktop daily for almost a year now. It is not a "4 core/16 GB RAM SBC" but rather the "server-as-a-desktop" kind (80 cores, 128 GB RAM, plenty of PCI-Express lanes). I build software on it, write blog posts, watch movies etc. And I can emulate other Fedora architectures to do test builds.
A hardware architecture that is slow today can be fast in the future. In 2013, building Qt4 for Fedora/AArch64 took days. Now it takes 18 minutes.
It's a good question, because you're right - cross-compilation would build faster. However:
- Supporting cross-compilation generally implies an explosion of host->target toolchain combos to debug and support
- Typically a build system goes all-in on cross-compilation (Yocto/Buildroot) or all-in on native compilation (most everything else), and it can be surprisingly wonky to badger these systems into the other style
- Native builds require bootstrapping and may be slow, but are conceptually simpler and generally more reliable
This article has no facts that point to why RISC-V being slow means it can't be properly supported:
Without it, we can not even plan for the RISC-V 64-bit architecture to became one of official, primary architectures in Fedora Linux.
What? So they can't admin something if it's not fast enough for them? How does that make sense? Based on this, they're really just saying they suck at administering systems. If there's a real, actual reason, I don't see it here.
Slow iteration speed can kill a project. If you can't rebuild packages to retest in a reasonable time, it's more beneficial to put people on other tasks that will improve things for architectures used by 99.99% of users instead.
Every entry on this board https://abologna.gitlab.io/fedora-riscv-tracker/ needs to be built and tested, likely multiple times. Either people will be paid to context switch a lot by doing that, or free contributors will burn out. Neither sounds great.
"kill a project" is, let's be honest, quite hyperbolic.
You know, corporate workflows aren't normal here - they're the exception, or at least they have been the exception for decades. If someone wants to make the argument that "RISC-V is sloooow" or that "slow iteration speed can kill a project", it'd be worlds more honest to qualify that with, "for corporate workflows" and/or "for corporate goals".
You, and many others, seem too willing to accept the shortcomings of corporate priorities being applied generally to all workflows. Fedora is a corporate project. Debian is becoming one. But statements without proper qualifiers are just flatly wrong.
Case in point: a modern OS with thousands of binary packages compiled for it exists for VAX. According to the "RISC-V is sloooow" argument, and according to you, this isn't possible, and the project doing that should've died by now. Clearly it hasn't, so clearly some of the assumptions Marcin Juszkiewicz and you have made can't just be blindly applied to everything.
It's unfortunate that "Linux" no longer evokes a mental picture which includes open source communities, but of corporate priorities.
Anyone is free to build a Linux distribution on their own (I maintained the OpenZaurus distribution about 20 years ago). With their own rules, speed, release cycles etc.
Fedora decided to have one release every 6 months. Which requires many package builds. And a package is released to the repository only when it builds on all architectures. So if you add RISC-V, at the current state of hardware, it will slow down the whole process. And package maintainers will complain (they did that in the past when AArch64 was slow) and will start disabling the RISC-V architecture in their packages.
a modern OS with thousands of binary packages compiled for it exists for VAX.
It could be cross compiled from a modern system too. I do that with my retro-computing---I compile/assemble on a modern system and then copy the results to the hardware/emulator.
How much does an 80 core x86 cost? Looks to me that the CPU chip alone costs $10k+.
You can buy four complete $2500 RISC-V Milk-V Pioneers with 64 core CPU and 128 GB RAM each for that (or could two years ago). Each one of them will run RISC-V code faster than an 80 core x86 running QEMU -- and with a lot less power consumption too.
Or quad core Milk-V Megrez with 32 GB RAM was around $250 (it's been replaced by Titan now). You could buy 40 of those for the price of just that 80 core x86 chip, and have a wicked build farm.
The parity RISC-V core costs Infinity dollars because it doesn't exist yet.
The architecture is still a humongous step forward for all computing that isn't performance bound - anything that just keeps quietly plugging away at a slow clock.
Was going to comment that you bought it from a friend... but looking on eBay, you can find it for the same or cheaper used (this is the Ampere Altra Q80-30). You've tempted me to build my own system around it!
When you're doing fixing work like that, you typically want to do it locally first. Full rebuild plus waiting for a build queue in CI is a big time sink when you could try an incremental change locally instead.
Fixing locally first also ensures you're not clogging the build queue for other people with long failing tasks.
When you have tens or hundreds of systems to run a big project like Fedora, you want proper servers. The boring ones, which you rack, cable and use. With BMC, high-speed networking etc.
You do not want a box with SBCs inside, requiring manual labour whenever a system needs a reboot or reinstall.
If x86 is anything to go by, it's probably REALLY hard to make a microarch that's genuinely difficult to make fast implementations of. On the other hand RISC-V has a fair number of "wait, why the heck did you do that? that's going to make Thing X way harder for compilers to handle efficiently!" moments, but there are few enough that they can probably be fixed with a relatively unintrusive extension.
I think you mean “instruction set” not “microarch”: a microarchitecture describes an implementation of an instruction set.
The classic way to make an instruction set slow is full CISC: all arguments are generalized to allow complex addressing modes with features like indirection and autoincrement, so the number of memory accesses per instruction can be huge. That’s the main technical disadvantage of 68k and VAX that led to them being displaced by RISC. By contrast x86 typically has one memory operand per instruction and its address is calculated entirely from registers; it’s not a very CISCy CISC.
Another way to make a slow instruction set is a stack machine, because that serializes all instructions through a single write bottleneck: the top of stack. The transputer t9000 tried to mitigate this by using idiom recognition to decode stack operations into a RISC-style internal form; but the transputer was not a pure stack machine, its stack had a limited depth and it had the notion of a workspace which the t9 turned into a conventional register file. The t9 project failed and I don’t know of any attempts to make a superscalar stack machine since then.
Ah right thanks for the correction! The distinction slipped my mind.
RV has some really weird oddities like some of the integer instructions being missing for 8/16 bits and requiring extra wrangling to get the right semantics (this is only a problem for a handful of operations), or the frontend being forced to deal with misaligned 32-bit instructions if 16-bit instructions are supported (there's several documented cases of people hitting performance bottlenecks because of this), but I haven't run into an example that would genuinely be hard to fix so far in looking at things.
Apple released their first 64-bit iPhone in 2013, the iPhone 5S using the Apple A7 CPU
The Raspberry Pi 2B v1.2, released in 2016, used a 900 MHz quad-core Cortex-A53 CPU -- an in-order ARM reference design (announced back in 2012), definitely not high-end
AWS Graviton released in 2018, using Cortex-A72 CPU's -- an out-of-order ARM reference design that is in the "mid-high-end cell phone" category. AWS sold them based on price per performance, not raw speed. This is now changing but I don't know much about it.
Apple released the M1 in 2020, the first ARM64 CPU with actually competitive desktop performance that you could personally buy.
So from that we can conclude that going from "clean-slate instruction set" to "workstation-grade CPU ready for the mass market" takes about a decade -- if you have as much money and motivation as Apple and ARM put together.
alexrp | 20 hours ago
But for something upcoming that seems to actually have acceptable performance for daily use, you probably want to keep an eye on this: https://www.reddit.com/r/RISCV/comments/1oeey5v/tenstorrent_atlantis_silicon_dev_platform/
That's the board I'm eyeing for Zig's RISC-V CI, to replace our 4x Milk-V Jupiters (which have painfully poor single-core performance).
colonelpanic | an hour ago
The Chinese tech industry is plenty large enough for multiple approaches, in fact this kind of diversity has been useful in other areas.
classichasclass | 2 hours ago
I'm sorry for the rant but I'm sick of it.
lproven | 5 hours ago
Thanks for saying this.
Your Clockwork Pi review remains, 4Y on, one of the best writeups of this I've seen.
classichasclass | 2 hours ago
Thanks! That reminds me that I need to update the OS on it. I still like the form factor but it has yet to be anything more than a portable terminal to me so far.
st3fan | 4 hours ago
Forget RISC-V .. I want a powerful ARM64 laptop, workstation or mini pc. You would think that is something solved in 2026 but it is the same story .. there is barely anything out there. (That is not a Mac)
Sirikon | 20 hours ago
Current generation of RISC-V chips are slow*
thasso | 11 hours ago
Can someone explain the technical reasons for why they don’t cross compile? It seems like an obvious solution to avoid the long build times, but the author of the post seems to know what they’re talking about so I wonder what the reason is.
hrw | 7 hours ago
Cross-compiling an entire distribution requires the distribution to be prepared for it. That is not a problem when you use OpenEmbedded/Yocto or Buildroot to build it, but it gets complicated with distributions which are built natively.
Fedora does not have a way to cross compile packages. The only cross compiler available in the repositories is a bare-metal one. You can use it to build firmware (EDK2, U-Boot) or the Linux kernel, but nothing more.
Then there is the other problem: testing. What is the point of a successful build if it does not work on target systems? Part of each Fedora build is running the test suite (if the packaged software has any). You should not run it in QEMU, so each cross-build would need to connect to a target system, upload the build artifacts and run the tests there. Overcomplicated.
Native builds let you test whether the distribution is ready for any kind of use. I have been using an AArch64 desktop daily for almost a year now. It is not a "4core/16GB ram SBC" but rather the "server-as-a-desktop" kind (80 cores, 128 GB RAM, plenty of PCI-Express lanes). I build software on it, write blog posts, watch movies, etc. And I can emulate the other Fedora architectures to do test builds.
An architecture that is slow today can be fast in the future. In 2013, building Qt4 for Fedora/AArch64 took days. Now it takes 18 minutes.
chadski | 10 hours ago
It's a good question, because you're right- cross-compilation would build faster. However,
mwt | 10 hours ago
Yeah, I'm wondering this as well
hrw | 8 hours ago
I updated the blog post after reading comments from Matrix/Slack/Phoronix/HN/Lobsters/etc.
johnklos | 20 hours ago
This article has no facts that point to why RISC-V being slow means it can't be properly supported:
What? So they can't admin something if it's not fast enough for them? How does that make sense? Based on this, they're really just saying they suck at administering systems. If there's a real, actual reason, I don't see it here.
Also, what's with all the sentence fragments?
viraptor | 19 hours ago
Slow iteration speed can kill a project. If you can't rebuild packages to retest in a reasonable time, it's more beneficial to put people on other tasks that will improve things for architectures used by 99.99% of users instead.
Every entry on this board https://abologna.gitlab.io/fedora-riscv-tracker/ needs to be built and tested, likely multiple times. Either people will be paid to context switch a lot by doing that, or free contributors will burn out. Neither sounds great.
johnklos | 14 hours ago
"kill a project" is, you have to be honest, quite hyperbolic.
You know, corporate workflows aren't normal here - they're the exception, or at least they have been the exception for decades. If someone wants to make the argument that "RISC-V is sloooow" or that "slow iteration speed can kill a project", it'd be worlds more honest to qualify that with, "for corporate workflows" and/or "for corporate goals".
You, and many others, seem too willing to accept the shortcomings of corporate priorities being applied generally to all workflows. Fedora is a corporate project. Debian is becoming one. But statements without proper qualifiers are just flatly wrong.
Case in point: a modern OS with thousands of binary packages compiled for it exists for VAX. According to the "RISC-V is sloooow" argument, and according to you, this shouldn't be possible, and the project doing it should have died by now. Clearly it hasn't, so clearly some of the assumptions Marcin Juszkiewicz and you have made can't just be blindly applied to everything.
It's unfortunate that "Linux" no longer evokes a mental picture which includes open source communities, but of corporate priorities.
hrw | 10 hours ago
Anyone is free to build a Linux distribution of their own (I maintained the OpenZaurus distribution about 20 years ago), with their own rules, speed, release cycles, etc.
Fedora decided to have one release every 6 months, which requires many package builds. And a package is released to the repository only when it builds on all architectures. So if you add RISC-V at the current state of the hardware, it will slow down the whole process. Package maintainers will complain (they did in the past, when AArch64 was slow) and will start disabling the RISC-V architecture in their packages.
spc476 | 13 hours ago
It could be cross compiled from a modern system too. I do that with my retro-computing---I compile/assemble on a modern system and then copy the results to the hardware/emulator.
brucehoult | 19 hours ago
How much does an 80 core x86 cost? Looks to me that the CPU chip alone costs $10k+.
You can buy four complete $2500 RISC-V Milk-V Pioneers with 64 core CPU and 128 GB RAM each for that (or could two years ago). Each one of them will run RISC-V code faster than an 80 core x86 running QEMU -- and with a lot less power consumption too.
Or quad core Milk-V Megrez with 32 GB RAM was around $250 (it's been replaced by Titan now). You could buy 40 of those for the price of just that 80 core x86 chip, and have a wicked build farm.
riking | 18 hours ago
The parity RISC-V core costs Infinity dollars because it doesn't exist yet.
The architecture is still a humongous step forward for all computing that isn't performance bound - anything that just keeps quietly plugging away at a slow clock.
hrw | 10 hours ago
That 80-core AArch64 CPU cost me 300 EUR.
More info about my desktop system: https://marcin.juszkiewicz.com.pl/2025/06/27/bought-myself-an-ampere-altra-system/
eyesinthefire | 4 hours ago
Was going to comment that you bought it from a friend... but looking on eBay, you can find it for the same or cheaper used (this is the Ampere Altra Q80-30). You've tempted me to build my own system around it!
hrw | 2 hours ago
Motherboard was the biggest cost.
Today, RAM may be the biggest cost...
eyesinthefire | an hour ago
A coworker and I got bored and specced out a system for it, and yes, RAM is by far the biggest cost, especially since it needs registered memory.
viraptor | 18 hours ago
When you're doing fixing work like that, you typically want to do it locally first. Full rebuild plus waiting for a build queue in CI is a big time sink when you could try an incremental change locally instead.
Fixing locally first also ensures you're not clogging the build queue for other people with long failing tasks.
hrw | 10 hours ago
When you have tens or hundreds of systems to run a big project like Fedora, you want proper servers. The boring ones, which you rack, cable and use. With BMC, high speed networking, etc.
You do not want a box of SBCs requiring manual labour whenever a system needs a reboot or reinstall.
RISC-V is at SBC stage now.
wareya | 15 hours ago
If x86 is anything to go by, it's probably REALLY hard to make a microarch that's genuinely difficult to make fast implementations of. On the other hand, RISC-V has a fair number of "wait, why the heck did you do that? That's going to make thing X way harder for compilers to handle efficiently!" moments, but there are few enough of them that they can probably be fixed with a relatively unintrusive extension.
fanf | 7 hours ago
I think you mean “instruction set” not “microarch”: a microarchitecture describes an implementation of an instruction set.
The classic way to make an instruction set slow is full CISC: all arguments are generalized to allow complex addressing modes with features like indirection and autoincrement, so the number of memory accesses per instruction can be huge. That’s the main technical disadvantage of 68k and VAX that led to them being displaced by RISC. By contrast x86 typically has one memory operand per instruction and its address is calculated entirely from registers; it’s not a very CISCy CISC.
Another way to make a slow instruction set is a stack machine, because that serializes all instructions through a single write bottleneck: the top of stack. The transputer t9000 tried to mitigate this by using idiom recognition to decode stack operations into a RISC-style internal form; but the transputer was not a pure stack machine, its stack had a limited depth and it had the notion of a workspace which the t9 turned into a conventional register file. The t9 project failed and I don’t know of any attempts to make a superscalar stack machine since then.
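A toy model of that serialization point: in stack code for (a+b)*(c+d), every opcode funnels through the top of stack, so the two independent adds are chained one after the other. The evaluator below is purely illustrative (hypothetical opcodes, not any real machine):

```python
def eval_stack(program, env):
    # Every opcode reads and/or writes stack[-1]: the single write
    # bottleneck described above. A superscalar core would need idiom
    # recognition (as the T9000 attempted) to break these chains apart.
    stack = []
    for op in program:
        if op.startswith("push "):
            stack.append(env[op[5:]])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (a+b)*(c+d): the two adds are independent in dataflow terms, but the
# stack encoding forces one total order through the top of stack. A
# register encoding would let them write different registers and issue
# in parallel.
program = ["push a", "push b", "add", "push c", "push d", "add", "mul"]
assert eval_stack(program, {"a": 2, "b": 3, "c": 4, "d": 5}) == 45
```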
wareya | an hour ago
Ah right thanks for the correction! The distinction slipped my mind.
RV has some really weird oddities, like some of the integer instructions being missing for 8/16-bit widths and requiring extra wrangling to get the right semantics (this is only a problem for a handful of operations), or the frontend being forced to deal with misaligned 32-bit instructions if 16-bit instructions are supported (there are several documented cases of people hitting performance bottlenecks because of this). But so far I haven't run into an example that would genuinely be hard to fix.
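To make the "extra wrangling" concrete, here is a hedged Python model of what a compiler does when exact int8 semantics are needed on a 64-bit register machine like RV64: compute in the full-width register, then renormalize with a sign extension (conceptually a shift-left/arithmetic-shift-right pair). The helper is illustrative only, not generated code:

```python
def add_int8(a, b):
    # 64-bit add; only the low byte of the result is meaningful for int8.
    result = (a + b) & 0xFF
    # Renormalize: sign-extend bit 7, like `slli rd, rd, 56; srai rd, rd, 56`.
    # This extra step is the wrangling; it only matters before operations
    # (compares, right shifts, divides) that would see the high bits.
    return result - 0x100 if result & 0x80 else result

assert add_int8(127, 1) == -128  # wraps around like a C int8_t
assert add_int8(-1, 1) == 0
```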
nortti | 11 hours ago
Does Fedora have a policy that all packages must be compiled natively? This seems like the perfect use case for cross-compilation.
sugaryboa | 13 hours ago
According to his stats, s390x is still the king of compile time.
valdemar | 9 hours ago
tbh s390x cores are very performant compared to most other platforms; they also have a pretty massive amount of cache
icefox | 3 hours ago
For context:
So from that we can conclude that going from "clean-slate instruction set" to "workstation-grade CPU ready for the mass market" takes about a decade -- if you have as much money and motivation as Apple and ARM put together.
pkolloch | 10 hours ago
Less bad than I expected but I was probably pessimistic:
The binutils build is about 4x slower than ARM64 on Fedora's build hardware, and ARM64 is 1.5x slower than AMD64. The RISC-V builder also has less RAM, maybe another constraining factor, which might itself come down to hardware availability.
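Chaining the two ratios gives a rough combined factor (back-of-envelope only; the builders differ in more than just CPU):

```python
riscv_vs_arm64 = 4.0   # binutils build time ratio, per the post
arm64_vs_amd64 = 1.5
riscv_vs_amd64 = riscv_vs_arm64 * arm64_vs_amd64
assert riscv_vs_amd64 == 6.0  # RISC-V builds roughly 6x slower than AMD64
```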
That is obviously not great, but I am happy for the diversity and openness, and it might already be enough for many tasks.
I would be interested in how it feels for browsing the web, etc.