RISC-V is eating their lunch. There will be a myriad of Chinese design houses and fabless manufacturers churning out cheap cores for use in almost everything. When you're making a CPU core for a microwave or cheap trinket, even a few pennies saved by not having to pay for an ARM license matters. And the freedom to alter the design as you see fit is worth its weight in gold in a hyper-competitive market.
RISC-V is certainly a great option for low cost controllers because you can just slot in an existing design cheaply with minimal customisation. Most applications for ARM and RISC-V use existing standard base core designs to which you add whatever mix of additional components you need.
So the real question is, where is the investment in the open RISC-V core architecture coming from? That’s what will push RISC-V forward. With ARM every time a small design house drops an ARM core into a SOC they pay a fee that funds further ARM architecture development. That’s not so with RISC-V. None of those Chinese design houses are making any significant investment in advancing the state of the art in RISC-V core design. Also those private shops that do innovate on the RISC-V core architecture don’t necessarily have an interest in open sourcing their improvements.
So I suppose we’ll see. As is often the case it’s not so much the technical issues that will decide the matter but the investment and funding model for the open core architecture.
Core architecture is the easiest part, actually; there are plenty of high-level core designs available in the open. Turning that broad architecture into real products is a lot more challenging, and is where shops like SiFive can thrive. But that's just how fabbing chips works, and it applies to RISC-V as much as anything else. And we'll probably see even more openness than that at the trailing edge of chip fabrication - the relevant segment for commodity embedded parts, for example, where there's a far better understanding of processes and design rules.
> private shops that do innovate on the RISC-V core architecture don’t necessarily have an interest in open sourcing their improvements.
Most stuff in China is 'open sourced' through extensive code leaks, even for silicon designs. There are websites designed for downloading your competitors' code. For most cheap products from China I have no difficulty locating at least some source code. Getting it to build, on the other hand, is a totally different ball game...
You actually find the reverse - innovation is very fast, because when someone writes some code, that manufacturer can get their product to market perhaps 1 or 2 months before a competitor who has to wait for the product to hit the shelves or code to leak. So everyone needs to be constantly improving their product while also merging in innovations of all the competitors or be overtaken.
Imagine the US patent system, and the way everyone copies a feature 20 years afterwards when the patent runs out. But now replace "20 years" with "2 months", and that's what it's like.
So there's zero incentive to invest in anything as long as it might take more than a couple of months to pay off?
This might work on a small scale. But I find it hard to imagine that anyone would be willing to spend billions (or any large amount which would only pay off over several years) to develop some new product if they knew that all their competitors could just copy it without paying anything and ship a much cheaper but otherwise identical product in less than half a year.
Take a piece of open-source software. An individual pull request takes no more than a couple months - usually much less. We still get a variety of hobbyists and businesses contributing, and over the years the OSS ecosystem gets better and better. Some software costs hundreds of millions of dollars to build but for these there is usually some open source alternative that's somewhere between "passable" and "much better".
I'm imagining a world where you can take a RISC-V design (or equally, some software/firmware for a cheap piece of electronics) and a business can invest some days/weeks making a small improvement (for some benefit to their own cheap product) - or a hobbyist can choose to do it on the weekend. So long as that gets in the open somehow (open sourced or leaked) everyone can benefit.
It's a whole different world than a once-every-year-to-18-months chip revision. A "pull request" might only optimize a tiny chunk of the die, but if the pace of contributions is large enough that can add up. Along the way there are still incentives for businesses to invest their time and money.
Even in China most companies are licensing their RISC-V core. From Andes. From Alibaba's T-Head division. From StarFive (SiFive licensee). Those companies are all innovating in various directions.
Most companies license a RISC-V core just as they would an ARM core. The difference is there is price and feature competition between different core creators.
RISC-V is definitely more palatable than ARM to China, high end or low end, because it is one less thing the United States can sanction China with. What remains to be seen is how much influence China's preference for RISC-V can have on the world.
Sorta, except when x86 vendors said this, at least they actually dominated the datacenter. ARM isn't even done supplanting x86 yet and they've already got a lower-power, more open competitor breathing down their neck.
In the early 2000s it was x86 being a threat in the datacenter to Sun, PA-RISC, POWER etc.
Most companies switched to x86 15-20 years ago and it's pretty entrenched now.
They are just now switching -- or freshly switched -- to ARM, and the technical differences between ARM and RISC-V being very minimal, ARM won't be entrenched before RISC-V is competitive around 2025 (designs on the drawing boards now).
True, I probably went too far back. Still, it seems like ARM was closer to where x86 was back then than RISC-V is to ARM now, at least if we compare actual chips that are available.
> entrenched before RISC-V is competitive around 2025 (designs on the drawing boards now).
I'm sure people working on POWER, Itanium etc. had similar plans at some point in the past. Are there significantly more resources being invested into RISC-V compared to ARM now, and if not, is there something inherently superior about RISC-V which would outweigh that?
What do you mean by "closer?" A modern arm chip would of course blow an x86 server from 200X out of the water, performance-wise. But by 200X, at least for large values of X, x86 was dominant in the server market (apparently they were around 90% of server sales in 2007 https://www.cnet.com/tech/tech-industry/despite-its-aging-de... ).
ARM is starting to make some headway, but it is significantly less than half. And AMD does seem to have revitalized the x86 market a bit. It isn't a given that we'll ever see ARM dominance.
And the world has changed, right? If someone is interested in running their workloads on ARM, then they apparently have a pretty portable stack. ARM adopters at the moment will specifically select for those with the least lock-in, who'll try out RISC-V when it comes around.
A few years ago I would have agreed with this hierarchy but having recently replaced my relatively powerful i7 laptop with an ARM based one and experienced a significant improvement in my use cases, I’d say ARM most definitely poses a threat to x86 for the majority of consumer use cases.
The vector extension in RISC-V is extremely powerful, and I think it will be very interesting to see what can be done with it in a desktop or server CPU down the road.
It is. SVE is very similar too. It's going to be interesting between them, because they are both coming out of the blocks at about the same time.
The only consumers who have SVE now are owners of this year's phones with the Snapdragon 8 Gen 1 (Galaxy S22, Xiaomi 12, ...), where it is buried so deep few will be aware of or playing with it.
No one at all has mass-production RISC-V vector extension 1.0 just yet. Two popular Chinese SBC makers have said in chat rooms that boards they will announce (maybe even ship) in December do (and with quad 2.5 GHz OoO cores!). I'm slightly dubious but we will see. In any case, some will be coming in 2023 -- and in laptops, not only SBCs.
Quite a lot of people have bought cheap RISC-V SBCs using the Allwinner D1 SoC, which implements draft version 0.7.1 of the RISC-V vector extension. Most instructions are unchanged (in both mnemonics and binary encoding) between 0.7.1 and 1.0 but enough very important instructions have changed -- vsetvli and loads and stores, for example -- that code needs to be tweaked to move between them. But 0.7.1 is good practice, and boards start from $17 (1 GHz single core with 512 MB RAM, very similar to a Raspberry Pi Zero) which is a lot cheaper than a sacrificial Galaxy S22.
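If you want to play with it in C rather than assembly, the same source covers both specs; it's only the emitted encodings that differ. A minimal sketch, assuming a vectorizing compiler (the vendor-patched toolchain for 0.7.1 on the D1, or something like -march=rv64gcv on a recent GCC/Clang for 1.0):

    #include <stddef.h>

    /* A plain loop a vectorizing compiler can strip-mine with vsetvli:
     * the C source is identical for RVV 0.7.1 and 1.0, but the emitted
     * vsetvli and vector load/store encodings differ, which is why the
     * binaries are not interchangeable even though the code is. */
    void vec_add(float *dst, const float *a, const float *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] = a[i] + b[i];
    }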
Back in the mid 80's, I was an operating system architect at IBM during the first couple of versions of AIX (IBM's Unix system). This ran on a new RISC architecture that is now called POWER.
The idea for RISC came from the realization that complex instructions in a CISC processor still had to run a set of lower-level functions provided by the hardware. Consider, for example, the CISC architecture of the DEC PDP-11 computer (the system that Unix was originally developed on). Its indirect addressing modes were used to store or load from an address found in a register; this is a frequently used addressing mode on both RISC and CISC machines. However, the PDP-11, being a CISC machine, had eight variations of indirect addressing that automatically incremented/decremented the memory address in the register by one or two, before or after the location was used, etc.
As an assembly language programmer I liked this because it seemed like I could get more work done in a single instruction when I was iterating over an array of data. However, this is an illusion. The hardware still had to do the work so that single instruction using autoincrement indirect addressing took more time to run.
RISC machines generally have a few very simple load and store instructions to access memory, and most other instructions work on registers alone. The underlying hardware is more straightforward. The instructions have more predictable running times. It is easier to perform out-of-order execution and speculative execution since the instructions can be arranged to use non-overlapping sets of registers.
For these reasons, the researchers developing RISC believed that they could achieve the same speed as CISC by running more instructions each running a bit faster than CISC instructions.
At this point, one might think that it is kind of a six-of-one, half-a-dozen-of-the-other comparison where there is no clear advantage. But RISC has another advantage over CISC. Because the hardware is so straightforward, it is easier for compilers to do very sophisticated optimizations. CISC computers often have special registers used by certain instructions differently than other registers. RISC computers usually have a larger number of basically identical registers. This makes register allocation much easier for compilers. On RISC computers, the instructions are often all of the same size, again making it easier for compilers to arrange the instructions to fit better in the highest-performing cache memory. The idea is that RISC is friendlier to compilers, and the combination of fast simple instructions and advanced compilers will outperform the CISC machines.
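To make the PDP-11 contrast concrete, here's a toy sketch; the pseudo-assembly in the comments is only indicative (register choices are arbitrary), but it shows the same loop body expressed in the two styles:

    #include <stddef.h>

    long sum(const long *p, size_t n) {
        long total = 0;
        while (n--) {
            /* PDP-11-style CISC: one instruction can do all of this, e.g.
             *     ADD (R0)+, R1     ; load via R0, add into R1, bump R0
             * RISC (load/store): the same work is spelled out explicitly:
             *     ld   t0, 0(t1)    ; load
             *     add  t2, t2, t0   ; add to the running total
             *     addi t1, t1, 8    ; bump the pointer by the operand size
             * Fewer instructions on the CISC side, but the hardware still
             * performs the same steps, so the single autoincrement
             * instruction is not inherently faster. */
            total += *p++;
        }
        return total;
    }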
This all sounds good, and I consider IBM's POWER and the ARM architecture a success, but Intel is full of very smart people and they have proven that it's not entirely clear that RISC is better than CISC. Some complex instructions are just very useful, like Intel's vectorization instructions and the SHA extensions announced in 2013 that accelerate the calculation of SHA-1 and SHA-256.
Lots of factors are important in general-purpose processor designs: virtual memory support (IBM's POWER has an inverted page table design, for example), multiple compute units, virtualization, multicore, caches, and good support for JIT compiler designs, not just the AOT compilers envisioned when RISC was first being developed.
Intel's success and the need for backward compatibility have shackled its current designs, and they have done very well despite this. Although I like the idea of a simpler, faster architecture (RISC), Intel might be developing their own next-generation processor architecture right now, because this would fend off RISC-V and AMD too. They might come out with a new design that is RISC or CISC or a hybrid; it might even be wildly different, like a very long instruction word (VLIW) architecture.
I don't know of any public information at that level. Here's a few tidbits.
The system was originally called the IBM RT PC, but it wasn't really a PC, it was a Unix workstation. We were going for the market dominated by companies like Sun, HP, DEC, and Apollo. However, IBM already had a range of computers that were important to its business. At the low end, there was the PC-AT running OS/2, but we were shooting for higher performance and higher price point. We had to do this without threatening the interesting AS/400, a mid range business machine, and the large IBM/390 mainframe computers. Traditionally the IBM/390 systems required a dedicated machine room with an elevated floor and fire suppression equipment and so forth. However, they were interested in producing a desk-side 390 machine and didn't understand why we would want to have RISC hardware running Unix, instead of running a personal 390 architecture machine the size of a file cabinet with the VM/390 operating system. I think the name was picked to make us less threatening to other well established lines of business.
IBM's John Cocke did foundational work on RISC systems at IBM Research and won the Turing award while I was at IBM. He didn't work in Austin where AIX and the RT PC (RS/6000) development was going on, but he did stop by to talk to me a few times when he had business down the hall from my office in Austin. He was very interesting. I was just a young OS Architect, but I was working on things he was interested in. I got to work with other IBM Fellows too and learned a great deal from these talented people.
The plan for AIX was complicated because we were supposed to produce a working version of AIX at the same time that the hardware guys finished the RT PC (the first RS/6000 hardware). The compiler was developed at IBM Research (probably the T.J. Watson or Hawthorne research centers; I had occasion to visit both, but I can't remember which was the location for the PL.8 compiler), so the Austin team didn't have to worry about that.
AIX was based on Unix System V with some additions from 4.3BSD. Because we didn't have a stable hardware platform (it was being developed alongside our efforts), AIX adopted a micro-kernel, written in PL.8. The micro-kernel interacted with the hardware while providing a consistent abstraction of the hardware for a traditional Unix kernel written in C running on top of it. For example, the virtual memory manager was written in PL.8 and was in the micro-kernel, as were the floating-point exception handlers that took care of corner cases to provide a clean IEEE floating-point abstraction to higher levels of the OS.
I didn't enjoy PL.8 very much. I had written PL/1 for assignments as an undergraduate years before, and although PL.8 was intended to keep just 80% of PL/1, it still wasn't my cup of tea. In truth, I never had to write any PL.8 code. I attended code reviews, but the team doing the VM was so good that I didn't need to dive into the code very deeply for the virtual memory manager.
Before the first release of AIX, I did work on integrating the Unix file system with the micro-kernel virtual memory management. I was also responsible for the design of the distributed file system DS.
Like you, I've always had an interest in compilers and programming languages. I happen to have John Cocke's 1970 book Programming Languages and Their Compilers. It's one of my oldest books on compilers.
I think Apple just designed a better CPU than AMD and Intel could, due to a variety of reasons. The fact that it's ARM seems to be mostly tangential: if Apple had had access to another architecture and invested a similar amount of money and resources into it for over 10 years, the result would probably be similar.
It’s not like Qualcomm or any other ARM manufacturer has anything remotely close to M-series SoC.
a) It's not on the desktop, it's on the laptop
b) CISC machines are just RISC machines with an extra layer of decoding
c) Bringing your CPU design in house means it can interface better with your hardware and software
d) cost
It's a foregone conclusion. Once the ratcheting process starts, it doesn't matter how far the open source option is behind the proprietary one, and it doesn't matter how slowly it advances: it wins.
By ratcheting process I mean how something open can never go backwards to not existing. Once something exists at all, it's there for that maybe one other person maybe 5 years later to build on by adding maybe one weekend's work, and eventually it gets to a usable state for at least some applications, even if they are only small ones. The slick, feature-rich, proprietary current industry standard can sneer at that every day for decades, and then it is surpassed.
Either you have an extremely broad interpretation of what open source is or you simply ignore all the cases where this didn’t happen or indeed the opposite occurred.
Also, I'm not quite sure how much of an advantage being open source is for RISC-V. Unless we're talking about low-cost chips with extremely narrow margins, the licensing fee a manufacturer needs to pay ARM is insignificant. Even the price of an ARM architectural license is pocket change compared to the cost it would take to design a competitive core if you're aiming for data centers.
A company like ARM (i.e. willing to license their designs and architecture to anyone for a small fee; it's basically behaving like a non-profit compared to many other tech companies) seems like the best-case scenario for the industry. With RISC-V, why would a company which designed a high-end CPU core license it to anyone instead of trying to maximize their profit/competitive advantage? With ARM there is at least something to fall back on that is accessible to everyone.
You sound like someone from the 90's dismissing Linux.
The bottom line is that ARM represents vendor lock-in and broken market with only one seller. A company can take it upon itself to design a high-end processor for servers and be successful (like Apple did for user machines) but it's certain it won't choose the ARM arch, that would make no sense. Now that the RV arch is established and growing it's inevitable that it will be the arch of choice, because no-one in their right mind is going to tie their hands and donate money to ARM for nothing useful in return.
Yes, CPUs are expensive to design, but it's amazing what a free and functioning market can produce. We're about to find out exactly what.
> You sound like someone from the 90's dismissing Linux.
Software seems to be very different from hardware. How would a company like Red Hat, Canonical, or basically almost every enterprise company which primarily relies on Linux/OSS/GNU products make money from RISC-V? The cost of entry is much higher and there is no clear way of building a business around open source RISC-V (yet).
> The bottom line is that ARM represents vendor lock-in and broken market with only one seller
Yes. But why would anyone who was capable of building a high-end RISC-V core give their design away for free? Unless there is a clear incentive for them to do that, it's objectively worse than the current situation where anyone can license a high-end CPU design from ARM.
> like Apple did for user machines) but it's certain it won't choose the ARM arch
Right, except in the one case where they did choose ARM?
> that would make no sense
Why? As long as the cost of designing such a processor remains very high, there are no open source designs available, and it's cheaper to build on top of what ARM provides than to build it from scratch, it makes perfect sense.
I mean, in principle I agree with you. It would be great if CPU design were open source and anyone could take a high-end core design, improve on top of it, and then share their changes with everyone, but I don't see a clear path to that. Instead companies might just take what they can from the OSS community and contribute nothing back. ARM seems like a somewhat decent but far from perfect solution for this (as long as it remains independent).
>Right, except in the one case where they did choose ARM?
You mean the M1?
I'm fairly sure this was designed even before the RISC-V base spec, the unprivileged one (no supervisor vs user, or MMU), was ratified. It was ratified in 2019, by the way.
The batch of extensions ratified by the end of 2021 contains important feature sets like vector and hypervisor support.
So, of course they didn't pick RISC-V back then. Yet, nobody in their right mind would pick ARM today.
Leaving aside the merits of the two ISAs, for Apple using ARM for the M1 had the huge advantage of unifying all their devices on a single ISA.
Based on just that (and the fact that they own a perpetual license to all things ARM) they would likely make the same choice now and even in a couple of years.
>for Apple using ARM for the M1 had the huge advantage of unifying all their devices on a single ISA.
Wait, let me read that again. Oh, so you're saying that Apple wants to unify their ISA... by switching their already ARM smaller/hidden cores to RISC-V?
No I am saying that even if RISC-V was strictly better than ARM Apple would have likely still chosen ARM for the M1 generation as it would be the simplest way to unify their ISAs across their devices.
The Nuvia people had just spent years developing high performance ARM cores (for Apple) and as a small company obviously want a ready market for their designs.
By "nothing in return" I mean if you want to design a CPU from scratch, what exactly are you getting from licensing the ARM arch? Software compatibility? That doesn't mean anything for servers where compatibility is largely a recompile away.
Of course ARM is a huge supplier of IP and their ability to design processors is not in question, but they only supply IP for one arch, which they control. If you want to design something different you must get ARM's permission, and they have proven to be fickle about that.
It's not "boosterism" is a real and hard fact that freedom for the fundamental parts of any class of technology means that a vibrant ecosystem can flourish around it. That's the fundamental difference here and the technical challenges have proven not to be a barrier.
By all accounts Nuvia got substantial help (and probably quite a bit of IP) from Arm, and I imagine that would be the case for any architectural licensee - they weren't just sent a copy of the ISA spec and left to it! So 'nothing in return' just isn't true.
What is the price of an ARM architectural license? I don't think any of us know.
What does it let you do? Not what Qualcomm thought it did, apparently. No exchanging notes with other holders of an ARM architectural license. Wow. Everyone is strictly on their own -- use ARM's cores, or use their own completely internal design, and nothing else.
Assuming ARM wins in court, of course, which is not necessarily going to happen.
If you have your own ASIC design and validation teams targeting the latest nodes, the ARM license is not a major cost to you. Heck, it's probably less than what you pay for your simulator or clock tree compiler. And probably worth it given the high quality support you get.
And if you are not at that level, do you really need a custom architecture? Can you pull it off with your limited resources?
It's not the direct cost of the license that is the big problem with using Arm. It's the bullshit you have to deal with, not least the 18 months of time to negotiate the license in the first place.
Why don't Arm have a standard license where you download the PDF, tick the boxes for what you want, with a standard cost, sign it, and mail it in?
I don't know, but from all accounts from people who have been there, done that, they don't.
As we can see from the current revelation (or at least claim) that despite both Nuvia and Qualcomm having "Arm Architectural Licences", they're apparently not allowed to do the thing that was the sole reason for Qualcomm wanting to buy Nuvia, i.e. use the core they had designed.
Because Qualcomm - of all companies - would never push the envelope of what their licenses allow them to do?
This is a firm that got endorsements from 22 other companies for their acquisition of Nuvia but apparently somehow forgot to tell the firm that they are legally obliged to inform.
ARM licensing allows USA to enforce their "rule of law" or whatever BS on the rest of the world, "sanctions" is a nice word that gets thrown around a bit....
Tomorrow USA can force ARM to not sell products to countries on their naughty list. They can do that because of sanctions, but what about those countries? Why should, say, Iran or Russia not get to enjoy the fruits of open source? FOSS/open source does not care about such petty things, in the long run.
If I build GPL software/hardware, I don't care if Iranians or Americans or Russians or Chinese use it. If I were building a proprietary one, then I would care.
>With RISC-V, why would a company which designed a high-end CPU core license it to anyone instead of trying to maximize their profit/competitive advantage? With ARM there is at least something to fall back on that is accessible to everyone.
> ARM licensing allows USA to enforce their "rule of law"
Yes but how does that incentivize western companies to invest into RISC-V? I mean any western company would have to comply with US sanctions if they want to do business with US companies regardless of the architecture they are using.
And if they don't care about any of that they might as well just use ARM anyway without licensing it. It's not like all the documentation required wasn't already leaked anyway, making it basically equivalent to RISC-V (if you don't want to play by the (Western) rules). In fact it probably makes more sense for Iran or Russia to clone ARM/x86 designs because they don't have enough resources or expertise to create anything even marginally competitive on their own.
> If I build GPL software/hardware, I don't care if Iranians or Americans or Russians or Chinese use it. If I were building a proprietary one, then I would care.
> FOSS ?
Unlike in software, there seems to be little incentive for companies to open source their designs. Why would a hardware company which invested millions to design a RISC-V core give it away for free to their competitors? Unless they can monetize it some other way (e.g. like software companies can), there is no incentive for them to do that.
> ARM licensing allows USA to enforce their "rule of law" or whatever BS on the rest of the world, "sanctions" is a nice word that gets thrown around a bit....
> Tomorrow USA can force ARM to not sell products to countries on their naughty list.
You make it sound like sanctions are arbitrary. The United States may not be perfect and may not always live up to its ideals. But there is an enormous difference between the imperfect democracies of the liberal world order (eg. US, Western Europe, Japan, South Korea, etc) and the autocracies or downright totalitarian crypto-fascism in Russia, China, Iran and North Korea. Both sides are imperfect, but they are not equally imperfect.
I want RISC-V to succeed. I believe in the freedom that FOSS delivers. The fact that dictators will be able to use these tools to oppress their citizens is an unfortunate consequence of this freedom.
>ARM licensing allows USA to enforce their "rule of law" or whatever BS on the rest of the world, "sanctions" is a nice word that gets thrown around a bit....
The RISC-V Foundation moved to Switzerland in good time, before this became an issue.
From what I heard from my contacts, RISC-V did so to accommodate Chinese companies which were deeply concerned that US export restrictions would become an issue; RISC-V is huge in China.
I don't recall OpenSPARC starting any kind of "ratcheting process" (whatever that is) in 2006 or anything similar when MIPS was made an open ISA so where is this giddy claim of inevitability coming from?
An open ISA is a small part of the picture, since you have to then go from an ISA to a design and then from the design to a fabricable chip (imagine it cost you $X00,000 or more every time you compiled), and that last step has to be re-done for each process node for every fab you use. And then don't forget you need a compiler that's tweaked to generate code for your design that puts it on par with competing processor designs.
Is interoperability a requirement for all RISC-V implementations? I thought one of the draws for RISC-V was that adding custom instructions doesn't require an architecture license.
If your chip doesn't run standard RISC-V instructions then you can't use the trademarked "RISC-V" name.
Standard operating system, standard binary-distributed software packages, your own programs compiled with standard compilers with default options, will always run on all RISC-V implementations, forever.
Of course if you go out of your way to use your current vendor's custom instructions (if any) then you're trading short term gain (possibly) for long term pain if you ever want to move on. Always make sure you also have a generic version of that code that runs on standard chips (if not other ISAs too).
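Something like this is what I mean, sketched in C; have_vendor_crc() is a hypothetical probe standing in for whatever discovery mechanism the vendor or OS actually provides:

    #include <stddef.h>
    #include <stdint.h>

    /* Portable fallback: runs on any standard RISC-V (or any ISA at all). */
    static uint32_t crc32_generic(const uint8_t *buf, size_t len) {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
        }
        return ~crc;
    }

    /* Stand-in for a version built around a vendor's custom instructions. */
    static uint32_t crc32_vendor(const uint8_t *buf, size_t len) {
        return crc32_generic(buf, len);  /* custom insns would go here */
    }

    /* Hypothetical probe for the custom extension. */
    static int have_vendor_crc(void) { return 0; }

    uint32_t crc32(const uint8_t *buf, size_t len) {
        /* Pick an implementation once at runtime; the portable path keeps
         * the binary usable on any standard RISC-V chip, the vendor path
         * is only ever a bonus. */
        static uint32_t (*impl)(const uint8_t *, size_t);
        if (!impl)
            impl = have_vendor_crc() ? crc32_vendor : crc32_generic;
        return impl(buf, len);
    }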
I can take a copy of linux and modify it to produce a kernel that can't run anyone else's binaries.
This is irrelevant to the essentially infinite value created by linux being open in the first place. You still have access to the same linux I started with to make my custom one. I didn't need anyone's permission to modify it for my purpose, and you don't need anyone's permission to take my same starting point and 99.99% of the engineering in my product.
This is just from basic availability like MIT/BSD, where I don't have to give back, not even counting the extra boost that GPL or CC-SA creates.
I don't think immediate interoperability is a very important part of this. It's already good enough that the open nature makes interoperability possible if anyone happens to need it and isn't getting it from a vendor. Same goes for things like extended support or adding or removing features. If a manufacturer stops selling a chip you want, or triples the price, or adds hostile features, or any of the other ways manufacturers use to control consumption, anyone else is able to start supplying the same thing.
It still costs resources to do it, but there is no artificial block because of patents.
> anyone else is able to start supplying the same thing
Not if there's a custom extension.
Additionally, "supplying the same thing" is equivalent of asking someone to write a perfect drop-in replacement for the Linux kernel, to connect to your example.
It's not that simple, by far.
> It still costs resources to do it, but there is no artificial block because of patents.
Actually, no. CPU IP can be patented and no, RISC-V does not prohibit it.
You are asking for someone to design a new CPU, while also supporting whatever custom extension already exists, while also navigating the huge patent minefield.
I would not be so eager to advertise my ignorance of such simple language devices. Are you sure you want to play that stupid? It's not usually the best way to show how on top of things you are, how much your opinions matter, and how seriously everyone else should take your reasoning and arguments.
Anyway...
Patented customizations are irrelevant.
The open part of a RISC-V CPU or platform may or may not be the entire end-user product; so what?
So there is one open brick, and a useful product needs 10 bricks, and some of those other bricks don't exist in an open form yet. So what? It in no way invalidates the value of attaining any given one of the bricks.
Sooner or later, if not already, now that some parts exist, someone will tackle the next part and the next part.
ARM has a ton of mutually-incompatible ISAs. You can't take assembly language or binary code between them AT ALL. Forget extensions -- just the basic instructions are incompatible.
ARM are still today selling cores that run the original 1980s A32 instruction set plus 16 bit Thumb. You can find them, for example, in the Raspberry Pi A, A+, B, B+, Zero, Zero W, and Compute Module 1.
Then there are cores that run only the 16 bit Thumb ISA plus a handful of extra instructions (because Thumb was not originally designed to be the only ISA on a CPU): Cortex-M0+ as in Pi Pico
Then there are cores that only run Thumb2: Cortex-M3/M4/M7
Then there are cores that only run A64: Cortex A34, A65, A510, A715, everything from Apple since the iPhone 8, Cavium, probably Nuvia.
RISC-V has only two variants: 32 bit and 64 bit. And actually almost all the instruction mnemonics and binary opcodes are the same between them. The only difference is the size of the registers. With a little care you can run 32 bit RISC-V code on a 64 bit CPU, not in a special mode but just sticking to the top and/or bottom 2 GB of address space. You can easily write assembly language code that runs on both 32 bit and 64 bit. It is easy to use the same compiler for both.
What else would one expect them to say? "History has shown that open source wins, and we've had a good run, and it would come as no surprise were a new low-power architecture to end up eating our lunch."?
I would like them to acknowledge the threat. Maybe something like:
> We respect RISC-V and look forward to more competition in the datacenter space. However we are confident that our advanced technology will allow us to maintain our decisive lead in performance, power efficiency and cost.
Lots of examples of open source replacing closed-source in recent history, hardly any for the other way around. Plenty of examples of proprietary becoming open-source as well. The only trend that runs counter to this is the emergence of SaaS, most of which rely heavily on open source themselves.
It's hard to compete with established FOSS, which is why companies tend to embrace and build upon it, strengthening its position even further.
The web stack is full of examples: Web servers*, OBS killing everything else, ActiveX, Flash, Mosaic, and also lots of smaller examples related to how we interact with various file formats/codecs.
If you wanted to start developing a new closed-source web server, streaming software, alternative to ffmpeg, programming language, compiler, browser engine, or even mobile operating system, people would probably call you insane. Trying to beat the open-source incumbents with something closed-source and possibly paid looks impossible.
* I am amazed Microsoft IIS is still barely hanging on.
I think 'hardly any of the other way around' isn't really fair, particularly when SaaS is clearly the most popular way to distribute software these days.
Your examples are almost entirely focused on the tools to build software, and there open source truly has 'won'. However, what do we all do when we get that tooling? We build SaaS software running on proprietary IaaS cloud environments. The big players know there is no money in proprietary tooling because startups will gravitate to free software. Even then, that is often free as in beer (vscode and chrome spring to mind).
> If you wanted to start developing a new closed-source web server, streaming software, alternative to ffmpeg, programming language, compiler, browser engine, or even mobile operating system, people would probably call you insane.
Absolutely, but people would also call you insane for trying to start a business where you open source your IP.
Honestly I think for consumer level software open source has truly and utterly lost. All consumer operating systems with non-negligible marketshare are proprietary (including android), apps on iOS and android are overwhelmingly closed source, and SaaS is the dominant business model for everything else. Hell, you can't even look at the source code of modern websites without seeing a wall of minified/obfuscated js junk.
> Your examples are almost entirely focused on the tools to build software
Most of the things I listed aren't just tools used by developers; they have user-facing components. But yeah: that's my bubble. However, if most of our tools, libraries, and technologies become open source, then inevitably end-user software will become more open source as well.
> All consumer operating systems with non-negligible marketshare are proprietary (including android)
That's the purist way to look at it. I can understand it when it comes to free-as-in-freedom software vs. proprietary software, but for open source vs proprietary it doesn't make much sense.
It is a sliding scale, and what matters is how much of the software stack someone is using is open source. If you open some random "SaaS" website nowadays, most of the code running on your computer will be open source, as will most of the software running on the remote server. The only things that aren't open source will be whatever the SaaS company hacked together on their open source stack, and possibly the operating system your mostly open-source browser is running on. In the case of Android, most if not all of the code executing in that moment would be open source as well. Take apart their JS blobs and it's just 90% libraries they grabbed from NPM. Take apart their server side and it's more of the same.
The point is that the amount of open source code people are running is increasing. Saying "but a lot of software has closed-source parts!" is being blind to the change that occurred.
If you just care about OSS as opposed to FOSS, you likely either care about being able to audit (even crowdsourcing it) what you're running on your computer, or you simply believe that it is the best way to develop software. In recent years we've had both a better shot at auditing what we're running and increasing collaboration between software developers.
A lot of this is because large companies which make proprietary software, such as Google, invested in the software you listed. I am not sure if the same would happen in the case of hardware. There is more upfront investment. The culture is different, more secretive.
I'm into tech as a hobby, not as a career, so take this with a grain of salt:
Android ate every other mobile OS excluding iOS, in part because it was open source and close enough to free for OEMs. I can't even think of a closed-source webserver off-hand. Even the fine people at Redmond had to offer Linux support on Azure. The vast majority of the web runs on FLOSS.
FLOSS wins when working with OEMs. Only with consumers does the new-shiny-proprietary win.
> I can't even think of a closed-source webserver off-hand.
Microsoft IIS? Still about 6% of web servers & in 5th place, according to September 2022 stats. Though that's a huge crash from 2018 when it was the #1 largest share of websites (40%) according to Netcraft.
The Free part of Android is part of the reason it won. Windows Phone OEMs couldn't modify the system, while OEMs like Samsung could (and at the time, did) heavily modify the OS.
That's presumably why my 2016 Civic used Android under the hood instead of QNX. The latter would be faster and more responsive, but Android is free, easy enough to force into the shape Honda needed, and possibly even gave them an easier time finding people familiar with the more inner workings of the OS.
Android without the closed source Google parts is pretty useless for the average consumer. E.g. just having each app open a persistent connection in the background for it to receive notifications instead of one centralized one is pretty bad for battery life.
There is a little country called China that you might have heard of that has plenty of manufacturers selling AOSP based phones with no Google software.
Linux, Android, almost all databases, web frameworks, programming languages, Chromium/Electron, the most popular editor in the world, and .NET from Microsoft of all places (which once declared OSS to be "communism"); the list goes on.
Just go back a few decades and remember people paying for Borland Turbo C. The long arc of history is very clearly bending towards open source software.
Software seems to be very different from most other industries in this regard. Companies which focus on enterprise have generally found it easier to make money from consulting, SaaS or providing other complementary services rather than selling software outright, because it’s generally cheaper to take an off the shelf open-source product and make money by customizing according to your customers needs.
If we are talking about tools (programming languages, web frameworks, databases), there is little incentive for companies not to share their improvements or full projects with everyone if they are not planning to make money from selling them directly. E.g. it's not like open-sourcing React had any negative business impact for Facebook; the opposite: people worked for free to improve their internal tools and it made it easier for them to attract new talent.
Google created Android and Chromium because it allowed them to increase their ad revenue, and they made them open source because it was much cheaper to build on top of projects which already worked. I.e. they gained more from open source software than they lost by giving away their improvements for free, which is fine because they never intended to make money from them directly, and having access to the OSS bits didn't make it that much easier for other companies to compete with Google in its core market.
I don't see how this would play out with hardware. If you designed a CPU core which is much better than your competitors', what would you gain by giving it away? If making chips is not part of your core business and you just want a product which fits your needs better (e.g. the equivalent of a software company which allows its employees to submit patches to Linux/Postgres/etc.), how would you even do that? You don't have the expertise or equipment needed to produce chips, and if you did, it would mean you made a significant investment to be in that position and therefore have no incentive to share your improvements with anyone.
I always find the "if you discount China" qualifier funny. Every fifth person on the planet is Chinese mate, much of the Chinese tech ecosystem is built on top of OSS, it's been a great equalizer and accelerator.
Funny how a few years ago they were saying how it wasn't a rival in the embedded space and all this other FUD they published.
Sure, it's not a rival in some aspects yet, but the trend line is pretty clear. Unlike ARM, RISC-V gets to standardize the server platform based on much of the work done for ARM, but in RISC-V's case, when people start to seriously develop these things, the standard will already be there.
You already have RISC-V massively in the 'AI' market, and some of these companies are targeting the data center; the famous examples are Jim Keller's Tenstorrent and David Ditzel's Esperanto Technologies. In addition to the AI accelerators, both these companies are also building high-performance CPUs.
In addition you have a lot going on in China as well, with large Chinese companies building on RISC-V instead.
To be fair, Espressif didn't use ARM on their previous ESP designs, but Xtensa[1].
Looking at which chips currently use ARM vs RISC-V isn't interesting; it's more interesting to look at which new designs are done using ARM vs RISC-V.
I assume large players like ST, Microchip and Infineon (Cypress) have bought the ARM licenses and have the infrastructure that allows them to churn out more STM32[2], SAMD[3] and PSoC[4] variations (respectively) for a good while, for example, so there is a fair bit of inertia in this space.
That said, browsing that RISC-V member list[5], we do find those same three players as strategic members, so they clearly haven't closed the door on RISC-V...
There are tens, maybe hundreds, of embedded CPUs for every human. While RISC-V products exist, they are still mostly a novelty. I don't think RISC-V has even 0.1% of the embedded market yet.
There was a lot of publicity in October 2021 that ARM had just passed 200 billion cumulative chips with their cores -- over 30 or 35 years. RISC-V International announced passing 10 billion cores back in July. Discounting the distinction between cores and chips, that's RISC-V at around 5% installed base already.
That's a lot more than 0.1%.
The current run rate will be closer than that. ARM is currently doing about 7 billion chips a quarter. RISC-V is probably around 1 billion -- let's say 15% market share.
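A quick back-of-the-envelope check of those figures (the cumulative numbers are the public announcements mentioned above; the 1-billion-per-quarter RISC-V figure is only a rough guess):

    #include <stdio.h>

    int main(void) {
        double arm_cumulative = 200e9;  /* ARM chips shipped, cumulative (Oct 2021) */
        double rv_cumulative  = 10e9;   /* RISC-V cores, per the July announcement */
        double arm_quarterly  = 7e9;    /* ARM chips per quarter, roughly */
        double rv_quarterly   = 1e9;    /* RISC-V per quarter: a guess */

        printf("installed base share: %.1f%%\n",
               100.0 * rv_cumulative / (arm_cumulative + rv_cumulative));
        /* Roughly one chip in eight at these guesses. */
        printf("run-rate share: %.1f%%\n",
               100.0 * rv_quarterly / (arm_quarterly + rv_quarterly));
        return 0;
    }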
Over the last couple of years, yes. The GD32 and CH32V are increasingly popular. This is not so much ARM's fault directly, though, as it is the complete unavailability of STM32s forcing many companies to look for a quick replacement.
In some spaces it seems like it's gaining ground pretty fast. Western Digital, for example, seems pretty committed to replacing the ARM processors in their flash and SSD controllers with RISC-V.
High performance datacenter cores will likely remain a niche for the best chip design companies. Architecture mostly matters in that your workload / ecosystem is supported. I can't see this being a place where an open source RISC-V core wins.
Even if you had the complete plans to a top of the range EPYC processor, you would struggle to Fab it, and by the time you were able to do so, it would likely not be valuable.
The problem is DC CPUs are driven by power efficiency. Building the most efficient processors requires the latest manufacturing nodes and incredible optimisation. Generally your chips must be within the last 1-3 generations, or the inefficiencies are very noticeable. Power efficiency impacts cooling and density, additionally driving the economics of DCs.
The existing manufacturers have the teams to churn these out; for someone else to join the club it would be a massive investment (just look at how many failed ARM server companies there have been). You can also see that even companies that have ARM server chips are mostly just tweaking cores provided by ARM, not developing their own from scratch.
Given this, I think you're unlikely to see this unless the economics change. If one of the cloud giants decided that they aren't getting value with ARM, they might end up developing their own, potentially even using RISC-V as the architecture, but I can't imagine them wanting to share it. Unless you end up with a multi-cloud consortium deciding that it's in all their interests to build a common processor in order to turn it into a commodity. In that case you might get an open source core out of it, but the fab problems still remain.
Compute power per square meter and per watt are only one half of the DC efficiency story.
The product is still software. If you double the flops but the software is only half as efficient, in the end you move nowhere. You need performant compilers to go along with the chip/architecture. You also need user software. A "performant" CPU that does not run upstream versions of postgres/python/java/nginx/whatever is mostly useless for a DC.
A chip design/architecture being a commodity is a good way to get user software and compiler tuning going.
I'd argue that RISC-V is only really competing with ARM when it's possible to license a complete, competitive RISC-V core implementation and integrate it into a product.
At the low end, I expect (and already see) that people can give away open source IP. But not sure this model works as well at the high end where things get a lot more difficult? While it's easy to imagine people creating high performance RISC-V cores, I think they would have to either sell processors or license the IP. That would be competition with ARM, and would be a good thing, but I don't think it is clear/obvious that using the RISC-V ISA would give any company doing that much of an advantage over ARM?
This is something I'm excitedly watching as well. It's one thing to put out a free rv32i microcontroller and quite another to put out not just one but a regular stream of cores that keep raising the performance bar. The classic CPU industry is still squeezing out 10%+ performance gains per generation every year or two. Unless these large organizations start collaborating to make these chips faster (...doubtful unfortunately), I suspect they'll fall behind and stay behind.
I'm not sure it makes sense to describe (or implement) a RISC-V instruction decoder as "4-wide" or "8-wide".
It makes more sense to talk about how many bytes of code are decoded per clock cycle.
If it's 16 bytes (like current x86) then you get between 4 and 8 instructions depending on the percentage of "C" extension code. If it's 32 bytes (like M1) then you get between 8 and 16 instructions, with a typical average of 11 point something. Or, depending on decoder design, you might always get 16 instructions, but some of them are NOPs. That can be easier to deal with as later stages will be doing things such as turning MOV (and other things) into NOP and dropping all the NOPs (and also NOPs used for branch target alignment) before they reach the back end.
It's a little harder to decode dual-length RISC-V code than fixed-length Aarch64 code, but I've described designs here and on Reddit (and elsewhere) that show that at 16- or 32-byte-wide decode it's not enough harder to matter. Going much wider on decode usually isn't useful because you start to very often get a taken branch somewhere in that much code.
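To put rough numbers on that, here's a tiny model of instructions per decoded block; the 55% compressed-instruction mix is an illustrative assumption, not measured data:

    #include <stdio.h>

    int main(void) {
        double widths[] = { 16.0, 32.0 };          /* decode bytes per cycle */
        double c_fraction[] = { 0.0, 0.55, 1.0 };  /* share of 2-byte "C" insns */

        for (int w = 0; w < 2; w++) {
            for (int f = 0; f < 3; f++) {
                /* Instructions are 2 or 4 bytes, so the average size shrinks
                 * linearly with the compressed fraction. */
                double avg_bytes = 4.0 - 2.0 * c_fraction[f];
                printf("%2.0f-byte decode, %3.0f%% compressed: ~%.1f insns/cycle\n",
                       widths[w], 100.0 * c_fraction[f], widths[w] / avg_bytes);
            }
        }
        return 0;
    }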
Okay, not yet. I don't think RISC-V rivals you in any market except micro-controllers. However, it's a matter of time before RISC-V dominates every market. Due to its openness and lack of fees.
>I don't think RISC-V rivals you in any market except micro-controllers.
I used to think like this. Then I saw the sad state of mobile SoCs[0], and learned that Cortex-A55 is pretty much still the peak, with full awareness of U74[1] and U74-MC[2], which is considerably better (less area, faster, and more efficient to boot).
Considering that P650[3] has existed for a while, competing with ARM's performance-focused cores, and X280[4], basically a U74 with Vector, also exists today, I simply do not see a 2023 without a pack of new SoCs based on these cores (particularly, I expect to see X280 pop up everywhere), and phones based on these SoCs. Android was already demonstrated on RISC-V years ago, and a lot of effort is being put into polishing and upstreaming this work.
That's how immediate we're talking. As for the ARM story we're discussing, it's clear to me: Softbank wants to sell ARM ASAP, and that's about the only thing they care about. They thought that making this statement would help them, but I am not sure it does.
There is nothing precluding ARM from providing a RISC-V fetch and decode unit as an option with their IP - like having a POSIX compatibility layer or a Linux subsystem in Windows. There are a lot of AXI-based IP blocks out there, even in the RISC-V ecosystem. Having a RISC-V core from ARM would make the switch more or less painless, with the ability to benefit from known-to-be-good IP.
I'd conjecture that there are likely already toy projects at ARM to do just that, if nothing else as POCs by individual engineers eager to show colleagues how easily this could be done.
Now, whether strategically ARM would do this is another question. I suspect they'd hold off as long as it was realistically possible. But the deal is that an ISA is only one part of a huge stack of IP. And in that arena, ARM has a huge lead, especially considering that the granularity of the increment in terms of potential open source contributions is gated by huge costs that individuals can rarely take on on their own and lots of often arcane domain-specific knowledge.
Genuine question: why would Arm do this? Armv8 is a modern ISA built on decades of experience, with huge software support. Adding a comparable but different ISA alongside it makes no sense.
More likely is that they could loosen up some of the IP restrictions around the Arm ISA. Allowing extensions, maybe making simpler cores available for free.
I am not sure this has been proven (at least within the M1 or M2, I seem to remember RISC-V cores in other places but cannot find a reference right now, so I might be misremembering). I think the parent is overly enthusiastic.
What we know is that Apple is hiring people with RISC-V experience. And it would be very off-character if they did not have at least a team somewhere working on it. You can bet that they would jump on it the moment it makes sense for them.
Setting aside the Symbian C++ tooling, Symbian was great as an OS.
Nokia's problem number one was the internal political wars between the Symbian and Linux divisions, which distracted management from what should have been the main focus, and then Elop brought Windows into an ecosystem that was pretty much anti-MS to start with.
It remains to be seen whether ARM's management is going to make this sort of decision.
I had a Sony P800 and it was a horrible phone. Adding apps was a pain, there were only very few good apps, the pen was a pain, and the UI was a pain (because a desktop design had been moved to a mobile device).
I bought the first iPhone, no comparison (same app problem but the ones supplied were excellent).
I don't know if they said it, but circa 2010 they definitely could have said that, and they would have been right for many years actually, until they weren't. The same thing could definitely happen to ARM.
Rather than worrying about RISC-V vs. ARM let's focus on the big picture: Intel is no longer a relevant talking point in this discussion. That's a tectonic shift from how this industry has been operating for the past 40 years.
Do you mean x86 rather than Intel? Which is mostly AMD today, datacenter-wise. Intel doesn't have competitive products, only an inertia they have been losing at a worrying pace.
Intel has shown an interest in RISC-V, and will likely be pivoting to that. They're well aware x86 is unsustainable, requiring several times the effort (=investment) to simply remain competitive with the RISC competitors.
It's purely about the money invested in each so far, and the fact that chips take at least three to four years from getting the funding until having something in mass production for sale to the general public.
The really serious money started to move into RISC-V core design during 2021. Things available for purchase now were started back in 2018 or so, with much less money.
The "frozen" RISC-V base spec was only published to the world from Berkeley University in 2015 (and ratified in 2019) and the first chip you could buy (F310 chip on the HiFive1 board) was in December 2016.
That's barely two chip design cycles ago.
The company that did it (SiFive) produced their first two chips and boards (one a Linux-capable 1.5 GHz 64-bit quad core) on their first $8m of funding. Now they have several hundred million of funding, but that's recent.
The 'innovation' in RISC-V is not the ISA itself, which in many ways is a pretty standard conservative RISC, although one that by virtue of being later to the game has avoided many historical missteps like delay slots or register windows.
The 'innovation' is the licensing, or business, model. Anyone can design a RISC-V core and license it to others, or just give it away for free under an open source license on github, no need to negotiate with the ISA owner to buy some kind of architectural license.
It's an open architecture designed for extensibility. So if you're wanting to make an AI processor for instance it's much easier to add an AI extension to RISC-V than it is to ARM. And you don't have to pay ARM license fees either.
Yes but ARM processors are more powerful for now. They are also better designed compared to the RISC-V models currently available in the market. RISC-V can't yet take on ARM v8 or v9.
ARMv7 (32 bit) has slightly better code density than RISC-V 32 bit, although that will change with the next crop of RISC-V extensions (and already did to a certain extent with the B extension).
ARMv8/v9 (by which I'm sure you mean A64 / Aarch64) is where RISC-V has the big code density advantage. And RISC-V right now has proven cores in mass-production up to the A55 level, with A72 level (e.g. Pi 4) having been around on developer boards for a year and hitting mass-production at around the end of this year.
A76-level RISC-V will be demonstrated on a developer board within the next six weeks or so, and available to developers in the first half of next year. That's Intel "Horse Creek" (using SiFive P550 cores), and there is also something called "Dubhe" coming out of China which I think is related, but they don't like to say so.
M1-level RISC-V is on the drawing boards, probably out in 2025.
RISC-V is behind, but can already take on a significant part of the ARMv8/v9 market.
Nothing, but that's not what's important. RISC-V is permissionless. You can grab an open source core (or design your own, but that's a lot more work) and go into production without paying a cent in licensing fees or negotiating any legal agreements.
First they ignore you, then they laugh at you, then they fight you, then you win.
Something tells me things are going to look very different in a decade. Of course, if I know this, the ARM execs also know this, but they have to keep saying whatever is expected of them.
RISC-V won't be a rival until there are performant implementations for the different load profiles:
- the SoC for a USB keyboard or mouse.
- a power-efficient mobile SoC.
- a desktop CPU.
- a server CPU.
And for RISC-V, you don't need C; you can go straight to assembly, because it's not locked up by a very few vendors and has no toxic IP tied to it (like MPEG and HDMI).
> The architecture has proven good enough for NASA…
The same NASA that has spent years circling the proverbial pork barrel, strapping old-school reusable shuttle boosters to a rocket as non-reusable kick boosters, and struggling to fill said "old tech" without mishap (lessons already learned and documented again and again)? That NASA??
Frankly, invoking NASA's "buy-in" is about like claiming that a revived BeOS has chosen it. Or that RIM plans to come out of receivership for their next-gen "Blackberry V". Or that Chrysler is rerunning a limited re-edition of the PT Cruiser featuring RISC-V.
Other than that, I like RISC-V. I’m excited to see it grow.
Well, there's the NASA doing manned launches (or nowadays, working towards eventually being able to, to put it charitably) which, as you say, sadly seems co-opted by pork-barrel interests.
But there's also the science side of NASA that does stuff like space probes, the Hubble and James Webb telescopes, and so on. And this side seems to have a better track record (well, post-Apollo at least) of actually delivering something valuable. And, AFAIU it's this side that has chosen the RISC-V based platform for their future work.