We optimize for code runtime but not for our own typing latency.
A 100ms optimization means something very different to a CPU than to a human brain.
I'm not defending dumping the entire system log on every prompt, but a few amenities are worth a few milliseconds of computation time for a human.
Besides, I don't see how, for example, having your prompt take those 100ms to print a git branch or status breaks your "flow", yet typing out the commands yourself, and taking longer to do it, doesn't.
It's a balance between bloat and usability, like so many other things, but, to me at least, sitting at either extreme, total bloat or extreme minimalism, seems counterproductive.
Not at all. Just saying that for the average citizen, being spied on by the CCP or any other big economic player is, more likely than not, the default expectation to have of an off-the-shelf product. That it's the CCP (also) spying on them is not really the nub of the issue.
Always wondered why RISC-V doesn't get more mainstream adoption.
Even if not at the consumer level, having your data center, for example, run on a cheaper and less power-hungry option than x86-64 sounds enticing to me (cheaper because, I assume, no license for the instruction set means no fee to pay and more vendors to buy from, which should lead to lower prices).
Maybe no one wants to be the guinea pig who irons out the kinks of the transition, or maybe the raw performance of x86 is a bigger deal than I think it is and it's worth the price and power. Dunno.
> Always wondered why RISC-V doesn't get more mainstream adoption.
It's very simple!
Because the amount of time it takes to design and produce a data centre level CPU microarchitecture is greater than the time RISC-V extensions needed for data centre CPUs have existed.
The original RISC-V specification was ratified less than six years ago, but you really couldn't create a data centre CPU until at least RVA22, ratified two years ago in March 2023 -- or preferably RVA23 which was ratified in October 2024 and has the features needed for efficient hypervisors.
You can knock out a microcontroller CPU core in a weekend, but something to compete with current Apple, AMD, Amazon etc CPUs takes a long time to make. Most companies doing that started work only in 2021 or 2022.
It is simply too soon. A lot of stuff is in the pipeline.
Any price advantage, even if we assumed a fully mature ecosystem with equivalent processors and systems available, is massively overstated. ARM had a total revenue of just ~3.7 G$ in 2024 over ~29 billion chips [1].
In contrast, Qualcomm, just one of many large suppliers of ARM-based systems, had a total revenue of ~39 G$ and an operating income of ~10 G$. ARM's entire revenue would easily fit into Qualcomm's profit and would only increase its costs by ~12%. And that is just one supplier. You have Samsung, Apple, Broadcom, Google, Amazon, Nvidia, TI, NXP, etc. to help round that out.
The total impact of ARM licensing and IP costs is almost certainly less than 1%. And given that RISC-V does not currently have a fully mature ecosystem, you get to trade that for a 1% cost improvement; not really a winning strategy right now.
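To put rough numbers on that, here's a back-of-envelope sketch in Python using only the approximate figures quoted above (the per-chip average is derived from them, not a published royalty rate):

    # Back-of-envelope using the approximate 2024 figures quoted above.
    arm_revenue    = 3.7e9   # ARM total revenue, USD
    arm_chips      = 29e9    # ARM-based chips shipped
    qcom_revenue   = 39e9    # Qualcomm total revenue, USD
    qcom_op_income = 10e9    # Qualcomm operating income, USD

    avg_arm_take_per_chip = arm_revenue / arm_chips     # ~$0.13 per chip
    qcom_costs = qcom_revenue - qcom_op_income          # ~$29B
    # Even in the extreme case where a single vendor absorbed ALL of ARM's revenue:
    worst_case_cost_bump = arm_revenue / qcom_costs     # ~12.8%

    print(f"average ARM take per chip: ${avg_arm_take_per_chip:.2f}")
    print(f"worst-case cost increase for one vendor: {worst_case_cost_bump:.1%}")

Spread across every licensee and everything else on a BoM, that's how you land at the sub-1% impact.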
It is likely that the main long-run advantage of RISC-V is not shaving the licensing cost off otherwise comparable designs, but that removing the licensing barrier enables a more vibrant ecosystem, and with it better designs at comparable cost (because, again, the cost differential should only be on the order of 1% in the long run). That, or RISC-V could win because the giant manufacturers feel like putting the squeeze on ARM to drive 1% off their BoM.
Using 2024 numbers for ARM might not give a clear picture, because that was many years after major companies like Nvidia and WD had already switched their major internal chips to RISC-V. ARM's lunch was already partly eaten; many billions of RISC-V chips were already in the wild in 2024. And companies like Tenstorrent have been building high-performance stuff on RISC-V for a while. If Jim Keller thinks RISC-V is worth it, the advantage must be worth more than 1%.
Based on your numbers, revenue increased by 35% between 2021 and 2022, but only 18.5% between 2022 and 2023.
Oh! In fact it seems your figures are incorrect -- 2023 was $2.68B, a 1% decrease from 2022, and it was 2024 that was $3.2B.
So assuming 2023 was an anomaly (but why?), that's only an average 9% annual increase from 2022 to 2024.
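For reference, here's how that ~9% average falls out of the figures in this thread (a quick sketch; the 2022 number is implied from the stated ~1% decrease, not quoted directly):

    # Growth arithmetic from the revenue figures quoted in this thread.
    rev_2023 = 2.68e9            # stated to be a ~1% decrease from 2022
    rev_2022 = rev_2023 / 0.99   # implied 2022 revenue, ~$2.71B
    rev_2024 = 3.2e9

    avg_annual_growth = (rev_2024 / rev_2022) ** 0.5 - 1
    print(f"implied average annual growth 2022-2024: {avg_annual_growth:.1%}")  # ~8.7%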
Looks like a flattening trend. 2025 will be interesting, especially if it's flat again.
As you quite correctly point out, RISC-V's advantage isn't licensing cost -- if you license a core from SiFive or Andes or others then it might cost a bit less than Arm but it's not significant. And if you develop your own core then you'll spend more.
The RISC-V advantage is that you can customise it how you want without protracted licensing negotiations with Arm and a very real possibility that they might sue you if you try to do anything innovative.
You can add instructions, implement a subset of the instructions, sell chips or completed products to anyone in any market, license your design to other people for them to build, get acquired by other people who can then use what you designed. None of which you can do with Arm.
From what I've seen, most RISC-V chips are still far behind x86 and ARM when it comes to raw power. I don't think the loss in performance is justified by the lower cost (yet).
What fraction of the total cost of operating a datacenter do you suppose goes into ARM licensing? That's how much more efficient than RISC-V an ARM chip would have to be in order to remain the preferred CPU in a "we don't need x86" scenario. That's a very low bar to clear (for ARM, not for RISC-V).
I think there has been a big uptake in things you don't see, like embedded or FPGA cores, but my understanding is that as a general-purpose CPU it's nowhere near as efficient as ARM/x86 right now. So it might be running in the SSD and the fan controller, but not as the main CPU.
I think a large part of the cost of a CPU core is not the instruction set but the optimisation of the core itself, and ARM/Intel/AMD are still way ahead there. That optimisation takes people and time, and once a vendor has done it, they charge for having the better CPU.
> Always wondered why RISC-V doesn't get more mainstream adoption.
For me, it's because the ecosystem has fragmented even harder than Xtensa, who will sell you custom CPUs. T-Head made yet another vector unit that's required to get anywhere near the Intel/AMD moat numbers.
SPECint/GHz last year was around half of the Intel/AMD/ARM numbers.
The imminent demise of CISC has been trumpeted from the rooftops for at least the last 30 years...
ARM isn't CISC and has, by sheer numbers, completely dominated x86 for decades now, and that's not counting the massive number of MIPS, AVR, etc. embedded chips.
Additionally, if you want to get super technical (as if there were ever a real delineation between RISC/CISC), both AMD and Intel decode x86 into internal micro-ops which are essentially RISC.
So, for all intents and purposes, CISC is dead and buried.
> Additionally, if you want to get super technical (as if there were ever a real delineation between RISC/CISC), both AMD and Intel decode x86 into internal micro-ops which are essentially RISC.
Given that most CISC chips also relied on microcoding and micro-ops, x86 having micro-ops wouldn't have made it anything like RISC as far as the original CISC/RISC debate goes.
The only reason that the "x86 is really RISC because of micro-ops" claim comes up is because x86 implementations are superscalar, which was supposed to be impossible with a CISC chip, so people started coming up with the micro-op fudge to salvage the story that you need RISC to build an advanced modern microprocessor.
The truth is that CISC was never a meaningful category in the first place (it was only ever "not-RISC"), and RISC itself ceased to be a meaningful category around 30 years ago.
> The truth is that CISC was never a meaningful category in the first place (it was only ever "not-RISC"), and RISC itself ceased to be a meaningful category around 30 years ago.
Yeah, I think we're saying the same things. Thus the "(as if there were ever a real delineation between RISC/CISC)". It's an arbitrary delineation that means nothing today.
There's a fair bit of adoption where you don't see it (for example, if you have an NVidia GPU or a WD hard drive, you likely have a few embedded RISC-V cores already). We're expecting server hardware with good performance in a year or two.
(Note: it will be far from exhaustive. On the low end, microcontroller architectures are like water in the ocean; some are just more popular than others.)
Many of the products those architectures go into have 10y+ production & support lifecycles. Change comes slowly there.
I suspect that over time, RISC-V will mop up a good portion of that list (for those entries still in production), and become a go-to default choice for maaany applications, where a designer would need good reasons not to pick a RISC-V based part. Not unlike how low-end Cortex-M parts seem to be everywhere these days.
Higher up, licensing is only a tiny % of overall costs. (Peak) performance, GFLOPs/Watt, etc. are what matter. RISC-V is still (somewhat) behind the curve there, which isn't surprising given how much engineering & optimisation has gone into x86 & Arm over the years.
But being a shared/open architecture may open new doors. For example: right now, the highest-performing parts are always closed (commercially licensed IP cores, and/or manufactured in-house).
For RISC-V, on the other hand, it's entirely possible that at some point the highest-performing cores will be open-source ones. Not saying that'll happen! But it's possible.
If so, Arm, for example, could only match that by open-sourcing their latest & greatest, which of course would evaporate their business model.
So the licensing alone could take RISC-V places where proprietary IP cores can't go. Exciting times...
It's mentioned in the article. You generally expect something like a 20% range drop. With the range that you get these days (looking at you, Kia EV6), it's really not a problem.
Also, an EV never fails to start in cold weather, which is a definite plus.
Most of Norway (or at least the parts where most people live) doesn't actually get that cold, thanks to the Gulf Stream. Certainly not as cold as the colder parts of the northern US or Canada can get.
So passwords are bad because users can't be trusted to choose strong passwords, but with passkeys they suddenly are trusted to keep secure, comprehensive backups?
Passwords are bad because normal people can't remember strong passwords AND because they can be phished or leaked. Phishing resistance is the most commonly mentioned benefit of passkeys because phishing is widespread and cannot be eliminated from a password-based system.
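To make the phishing point concrete, here's a minimal sketch (Python, simplified WebAuthn-style flow, hypothetical function name) of the checks a relying party performs on a passkey sign-in. The browser and authenticator bind the assertion to the real site, which is something a typed password can never guarantee:

    import hashlib, json

    EXPECTED_ORIGIN = "https://example.com"                       # the real login page
    EXPECTED_RP_ID_HASH = hashlib.sha256(b"example.com").digest()

    def assertion_checks_pass(client_data_json: bytes, authenticator_data: bytes) -> bool:
        """Simplified relying-party checks, before signature verification."""
        client_data = json.loads(client_data_json)
        # The browser, not the user, fills in the origin the request came from,
        # so an assertion produced on a lookalike phishing page carries the wrong origin.
        if client_data.get("origin") != EXPECTED_ORIGIN:
            return False
        # The authenticator scopes the key pair to the RP ID; the first 32 bytes
        # of the authenticator data are the SHA-256 hash of that RP ID.
        if authenticator_data[:32] != EXPECTED_RP_ID_HASH:
            return False
        # A real implementation would now verify the signature over
        # authenticator_data + SHA-256(client_data_json) against the stored public key.
        return True

A stolen password works wherever it's typed; an assertion like this only verifies for the origin and relying party it was created for.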
“Secure, comprehensive backups” sounds scary until you remember that it’s only ever meant not disabling a checkbox for iCloud, Google, or Microsoft users.
The glee with which people say "and it's on your phone!" is what gets me.
Right: it's on the small device I take everywhere and use for everything. The one most likely to get lost, stolen or completely destroyed, and absolutely has to be replaced in about 5 years.
That device. You want to permanently lock data to that thing?
(My phone is basically disposable in terms of my expectations for its future survival, and man, do I still not like the Android recovery options.)
> That device. You want to permanently lock data to that thing?
This is why no passkey implementation does this: the mainstream implementations all require synchronization, and if you read, e.g., Apple's iCloud documentation, you'll note that the offline recovery mode is designed for the case where all of your devices are lost:
"Passkey synchronization provides convenience and redundancy in case of loss of a single device. However, it's also important that passkeys be recoverable even in the event that all associated devices are lost. [...]"
"To recover a keychain, a user must authenticate with their iCloud account and password and respond to an SMS sent to their registered phone number. After they authenticate and respond, the user must enter their device passcode. [...]"
And we get back to knowledge based auth in the end.
Yes, but that’s like saying there’s no difference between a bicycle and a dump truck because they both have wheels and can go off road. Passkeys make an immediate, significant improvement for security and ease of use, and the disaster scenario is no worse, often better.
Recovery flows being based on knowledge based auth that requires multiple pieces of knowledge does not in any way reduce the extremely meaningful security improvements that passkeys bring for both users and Relying Parties on a daily basis.
> Right: it's on the small device I take everywhere and use for everything.
So don't put it on only your phone, put it on your phone, your laptop, your desktop, and maybe a physical hardware token. Lose all but one and you're still fine.
I lost my phone. I don't bother with the cloud sync'd passkeys. I didn't lose any of my identities, because I had access through other devices.
I don't know how other providers deal with recovery, but if you use iCloud Keychain for storing/syncing your passkeys, Apple has a very impressive amount of recovery options, including an option for recovery even if you lose 100% of your devices.
See the section titled "Recovery security" in this support article:
If the giant meteor comes crashing down to destroy everything around me, I don't think I'll be that concerned about getting locked out of a video streaming site.
Even if I have a house fire, there's only a small chance it'll actually destroy all my devices and paper recovery codes. And the odds of having that house fire in the first place are pretty low.
If a tornado destroys my house, chances are my hardware tokens will survive; it's more a question of where they ended up. A tornado destroyed my brother's house, and his iPad ended up just fine.
I don't really live in an area where landslides are possible. If I did I'd probably want to plan around that with the passkeys. But that's true for a number of things at that point though.
What if every device hosting a password safe breaks?
Most people do not have a surplus of devices. They might only have a single phone which carries their life. A phone which at any moment could be lost, stolen, or destroyed.
A password safe, I can trivially backup however I wish. The cloud, a USB I keep at mom’s house, print it out, whatever (in fact, I do maintain encrypted offsite backups in a couple of locations).
> A password safe, I can trivially backup however I wish
Boy, do I have good news for you then. Passkeys can be stored in many widely available password safes: Bitwarden, KeePassXC, LastPass, 1Password, Dashlane, and more all support passkeys. Make one on whatever device, one in your password safe, and you'll have redundancy.
And I'm not talking about people needing to carry two $1,000 phones or a $1,000 phone and a $1,000 laptop. You could have your second key be a small, cheap (<$40), durable authenticator. Another thing on a keychain, another card in a wallet. Really that big of a deal?
And if that's truly impossible for you, then sure I'll agree passkeys might not be for you. I agree, some people like those who are homeless have a hard time keeping any material goods safe. I'm not arguing every account for every person needs to be only passkeys. But people here are acting like it's something impossible for nearly anyone to use safely. And I don't think that's based in reality. I think a lot of people could use them safely if they wanted to, but there's a massive amount of FUD about them.