Riding the RISC-V wave (semiengineering.com)
173 points by lawrenceyan on Sept 14, 2020 | 78 comments



With Nvidia buying ARM, RISC-V might get a big push and investments from parties who preferred the independence of the former ARM Ltd.


Let's hope so. Although Nvidia could launch a full-blown openARM initiative, which could effectively kill RISC-V.


I expect Nvidia to first continue running ARM as it does now, then slowly push ARM+Nvidia chips while even more slowly hindering ARM+other-GPU chips.

The end goal would be to make ARM+other-GPU technically viable but non-competitive, so that most customers have to buy ARM+Nvidia to be competitive, leading to a hard-to-touch quasi-monopoly for Nvidia in Android phones (maybe except cheap ones) and embedded ML.

Oh, and at some point they would probably restrict new "custom silicon" contracts to prevent something like Apple silicon from happening again.


I guess call me skeptical. ARM+Nvidia GPU (Tegra) already runs absolute circles around everything besides Apple. They didn't need to acquire ARM to make a chip that's still superior to just about everything else on the market 5 years later.

I'm willing to bet this is entirely around controlling their own future, not changing the way ARM is licensed. With Intel at least acting like they're going to REALLY try to enter the GPU space at this point, Nvidia needed to acquire AMD or ARM to solidify their position. AMD was probably the preferred path but divesting of the GPU business would've been difficult if not impossible with all the embedded graphics chips they've put onto the market. Plus I would imagine the IP pollution potential would open them up to lawsuits from whoever acquired the GPU assets. And if Intel were to be the acquirer (seems like the most likely suitor) - they'd actually have to be seriously concerned about their GPU leadership.


Sorry, but if ARM+Nvidia GPU already runs rings around everyone else besides Apple, and Nvidia already has access to all of Arm's IP (and there was no threat of it losing that access), then why does it need to acquire ARM to 'solidify' its position?

In reality it's not winning in the mobile SoC market and owning Arm will enable it to hinder its competitors.


Because they want to own their own destiny? The same reason they could've OEM'd Mellanox switches but chose to buy the company instead.

What exactly would their end-game be if ARM were acquired by say... a Chinese firm who DID decide to end licensing terms. Then what?


Arm was never going to be acquired by a Chinese firm - it would have been blocked.

The problem is that controlling its own destiny gives it control over all its competitors' destinies too - and that's not good. I have nothing particularly against Nvidia btw - the same issue would apply to any major SoC designer buying Arm.


Maybe for data center chips; if I'm not mistaken, Nvidia already pretty much owns that market segment.


> They didn't need to acquire ARM to make a chip that's still superior to just about everything else on the market 5 years later.

Technically superior, yes. But at a price point suitable even for high-end flagship phones? No. The Tegra chips are quite expensive.


Considering the Nintendo Switch is based on a Tegra SoC (likely the Tegra X1) and only costs ~$300, I don't think cost is the primary issue. Nvidia is mostly targeting platforms with higher performance requirements rather than building lower-performing chips used in phones. They have different thermal and power draw requirements.


That's just not even a little bit accurate. The K1 tablet retailed at $199 - they absolutely could sell it at a price that fits a high-end smartphone.

https://www.droid-life.com/2015/11/17/nvidia-releases-the-sh...


The K1 tablet was Nvidia trying to get rid of a bunch of chips they were unable to sell.

You don't see Qualcomm out there making their own phones because they couldn't sell the chips they manufactured.


> The K1 tablet was Nvidia trying to get rid of a bunch of chips they were unable to sell.

Based on... what? The only reason there wasn't an X1 tablet was because of the exclusive deal with Nintendo.

I don't see Qualcomm out there making phones because they make far more money extorting others. If they didn't have the ability to push their patent portfolio down the throats of anyone who wants to make a phone that can actually connect to a cellular network, they would absolutely be making their own handsets.


> Based on... what? The only reason there wasn't an X1 tablet was because of the exclusive deal with Nintendo.

The fact that it's the same innards as an original Shield tablet, just rebranded and launched for a bargain-basement price on what was supposed to be a flagship-quality chip? A chip that didn't get any major users?

And can you cite this exclusive deal with Nintendo from somewhere other than a rumor site?

> I don't see Qualcomm out there making phones because they make far more money extorting others. If they didn't have the ability to push their patent portfolio down the throats of anyone who wants to make a phone that can actually connect to a cellular network, they would absolutely be making their own handsets.

Why would that stop Qualcomm? If they have so much pull and vertical integration is what you want to be doing, then why haven't they done that in addition to selling their chips to others? It makes sense for Samsung since they literally manufacture a great deal of the screens and other components anyway, and for Apple since they have the margins to pull it off. But it doesn't really make sense for anyone else except to dump extra inventory that you couldn't sell as glorified dev kits.


> I guess call me skeptical. ARM+Nvidia GPU (Tegra) already runs absolute circles around everything besides Apple.

What? No they do not. All Tegra CPUs are pretty damn lousy compared to contemporary alternatives when fit inside a cellphone power envelope.


> and embedded ML.

I believe that's where they'll hit hardest. They can keep the GPU side in the same competitive state, and could even make it "easier" for competitors, but if they get a bit of an advantage in ML, that's where phones and everything else will be more or less forced to go Nvidia for both GPU and ML.

They don't have much competition in that sphere as far as I am aware, so they'll be able to argue that GPU vendors were able to work with ARM without issues, and that they even did X and Y to make it easier for them, while masking that they made it harder for everyone to compete on ML.

That's just a conspiracy theory though; I still have hope that they understand the value of an open environment like ARM and that buying it was just a way to protect that environment.


nvidia and open aren't compatible


Interesting to see where this is gaining traction and where the challenges are: https://semiengineering.com/where-risc-v-is-gaining-traction...


I wonder how much of a wave it is going to be if we start getting RISC-V ISA "distributions".


About as successful as Linux, I guess, which now powers the vast bulk of all CPUs capable of running it.

Having Ubuntu on desktops, Red Hat on some corporate servers, Debian on others, Pine in VMs, Raspbian on RPis, OpenWrt on routers, Android on mobile phones, yada, yada, yada doesn't seem to have done Linux any damage. Quite the reverse. It's almost certainly a major factor in Linux wiping the floor with Windows. Microsoft doesn't have the manpower to turn Windows into something that addresses all those niches. While it's true Microsoft has successfully defended the desktop for now, they have in the meantime lost the war on every other front.


Azure. And I bet Ubuntu will eventually be owned by Microsoft.

So basically that leaves IBM and Microsoft calling the shots on what Linux is supposed to be.

Plus, in what matters in the embedded space, I give it about 10 years for MIT-licensed OSes like Azure RTOS and Zephyr to wipe the floor with GPL-based Linux distributions.


Thanks for mentioning this.

Never heard of Azure RTOS before, but it seems MS went and bought Express Logic. That's quite big.

But it doesn't seem to be MIT?

https://github.com/azure-rtos/threadx/blob/master/LICENSE.tx...


Not sure; my point was that most companies are doing their best to avoid GPL-based OSes going forward.


These are interesting predictions.


Oh yeah, Xenix 2.0; everything old is new again.


Are you sure? The latest update to the Windows Subsystem for Linux makes it feel like running Linux natively. It's a huge step and it will only improve.


> The latest update to the Windows Subsystem for Linux makes it feel like running Linux natively. It's a huge step and it will only improve.

As long as the system collects and exfiltrates data without me explicitly allowing it, and does OS updates just before somebody wants to give a presentation, it doesn't feel like running Linux. The feeling of running Linux is the user being in control of his/her device and his/her data. A 1990 MS-DOS installation is closer to that feeling than any recent Windows.


Unfortunately that is true. I am not optimistic that it will change, but it is miles better than Apple in this regard. The community should put more pressure on Microsoft to change these anti-patterns.


What does WSL have to do with:

> Ubuntu on desktops, Red Hat on some corporate servers, Debian on others, Pine in VMs, Raspbian on RPis, OpenWrt on routers, Android on mobile phones, yada, yada, yada

?


Ubuntu is the WSL poster child that might eventually get acquired by Microsoft.

Red Hat is IBM nowadays.

Android might just be running on Fuchsia in the near future, and the Linux kernel is not exposed to userspace, so it is pretty much irrelevant what Android runs on.

BSD, NuttX, RTOS, Azure RTOS, Green Hills, QNX, Zephyr, HarmonyOS, yada, yada, yada


Microsoft already had the opportunity to bid for Red Hat and didn't so there's little reason to think they'd be interested in the much, much smaller Canonical.

I doubt Shuttleworth would consider selling anyway, least of all to Microsoft.


Like Internet Explorer did?


Being good at OSes doesn't mean you also have to be good at browsers.


Was Microsoft ever good at building operating systems? I know they were good at selling them.


Considering Windows runs on pretty much any x86 configuration, it’s pretty impressive. Some obscure configurations can have problems with Linux due to lack of drivers, but Windows manages.

So, one can argue that the quality of Windows is changing, but building an operating system is no easy task. And Microsoft’s commitment to backwards compatibility is quite an achievement.


I just can't help but point out how much more hardware Linux supports out of the box than Windows... is this actually your real-life experience or are you just saying this because it's the common wisdom? Excepting the absolute bleeding edge of hardware, the kernel has a ridiculous number of drivers built in. I've far more frequently had to find random graphics or network drivers in order to get Windows to install than Linux. Plus Linux supports all that non-amd64 hardware too. The latest Office doesn't even install on Windows 7, nor 32-bit hardware.

I don't mind, really, Windows 7 isn't even supported anymore. But Linux would install without a second thought and run the most updated version of almost everything (excepting things like packages not compiled for arm yet or something). Saying Windows should get lots of credit for backwards compatibility feels a little disingenuous when compared to Linux... which actually still runs on hardware from the 90s.


> The latest Office doesn't even install on Windows 7, nor 32-bit hardware.

When people talk about Windows backwards compatibility they're mostly referring to running old apps on a new OS, not new apps running on an old OS. I bet you could get very old versions of Microsoft Office running on Windows 10. I've run Visual Studio 6.0 on a Win10 machine no problem. And yeah, Linux nowadays has a lot of drivers included with the kernel (I've been shocked at some of the supported hardware), but vendor-provided drivers are often super outdated. I have a decent chance trusting old driver binaries to run on Windows. On Linux you likely have to be a driver developer yourself just to get some vendor's driver source for an old 2.x kernel to compile, let alone run.


It's kind of off topic so this'll be my last comment about it but I accept your point that backwards compatibility means running old software on a new OS. I was incorrect.

It has been a very, very long time since I've run into hardware that I had any incentive to install vendor-provided drivers for in order to get a computer running with Linux, with the exception of VMware kernel modules and Nvidia drivers, but Nvidia cards still run with nouveau, so the computer will still work well enough for you to get the proprietary drivers. And VMware is proprietary software, so that's no surprise.

I just have not done it for actual hardware other than Nvidia graphics cards in at least a decade. Maybe I've gotten lucky in the hardware I've bought or something, but I haven't had any drivers broken badly enough that I would even file a bug report, except for my Nvidia proprietary driver, which is buggy as heck.

But I accept that my experience is also an anecdote.


Two things Windows excels at compared to Linux: WiFi drivers and sound drivers. Even recent laptops come with Realtek WiFi chips that can be hit or miss with Linux.

Of course Windows excels at app backward compatibility compared to Linux (it's not about hardware). You can't just run an old Linux game's CD; you'd have a better time using Wine with the Windows version of the game.


The Windows development stack is the closest to the Xerox PARC workstation ideas from Interlisp-D, Smalltalk, and Mesa/Cedar, and it is the best desktop OS in terms of security by default given all the processes put in place after XP SP2, so yeah, quite good.


Interesting. OK, I've been off the Windows ecosystem for decades, but I clearly remember my relief at not having pretty much daily BSODs, broken updates, etc., not to mention all the malware.


That must have been a long time ago. If it's any consolation, I have to remind myself to reboot my Windows machine every month or so just to do it.


You trivially can't see the damage it's done at all; some measure of success does not imply peak success.


There are already profiles for ISA configuration. But that is a feature, not a bug.


When that C code cannot run because of proprietary extensions, I guess it is a feature for the compiler vendor using an in-house fork of clang.


What is the alternative? If your goal is for RISC-V to be universal, you have to support lots of things.

The RISC-V strategy makes sense: first standardize the core that allows you to run all the tooling, add necessary things like floats and so on, and then move on to more interesting extensions like vectors.

If a company does something unique, they can take the standard compiler and have a standard way to add an extra extension.

Of course, because the world is not perfect, you will have some pseudo-standards and companies that have their own versions of some of these.

Profiles are a way to manage that and to find configurations of extensions that often work together for desktop, embedded, or similar targets. It seems relatively well thought out for something that has such broad application.
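To make that concrete, here's a minimal sketch of how the base-plus-extensions model shows up in the standard toolchain: GCC and Clang take a -march string that spells out the extensions (e.g. rv64imac vs. rv64gc), and the RISC-V C API defines macros such as __riscv_mul and __riscv_compressed so code can adapt to whichever subset it was built for. Toolchain and macro names are as I recall them from that spec, so double-check against your compiler's docs.

    /* probe_ext.c - print which standard RISC-V extensions this build targets.
       Example builds (assuming a riscv64-unknown-elf-gcc cross toolchain):
         riscv64-unknown-elf-gcc -march=rv64imac -mabi=lp64  probe_ext.c
         riscv64-unknown-elf-gcc -march=rv64gc   -mabi=lp64d probe_ext.c */
    #include <stdio.h>

    int main(void) {
    #if defined(__riscv)
        printf("RISC-V target, XLEN = %d\n", __riscv_xlen);
    #if defined(__riscv_mul)
        printf("  M: integer multiply/divide\n");
    #endif
    #if defined(__riscv_atomic)
        printf("  A: atomic instructions\n");
    #endif
    #if defined(__riscv_flen)
        printf("  F/D: floating point, FLEN = %d\n", __riscv_flen);
    #endif
    #if defined(__riscv_compressed)
        printf("  C: compressed instructions\n");
    #endif
    #else
        printf("not a RISC-V target\n");
    #endif
        return 0;
    }

Roughly speaking, a profile is an agreed-upon baseline -march string, so binaries built against it should run on any chip that implements at least that set.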


>> Profiles are a way to manage that and to find configurations of extensions that often work together for desktop, embedded, or similar targets

In theory, yes. What happens in practice is that for your application you only need a few operations from N different extensions, and now you have to implement all those extensions if you want to be compatible.

The alternative is that you implement the few ops you need as a custom ISA and break compatibility.

So you can either be non-compatible and lean, or compatible and bloated. Guess which one processor/chip designers will prefer given the energy constraints.


If you have such a custom chip that you optimize at the instruction-by-instruction level, software compatibility probably doesn't matter that much.

> So you can either be non-compatible and lean, or compatible and bloated.

That's a vast overstatement. The profiles are already optimized for specific fields, and you have to evaluate whether you want to take advantage of all the software and infrastructure around that profile or whether you want to redo all the work just so you can cut out a few specific instructions.

You act as if every company makes its own custom chips with the minimal set of instructions it needs, but that is about 0.01% of the market or less. For the vast majority of use cases it is simply not required, and a standard embedded profile is perfectly reasonable. And the commercial chips and commercial software will also target that profile.

If you really want to redo everything with your custom chips, compilers, software, and so on, then that's your choice, and RISC-V should not prevent that.


That's already the status quo for many specialty processors.

Either they're doing something unique enough in hardware that you pay for the development seats and special interfacing hardware, or you go with a generic part and get to use all the cheap/open-source tools.


It is a bug, not a feature. It leads to hardware bloat, which will lead to people developing custom instructions, which leads to incompatibility, and now you don't have the generic universal ISA that you wanted.


>> “Often people say, ‘Open source, it’s free.’ But it’s not free,” says Dominic Rizzo...

It's free as in freedom, not price. Think of it as unencumbered. RISC-V is not open source like software, though there are some open-source implementations.


This quote was my favorite part of the article, because it applies to the whole Open Source concept. It's amazing to me, but there are still business folks who think "Let's release this as open source so other people will do the maintenance for free."

Here's the full quote from the article:

> “Often people say, ‘Open source, it’s free.’ But it’s not free,” says Dominic Rizzo, project lead at Google and project director for OpenTitan. “This is why we find the most successful open-source projects are ones where people have a long-term vested interest and are working together in a collaborative fashion. This contrasts to the style of open source where people developed something and kicked it over the wall as open source.”


What is not open source about RISC-V? As far as I can tell, there's the matter of using a trademark, and the control of the project is not given to the community.

But the specifications themselves seem free as in freedom (CC-BY-4.0):

https://riscv.org/technical/specifications/


> and the control of the project is not given to the community.

They also have private mailing lists, but I agree, it is certainly FOSS.


Hardware is also weird. Copyright doesn't protect hardware. Sure, the design files are open source, but the design itself can only be protected by patenting it. Better hope nobody patents your designs...


Not reinventing the wheel also helps. Billions are spent every year to design things that have already been designed, but aren't free to use.


I'm skeptical about RISC-V or any other open CPU design providing the quality of commercial designs. Testing and verification is not free. ARM tests their designs with millions of hours of simulation of heavy multithreaded workload, and chip implementors would have done the same with the actual silicon. Licensing costs of commercial ARM chips are a drop in the bucket compared to this necessary expense to ensure reliability.


Open does not shut out commercial and enterprise companies.

Providers of RISC-V HW or Designs, like Alibaba, SiFive or Western Digital, do lots of testing too.

It would be like saying that Linux does not provide the quality of a commercial closed OS (kernel), which is just wrong in multiple ways:

1. Linux is also sold as base of some commercial OS (e.g., RHEL, SUSE)

2. Lots of commercial companies' revenue depends directly or indirectly on it, so there's a massive testing effort. In sum it is so massive that I cannot believe it's matched by any closed commercial kernel (no actual numbers here, sorry, this is really just personal ballparking).

The same can happen with RISC-V; the combined testing effort can easily outnumber ARM's in the future.

So while yes, testing and verification are not free, that is just not an argument against any open CPU design, be it OpenPOWER, RISC-V, open MIPS, ... Rather, it speaks again for openness, as the combined test effort will be hard to match for a single closed player.


We've gone down this path with virtually every other bit of computing technology: operating systems, databases, etc.

Open always wins....given enough time. No single company can compete with a collective open ecosystem at scale.


Open always wins, but that doesn't necessarily mean that it'll be qualitatively better, which is what your parent comment seems to hint at.


Responding to my own post to add more detail...

Open source CPU design is fundamentally different than open source software design. In the latter costs are extremely low - just the cost of a computer per developer. That developer's computer need not be replaced for years. There is no significant incremental cost for a software bug - just recompile and in a few minutes you're off to the races with a new executable which can be distributed over the internet for next to nothing. Contrast that with CPU design - every time a hardware bug is found you'd have to fix the design, verify it in software simulations, fabricate a new wafer, package it, install it in test hardware, and then perform hardware verification. This is 5 to 6 orders of magnitude slower and more expensive than software. Sure, corporations can perform this open CPU design, verification and manufacturing function. But in the end for a CPU to have a certain level of speed and reliability, you'd have to spend at least the same amount of money as the commercial CPU makers. Companies that produce an open source CPU chip are incurring huge monetary risks - and would have to be compensated for this risk if their chips have bugs and cannot be sold.

The only way for an open source chip design to be remotely competitive would be if they were to embrace FPGA technology. But FPGAs run 4 times slower than purpose built ASICs and are at least 10 times more expensive per unit in volume.


But I can tell you from working in the industry that RISC-V is already competitive with ARM in some segments. You're spot on with your analysis, but I don't think you understand that for some designs, the CPU and the software are developed and verified in tandem. If your design works in simulation with the firmware you intend to run on it, it's very likely to work in hardware as well. This is why you see RISC-V adopted first in things like hard drive controllers and such. You're starting to see RISC-V used more and more in internal processors that are never going to be programmed by a third party.

This also means you'll see more and more vendors of RISC-V cores and RISC-V verification suites. Chip designers are just as willing to pay a license and support fee to these as to ARM, as long as the cost is lower. So I'm absolutely certain that you'll eventually see RISC-V cores verified to the same level as ARM's.

Also remember that open-source RISC-V cores are more likely to get free verification effort from universities, students, and hobbyists.

> The only way for an open source chip design to be remotely competitive would be if they were to embrace FPGA technology. But FPGAs run 4 times slower than purpose built ASICs and are at least 10 times more expensive per unit in volume.

Ah, yes, but you can verify on FPGA and ship on ASIC. That's why even hobbyists can now work on developing and verifying RISC-V cores.

These days it's really not that hard to verify a microcontroller-level CPU core. It might take a long while until you get a state-of-the-art superscalar multicore RISC-V CPU for servers and desktops, but I think it'll eventually happen for RISC-V the same way it did for ARM. Hell, the open-source designs are already there, and they're pretty well verified, which is way further than ARM was at the same stage (they had zero incentive to do anything that wasn't commercially viable, after all, unlike the research communities developing RISC-V cores); you just need further optimization and verification.


Certainly university-grade hardware isn't commercial-quality, but just as most Linux development is now done by paid professionals, we are starting to see production cores like SweRV that happen to also be open source.


I could totally see a sort of verification@home distributed effort to help test open-source CPUs.


One of the reasons for the interest in RISC-V is the US-China tech war and the US's long-arm banning of China's biggest tech companies from access to ARM. However, RISC-V is dominated by US companies, and even with some parts open source, it will still be subject to sanctions.

A true alternative to ARM needs to come out of China.


The European Union is also investing in RISC-V to become independent when it comes to chips [0, 1]. In the presentation they gave at the RISC-V Forum they explicitly mention tech embargo as one of the reasons why they started this project, together with the need for secure cryptographic functions.

[0] https://www.european-processor-initiative.eu/

[1] https://riscv.org/2019/08/how-the-european-processor-initiat...


That's a direct consequence of the Trump administration bungling its international image.


I'm no fan of Trump but it's worth noting that we tapped Angela Merkel's phone under Nobel Peace Prize Winner President Obama.

Better PR is nice but it doesn't change the underlying incentives.


Sure, but that is not what caused things to happen in 2018/2019 at that level. It is the capricious trade wars with Germany, China, and whoever else that caused this.


On the other hand, the tech war is a warning signal for us to decrease our dependency on US technology and go back to building our own stacks.

Ironically, these kinds of decisions might just give Tizen and Harmony enough of a boost to hit Android's hegemony hard.


Well, I don't know, maybe the goal of RISC-V is not to make China happy? Just an idea.


I'm pretty sure the Chinese would rather just use ARM and keep on moving along their current trajectory and existing long-term plans.


Yep. And that's why these massive engineering companies that now are forced to find ARM-alternatives are going to go for something else.


Though the foundation did move to Switzerland for sort of this reason:

https://www.reuters.com/article/us-usa-china-semiconductors-...

U.S.-based chip-tech group moving to Switzerland over trade curb fears

SAN FRANCISCO/WASHINGTON (Reuters) - A U.S.-based foundation overseeing promising semiconductor technology developed with Pentagon support will soon move to Switzerland after several of the group’s foreign members raised concerns about potential U.S. trade curbs.



I have no doubt that there will be a Chinese developed processor, I don't see why it would be open or why it would preclude risc-v from being used as well. I have no doubt that both would ultimately be cheap and commercially appealing.



RISC-V is based in Switzerland, not the US.



