RISC-V: More Than a Core (semiengineering.com)
135 points by walterbell on Oct 15, 2018 | hide | past | favorite | 34 comments



I'm concerned that many of the RISC-V projects seem to be stalling; I'd love to be convinced otherwise.

* It's been almost two years since SiFive released their Arduino-ish dev board and it hasn't been updated; surely there are huge gains to be had in two years in such a fledgling ecosystem? I feel this is important because this board seems like the obvious entry point into the RISC-V ecosystem for enthusiasts.

* The 'BOOM' processor, which showed many efficiency gains, doesn't seem to have gotten anywhere since I last checked on it, which was perhaps a year or more ago. Edit: looking at their GitHub releases, 2.1 was six months ago, and 2.1.1 was recent but shows only minor changes; the project unfortunately looks stalled to me.

* lowRISC has one or two of the Raspberry Pi co-founders on board, which seems to be a good sign as far as 'getting something out the door' goes, but I can't seem to find any output from their project.

Please prove me wrong; I'd like to get involved somehow, but I have to be careful where I invest my time :)


There have been a decent number of projects recently. Andes announced 1.2 GHz CPUs made on a 28nm process, and a Chinese company is taking orders for a 400MHz dual-core AI chip. SiFive did start selling the HiFive Unleashed at the start of the year, but it is very expensive.

The Berkeley processor is mostly a research CPU for FPGAs. I would however love to see it on actual silicon.

At a guess, I'd say it will be a few years until we see RISC-V popping up in more places.


"I would however love to see it on actual silicon."

Qty 1, but... https://mobile.twitter.com/boom_cpu/status/10324470709283962...


No kidding, that's impressive! That was a couple of months ago now; I wonder if they've got it connected to a board/RAM and working?


This is a pretty reductive analysis compared to the massive amount of activity around RISC-V:

- SiFive is a custom silicon company, and they have development platforms for the nodes they're working with. It makes little sense for them to push much further than that in terms of dev boards. Overall, SiFive is doing a lot for software and the RISC-V standard.

- BOOM was never a large open-source project, but rather the work of a single grad student, and he is a student no more. However, there are changes coming from Esperanto, and there have also been more tapeouts. See the Hot Chips 30 talk on BROOM.

- The lowRISC effort has been slow. One of the main people who worked on it is working on the LLVM compiler, and the lowRISC project gets some amount of money for that work.

There are way too many projects to list here, and I don't know what you are interested in doing.


Thanks for the response! I didn't realise BOOM was just one person, and I did notice that lowRISC has a lot of LLVM-related commits. Perhaps I've got the wrong end of the stick here; maybe RISC-V is more 'subterranean' than is obvious, due to its embedded aims. Have a good one, thanks for the explanation :)


I can give some insight on BOOM - Chris Celio, the grad student who built BOOM, recently graduated, but there are some new students now who are picking up on the effort. Expect to see more work on BOOM in the coming year.

More recently, we've integrated it into our FPGA simulation flow, FireSim: https://github.com/firesim/firesim

You can see BOOM doing some "real stuff" here and spin up your own (on an FPGA on EC2 F1): https://fires.im/2018/08/19/firesim-1.3.1-with-BOOM-support....


Agreed WRT all those projects. SiFive seems to be encouraging independent proprietary developments instead of taking funding for a single awesome one. If all the money spent on closed projects were pooled, they'd be killing it by now.


While RISC-V continues to grow, there are still gaps that may limit where it gets used in designs: https://semiengineering.com/risc-v-inches-toward-the-center/


Sure, but filling those gaps is the ongoing work of the foundation. The vast majority of ARM penetration is highly generic, ditto MIPS and Tensilica (e.g. my aftermarket camera lens has some 32-bit MIPS core in it, and doesn't call into MSA; this code could be compiled, assuming equivalent peripherals, to run on a standard RISC-V today and perform the same task).


The article says:

> One of the big benefits of RISC-V is that the architecture is open source.

To which I reply:

...Yeah, until Intel / AMD / Qualcomm / WhatHaveYou Corp.™ says that the implementation underneath is proprietary and must be defended with copyright law. And the compilers for an Intel RISC-V CPU and an AMD RISC-V CPU will probably be different because of vastly different microcode. :(

I want to believe in RISC-V, but I am afraid the current incumbents will find a way to capitalize on the idea, without showing any goodwill or actually working together, in just a few short months. Never underestimate a team of expert lawyers and marketers, I guess.

And I really wish that I am wrong. I will follow any RISC-V news eagerly.


I think that's a far smaller risk than fragmentation. Without one company controlling the ISA, are we going to end up with a gazillion variants and extensions? How do you compile a program for that?

Hell, there are already a ton of variants defined in the base standard. I guess at the moment they are targeting embedded systems, where it's reasonable to compile your program specifically for one chip. It's hard to imagine that working on the desktop, though.


I suppose “variants” is not an inaccurate description, but it feels misleading (although you could argue about intention vs reality). The spec calls them “extensions”. Support for an extension can be checked at runtime, and unsupported instructions can be trapped and emulated. The base ISA is frozen, and the common extensions (floating point, atomics, vectors) are standardized. The spec also defines a subset of the extensions that general purpose processors are expected to support.

It may be a hope born of bias (I’ve worked with the architects of RISC-V), but I don’t expect fragmentation to be one of the primary issues that RISC-V will face.
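For context on how that runtime check works: on RISC-V, the `misa` CSR reports supported base extensions, with bits 0 through 25 corresponding to extension letters 'A' through 'Z' (per the privileged spec). A rough Python sketch of decoding such a value — the sample value below is made up for illustration, not read from real hardware:

```python
# Sketch: decode a RISC-V misa CSR value into extension letters.
# Bits 0..25 of misa correspond to extensions 'A'..'Z'; the sample
# value below is illustrative only.

def decode_misa_extensions(misa: int) -> str:
    """Return the extension letters whose misa bits are set."""
    return "".join(
        chr(ord("A") + bit)
        for bit in range(26)
        if misa & (1 << bit)
    )

# A made-up misa low word with the I, M, A, F, D, C bits set
# (bits 8, 12, 0, 5, 3, 2 respectively):
sample = (1 << 8) | (1 << 12) | (1 << 0) | (1 << 5) | (1 << 3) | (1 << 2)
print(decode_misa_extensions(sample))  # -> "ACDFIM" (alphabetical order)
```

On real hardware the CSR read itself needs machine-mode access; the decoding logic is the part sketched here.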


I am no expert but...

> Support for an extension can be checked at runtime, and unsupported instructions can be trapped and emulated.

...isn't that exactly fragmentation?


I’m no expert either, and maybe there’s a precise definition of fragmentation that I’m not aware of. My understanding is that fragmentation is about software.

The extension scheme allows you to use the same binaries across all platforms. If you care about widespread compatibility, you simply compile in software emulations of potentially unsupported instructions. If a platform *does* support the instructions, the emulation is never used and you get a performance bonus.
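As a toy illustration of that "fast path if supported, emulate otherwise" idea — all names here are invented, not from any RISC-V toolchain:

```python
# Toy sketch of the compile-in-the-fallback scheme: dispatch to the
# hardware-backed path when the extension is present, otherwise use a
# portable software emulation. Names are invented for illustration.

def popcount_software(x: int) -> int:
    """Portable fallback: count set bits in pure software."""
    count = 0
    while x:
        x &= x - 1  # clear the lowest set bit
        count += 1
    return count

def make_popcount(has_bitmanip: bool):
    """Return the fast path if the (hypothetical) bit-manipulation
    extension is present, else the emulated one. Real systems do this
    via trap-and-emulate or ifunc-style dispatch; this just shows the
    shape of the idea."""
    if has_bitmanip:
        # Stand-in for a native population-count instruction.
        return lambda x: bin(x).count("1")
    return popcount_software

popcount = make_popcount(has_bitmanip=False)
print(popcount(0b1011))  # 3
```

Either path gives the same answer; only the speed differs, which is why the same binary can run everywhere.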


I'm not even sure where I'd look to find a platform target specification (and there SHOULD be a standard platform target).

Just basic things like:

    * This is where the chip will try to run code if powered up/reset.
    * A standard address and format for a "bios configuration" block, or pointer to one.
    * Required standardized platform description.
Probably a lot of other things, but at least with the above, a generic (if bloated) kernel could include the kitchen sink and bootstrap further according to the stored configuration.

Somewhat ideally, the configuration should be in two blocks: a small 'read only' (or at least rarely updated) one, and a larger supplementary block that can be left erased without bricking the system. In fact, erasing it would be the 'simple' unbricking method. It should also keep a shadow copy of the main block, so that a partial erase/program during re-programming can be recovered from the backup copy.
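A back-of-the-envelope sketch of that shadow-copy recovery idea — the block format here (a CRC32 prefix) is entirely invented, just to show how a reader can prefer a valid primary and fall back to the backup:

```python
# Invented sketch of a config store with a primary block and a shadow
# (backup) copy: reads prefer a valid primary and fall back to the
# shadow, so a partial erase/program of one copy is recoverable.
import zlib

def pack_block(payload: bytes) -> bytes:
    """Prefix the payload with a CRC32 so validity can be checked."""
    return zlib.crc32(payload).to_bytes(4, "little") + payload

def unpack_block(block: bytes):
    """Return the payload if the CRC matches, else None
    (treat the block as erased or corrupt)."""
    if len(block) < 4:
        return None
    crc, payload = int.from_bytes(block[:4], "little"), block[4:]
    return payload if zlib.crc32(payload) == crc else None

def read_config(primary: bytes, shadow: bytes):
    """Prefer the primary block; fall back to the shadow copy."""
    return unpack_block(primary) or unpack_block(shadow)

good = pack_block(b"boot=uart0")
corrupt = b"\x00" * len(good)  # simulate a partially-programmed primary
print(read_config(corrupt, good))  # b'boot=uart0'
```

A real flash layout would also worry about wear and write ordering (update the shadow first, then the primary), but the fallback read is the core of the unbricking story.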


Thanks, that depressed me even more!

You are quite right. I am afraid the usual charade is in full motion already. Nobody wants to work together and standardize stuff. There's profit in not cooperating -- vendor lock-in and patent fees come to mind -- so people will continue gravitating away from the altruistic vision of us technologists.


> A previous project, open core 32, was a vibrant open-source hardware group in Europe. But what they made open was an implementation of a processor. With RISC-V, what is open is the instruction set, and you can use that to implement anything from the smallest IoT device to server-class processors.

So open core 32 was an open implementation; RISC-V is an open idea. Sometimes an idea travels farther, faster, if it doesn't have an implementation. And the implementations that do exist have a huge population of creators (rabid fans) to assist in their spread and usage. Brilliant. RISC-V isn't the Linux of hardware, it is the Unix.



I see a sound bite from SiFive. It would have been nice if they had reached out to Shakti for comment as well.


That IBS chart seems a bit baloney. Why do the cost and cost share of software grow so dramatically at smaller nodes? It seems like most of the cost-share increase would come from validation and physical design.

I personally think that at this point, success (and hardware interest) in commodity RISC-V server hardware is mainly contingent on the availability of major compilers and runtime libraries, but that is necessary regardless of process technology. I don't know of many operators/users of server hardware who have any considerable investment in any ISA-locked software (except the odd proprietary package which would not honestly be too difficult to convince vendors to basically just recompile).


Those "chip design estimates" are nearly always bs. For example, their "estimate" for what a 7nm chip would cost to develop was the next gen nvidia GPU. Literally one of the most complicated and expensive chips to design and yet for some reason they chose that to be representative of average.


>why does the cost and cost share of software grow so dramatically at smaller nodes?

My guess is that the main reason is market segmentation. If you're willing to work at a smaller node, then you're willing to spend more money, and the software vendors can charge more.


I'm pretty sure that is the development cost for the compilers, etc. You're not paying some other company to do that.


Yeah, I initially got the impression they were talking about the design automation software for the node, but I'm not really sure if that's the case.


This site managed to disable right clicking?


Yes... annoying. It's in the main HTML as an embedded script:

  function nocontext(e) {
    return false;
  }
  document.oncontextmenu = nocontext;
You can paste "javascript:void(document.oncontextmenu=null);" into the address bar to get it back.


In Firefox, Shift+Click bypasses these shenanigans as well.


I'm seeing no shenanigans in Firefox. The context menu works for me.


And it works for me on both Firefox and Chromium, uBlock disabled. I can't find the function in the HTML either. Was it just removed?


Yes, looks like someone removed it. It's still in the source on the wayback machine cached copy:

https://web.archive.org/web/20181015143110/https://semiengin...

That cached copy also seems to show the site was hacked for a while. Search the source for strings like "provigil free trial coupon".

Edit: It's still hacked. See https://www.google.com/search?q=site%3Asemiengineering.com+v... Clicking on any of the results takes you through a redirect to a shady pharma site.


It would be nice if Chrome had this. I know there are extensions, but having it in core would be better.


I hate it and always set dom.event.contextmenu.enabled to false in Firefox about:config.


Works fine for me.



