Show HN: v8-riscv — Port of JavaScript V8 engine to RISC-V (github.com/v8-riscv)
123 points by partingshots on Jan 6, 2021 | 54 comments



Dumb question, but I'm curious: what's the purpose of RISC-V? Is it only relevant for experiments, or does it have real applications?


Good question, and the answer is still a matter of opinion.

I think the HN crowd is mostly excited because it is an ISA that fits two important criteria:

* You can build a RISC-V CPU into your own digital designs for $0. There are also a lot of free designs which use open-source buses like Wishbone if you don't want to build your own.

* Linux/GCC/etc. will probably support a compliant RISC-V implementation.

And the cynical/realist crowd is harrumphing over a few important questions, like:

* Many important features are not yet finalized, including things as basic as a nested interrupt controller. So, how compatible will modern RISC-V implementations be with future software?

* Will "good enough" performance actually be good enough? It's still not clear whether a RISC-V CPU will be competitive with ARM et al on power and performance on a similar process node. It might be, but the ecosystem needs to mature.

So it's one of those things that always seems to be 5 years away, like practical fission or the year of the Linux desktop.

But if we ever do get to a point where you can use something like LiteX to generate a RISC-V SoC with N CPUs and driver support for its major peripherals on Linux, that would be exciting. Fabrication on 45-130nm process nodes might also be affordable for small startups by that point.


The no. 1 reason is that you need an open ISA in order to have an open processor, and you can only have a proper open-hardware movement with an open processor.

The RISC-V Foundation was only created in 2015; before that it was basically a research project, so saying anything RISC-V related is 'always' 5 years away is a strange statement.

And I also don't know where you get 'good enough'. RISC-V is not a processor, it's an ISA, and we know that the ISA has relatively little impact on implementation speed. We already have RISC-V processors at the same node as ARM processors that beat them on many criteria.


> we know that the ISA has relatively little impact on implementation speed

But this is not true; decoders cannot make up for everything. ISAs can also do a lot to improve software security, not to mention ease of use. RISC-V is just not well designed. Though it might be good enough…

https://gist.github.com/erincandescent/8a10eeeea1918ee4f9d99...


That's the opinion of one ARM engineer I've never heard of.

Here's the opinion of probably THE most important ARM engineer of the 1990s and 2000s, Dave Jaggar, who developed the ARM7TDMI, Thumb, and Thumb2.

https://www.youtube.com/watch?v=_6sh097Dk5k

Check at 51:30 where he says "Are there any ARM snipers? No ... I would Google RISC-V and find out all about it. They've done a fine instruction set, a fine job [...] it's the state of the art now for 32-bit general purpose instruction sets. And it's got the 16-bit compressed stuff. So, yeah, learning about that, you're learning from the best."


Did he also design AArch64? Because I've noticed it can be used to design very fast processors, yet it abandons most of the decisions from ARM7 and Thumb.

I was just curious about how SIMD worked in RISC-V so I looked it up… and apparently RISC-V doesn't have SIMD; instead it brings back microcoded large vector operations from the '70s, and the RISC-V designers called SIMD bad and compared it to the opioid crisis because it means there are too many instruction opcodes?

https://www.sigarch.org/simd-instructions-considered-harmful...

So I'm now even less impressed and also slightly offended. This is not realistic; short vectors are important and easy to do in hardware.


Many codes can be expressed much better with vectors, and there is also plenty of room to extend the design in the future. You can do short vectors with the RISC-V vector extension, and you can implement them in a simple way as well.
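To make that concrete, a minimal sketch (mine, not from the spec): the point of the V extension is that one plain loop compiles once and runs on any vector length, because vsetvli asks the hardware how many elements it can handle per iteration.

    #include <cstddef>

    // A vectorizing compiler targeting RVV strip-mines this into a
    // vsetvli loop: no vector width baked into the binary, and no
    // separate scalar tail loop as with fixed-width SIMD.
    void saxpy(std::size_t n, float a, const float* x, float* y) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }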

Have you worked with both or did you just form your opinion based on what you are used to?

Again, RISC-V is not one thing, it's modular. The 'V' extension is considered to be the right thing for most serious processors.

However, there is also the 'P' extension for embedded, which does packed SIMD.


Performance is not one thing; we have processors ranging from ones so small they can barely be seen to gigantic AI chips. RISC-V made some design choices, and some people like some of them and some people don't like others, just like with anything else in the world.

Christopher Celio did an ISA comparison as part of his PhD and found no evidence that RISC-V is systematically worse; in fact, in many situations it's better.

David A. Patterson, Dave Ditzel, and many other long-time engineers in this space seem to really like it.

And btw, you could make a much longer 'The Good, The Bad, The Ugly' list about ARM as well. By that standard, no ISA is well designed.


Isn't fission already practical?


Maybe. I'll believe it when a large-ish city runs off of it.


I mean, France and Japan got 70% of their power from it not too many years ago...


The author probably meant fusion.

70% of France's electricity comes from fission, but that's only electric power.


Yeah, that - thanks. I'm not a nukular physicist.


It's an ISA whose purpose is to be an open alternative for people developing microprocessors. It's actually a (large) family of ISAs, suitable for diverse applications.

The major advantage over rolling your own is that you don't have to develop the tools, and you can leverage quite a lot of software (still growing daily).

The major advantages over going with Arm are that you have the freedom to extend the architecture (in a well-defined manner) and that the ISA is completely open and free. You can implement it yourself, use one of many open-source implementations, or license the IP from many companies, including SiFive.

Nearly all machine learning startups use RISC-V for this reason, including Esperanto Technologies.

Off the shelf chips are slow to come out, but there's a steady stream. The next generation of the ESP32 is RISC-V based and there's also BL602 in the same class.

For desktop (RV64GC)[1], I'm not aware of any chips for sale on their own, but SiFive sells motherboards with their chips.

I expect a lot more RISC-V silicon in 2021, including RV64GC capable parts.

[1] see my rant elsewhere about why the "C" part is a !@#$ disaster.


It probably was born as a practical academic experiment. It will be/has been used as an appealing alternative to cortex microcontrollers that are embedded in SoCs, storage devices, and add-in cards. But there's real commercial work trying to scale it up to bigger workloads too. I think these products are pretty early stage.

I think Google is reluctant to push for Android support for a RISC-V application processor. The nature of the different ISA extensions means that they would need to bet on the set of extensions to embrace (or do a bunch of work to support 'fat' binaries - or maybe 'obese' ones if there are a lot of supported combos).

Regardless of Android's fate, yes, you can buy a RISC-V board that will boot linux. That's a pretty good bellwether for 'real applications'. But the performance likely pales in comparison with modern ARM or x86_64 alternatives.

EDIT: see SiFive's "HiFive Unmatched" [1] for an example - DDR4, gigabit ethernet, 8 lanes of PCIe g3. It wouldn't surprise me if that's one of the best performing generally available RISC-V systems.

[1] https://www.crowdsupply.com/sifive/hifive-unmatched


Check out - https://github.com/google/riscv-dv

There’s pretty strong support for RISC-V and open source hardware in general at Google.

Are they doing it to try and capitalize / control the market? Somewhat. But I don't see it as necessarily a bad thing. The ecosystem ultimately benefits from it, which is my primary concern.


Oh yeah, I'm sure Google in general is a fan, just like all the other companies. But at some point in the (distant?) future, we will see Android phones and apps with RISC-V application processors. Specifically, the impact on Android developers is what I'm curious about -- that's the challenge that Google has to balance. Today there's support for x86_64 and arm; do they want to add the burden of rv64gc?


The HiFive Unmatched will certainly be the best performing freely available RISC-V PC for the moment. Alibaba's XT910 might be similar but so far they're only using it internally.

The U74 cores in the Unmatched are a very similar uarch to the ARM A55 (as found in boards such as the Odroid C4), minus NEON. That's a good step better than the A53 found in the Raspberry Pi 3, but not as good as the out-of-order A72 in the Pi 4. Overall performance on many tasks may be similar to or even better than the Pi 4: while the CPU cores are slightly worse, the RAM and I/O are better, with the Unmatched having 16 GB of DDR4, an M.2 slot for an SSD, and PCIe for a real graphics card (SiFive demonstrates it with a Radeon RX 580).

Obviously it's not price-competitive with a Pi at this point, as it's still low volume.

The best RISC-V cores are around 5 or 6 years behind ARM in performance, but catching up. SiFive claims A72 equivalence for the U84 out-of-order core they announced 14 months ago (which will probably start to be available in silicon around the end of 2021), and Alibaba claims A73 equivalence for the XT910.

Obviously no one has an Apple M1 equivalent RISC-V core at this point, but neither do ARM's customers.


You could also alter behavior at runtime based on detected features…
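For example (a rough sketch, assuming the Linux kernel's RISC-V hwcap convention, where bit ('x' - 'a') reports single-letter extension 'x'):

    #include <sys/auxv.h>
    #include <cstdio>

    // Ask the kernel which single-letter ISA extensions are present.
    static bool has_ext(char letter) {
        return getauxval(AT_HWCAP) & (1UL << (letter - 'a'));
    }

    int main() {
        // Choose a vector or scalar kernel once at startup.
        std::printf("V extension: %s\n", has_ext('v') ? "yes" : "no");
    }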


Sure, but that's not the hard bit. The toolchains that I've seen so far all generate code for a static set of RISC-V extensions, and that set of extensions becomes the target platform's identity. Sort of like the 'armhf'/'gnueabi' variations we've seen before, except that with RISC-V several of these 'extensions' are pretty much mandatory to get performance parity. The granularity of these ISA extensions is much finer than on other architectures. That's part of the appeal, for sure: you can design a Just Right RISC-V without wasting any power or area supporting features you don't need.

But until or unless the extensions stabilize to a canonical popular set (maybe this has already happened, or is happening soon), I wonder if the extension soup will cause some to be hesitant to adopt.

Whatever hesitancy I am imagining is not holding back some super swift progress on all kinds of awesome RISC-V stuff.


> But until or unless the extensions stabilize to a canonical popular set (maybe this has already happened, or is happening soon)

It already happened. The set is RV64GC.


And that happened YEARS ago.


When you configure RISC-V gcc you tell it what default set of extensions you want it to use, but you can always tell it a different set with -march. I think that can't even be disabled.

It may be that the toolchains you've seen were built with only one set of libraries. If you specify --enable-multilib when you build the toolchain then libraries for a dozen or so common combinations of extensions are built.

If you're doing something embedded then you can build precisely the core and toolchain you want. If you're doing an OS for packaged software then you'll specify one (or a very small number) of combinations of extensions. Linux distributions such as Debian and Fedora decided on RV64GC a few years ago already.
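To sketch how that looks in practice (the compile lines are illustrative, and the __riscv_* feature macros come from the RISC-V C API spec, as I understand it):

    // riscv64-linux-gnu-g++ -march=rv64gc -mabi=lp64d probe.cc   # with C
    // riscv64-linux-gnu-g++ -march=rv64g  -mabi=lp64d probe.cc   # without
    #include <cstdio>

    int main() {
    #ifdef __riscv_compressed
        std::puts("built with the compressed (C) extension");
    #else
        std::puts("built without compressed instructions");
    #endif
    }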

By the end of this year the B and V extensions (and a few smaller ones) should be ratified and the Linux distros will make a second set of supported extensions.


RISC-V was originally a teaching ISA, but because of its licence and open design process it's increasingly being adopted by industry.

There are other open ISAs available, but they are generally bigger targets (POWER is open now, but it's the result of decades of IBM work, so it's not as simple to start with as RISC-V).

Also, one thing that seems to be missed even on HN - RISC-V IS THE ISA, not the silicon. All it means is that you don't have to pay the designer of the ISA any royalties to use it, a la Arm; the CPU itself is most likely still a black box.


Sorry if this sounds cringe -- complete hardware novice here.

> RISC-V IS THE ISA not the silicon

Doesn't that still require the chip-maker to build the chip with an interface that allows RISC-V instructions to run on it? It's not possible to "flash" an existing Intel x86 CPU and run RISC-V instructions on it, right?


Yes. That is correct. The expectation is to run these on RISC-V hardware (mostly emulators are used now due to limited hardware availability but that is not the end goal).

* Technically it may be possible to run RISC-V on Intel/AMD CPUs by modifying the microcode, but not much is known about what exactly the microcode does, and it would likely not be very performant.


> RISC-V was originally a teaching ISA

Where did you read this? I've always heard it started as part of Par Lab, a research project, and was aimed from the very start at being an open-source ISA that academia and then industry could use.


I could be wrong, or I'm conflating academia with teaching.


I think that's actually a bit of a contentious question; there are a lot of people who'd argue that it's definitely part of the future of open computing, and I think it's extremely likely that it stays a part of computing for low-cost embedded devices.

On the other hand, there's a strong argument to be made that it's more difficult to make a RISC-V processor with performance comparable to a desktop-grade amd64 processor than to make an ARMv8 processor that does the same. (Personally, I'm a big fan of AArch64 being a fixed-width encoding; I find both programming in it by hand and writing assemblers and disassemblers for it super easy as a result.)


RISC-V is also fixed-width, at least to the same level as AArch64.


My understanding was that the RISC-V encoding used lengths as decoded in e.g. [0], whereas AArch64 is uniformly 32 bits per instruction; is this incorrect?

[0]: https://www.embarcados.com.br/wp-content/uploads/2017/05/RIS...


That's correct: the compressed extension is part of the desktop platform (over my loud and persistent protest). It adds 16-bit instructions to the 32-bit instruction set.

There are many reasons why this was a huge and very shortsighted mistake, but the most painful part is that you can now have an instruction spanning two cache lines. This has many awful implications and precludes many tricks that would make implementations faster. But it was important for Andrew Waterman to get his thesis work made a mandatory part of the Unix platform, so here we are.

ADDENDUM: It's near trivial for a linker to pad such that instructions don't span cache lines, and that would have made this a lot less painful, but RISC-V refuses to specify the line size (it's implementation dependent), so now we suffer this nightmare.
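To make the failure mode concrete, a tiny sketch (mine, not from any spec): with 2-byte instructions in the mix, a 4-byte instruction can end up at an offset like 62 and span two 64-byte lines.

    #include <cstdio>

    // True iff an instruction starting at `off` with length `len`
    // crosses a cache-line boundary of size `line`.
    static bool straddles(unsigned off, unsigned len, unsigned line) {
        return off / line != (off + len - 1) / line;
    }

    int main() {
        // 31 compressed instructions put the next one at offset 62...
        std::printf("%d\n", straddles(62, 4, 64));  // prints 1
    }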


I'm on the other side. I think ARM ignoring their ultra-successful Thumb2 -- which basically made the company -- when designing AArch64 was a huge mistake.

Pretty much every embedded RISC-V user values the C extension highly, and many even think it doesn't go far enough, because it leaves out things such as push/pop multiple, resulting in the code being 3% or 5% bigger than Thumb2 code (but much smaller than anything else in current use). Hence there is a working group creating an optional extension including things like that, based on a custom extension Huawei has already deployed in the field.

Having both 2 and 4 byte instruction lengths is a little harder to deal with than having only 4 byte instructions, but it's obviously far easier than having anything from 1 byte to 15 bytes. If you really don't want to do it then you're free to build a CPU without the C extension. All the tools support that. You just need to compile the software you need yourself -- or join a community of like-minded people.

Similarly, if you want to modify ld to have an option to avoid instructions crossing 32 or 64 byte boundaries (configurable), then that's pretty easy, and I'm sure it would also be accepted upstream. You wouldn't need to pad with a NOP or anything like that -- just leave one instruction in uncompressed form.


It really is the worst and most awful decision, one that RISC-V absolutely must retract, the sooner the better. We cannot have performance and compressed instructions at the same time. There are other things about the C extension that make it extremely bad for emulators too.

The good news is that we can ignore rv64gc completely and just build everything rv64g, if you control everything yourself. Personally, for my RISC-V use I actually do control everything, and because of this I can pretend the C extension doesn't exist.


You would have to build and maintain your own Linux distribution, that is the problem.


Or just use Gentoo (or presumably Nix), or another distro with support for custom build flags for packages, right?


I don't know if any of these have RISC-V support, but I would be interested in hearing about it. I would actually love a good distro for RV32G, which is arguably a better match for FPGA softcores (and dev boards with > 4 GiB of external memory are rare).



So what would the RISC-V foundation do about this problem?


It's already used in embedded processors commercially, and usage is growing.

For example, there are RISC-V processors in some nVidia GPUs.

Western Digital is known for using RISC-V in drives.


It is unencumbered by licensing and patents and offers good enough performance per watt. The Chinese have been investing heavily in RISC-V as an alternative to ARM, and the first IoT devices sporting the RISC-V architecture should be available later this year.


There is IP available for FPGAs, along with whole toolchains. This is great if you need some specialised processing that would be too slow implemented in software: you can add your own CPU instructions or map custom data pipelines into the CPU's I/O space. The IP is not too performant, so there are trade-offs to consider.
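On the custom-instruction point, a hedged sketch (the opcode and funct values below are invented for illustration; .insn is the GNU assembler's directive for hand-encoding RISC-V instructions):

    #include <cstdint>

    // Invoke a made-up R-type instruction in the custom-0 opcode
    // space (0x0b); your FPGA core would decode funct3/funct7 itself.
    static inline uint64_t my_accel(uint64_t a, uint64_t b) {
        uint64_t r;
        asm volatile(".insn r 0x0b, 0x0, 0x0, %0, %1, %2"
                     : "=r"(r)
                     : "r"(a), "r"(b));
        return r;
    }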


Not necessarily a dumb question.

As I see it, it's an open CPU and does not require licensing fees. A huge number of devices ship with ARM Cortex cores, each one low power and with money going back to ARM for the licenses. RISC-V provides for an open system that is performant and free of license fees.


It provides for an open/royalty-free system that is potentially performant.

RISC-V alone provides little in terms of helping create better CPU implementations. An ARM core could probably be reworked into a RISC-V core and vice-versa -- it's ultimately "just" an instruction set. It benefits the CPU manufacturer, as they don't have to pay licensing fees and can therefore earn a buck more in profit.

RISC-V isn't really intrinsically more open (from an end-user perspective) than many of its competitors -- it does, however, have a lot of hype and social inertia behind it, which may be enough to propel it to greatness.

One benefit of RISC-V is that it offers a modular instruction set, where a "RISC-V" CPU can implement a subset of functionality and add new instructions in a well-defined way. One downside is the possibility of the ISA splintering in practice if the major vendors end up not supporting a uniform feature set.


A RISC-V CPU is inherently more open, even from an end-user perspective. That openness is not practically relevant for end users if you consider somebody using, for example, a USB stick an end user.

However, you could take the code on your device and run it on an alternative provider's implementation or on an open implementation.

But just like end users of software don't really get practical benefits from open source, it's the same for hardware.

The advantage is for the industry and developers.


> it's an open cpu,

Not true. ISA != RTL


I thought a big benefit of compiled languages was that they are architecturally agnostic.


V8 includes a JIT for every platform it runs on. The code itself needs to run on the target platform (memory model, threads, etc.) and the output of the JIT needs to target the execution platform. Given how portable V8 already is, probably not a whole lot of work is needed to get it to compile and run on RISC-V, but the codegen needs to be written.

https://github.com/v8-riscv/v8/tree/riscv64/src/codegen
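To illustrate (a toy sketch of mine, nothing to do with V8's actual API): a JIT ultimately writes raw, ISA-specific bytes into executable memory, so each new ISA needs its own encoder.

    #include <sys/mman.h>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        // Raw RV64I encodings: addi a0, a0, 1 ; ret
        uint32_t code[] = { 0x00150513, 0x00008067 };
        void* mem = mmap(nullptr, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) return 1;
        std::memcpy(mem, code, sizeof code);
        __builtin___clear_cache((char*)mem, (char*)mem + sizeof code);
        auto f = reinterpret_cast<long (*)(long)>(mem);
        std::printf("%ld\n", f(41));  // 42 -- but only on a RISC-V CPU
    }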


Not as far as I'm aware. The benefit of a compiled language is that it is translated to machine code once, not every time you run it, which generally gives perf gains.

But machine code, or rather the instruction set that a CPU supports, is still different for different architectures.


> translated to machine code once, not every time you run it, which generally gives perf gains.

I think the real dominant factor is that ahead-of-time compilation generally isn't competing for resources with the application. AoT compilation is usually done on a completely different machine, or at least at a time other than when the application is run.

If your applications were recompiled every night while you slept, you wouldn't mind so much. In fact, that's what the latest Android Runtime does. Running applications dump profile information to the filesystem, and when the phone is plugged in and idle, the runtime goes back and re-optimizes the binaries for your usage pattern. Presumably, there's something like a statistical distance metric[0] to estimate how much the profiles changed since the last compilation, to avoid wasteful recompilation.

[0] https://en.wikipedia.org/wiki/Statistical_distance
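The AoT world has an offline analogue in profile-guided optimization; roughly (GCC/Clang flags from memory, so double-check your toolchain's docs):

    // 1. Instrumented build:  g++ -O2 -fprofile-generate app.cc -o app
    // 2. Run a representative workload; it writes *.gcda profile files.
    // 3. Re-optimized build:  g++ -O2 -fprofile-use app.cc -o app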


And as of version 10, PGO data gets shared with the store, which probably isn't something the HN crowd will enjoy. :)

The idea is that long term each application for a specific device gets the optimal compilation outcome.


I've long thought that JVMs should do something similar within large organizations. There's a bit of a black art to exactly which optimizations in which order are optimal, so it would help if there were some small amounts of randomization in that regard, along with performance monitoring/profiling and sharing across hosts / runs.

Of course, it's quite different when it's outside the organization if there's no way to opt out.


They already do; it's what IBM calls cloud JIT on their JVM implementation. :)

https://blog.openj9.org/2020/01/09/free-your-jvm-from-the-ji...

Also, the JIT cache that HotSpot now has: the original implementation was part of the JRockit JVM.


Wait, I didn't realize they implemented a JIT cache before their AoT compilation option. It seems like pre-populating a JIT cache, dumping it to disk, and throwing that in as an extra resource in a JAR would have a lot fewer caveats than at least their first implementation of AoT Java compilation.



