Western Digital Plans to Ship More Than One Billion RISC-V Cores a Year (wdc.com)
371 points by deepnotderp on Nov 28, 2017 | 145 comments



If you're interested in what's going on at the RISC-V Workshop, you might want to follow my live blog here: http://www.lowrisc.org/blog/2017/11/seventh-risc-v-workshop-...


Let me thank you for providing the excellent write-up. I'm eagerly awaiting news from lowRISC, hoping for an SBC with general-purpose Linux support. That would open up the hardware in general, and thus be a major step in a good direction.

I do have a question, though. Is there any plan for working on an architecture for massively parallelizable workloads, like graphics, artificial neural networks and simulations? Especially the talk by Dave Ditzel seems relevant to this. Even without competitive performance, that would be another major step. It would not only improve the situation directly, but also raise the bar for all the other vendors, who would be compared against it.

There is a multiplier between your work and its impact on the ecosystem, and I'd guess it is significantly greater than one.


You really do an amazing job writing these up. Back when I used to work at ARM and attended the 3rd workshop, I ended up just pointing people to your write-up rather than my own notes.


From the headline, I would guess WD is putting a user accessible CPU in each of their disk drives, idea being that if you have a CPU living close to the drive, then e.g. map+reduce workloads can be more efficiently executed. Instead of going with ARM or Intel, I guess the CPUs are using some less famous architecture called RISC-V.

Then I read the article, and the article is so full of buzzwords and genericisms that after reading the whole thing, I don't know if this guess is correct.


In order to move the read head inside your drive and to communicate with the host CPU, you need microprocessors in your hard drive, really tiny ones. Now, instead of paying ARM for licences to use them, WD is using open source processors that don't come with fees beyond what it takes to manufacture them.


This is the right answer...they already use ARM, they want a one-time-fee license instead of royalties. Trying to shave a little margin.


WD made it clear in their talk that this wasn't about saving on costs, but rather about having control over the innovation in the data space.

I don't see any reason to doubt them. It's an incredible risk to switch their entire company over to a still-growing ISA. But being able to design and modify any particular core as they see fit without having to talk to lawyers or negotiate a new contract... that's an incredible power.


RISC-V is certainly less encumbered, but there's always the ARM architecture license too. Pricey, but you can modify and build on a relatively more mature ISA.


An architectural license doesn't give you free rein to extend the ISA, just to create your own implementation of the standard ISA.


Is this confirmed though?

Were they using ARM? (Probably, but maybe not in everything they use.) Are they switching over from ARM? Or simply moving their in-house controller to RISC-V?

A one-time license fee is supposed to be a lot cheaper if you are shipping billions of units.

Because if they are switching from ARM, I think of it as ARM being lazy and not winning the battle they ought to win, purely from a business perspective.


See this: https://www.malwaretech.com/2015/04/hard-disk-firmware-hacki...

Not definitive for the whole product line, but at least evidence of one popular WD drive with a Marvell/ARM controller. Google "Western Digital Marvell" for more... like https://www.prnewswire.com/news-releases/marvell-achieves-si... (over one billion WD units with Marvell/ARM chips on board)


Western Digital's SanDisk SSDs almost exclusively use Marvell controllers, too.


[flagged]


Was I supposed to see the CSI Miami guy when I read that?


I know these comments aren’t really allowed but I laughed


Tiny microprocessors have been in almost all devices supporting DMA for a long time, right? So what do these RISC-V versions bring to the table?


> So what do these RISC-V versions bring to the table?

Not much, except being open, so WD can produce them without paying anyone royalties.


But in that case there would not be much of a point in making such an announcement, right? From a user perspective, I do not care at all what ISA the microcontroller inside the HDD/SSD uses, if it is not user accessible.

Unless they pass the savings on to their customers, that is. And even then, I am not so sure. Shaving a couple of cents off the price of a disk drive does not seem like a big deal to me.

Their stockholders might care, though.


Depends on what you care about, I suppose. I find it interesting, because I am interested in low-power processors.

This move, if it works well for WD, could lead to more attention being paid to a more open competitor to ARM, which would provide some competition and put downward pressure on ARM pricing. That, in turn, could have some potentially interesting second-order effects.

But yeah, if you only care about consumer prices and visible features, this is probably pretty boring stuff.


Mmmmh, now that I think of it: does WD have their own fabs, or do they buy their chips from other vendors?

And if it's the latter - would WD buying a couple of billion chips a year have any effect on prices?

And now that you mention it - a company like WD announcing they will use RISC-V in their disks means they are serious about this, which in turn might make it easier for others to consider RISC-V a serious option.

I am very excited about RISC-V in theory, but unless somebody builds a "Raspberry-V", so to speak, it will probably be a long time before I get to play with one of these. I also think a high-performance implementation of RISC-V could make for an attractive component of a desktop machine / workstation. Raptor's Talos II seems to be a sweet machine, but it is totally outside my budget. A less-high-end machine built around a RISC-V might change the equation.


> Western Digital plans to transition future core, processor, and controller development to the RISC-V architecture. The company currently consumes over one billion processor cores on an annual basis across its product portfolio. The transition will occur gradually and once completely transitioned, Western Digital expects to be shipping two billion RISC-V cores annually

I think that paragraph captures pretty well what they are doing: basically swapping out their current (proprietary) cores for RISC-V cores. I don't see any indication that the processors would be any more user accessible than current controllers. Considering the numbers presented, simply doubling the number of cores seems like a fairly conservative estimate; they will probably do that without any major paradigm shifts.


That said, it's not like they're user inaccessible either.

http://spritesmods.com/?art=hddhack


To be fair, it is strange that they broke out core, processor, and controller as separate items in that verbiage. They're probably referring to the application processors in My Cloud, external drives, and so on.


It doesn't seem so. I think they're just switching the internal processors that do LBA translation, error correction, bad-block marking etc. over to RISC-V. And then their marketing department took that decision and ran with it in a completely different direction.

The key line is "... transitioning its own consumption of processors – over one billion cores per year – to RISC-V."


...which is a message to investors meaning "we're dropping the cost of our product without dropping prices".

If that ARM license is half a dollar, a billion devices per year is a lot of profit.


Your point still stands, but it’s probably less than fifty cents unless they’re building something really high end.

Oldie but goodie: https://www.anandtech.com/show/7112/the-arm-diaries-part-1-h...


It would be surprising if prices didn't continue to drop.


Far more likely is that the SSD controllers that WD-SanDisk will create (that are the value-add difference between commodity NAND and good SSDs) will now use RISC-V cores. Samsung has a 5-core controller in its drives; I would guess that licensing costs are a pretty hefty chunk of the BOM for creating the controller.


>> Samsung has a 5-core controller in its drives

Samsung is developing their own RISC-V cores too...


Hard drives already have moderately powerful processors in them. So there's no reason to assume any change in feature set.


> I would guess WD is putting a user accessible CPU in each of their disk drives, idea being that if you have a CPU living close to the drive, then e.g. map+reduce workloads can be more efficiently executed.

I don't see how this would be better than our current systems architecture. Is the interconnect between the disk drive and the main CPU/memory really the bottleneck?

Even if the CPU lives inside the disk drive case, it would still be limited by the same read/write speeds as a CPU 20cm away.


I was wondering why they launched a huge recruitment campaign in Shenzhen for IC developers recently. I never saw them as big in chipmaking.

Now things are clear: they were hiring staff for that.


That would mean that the bandwidth from the hard drive is a large bottleneck, which I have a hard time believing is true.


AFAIK doing error correction on 4K blocks of data is fairly non-trivial. Using custom instructions may be a benefit here too.

https://www.seagate.com/tech-insights/advanced-format-4k-sec...
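
To make that concrete, here's a toy C sketch: a bit-at-a-time CRC-32 over a 4096-byte sector. This is not what drive firmware actually runs (real sector ECC is BCH/LDPC, usually in dedicated hardware), and the function name and usage are made up for the example; it just shows the per-byte GF(2) bit-twiddling that a custom instruction (say, a CRC step or a carry-less multiply) could collapse to a cycle or two per word.

    #include <stdint.h>
    #include <stddef.h>

    /* Toy sketch only: bit-at-a-time CRC-32 over one sector. */
    uint32_t crc32_sector(const uint8_t *sector, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= sector[i];
            for (int b = 0; b < 8; b++)   /* one bit per iteration: slow in software */
                crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
        }
        return ~crc;
    }

    /* usage: crc32_sector(buf, 4096) over one 4K Advanced Format sector */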


The parent post was talking about user accessible CPUs for running instructions closer to the data on the hard drive. Error correction being trivial or not, I don't think that is user facing software.


If RISC-V turns out to be a competitor to the ARM-based Raspberry Pi, that would be great.

If this initiative turns out to be a smart hard disk (HDD) that runs yet another full CPU with Minix, like the infamous Intel ME gate, then we don't need it. The world does not need another spy device, aka insecure hardware that has full access, yet is invisible to the users (= owners) of the device.


I guess this will be nice for industry, which may pass the savings along to the consumer, but as far as having auditable hardware that you have some control over, I don't see how this is any better than the ARM SoCs we already have--unless you're going to roll your own system on an FPGA.

That, and I'm kind of disappointed everyone has drunk the RISC kool-aid. I think a lot of RISC "performance" has more to do with compilers catering to the least common denominator than anything else. If you had a language/compiler that took better advantage of a stack architecture, or even a CISC architecture, the performance would probably be just as good if not better.

I was particularly impressed by Baker's old paper[0] on stack architectures in service of his Linear Lisp idea.

[0] http://home.pipeline.com/~hbaker1/ForthStack.html


RISC-V's benefit is mostly an open source license that is free of patents. I think the biggest reason for it is academic... there needs to be an open platform for academic research. I'm sure it is next to impossible for an average university to do that on the ARM or x86 architectures.


Were there any academic obstructions to OpenRISC or OpenSPARC?



RISC-V is just an ISA. I don't think ISAs can be patented, just some specific instructions.


Yes, they did the research for prior art to make sure that is the case for their instruction set.


ISAs can be patented. Intel is patenting their latest AVX/SSE instructions.


The ISA spec is copyrighted. Instructions aren't patented directly. Instead, the best ways of implementing them are patented.


Seems like Microsoft is potentially being targeted for writing an x86 emulator https://newsroom.intel.com/editorials/x86-approaching-40-sti...


> That, and I'm kind of disappointed everyone has drunk the RISC kool-aid.

Well, the thing is, RISC "won" the "RISC vs. CISC" wars, in the sense that more or less every ISA designed since has been RISC [1]. Of course, CISC also won in the sense that x86 is still around, and Intel is of course fabulously successful. So at least for high-end cores designed with a big budget, the extra decoder complexity doesn't appear to hurt that much. But if you're doing a new ISA from scratch, no need to repeat the mistakes of the past.

Now, one can always hope that something better comes around. I'm not particularly hopeful that stack machines would be it; Forth has been around for how many decades now, and if it were such a good idea I think it would have already made its breakthrough. But there's plenty of research-y stuff out there (I admit to not being very familiar with most of it). Such as asynchronous (clock-less) logic, non-binary (ternary) logic, Mill(?), reversible computing, dataflow architecture, neuromorphic computing, quantum computing, graph reduction machines, and whatnot.

[1] In the sense of

- Load-store architecture (a small sketch below)

- fixed-length instructions (yes, ARM thumb and RISC-V C slightly break this, but still)

- ISA designed primarily as a compiler target rather than for human ASM programmers.
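
To illustrate the load-store point from that list, here's a minimal hedged example of how *p += 1 might lower on each style of ISA. The mnemonics in the comments are illustrative, not exact encodings, and the function is made up for the example.

    /* CISC (x86-64, memory operand): one instruction reads, adds, and writes.
     *     add dword ptr [rdi], 1
     *
     * RISC (RISC-V, load-store): memory access and arithmetic are separate ops.
     *     lw   t0, 0(a0)
     *     addi t0, t0, 1
     *     sw   t0, 0(a0)
     */
    void bump(int *p) { *p += 1; }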


RISC won fully. Intel decodes its CISC into an internal RISC (micro-ops) in the hardware. And despite years and years of optimizations, they can't reduce their power requirements to ARM levels.


To be fair, Intel's old x86 ISA is kind of a mess, so microcoding everything back down to an internal RISC may have been the only way for them to even keep the thing manageable.

As CISC goes, 680x0 seemed a little saner to me, and it had more registers so you didn't have to go to memory as much. Back when I did MIPS programming, I remember getting more of a boost out of having more registers to work with than I did from the shorter instruction cycles.

So the variables are all very entangled... is the edge due to RISC? register count? 40 years of cruft (in Intel's case)? better compiler support? something else? I just feel like the whole thing deserves a little more investigation...


Like you said, ARM and x86, RISC and CISC, are the winners. I just wonder how much of that victory was due to circumstances of the time (like weak late-1980s compilers) and how much was due to clear technical superiority.


Well, if you're asking me, I'd say Intel is successful despite the technical shortcomings of the x86 ISA, not due to any technical advantages of it. Intel has the benefit of huge volumes (thanks to x86, yes), and they are very very good at chucking out big silicon wafers economically with low defect rates.

Thanks to those advantages, Intel can overcome the disadvantages of the ISA. Which aren't that big in their main markets, that is, relatively high-end cores.


> That, and I'm kind of disappointed everyone has drunk the RISC kool-aid.

Agree completely. One only has to look at the prominence (or lack thereof) of MIPS, the other "pure RISC" architecture, to see that it's not known for being anything other than cheap. Plenty of low-end Chinese routers, phones, tablets, and various Android-running devices use MIPS, and their performance (or once again, lack thereof) is notable. ARMs are, internally, much closer to x86 than MIPS or RISC-V.

That said, there's always a place for cheaper and simpler 32-bit cores in applications like HDD controllers, where high performance and efficiency are not primary goals.

> If you had a language/compiler that took better advantage of a stack architecture, or even a CISC architecture, the performance would probably be just as good if not better.

Stack architectures are pretty easy to generate code for and have a (slight) advantage with code density, but their memory access patterns are hard to optimise for, and they are even harder to make superscalar.

On the other hand, I think a "CISC-V" could become an interesting and possibly quite competitive alternative to x86.
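
To make both points concrete, here's a toy stack-machine interpreter in C; the opcodes are invented for the example, not any real stack ISA. Note how evaluating (2+3)*4 needs no register numbers in the instruction stream (the density advantage), while every op serializes through the top of stack (why superscalar issue is hard).

    #include <stdio.h>

    enum { PUSH, ADD, MUL, HALT };   /* hypothetical opcodes */

    int run(const int *prog)
    {
        int stack[16], sp = 0;
        for (;;) {
            switch (*prog++) {
            case PUSH: stack[sp++] = *prog++; break;            /* operand inline */
            case ADD:  sp--; stack[sp-1] += stack[sp]; break;   /* operands implicit */
            case MUL:  sp--; stack[sp-1] *= stack[sp]; break;
            case HALT: return stack[sp-1];
            }
        }
    }

    int main(void)
    {
        const int prog[] = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        printf("%d\n", run(prog));   /* (2+3)*4 = 20 */
        return 0;
    }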


> Plenty of low-end Chinese routers, phones, tablets, and various Android-running devices use MIPS

I don't think MIPS is common in phones or tablets anymore, those have moved on to the low end of ARM.

However, MIPS is still used in many networking devices and was, until recently, also used in set-top boxes (STB) like the kind you get from your cable provider.

> their performance (or once again, lack thereof) is notable

You don't need gobs of CPU performance in networking devices. All the layer 2 packet handling is done in hardware, and for layer 3 routing the MIPS core(s) are powerful enough to offer 100Mbit NAT performance, which is 99% of what home internet users need currently.

Most managed switches today are either using an updated version of the PowerPC 630 or some ancient and low-clocked ARM core. [1]

You don't need gigahertz CPUs in these devices because the CPU is only there to run the management OS (typically Linux) which then configures the switching/routing hardware.

> One only has to look at the prominence (or lack thereof) of MIPS

There are billions of MIPS devices out there. Most homes will have one in their WiFi router, and others in their STB. The only reason MIPS isn't "prominent" is because the products aren't advertised as containing a MIPS core, and people don't know it's MIPS.

[1] https://h50146.www5.hpe.com/products/networking/datasheet/4A...


I'd figure at least from the perspective of the JVM and the CLR, you'd have less of an impedance mismatch on a stack machine. And with most compiled languages being conceptually very stack-y, I doubt it would hurt there either.

That, and you could save on instruction bandwidth since the operands could be implicit stack offsets instead of having to be specified in the instruction. (I believe Moore's GreenArray chips packed four instructions to the word.)

It might be a dead-end, or there could be some serious potential that is just being overlooked. Everyone is so accustomed to register machines (CISC or RISC) these days that it may be a while before the idea is reevaluated.

edit: Sorry! Just read this back, and realized I just repeated what you wrote in different words.


I believe the "we just need a better compiler" thing has been tried with e.g. Itanium and turned out to be harder than expected.


Itanium had many things that needed sorting, and it would have taken one hell of a compiler to sort it all.

The RISC "victory" was called in the early 90s, and it was mostly benchmarked off of late 1980s compilers that were targeted to least-common-denominator register machines, so most fancy CISC instructions were never emitted, and stack architectures were barely even a consideration.

Even on RISC-to-RISC comparisons, having a compiler that caters to your specific ISA makes a huge difference. So, if it was a victory, I wouldn't call it a clear one.


I think anyone who tries to do a different architecture will have to put forward huge R+D themselves for proper code generation. Likely in the form of an LLVM backend, a WebAssembly JIT, maybe a JVM, as well as on-chip hardware techniques that at least fill the same role as out-of-order execution.

I can't think of any hardware architecture that focused on super efficient execution and let the instruction generation chips fall where they may. Maybe the Cell in the PS3. Every other successful chip has seemed to try to deal with whatever instructions it is given as best it can.


I haven't been following the RISC-V story too closely, possibly because I didn't want to get my hopes up only to be dashed. From the article, it sounds like these cores will be developed solely for use in data storage. Can someone with more knowledge tell whether this will help provide the kind of production volume needed to make consumer products (like laptops and desktops) more likely to be viable? Are general purpose chips likely to be one result of the development of RISC-V, or have I missed something fundamental?


Probably we won't be seeing RISC-V application processors for quite a while. There's a lot of stuff that can just be recompiled but there's also a lot of hand-tuned assembly that goes into making a JIT or media codec fast. That's why we're seeing initial adoption in the embedded space, where either there's just a small amount of code to recompile or you were going to rewrite the assembly anyways for the next product.

In the long run using RISC-V in a laptop is a possibility. And there might be some limited production $2000 500MHz FOSS laptop soonish. But in 15 years, say, I could see RISC-V being where ARM is now.



The FreeBSD port has been in the works since 2016...

[0] https://wiki.freebsd.org/riscv


In the context of JIT, there is ongoing work to port at least OpenJDK and JikesRVM.


> whether this will help provide the kind of production volume needed to make consumer products (like laptops and desktops) more likely to be viable?

It probably won't, despite a lot of wishful thinking to the contrary.

> Are general purpose chips likely to be one result of the development of RISC-V, or have I missed something fundamental?

Out of the whole RISC-V ecosystem it looks like only SiFive is working on that, so it will take time.


> Out of the whole RISC-V ecosystem it looks like only SiFive is working on that, so it will take time.

If you look at Qualcomm's strategy with x86 competition (using Dynamic Binary Translation), it's not hard to imagine that they might consider building RISC-V application processors; especially once they've proven their ability to deliver enough compatibility and performance with DBT to compete on ISAs for which their device is not licensed (and especially if they are sued by Intel and win, one of those things where you'd jump for joy if you saw a C&D in the mail).


Can Qualcomm do that? Tegra K1 from nVidia was supposed to be x86 in a similar way. They couldn't get a license so it became an ARM core.


> Can Qualcomm do that? Tegra K1 from nVidia was supposed to be x86 in a similar way. They couldn't get a license so it became an ARM core.

NVIDIA bought Transmeta's technology; Transmeta basically proved (by being sued into insolvency) that they couldn't compete on (up-to-date, still under patent) x86 with hardware or whole-system software DBT, for licensing reasons. NVIDIA's K1/Denver products are still very similar to Transmeta architectures, but for ARMv8 instead of x86 (or, more interestingly, AMD64), and in this case they have an architectural license.

What Qualcomm is doing is different. Qualcomm is doing software-only Dynamic Binary Translation, and they're doing it on a per-application basis (similar to the Mac 68K emulator, Rosetta, WOW64, or QEMU user mode).
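
For the flavor of what per-application DBT involves, here's a heavily simplified C sketch in the spirit of QEMU user mode: a guest basic block is decoded once into a cached trace of host callbacks, and the trace is replayed on later executions without re-decoding. The mini guest ISA is invented for illustration; a real translator emits native host machine code rather than function pointers.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { int64_t reg[4]; } Cpu;   /* guest register file */
    typedef struct { void (*fn)(Cpu *, int, int64_t); int dst; int64_t arg; } HostOp;

    static void host_movi(Cpu *c, int d, int64_t v) { c->reg[d] = v; }
    static void host_addi(Cpu *c, int d, int64_t v) { c->reg[d] += v; }

    enum { G_MOVI, G_ADDI, G_HALT };          /* invented guest ISA */
    typedef struct { int op, dst; int64_t imm; } GuestInsn;

    /* Decode a guest block into host callbacks: done once, then cached. */
    static int translate(const GuestInsn *g, HostOp *t)
    {
        int n = 0;
        for (; g[n].op != G_HALT; n++) {
            t[n].fn  = (g[n].op == G_MOVI) ? host_movi : host_addi;
            t[n].dst = g[n].dst;
            t[n].arg = g[n].imm;
        }
        return n;
    }

    int main(void)
    {
        const GuestInsn block[] = { {G_MOVI, 0, 40}, {G_ADDI, 0, 2}, {G_HALT, 0, 0} };
        HostOp cache[8];
        int n = translate(block, cache);      /* translate once... */

        Cpu cpu = { {0} };
        for (int run = 0; run < 3; run++)     /* ...replay the cached trace */
            for (int i = 0; i < n; i++)
                cache[i].fn(&cpu, cache[i].dst, cache[i].arg);

        printf("reg0 = %lld\n", (long long)cpu.reg[0]);   /* prints reg0 = 42 */
        return 0;
    }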


Esperanto Technologies is working on high performance general purpose RISC-V processors.


The announcement does say "we are providing all of our RISC-V logic work to the community." Whatever that means.


It means essentially “open source,” it’s just that this time the source code is in a hardware definition language.


It helps that the ISA is supported in more places, even if for awareness alone. Compare where ARM was 10 years ago, where it was 5 years ago, and now we're discussing having competitive alternatives to Intel and AMD in servers.

As new developments seem to happen at an accelerated pace, RISC-V should also see more accelerated adoption. It won't take 30 years to get to where ARM is today. Maybe only 10, or less.


10 years ago was when the iPhone came out and ARM was already fairly established; better to look at Intel’s StrongARM acquisition almost 20 years ago, when ARM’s future was much more in doubt.


You should read up on SiFive.


Specifically their Freedom products [0], which are multicore, 1GHz+ CPUs, with support for standard interfaces like PCIe 3.0, USB 3.0, GbE, DDR3/4.... and ship with Linux support.

Not about to disrupt Intel, AMD, or ARM in the laptop/desktop/server space just yet, but relatively high performance, modern RISC-V SoCs are definitely out there.

[0] https://www.sifive.com/products/freedom/


> and ship with Linux support

Just to point out that they are not actually shipping that fancy HW yet, with or without Linux support. It might materialize one day, but that day is not today.


According to the workshop summary, they plan to ship a Linux capable dev board with hard U54 silicon in Q1 of 2018.

http://www.lowrisc.org/blog/2017/11/seventh-risc-v-workshop-...


They do have a way to set up FPGAs for it though, so it's clearly not entirely vaporware.


I think the level of industry enthusiasm for RISC-V is so palpable, in part, because the messaging from day one has been unequivocally: RISC-V will be the standard ISA for every form factor, in every market.

Can't wait to put a RISC-V SBC in my ThinkPad X220 chassis. :- )


I'm so excited that we're feasibly within a year or two of being able to develop embedded devices in Rust[0] on RISC-V microcontrollers [1] running on open source RTOSes also written in Rust[2]. It's currently already possible but still requires quite a bit of hacking. Plus the RF stacks (Bluetooth in particular) aren't there yet. What a time to be a developer. PS RISC-V on my X220 wouldn't hurt either.

[0] http://blog.japaric.io/quickstart/

[1] https://www.sifive.com/products/hifive1/

[2] https://www.tockos.org/


> I'm so excited that we're feasibly within a year or two of being able to develop embedded devices in Rust[0] on RISC-V microcontrollers

I'm happy to report that the future is yesterday![0][1] (sort of)

> Plus the RF stacks (Bluetooth in particular) aren't there yet.

Espressif is a RISC-V foundation member, and you know what that means. (hint: it rhymes with could-pie den-sill-hiccup)† :- )

[0]: https://abopen.com/news/rust-comes-risc-v/

[1]: https://github.com/dvc94ch/hifive

† «Goodbye Tensilica» (sorry Tensilica, I have nothing against you!)


Very nice. I actually hadn't realized Tensilica wasn't an Espressif technology. It would definitely be sweet if they switched to RISC-V.


Sadly I doubt that anyone will make a RISC-V motherboard with PCIe slots, or in any other way supply hardware that will make it easy to build a RISC-V workstation. We aren't seeing any ARM workstations, and PowerPC boards aren't exactly affordable.

The market for these processors isn't workstations or laptops; there simply aren't enough of us willing to buy them.


> Sadly I doubt that anyone will make a RISC-V motherboard with PCIe slots

The SiFive Freedom U500 platform (already available to integrators, AFAIK) has a PCIe 3.0 bus, it would be natural to have a PCIe slot (or a couple, if they have the lanes for it) on the dev board.


> a RISC-V SBC in my ThinkPad X220 chassis

afk, changing pants


Well, what I find interesting here is whether they are able to manufacture chips with 7nm lithography. Aren't Intel and Samsung, who I believe to be the leaders in the field, still at 14nm? If WD manages to beat them to market with this technology, at least there will be a lot of hype around it.

As far as binary compatibility is concerned, this is more or less a thing of the past. The way I see it, JS and other interpreted languages provide the most vibrant ecosystem at the moment, with package management running not only across kernels but also across distributions.

Before this, developers had already gotten really accustomed to compiling into platform-independent bytecode. And even Windows, which in comparison with Linux has been ported to few platforms, is by no means impossible to move to a new instruction set, as has been demonstrated multiple times. Even C itself was developed to make few assumptions with regard to the metal. If you please, excuse me for reiterating facts well known to the average HN reader.

Furthermore, developers are more or less required to work with open source software, as it makes debugging and drawing on other developers' experience easier, so the probability that you will be stuck with CPU-specific binaries of any given program is slim.

Now the problem is "only" to find programmers capable of programming 4096 CPU cores to operate well simultaneously, in a world where it's completely accepted and normal for a text editor to eat up hundreds of megabytes of RAM displaying the source code for a hello world program. Also, for this to truly make a dent, the development has to span all the way from the metal through the kernel to the actual application.

Unfortunately I am afraid that the open license will mean very little from a freedom point of view, as they take this route for the pragmatic reasons briefly mentioned above, not to make an ideological stand. Nonetheless it's a step forward, so I'll try to suppress my cynicism.


"Well what I find interesting here is if they are able to manufacture chips with 7nm litography. "

They're going to be using TSMC "7nm". At this point, nm node names from anyone mean absolutely nothing. Just know that TSMC "7nm" ~= Intel "10nm".


Aha, thank you for clearing that up.


My counterexample to this would be CUDA. It is so much more successful than OpenCL (for many reasons), and so much carefully tuned library code and so many dev tools exist for CUDA, that choosing other options is only done for mobile platforms, where a duplicate port is required.

It is conceivable that a company like WD could implement a linux BSP and pay for ports/tuning of high level tools, but it would be a significant task.

The performance analysis of synchronous systems like MPI and map reduce over 4k cores is relatively obvious, but for next generation data intensive tasks and asynchronous compute it isn't.


So WD is switching their hardware to in-house designed processors after purchasing a RISC-V developer, do I understand this press release correctly?


WD had already been a RISC-V foundation member before their involvement with Esperanto Technologies (who has also been a member for a while). I suspect they saw Esperanto's portfolio and team after meeting at one of the workshops, and bought into it because of preexisting interest in RISC-V.

Just to be clear, I think they bought into Esperanto, but I don't think they acquired it. Much public communication implies that Esperanto Technologies is still generally autonomous[0][1].

[0]: https://twitter.com/rickbmerritt/status/935600820300713985

[1]: https://twitter.com/EsperantoTech/status/935598028773138432



Talk about buzzword bingo.


It's almost a parody.


I haven't dug through all the marketing speak yet, but this seems tangentially related to WD's He8 converged servers they've been sampling, which were ARM-based and ran Debian Jessie. [0] Although when I saw them spoken about at Red Hat Summit, of course, they were mooted to be running RHEL. It would be interesting to see if WD Labs is now sampling RISC-V-based boards running Red Hat and Ceph OSD software like the He8 did.

I found the whole concept of an on-board PCB with dual gigabit Ethernet ports fascinating, and I believe there's a second generation with faster network speeds. Unfortunately WD never seems to have gone mainstream with it.

[0] http://ceph.com/geen-categorie/500-osd-ceph-cluster/


Language used in the article - edge computing and fast data - suggests WD is going after potential new markets indicated in the a16z presentation [1].

The He8 concept seems to make sense in that context. So it would not be surprising to see RISC-V extended to better support this new category: call it locally-edge-attached-fast-storage.

[1] https://a16z.com/2016/12/16/the-end-of-cloud-computing/


I am similarly disappointed that this didn't go mainstream. At Cumulus we did a follow-up experiment to the one in your link, with the WD Labs folks, using Cumulus Linux switches.

The idea was that you'd run the Ceph monitors on the switches, and the OSDs on the hard drives, and you'd have an entire storage array with no servers needed. Was very neat, but pointless unless you can buy the drives...


WD could easily take on Intel and AMD for datacenter workloads if computation starts moving to the drives. Drives have mass, causing more Data Gravity. Data Gravity is literally money.


I like the idea behind RISC-V's open architecture, but I have a question. Does it do, or even try to do, anything about the cloud of uncertainty surrounding Intel ME and the AMD equivalent in x86?


RISC-V is not an architecture, it is just an instruction set. People still need to design the architecture for an implementation of RISC-V.

Intel ME is a co-processor that runs at the same time as the main processor, it is not related to the instruction set.

To answer your question: Intel ME is a problem that happens at a higher level than the document that defines the instruction set.


There are "instruction set architecture" and "microarchitecture" (=specific implementation of an ISA). ISA is the one more commonly referred to as just architecture, I think.


I don't think it necessarily addresses that concern, but Machine mode might be the right place to address the concerns which ME addresses on a RISC-V machine; which means that it's at least more likely that it'll be programmable on whatever machine you're looking at.

Just a thought though, it could really go any way. The nice thing though, is that you'll have more than two vendors, which means that the niche of PSP/ME paranoids may be large enough to address for a smaller designer, or through a limited run of licensed design (like SiFive's).


I'm one of the paranoids. This is one of the main things I'm hoping for. Is there any indication of what price point such chips might be offered at? I have no idea how much it costs to set up fabrication, etc., but it seems like ARM chips have done very well.


How long is a piece of string?

It depends entirely on features and performance. Intel and ARM chips, for example, have a 10x range of cost.


ARM chips have a much higher production volume so they are going to be much cheaper than RISC-V chips, at least for the foreseeable future.


That depends on the manufacturer. There's no reason such a thing has to be there.

A lot of people are hoping that someone will put in the work to bring the architecture up to speeds competitive with i7 and mass-manufacture it, despite being legally cloneable.


I'm not much of a RISC-V guy, but using the arch doesn't require you to open up your design (the arch is basically BSD). So, someone who puts the effort into making a super fast risc-v core will still have an advantage over most everyone else as the effort required to create a fast core is a lot different from the effort required to just get something that works.

So the devices can't just be "cloned" without the RTL etc. for the design, and even if someone got some masks or the RTL via an illicit source, it would still be copyrighted enough to keep them from selling the clones.

Of course, if "clone" means you spend hundreds of millions of dollars building your own competitive core, then yes, that is still allowed.


There is a security working group. If you are concerned about this I highly recommend joining the foundation as an individual (or getting your employer to join), and asking to be put on that WG.

It’s a critically important question and it requires active grassroots involvement to make sure that we don’t end up with a mere clone of ME, or worse a “better ME.”


Does anyone know the state of the processor that they are proposing to ship? Has it already gone to GDS?

Or is this simply at the proposal stage right now (meaning the hardware is still RTL or a netlist)?


This is so exciting, can’t wait for all the open source contributions to RISC-V WD will make


It is an interesting play. I didn't think WD would be all that relevant, but... I figured that with an open ISA the CPU would no longer be a significant piece of IP, and competition and leadership would move to those with graphics IP. OTOH, if tightly coupled CPU and storage were to become a big thing, then storage companies would be the big winners (maybe just in some markets). But the fact that they talk about contributing back is interesting. If companies with GFX or storage IP want their specialty to become the real value, they will benefit by contributing high quality IP to the open CPU movement, thereby destroying the market for the old guard (CPU makers).


A primer on the RISC vs CISC debate that helped me understand why this matters: http://cs.stanford.edu/people/eroberts/courses/soco/projects...


everybody makes processors!


"RISC" is a terrible name for anything related to computing


Advanced RISC Machines (ARM) doesn't seem to have suffered from the association.


I have not been following this, what are the advantages?


For you as an end-user? Nothing. For WD? They escape paying ARM licensing fees on every drive. They'll see an extra couple points of margin on every hard drive they sell.


Smaller designs, easier to license designs, simpler and more attractive ISA extension mechanisms, no royalties, no license negotiation periods, no incremental cost to adding more cores of different designs.


It's a big deal for RISC-V enthusiasts, who are delighted to see it in consumer products.


This is exactly what I predicted one year ago. The Risk-V ISA is coming for all of them: x86, ARM, MIPS.

And this is a very smart move by WD, jumping on the Risk-V wagon.

Update: Why do people downvote? I honestly don’t understand.


Even though I see no reason to vote you down, there are a lot of reasons not to vote you up.

The message from WD is very good news for RISC-V. But to make the claim that it overthrows all other architectures from the throne is more than a bit daring. With this logic, ARM should have crashed Intel a long time ago. There will always be a market for different architectures.

You have also misspelled RISC-V, indicating that you are not really aware of the market and architecture.


Did you even read my comment?

Where did I claim this?

> But to make the claim that it overthrows all other architectures from the throne is more than a bit daring

It is always fascinating how much people extrapolate when they want to believe something. "It is going to overthrow" and "it has overthrown" are two quite different things, if you can think critically.

> You have also misspelled RISC-V, indicating that you are not really aware of the market and architecture

Again, this is just like your other analysis, which is based on flawed logic and on not being intelligent enough.

Rest assured I have written enough Chisel, and I would bet I am more familiar with the internals of most architectures than most people in this topic (since my grad school work is focused on outputting Chisel via LLVM).

One extra lesson for you: don't extrapolate and judge based on appearance. Look at what they are saying deep down.

And don't base your judgement on spelling, particularly in an unofficial context. Some people only have time to comment when they are on the bus or something.


I have tried to explain to you why your message might have been downvoted. Nothing else. If I interpret your message that way, others will do the same.


I know (and I upvoted both of your comments).

No hard feelings ;) But I have a right to defend my comment.


Free tip: If you don't want to be downvoted for your comment (#3) defending your comment (#1), don't say in comment #3 about the person who wrote comment #2 that their analysis was based on not being intelligent enough. That's a personal attack, and is absolutely downvote-worthy.


Yes, this mod behavior is pretty astonishing. microcolonel's comment is getting downvoted as well for no apparent reason.

I guess there are RISC-V haters...? Good grief.


Yeah, exactly. I would say this is a very informative comment, but it sadly gets downvoted:

>>Smaller designs, easier to license designs, simpler and more attractive ISA extension mechanisms, no royalties, no license negotiation periods, no incremental cost to adding more cores of different designs.


>> Why do people downvote?

Because they can't handle the fact that you're right. RISC-V is coming for all of them. I like your phrasing too; it's accurate, but I guess people think it's pretentious. Only time will tell for sure.


There's a lot of wishful thinking involved in this. It's like Linux on the desktop: doable but very much a fringe thing.


>> There's a lot of wishful thinking involved in this.

Yeah I agree, but the list of giant companies involved in the wishing is what makes it seem like more than a pipe dream. Just think how much revenue ARM will lose when WD, nVidia, Samsung and others all switch to RISC-V in their embedded devices.


Why is it any more fringe than, say, hoping for an ARM laptop?


ARM laptops already exist, BTW, using smartphone SoCs. The R&D has already been done and paid for.

A RISC-V server or desktop processor would have to be created essentially from scratch.


>> A RISC-V server or desktop processor would have to be created essentially from scratch.

I'd love to see AMD or Intel build a chip on the RISC-V instruction set and use all their existing infrastructure around that. I would not be surprised if they could achieve higher benchmark performance than their x86 offerings. For someone else to achieve the same level of performance will take a while, but there are multiple groups working on it.


Because ARM laptops have been shipping for years, while there isn't even a single RISC-V SoC out there that could even hypothetically be used in a laptop.


There are a few ARM based Chromebooks. In many cases, it isn't too hard to put regular Linux on them.


I remember RISCs back in the late 80s/early 90s. CISCs bullied them away and we've been stuck in Intel's quagmire ever since. Anytime there's an attack on the status quo, the established players feign concern, beat back the attack, then return to the way things were (remember Negroponte's $100 laptop and the netbook response?)

No idea how this will pan out.


It wasn't that CISC won or that RISC lost, it was that the architectures got so blurry you couldn't tell one from the other. There's so much microcode in a CPU now that the instruction set is just the icing layer on the cake. Internally there's surprising amounts of commonality between PowerPC, ARM and x86 type chips.

Plus PowerPC started to adopt CISC-like instructions, x86-64 started to adopt RISC-like features such as a multitude of general-purpose registers, and here we are where nobody cares about the distinction.

Don't forget that while Intel won in certain markets, like notebooks, desktops and servers, it's absolutely, utterly irrelevant in other places that ship far, far more CPUs. A typical car may have as many as one hundred CPUs of various types, typically at least fifty, many of them PowerPC for power and legacy reasons. Your phone is probably ARM. Remote controls. Routers. Switches. Refrigerators. Thermostats. Televisions and displays. Hard drives. Keyboards and mice. Basically anything that needs some kind of compute capability probably has a non-Intel processor.

If there's a quagmire we're stuck in it's that we're surrounded by thousands of devices that are likely full of vulnerabilities that can never, will ever be fixed.


Actually, most real RISC CPUs have no microcode, and if they do, it's really just the same instruction set running out of an exception handler, not hardwired stuff on some other lower-level private ISA.


Is PowerPC still considered RISC? That instruction set has evolved considerably from the 601 days.

What is a "real" RISC CPU? By what definition?


Well, there's lots of definitions - I'd include anything that generally has:

- single cycle ops

- easy to decode ops (fixed size)

- load/store architecture

- lots of registers to reduce pressure on memory


> and here we are where nobody cares about the distinction.

Except, you know, the people who created RISC-V. They specifically named it RISC-V to reiterate the point they were making about the advantages of RISC.

It's a literal statement to anybody, saying 'You're doing it wrong; RISC is better, so again: RISC-V, please use it'.


After microcoding, this is all silly. What matters is how efficiently you can encode and communicate the μ-ops to the ROB. RISC-V, with the C extension (and using only today's nascent compiler backends!), has more-or-less the same μ-op density as x86-64 (with a good order of magnitude or two less complexity in the decoder), and considerably better density than AArch64, which completely lacks reduced-width instructions.

It's not that CISC won, it's that CISC (eventually) didn't lose to any great degree.


x86s were about the most riscy of the CISC processors - 99.9% of instructions that access memory perform an access to a single address; no double-indirect accesses, no memory-to-memory moves, no 21 TLB misses on a single instruction (meaning a program might have to have enough memory to hold all 21 page table pages and the underlying data pages (42 pages) to make progress) - that sort of thing.

The CISC->RISC shift largely happened because the ratio of CPU speed to memory speed changed: low-end CPUs got caches, the caches moved on-chip, and instruction decoding started to be an issue. The x86s were riscy enough that they survived that change.


> CISCs bullied them away and we've been stuck in Intel's quagmire ever since

You know that ARM means Advanced RISC Machine, right?


In the early days.

ARMv8-A 64 has gotten a bit CISC.


RISC-V is pretty far from attacking Intel anywhere. ARM is the one that should be both worried about RISC-V and simultaneously a cause of worry for Intel.


Well, modern x86 "CISC" implementations are basically RISC internally with a translation layer on top of it.


I don't think this matters, as long as the internals are completely inaccessible to a programmer. In other words, what happens inside is not what is usually called "architecture" (which is part of the definition of what RISC is).


From my simplistic understanding, even in the RISC programming class I took in 2001: RISC became a CISC one instruction at a time.



