The next Raptor OpenPOWER systems are coming, but they won't be Power10 (talospace.com)
142 points by zdw on Oct 21, 2023 | 111 comments



>Raptor's newest systems are planned for late 2024.

Damn it, I got excited and then saw that line. But it is nice we have a new entry in the OpenPOWER world. I could at least extend the POWER and FreeBSD combination for a little longer.

I am also wondering who the backer of Solid Silicon is. As much as I love the idea, I would imagine it is a very hard sell to most VCs. (But then VCs are known to invest in Cloud Kitchen and FTX saving the world, so I guess I shouldn't be surprised.)


I can't seem to find any information on Solid Silicon, the company making the CPU.

Anyone have a link or more details on this company? The website I can find for Solid Silicon [1] is delightfully devoid of details.

[1] https://www.solidsilicon.com


Looks like they're a new startup. Their previous product, the X1, was/is a collaboration with Lattice. Looks like this S1 will be another collab.


WHOIS points to Raptor, so I'd say it's them under a different name.


(author) Raptor is probably just hosting them on their own cloud platform. My conversations with Raptor indicate it's a different company. I've got an inquiry out to chat with them and it's been passed along.



Okay, I know I'm "that guy" who just has to bring RISC-V into any discussion: why hasn't there been more adoption/movement of OpenPOWER or OpenSPARC? I assume they have extremely solid toolchains available, the feature sets were obviously mature enough for high-end computing, and they've been 64-bit for a long time.

I thought the UltraSPARC T1/Niagara was a really amazing concept, if maybe a few years too early (and definitely in the wrong hands after Sun was extinguished), so I'm just scratching my head over why everyone's so gung-ho about starting a brand new architecture from scratch - especially one that seems more suited for ultra-low-power microcontrollers. OpenPOWER was founded 10 years ago, OpenSPARC is even older, and the T1 code was released in 2006 - so there was time to adopt them as well.

Is the existing ISA too hard to get into lower-power chips? Are there weird patent/licensing issues that make them less "open" than they seem? Could I please get a modern, high-end SPARC workstation?


>Why hasn't there been more adoption/movement of OpenPOWER or OpenSPARC?

For the same reasons RISC-V had to be created. This is documented in the paper Instruction Sets Should Be Free: The Case For RISC-V[0].

0. https://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-...


Ah, it explicitly calls out OpenPOWER:

> Even “OpenPOWER” is an oxymoron; you must pay IBM to use its ISA.

I guess that is also the issue with Power10 here that causes Raptor to go its own way. For SPARC, it mentions the 32-bit SPARC V8 and says that 64-bit SPARC V9 is proprietary, which seems weird given that the T1/T2 are based on SPARC V9, but I guess that even though the CPU core is licensed under the GPL, the ISA itself is still encumbered?

Anyway, that paper is a great read, thanks!

It remains to be seen whether this statement will age like milk or like fine wine:

> RISC-V is also 10 to 20 years younger, so we had the chance to learn from and fix the mistakes of previous RISC ISAs

In software, the desire to throw away everything old and start brand new is often a bad idea, because a lot of the janky stuff exists for a reason, often not a very obvious one. But I have no idea how much of the baggage old ISAs carry can truly be discarded; achieving high performance is often a messy affair.


> I guess that is also the issue with Power10 here that causes Raptor to go its own way.

I thought avoiding POWER10 was the result of its firmware blobs. [0][1]

"While we applaud the overall extent of source code available for the #POWER10 firmware stack, two key P10-specific firmware components remain closed source at this time. The first is the off-chip OMI DRAM bridge, and the second is the on-chip PPE I/O processor" [2]

[0] https://www.devever.net/~hl/omi

[1] https://www.talospace.com/2021/09/its-not-just-omi-thats-tro...

[2] https://twitter.com/RaptorCompSys/status/1435510763402244105


Have you looked at the T1 code? It's far from obvious how you would use it as a starting point for a new chip. You need a lot of expertise for custom memory blocks etc., and there aren't any tests included, if I remember correctly.


No, I'm on the software side of things, so I mainly see what compilers/toolchains I have to work with and what performance I get, but I have no idea about the nitty-gritty of chip design beyond surface knowledge.


> I assume that they have extremely solid toolchains available, and the feature sets were obviously mature enough for high end computing, and they've been 64-Bit for a long time.

Because it is a design that belongs to yesterday, initially built by people who came from a completely different age and era. Why bother fixing/improving a failed design when there are widely adopted, successful alternatives?

> I thought that the UltraSPARC T1/Niagara was a really amazing concept

The T1 is dead slow on floating point; the situation is so horrible that it amounts to fraud, period. I mean, why would anyone in their right mind expect their fancy five-figure purchase to be so slow on floating point?

> if maybe a few years too early

Or maybe it is a totally failed product that shouldn't have been designed in the first place.

> I'm just scratching my head why everyone's so gung-ho at starting a brand new architecture from scratch

Because people don't want closed architectures owned by closed companies run by a group of morons from the PR and sales departments?

Maybe because people don't want to be victims of the T1's fraud again? Honestly, you are kidding yourself if you consider the T1 amazing. For me, it was a heartbreaking experience once the horrible floating point performance was verified.

> so there was time to adopt them as well.

Again, people don't want to be victims of the T1 floating point performance drama.

You hit people hard once, they see your true nature and walk away; that is not very hard to understand, right?


> The T1 is dead slow on floating point; the situation is so horrible that it amounts to fraud, period. I mean, why would anyone in their right mind expect their fancy five-figure purchase to be so slow on floating point?

It was designed for the heavily parallel web workloads of the 2000s, which don't use floating point, hence the multiple threads per FPU.


> It was designed for the heavily parallel web workloads of the 2000s

No. According to its official datasheet, "this processor targets commercial applications such as application servers and database servers".

I thought floating point is an essential part of databases.

That being said, given it was a fraud from the beginning, I can accept that the official T1 datasheet intentionally misled its potential buyers.

> which don't use floating point, hence the multiple threads per FPU.

Are you implying that it was a mistake to implement more FPUs in the subsequent T2?

There are certain shortcuts you don't take in engineering. When building a general-purpose processor aimed at application and database servers, you don't stupidly put a single FPU in the entire processor while letting your big-mouthed PR department talk about how great the throughput is.


> I thought floating point is an essential part of databases.

Have you ever implemented a database? Most operations (filtering, joining, indexing) are purely integer workloads with large working sets that are likely to stall the processor while waiting for data.

That's exactly the scenario that processor design targeted: many threads per core, such that the cores are always busy even if some of the threads are stalled on memory accesses.

It wasn't meant for single threaded workloads or HPC workloads, and afaik wasn't marketed for those segments.
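
To make that concrete, here is a minimal, illustrative C sketch (not taken from any real database engine) of a hash-index probe. The hot loop is integer hashing, pointer chasing and integer compares; the chained loads are exactly the kind of cache-missing work that extra hardware threads can hide, and there is no floating point anywhere:

    /* Illustrative sketch only: a chained hash-table probe, the core of a
     * hash join or index lookup.  Everything here is integer/pointer work. */
    #include <stdint.h>
    #include <stddef.h>

    struct bucket {
        uint64_t key;         /* join/filter key (an integer) */
        uint64_t row_id;      /* payload: where the matching row lives */
        struct bucket *next;  /* collision chain */
    };

    /* Probe one key against a table of nbuckets chains; returns row_id or -1. */
    static int64_t probe(struct bucket **table, size_t nbuckets, uint64_t key)
    {
        size_t slot = key % nbuckets;                          /* integer hash */
        for (struct bucket *b = table[slot]; b; b = b->next) { /* pointer chase */
            if (b->key == key)                                 /* integer compare */
                return (int64_t)b->row_id;
        }
        return -1; /* not found */
    }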


> Have you ever implemented a database?

Yes, multiple times, in both American companies and in Chinese startups.

> Most operations (filtering, joining, indexing) are purely integer workloads with large working sets that are likely to stall the processor while waiting for data.

That is not a justification for having only one FPU. Let's use logic here; it is not that hard.

> That's exactly the scenario that was targeted with that processor design, many threads per core such that the cores are always busy even if some of the threads are stalled on memory accesses.

Again, that doesn't justify the stupidity of having just one FPU.

> It wasn't meant for single threaded workloads or HPC workloads, and afaik wasn't marketed for those segments.

I am now totally confused. You are defending the stupid T1 design, which has only one FPU for an entire processor capable of running 32 threads in parallel, yet your argument is that such a design is meant for multi-threaded workloads? I thought you needed multiple FPUs to truly parallelize those multi-threaded workloads.


As jfim wrote (emphasis mine): "It was designed for the heavily parallel web workloads of the 2000s". At that time, this mostly meant integer workloads. For today's workloads, even in this particular domain, having only one FPU for 32 threads is likely a bad idea, but at that time it was a perfectly acceptable compromise for this kind of application. These were simply different times.


> I thought floating point is an essential part of databases.

It's not. It's not even particularly necessary for 95% of DB Ops. I have no idea where you even gathered that thought.

This drastically misinformed aspect just makes the rest of your argument seem equally misinformed.


> It's not even particularly necessary for 95% of DB Ops.

Let's say I have databases full of sensor data that need to be processed by some fancy SQL scripts to crunch those numbers. Tell me how I am supposed to get that done without using floating point.

With only one FPU in mind, tell me how to parallelize such processing and fully utilize all those cores.


Picking an example from the remaining 5% does not contradict the parent's point.


The point here is that cutting the number of FPUs to just one was pure stupidity on display; the short-lived history of such a design is proof of that.


> thought floating point is an essential part of databases

To add to the other replies to this, the number of areas where CPU floating point performance matters is shrinking. GPUs are where it's at nowadays. Also, T1 was never going to be a gaming machine or a supercomputer node. Whoever thought otherwise didn't read the actual specs, which were very clear back then.


> I thought floating point is an essential part of databases.

Good heavens, no. Why on earth would you even think that, have you any experience with databases?


> because it is a design that belong to yesterday

Which parts of the POWER architecture design belong to yesterday? Specific examples would be great.


The so-called POWER architecture covers something like 35 years of development; the latest iteration probably implements some of the latest stuff in the area, as it is an experimental platform for researchers.

It is still a design of yesterday because you don't get access to a tier-1 software ecosystem. It is a modernized VAX for hobbyists.


Still no specific examples, just handwaving?

SPARC, Intel i960, AMD29k, ARM, RISC-V and many others are direct descendants of the Berkeley RISC design.

Using the number of years as a measure of technological relevance, it can be argued that ARM and RISC-V designs are even older than 35 years. It is pure sophism, though, and it does not help.

By the same token, all modern computing architectures are fossilised designs because they are based on the von Neumann architecture from 1945.


Just want to say thank you. I could have typed that out and didn't. ( See my reply below )

My faith in HN discussions continues to fall. There are many topics I keep thinking maybe I shouldn't even click on. But it is good to know there are people who continue to contribute in a positive manner (at least more positive than I am).


> Using the number of years as a measure of technological relevance, it can be argued that ARM and RISC-V designs are even older than 35 years.

I don't know what you are talking about.

x86/ARM are highly successful; being 30-40 years old proves that they are well maintained and come with a long history of good ecosystems. RISC-V is obviously rising, with no limit to its future.

POWER lost the competition; it is done and should be deprecated. With that in mind, its 35-year history is not a fancy record; it is proof that not many people are going to pick that junk up again.


x86 and ARM are well maintained because they're 30-40 years old, but Power (it's not an acronym anymore) is not because it's 35 years old?

I also suspect you're making the typical mistake of confusing PowerPC with modern Power ISA. They overlap strongly and PowerPC was virtually designed to be 64-bit ready from the very beginning, but chips based on the modern ISA are much more sophisticated.

As someone who actually works on ppc64, the biggest two things that bother me about the ISA are the weirdness with r0 and the large number of instructions that must be implemented to be practical on a new design. Those are comparable to the quirks of any other "successful" ISA by your standards. They hardly make it junk.


> x86 and ARM are well maintained because they're 30-40 years old, but Power (it's not an acronym anymore) is not because it's 35 years old?

I don't know how or why you came up with such a strange idea.

x86 and ARM are 30-40 years old; with their widespread market share, that long history became an advantage, as numerous tools and apps were built over that 30-40 year timeframe. It is called the ecosystem.

POWER, as a failed ISA, has been in its dead-man-walking stage for 30-40 years; nothing really got promoted or adopted. When you look back, it provides very strong certainty that if something good didn't happen in the last 30 years, there isn't a good chance of it happening in the next 5-10 years. Its 30-40 year history is a disadvantage, because its potential has been proven to be pretty bad.

This is like being 50 years old and highly successful in tech: your age puts you in a comfortable spot, as it implies decades of valuable, successful experience. Being 50 years old with a shitty CV full of failures is a different story; your age is a huge liability, for obvious reasons.

If the above is not obvious and you are in tech, then I have to say that you are in the wrong business.


What? So what is your definition of a modern architecture? I am assuming you are referring to the ISA here.


AMD64, ARM64, RISC-V.

This is not rocket science. No one needs POWER. A museum is a better place for that kind of vintage stuff.


>AMD64, ARM64, RISC-V. This is not rocket science.

You should re-read what you wrote [1], and I will leave it at that. And contrary to your finger-pointing, no one is getting emotional.

[1] "the so-called POWER architecture covers something like 35 years of development"


> Which parts of the POWER architecture design belong to yesterday? Specific examples would be great.

Is this a troll post? How about the fact that POWER9 is nearly 7 years old and was more than a generation behind its competitors at the time it launched?


Is this a troll answer?

I don't see any specific example of what exactly is bad about it in this reply, so, the question remains open.

If the answer were so obvious that the question must be a disingenuous troll question, then surely it would be effortless to give a meaningful answer. It's a terrible architecture because it... what? What obviously backwards, never-could-possibly-work idea does it try to implement?

It's not even my question, but I found criticizing the question with this answer pretty rich.


Can't imagine resorting to being this snarky in defense of a flawed CPU microarchitecture. It takes 30 seconds with a search engine to find benchmarks: https://www.phoronix.com/review/power9-epyc-xeon/3

It is, quite literally, half as performant as its contemporaries while drawing more power. Beyond being 'open', these CPUs are duds and wastes of sand.


I can't be defending something I don't know or care about. The purpose of the comment was stated plainly.


They are just emotional.

Certain vintage tech/gear gets linked to those good old days when certain technologies (e.g. SMP and Unix) were only available to a select few. The reality is clear here: we are talking about a totally failed ISA that has no economic future whatsoever.


Emotional? Hardly. I am not affiliated with POWER and have never been. But I like good hardware, good design ideas and debating them.

One could pin me on maybe getting touchy in relation to my own pet ISA design, an unholy matrimony of a 128-bit extension of the original PDP-11 architecture with ideas from the i860, AMD29k and NEC SH-4, that I have been toying around and chugging along with, running on an FPGA at home along with a NetBSD port – for fun and as an outlet when I get frustrated with the enterprise-y world where I work. But it is my own _pet_ project where I have unabashed freedom to try anything out, however silly or impractical the idea is, and where I can't expect others to agree with the ideas I am entertaining.

We could have been debating the merits of the POWER vs ARM CHERI tagged memory architectures and their implications for compiler design. Or we could have been disagreeing on the merits of the POWER CPU-assisted translation of the ISAs for z/Series IBM mainframes (with the physical CPU long since gone and POWER CPUs emulating the ISA instead) and AS/400 (or whatever it is named today – the ISA that has never been implemented in the hardware anyway and has always been virtual and powered by POWER CPUs) vs alternatives, in hardware or in software.

We could also have been locking horns over the original Cray vector instructions vs the RISC-V vector extension designs. Or over how 96 slices in a 24-core POWER CPU stack up against competing 96-core CPU designs. Or agreeing and disagreeing over how advances in modern compiler design could, or could not, have made VLIW architectures more practical today. Or something else.

But this:

> […] x86/ARM are highly successful; being 30-40 years old proves that they are well maintained […]

> […] POWER lost the competition; it is done and should be deprecated. With that in mind, its 35-year history is not a fancy record; it is proof that not many people are going to pick that junk up again […]

is not worth responding to. Such comments belong in Slashdot, Reddit and similar internet forums.

We like picking things apart on here, so please allow us to indulge in it.


>Such comments belong in Slashdot, Reddit and similar internet forums.

Before 2014, most internet reporters, even if they knew HN, wouldn't name it, or at best would call it "an orange website". It was as if there were an accepted culture or norm of not linking to it, to prevent the site from becoming Reddit or Slashdot. Somewhere along the line, everyone started naming and linking to it.

That is why, whenever people ask where to look for high-quality information or discussion, I hesitate to even name the few sites I have bookmarked, where you find not only hardcore enthusiasts but also professionals and industry veterans offering their informed opinions.


> AS/400 (or whatever it is named today – the ISA that has never been implemented in the hardware anyway and has always been virtual and powered by POWER CPUs

To nitpick, the original AS/400 systems used a proprietary CPU called IMPI (Internal MicroProgrammed Interface), descended from the System/38. It was ported to the POWER series, or rather the RS64 in the early 90s.

The higher-level software, including parts of the OS, and all applications target a virtual machine.


> because people don't want closed architectures owned by closed companies run by a group of morons from the PR & sales departments?

How would ARM and x86 be "open"?


The T2 fixed what you are complaining about. It was released in 2007.


The same year Sun decided to use processors from both AMD and Intel to replace its own.


Seeing as they kept making SPARC CPUs for a decade longer, not really.

'Show me on the 1U chassis where the bad CPU touched you' is the vibe I'm getting from your comments in this thread.


> Seeing as they kept making SPARC CPUs for a decade longer

Because there were poor customers stuck in the SPARC trap. It would have cost them a huge amount to move to a different ISA, and there is usually no monetary return for doing so. Trying not to further damage its PR, Sun thus decided to keep production going.

Same logic for Intel making Itanium all the way until 2018 or 2019.


Goes to web site. Talos™ II Entry-Level Developer System TLSDS3. Nice, looks good.

Starting at $5,818.99.

Nope.


I'm not really a free software absolutist, but I do think it's worth putting my money where my mouth is sometimes, so I run a Raptor Blackbird (you could put one together for less than $3000). Raptor systems are still the fastest zero blob computers you can buy.

It's a bit sad to see such a well paid demographic (software engineers) completely disinterested in making any cost/speed/freedoms tradeoffs here.


> It's a bit sad to see such a well paid demographic (software engineers) completely disinterested in making any cost/speed/freedoms tradeoffs here.

This job might be well paid in the USA, but in many countries that is not the case; considering the recent layoffs, I would even bet that pay for software developers in the USA is about to change for the worse.

Otherwise, I completely agree with your appeal.


$5,800 is almost the exact price a friend of mine paid for his Pentium 66MHz back in 1993 (1994?).

That was probably closer to $12,000 in today's money.
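
For what it's worth, a quick sanity check with approximate CPI-U annual averages (assumed here to be roughly 144.5 for 1993 and 304.7 for 2023) lands in the same ballpark:

    #include <stdio.h>

    int main(void)
    {
        /* Approximate CPI-U annual averages; treat these figures as rough assumptions. */
        double price_1993 = 5818.99;
        double cpi_1993 = 144.5, cpi_2023 = 304.7;
        printf("%.0f\n", price_1993 * cpi_2023 / cpi_1993); /* prints roughly 12271 */
        return 0;
    }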


This. I would love to have some SPARC or PPC hardware, but it always comes down to price and power consumption (and noise, because I'd probably be sleeping next to the thing).


Price is one thing, but the Talos 2 I'm typing this on is whisper-quiet. I only hear it when the fans spool up during a heavy compute job. Otherwise it's silent except for whatever case fans you choose to have.


Price is the major issue for me. I'd love to use one, but the same amount of compute power is available from big-brand manufacturers as low-end servers in the x86 space. It's hard to justify spending extra for a different ISA.


I get that, but if people want choices in architecture, then sometimes economies of scale won't be available for those less-common choices. I'm willing to pay more for a performance-comparable architecture that is more open and less hostile (and, as a bonus, is one I've worked with personally for decades), and I put my money where my mouth is.

Raptor is well aware of their price premium and tried very hard with the Blackbird, though I wanted the expansion power in the Talos II and would have always bought the bigger system first. But they still have to make a profit and I want to support them in it.


Don't get me wrong, my ARM stuff is quiet too, but if I wanted one of those dual dual-core G5s from yesteryear, I'd need some earplugs. :)


I've got one of the Quad G5 Macs too, sitting right next to it (in fact, I upgraded from the Quad directly to the Talos 2).

The Quad is also very quiet in idle, but you have to downclock it in System Preferences to be as silent as the T2 is in normal operation, and even with a well-maintained liquid cooling system the G5's roar at full tilt is loud (it's really loud if it's not). The high speed fans in the T2 are more like a quiet whine than what the G5 generates, let alone the wind tunnel MDD G4 I ran before that. And no LCS!

And the G5 is doing that with four cores and four threads, while the Talos II is doing that with sixteen cores and 64 threads. Now, the Quad G5 also came out in 2005, and I got the T2 in 2018. But thirteen years later I'm happy to say that the noise level is absolutely no worse than any other modern workstation on x86_64.


That's the price of computing freedom.


Then you shouldn't be surprised that it's not popular.


Isn't the SPARC ISA super annoying to work with, with 'register windows' and other things that don't make sense in the current year?


Let the compiler deal with those annoying quirks. Almost every arch has some of those.


SPARC, PPC, and ARM can all run Linux just fine.


Yes, but why would you waste silicon implementing complicated stuff from the 90s when you can have something more modern like RISC-V or ARM?


Register windows and register files are the same thing.

Most RISC architectures have eschewed register windows in favour of register files coupled with register renaming, though.

Silicon is «wasted» (debatable) in high-performance CPU designs employing register files, but can be saved in low-cost designs by not implementing a register file, at a performance cost. The same is true for designs with register windows.

The difference between the two is that with register windows the onus of tracking data-flow dependencies between registers is explicitly on the compiler (or the developer), whereas with a register file the onus is on the CPU to track the dependencies, apply heuristics, allocate shadow registers from the register file, rename the in-use registers onto the shadow registers, and vice versa.

Modern high-performance CPU implementations that do not employ register windows typically come with large or very large register files (circa 100-300 registers are common).


Calling register files (or register renaming) the same as register windows is a significant stretch.

Renaming is important for OoO execution; register windows were used (in the original Berkeley RISC and SPARC) to paper over the lack of a good register allocator. Later, EPIC used a variant of them for software pipelining.
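
For anyone unfamiliar with renaming, here is a toy, illustrative-only C sketch of the idea (names and sizes are hypothetical; freeing, branch flushes, etc. are omitted): each architectural destination register is mapped to a fresh physical register from a free list, so a later write to the same architectural register doesn't have to wait behind earlier readers.

    #include <stdio.h>

    #define NARCH 32   /* architectural registers */
    #define NPHYS 128  /* physical registers */

    static int map[NARCH];        /* architectural -> physical mapping */
    static int free_list[NPHYS];
    static int free_top;

    static void rename_init(void)
    {
        for (int a = 0; a < NARCH; a++) map[a] = a;  /* identity at reset */
        free_top = 0;
        for (int p = NPHYS - 1; p >= NARCH; p--) free_list[free_top++] = p;
    }

    /* Rename "rd <- rs1 op rs2": sources read the current map, the
     * destination gets a fresh physical register. */
    static void rename_insn(int rd, int rs1, int rs2)
    {
        int p1 = map[rs1], p2 = map[rs2];
        int pd = free_list[--free_top];
        map[rd] = pd;
        printf("reads p%d,p%d ; r%d now maps to p%d\n", p1, p2, rd, pd);
    }

    int main(void)
    {
        rename_init();
        rename_insn(1, 2, 3); /* r1 <- r2 op r3 */
        rename_insn(1, 1, 4); /* r1 <- r1 op r4: reads the old mapping, writes a new one */
        return 0;
    }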


I'm confused about what is called modern here and why.

In any case, I (and I am sure the vast majority of consumers and application developers; system developers are of course in a different position) don't care one bit about the inner workings until some inherent security flaw like Spectre arises.


Too new can be problematic sometimes.

For pure performance per dollar, and even in terms of energy consumption, AMD64 CPUs are still doing great, and you don't have to worry about something not being ported yet because it's the de facto standard outside of phones and certain niches like automotive, etc.; I couldn't care less about ARM in terms of compute for a workstation, gaming computer, or server, especially with things like QAT and various other accelerators.

What I really do care about, though, is that all modern x86/AMD64 platforms are super complex and super proprietary; I have no way to trust them. And while ARM may not be quite as crazy as x86, it is pretty much just as proprietary, and so it doesn't solve my issue of being able to trust the hardware and firmware, or tinker with the design.

RISC-V is cool. But going back to trust, it is a mountain of work to bootstrap from "bare metal" (UEFI or whatever else) on x86/AMD64 alone -- C++ has been my enemy; virtually everything ends up depending upon having a working C++ compiler at some point in the chain, including LLVM and any version of GCC not at least a decade old.

This means either cross-compiling GCC/LLVM for RISC-V from another system [which is problematic if you don't have a way to cross-compile from something already trusted and reproducible], or backporting/implementing RISC-V support into a ton of software.

Meanwhile, POWER has been around; it may not be as well supported as x86, but I think it's still far easier and significantly more realistic if you want to build a more-or-less trustable, reproducible system.

Plus, outside of root of trust paranoia, again, it's just far more realistic IMO as a workstation, server, or whatever else because of that much larger existing catalog of working software; you can get your proprietary Nvidia drivers for POWER if you want them, but I don't think they have any intention of supporting RISC-V.

Also, in general, I'm not totally convinced RISC-V offers significant enough improvements over POWER to really justify it. Maybe someone more familiar with RISC-V could offer compelling reasons that I'm not aware of, though.

EDIT: Lastly, RE: wasting silicon, there is also the aspect of "wasting" money and R&D building competitive designs relative to existing stuff -- I don't know when I'll be able to get my hands on a fast RISC-V system.


Thanks for the answer, yes I guess it makes sense

But yes I'm not complaining about POWER here, more about SPARC

> C++ has been my enemy; virtually everything ends up depending upon having a working C++ compiler at some point in the chain

Would be surprising if it hasn't been


Probably fair about SPARC. I'm not familiar enough to have any real informed opinions about it. I've heard a fair bit of complaints about painful quirks, though. I guess one cool thing SPARC has had for a long time is memory tagging -- not sure if anything else other than ARM does, but I'm not sure how ARM's implementation compares to SPARC ADI [1].

I don't know of anyone really making modern SPARC designs either, though; pretty sure Fujitsu is focused on ARM now, and I don't know if Oracle is doing anything really, and I'm not super sure, but although Elbrus has built-in x86-translation of all things, I'm not sure any of MCST's relatively recent stuff can still run SPARC binaries.

I think SPARC is mostly still alive because of legacy enterprise stuff and because there are existing radiation-hardened designs like LEON that get used for satellites and the like [2]. Also, I think MCST still produces some of their older SPARC stuff for Russian missile systems [3], since I imagine it takes forever for older stuff to get fully phased out and replaced entirely.

[1]: https://www.kernel.org/doc/html/v5.19/sparc/adi.html / https://docs.oracle.com/cd/E53394_01/html/E54815/gqajs.html

[2]: https://en.wikipedia.org/wiki/LEON

[3]: Elbrus 90 used in S-400 - https://web.archive.org/web/20181027225122/http://www.pravda...


> I don't know of anyone really making modern SPARC designs either,

Development seems pretty dead now. Maybe some telco-grade stuff, but that seems to be the whole of it. Not sure how long it'll last.

Unless I'm very wrong, their last releases were in 2017: the SPARC M8, T8 and Fujitsu's SPARC XII. It seems to have found a home in weapon systems though.


ARM originated in the early 80s.


> I'd like to first start out by saying I've been aware of new developments but made certain promises to keep my mouth shut until all the parties were ready to announce. (Phoronix is not so constrained.)

I don't understand. Both this and Phoronix's coverage https://www.phoronix.com/news/Raptor-Computing-New-2024 were posted on October 20th. This seems unnecessarily snarky unless there was something more specific.

People rag on Phoronix, perhaps sometimes rightly so, but I've seen him wait for media embargoes in the past.


Can someone provide some good context/resources on this?


OpenPOWER is IBM's attempt to make the Power architecture relevant.

RaptorCS makes super dank but also super pricey desktop systems.

Some new startup called Solid Silicon has partnered with another company called Lattice to make the chips in Raptor's next line of systems.


A bit of context from the post: "Raptor yesterday officially announced that we're not getting Power10 systems. The idea is we're going to be getting something better: the Solid Silicon S1. It's Power ISA 3.1 and fully compatible, but it's also a fully blob-free OpenPOWER successor to the POWER9, avoiding Power10's notorious binary firmware requirement for OMI RAM and I/O. "


This is extremely suspicious. A totally unknown company is making a high-performance PowerPC processor? Why?


The claim that these CPUs are 'cutting edge' despite being based on the older POWER9 uarch is dubious as well. I suspect this is simply a marketing sham and the CPUs Talos continues to ship will be rebrands of their current offering.

That said, if they have the original designs for these CPUs, then it's not impossible for a startup to make meaningful improvements, but it almost certainly isn't going to be fabbed on a cutting-edge node and certainly won't be competitive with anything else on the market (not that they were to begin with).


Raptor has been making POWER-based machines for quite a few years now. Their Talos line was one of the first systems to have PCI Express 4.0 back in the day: https://www.phoronix.com/news/Talos-2-Initial-Hands-On

(EDIT: OP was talking about the chip manufacturer, not the system maker. And OP is asking a very good question.)


The chip isn't from Raptor; it's from Solid Silicon. Why does Solid Silicon exist?


Ah, now THAT is a good and interesting question. I know that there's been a lot of funding for Texas chip fabs, but yeah, no idea who those guys are. (Edit: According to their website, they're fabless. So that's even weirder. And I'm surprised that one can trademark the paragraph symbol(§) like they claim to have done.)


Well, the fabless part is at least easy to understand. Building a new high end fab costs billions and requires deep expertise that takes years, if not decades, to build.

Practically every chip company these days is fabless. Intel is the exception, not the rule.

That being said, designing a new high end cpu is very expensive and complex. That a hitherto unknown company embarks on such a project is, indeed, very suspicious.


> Practically every chip company these days is fabless. Intel is the exception, not the rule.

A lot of analog and mixed-signal companies aren't: TI, NXP, ADI (with LT, ON Semi, Maxim, ...), Avago/Broadcom, AMS, Bosch, Infineon, Elmos etc.; also "the Japanese" (Sony, Fujitsu, Oki, Canon, Rohm and probably more)


Because they still sell like hotcakes even with the crazy price tag.


Why, though? Is there something about the instruction set that is much faster for some kinds of processing (AltiVec?), or do people just need to run old software that's hard to rewrite?


At least part of the reason is not technical, but philosophical. See the third item in the FAQ https://secure.raptorcs.com/content/base/faq.html

These CPUs are completely open and "free". In addition to the libre philosophy this guarantees a level of security unobtainable with any other CPU.

The same rationale is behind the powerpc laptop project https://www.powerpc-notebook.org/en/


Maybe military? There were always $10-20k laptops in near-nonsensical architectures like POWER5, UltraSPARC, dual socket Pentium 4, and such. Maybe they run military radar tools and/or "gas/oil" workloads. Maybe it makes sense for somebody.


Dual-socket Pentium 4? I don't think that existed. Dual Xeon, sure, lots of those.


I haven't found a system with multiple Pentium 4s (but I think NetBurst Xeons were literally pretty much Pentium 4s without SMP disabled?).

That said, who knows, I almost wouldn't be surprised; there have been loads of weird, unique systems. The Compaq SystemPro had custom SMP support to allow for dual 386s, which IIRC is even officially supported by Windows NT 3.1. I think IBM x445s had some crazy custom interconnect for 32-way Xeon MPs, etc.


Sure they did, the ASRock P4 Combo for example (didn't find any in laptop form factor, though).


You can't use both CPUs at the same time with that one. If you install both, either something will explode, nothing will happen, or it'll just use one of them (probably configured using a jumper).


>probably configured using a jumper

That's a lot of jumpers https://www.asrock.com/mb/Intel/P4%20Combo/

Also, the two memory slots are just somewhere in between the PCI slots.

Some PCIe 1.0 (maybe also 2.0) boards did similar stuff to allow choosing between PCIe x16 and x8+x8.


Ah, oh. I suppose you are right. I just pulled that one out of the Google because I remember having a dual-socket P4 motherboard, with just one CPU fitted, though.

This was in 2001, maybe I am misremembering and it was P3?


Yes, a lot of stuff historically was built with IBM XL for AIX. A lot of Fortran software, a lot of bank-related shit. This kind of stuff is simultaneously extremely critical and also just cruising on inertia.


Nobody runs that software on Raptor though.


Hopefully the new ISA will continue to support tagged memory pointers (capability computing), which can stop many security attacks, https://www.talospace.com/2022/10/power9-and-tagged-memory-a... & https://www.devever.net/~hl/ppcas

> IBM's capability-based design is implemented using a hybrid of both hardware and software. The central premise of the design is that you cannot write native code for the platform directly; instead, all programs for the system must be written in an intermediate language which is translated internally (at installation time) to Power ISA machine code by the system's “kernel”. This trusted translator maintains the desired security invariants of the system, just like the modern sandboxed JIT designs used in Java, .NET, JavaScript and WebAssembly, and without relying on hardware memory protection.

> In order to support this capability-based OS, IBM implemented an undocumented extension to the Power ISA known as PowerPC AS. This ISA extension provides a memory tagging system which associates one tag bit with every aligned 16 bytes of system memory (a quadword). In a 128-bit pointer, 64 bits are used for the typical memory address; the other half stores a few bits of metadata and is otherwise reserved (the degree of futureproofing is rather extreme).

HN discussion (2022), https://news.ycombinator.com/item?id=33381823
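
For the curious, here is a rough, illustrative-only C model of the tag store described in the quote: one tag bit per aligned 16-byte quadword. Sizes and names are hypothetical and have nothing to do with IBM's actual implementation.

    #include <stdint.h>
    #include <stdbool.h>

    #define MEM_BYTES   (1u << 20)              /* toy 1 MiB address space */
    #define QW_SHIFT    4                       /* 16-byte (quadword) granule */
    #define N_QUADWORDS (MEM_BYTES >> QW_SHIFT)

    static uint8_t tag_bits[N_QUADWORDS / 8];   /* one tag bit per quadword */

    static bool quadword_tagged(uint64_t addr)
    {
        uint64_t q = addr >> QW_SHIFT;            /* which quadword */
        return (tag_bits[q >> 3] >> (q & 7)) & 1; /* which bit within the byte */
    }

    static void set_quadword_tag(uint64_t addr, bool tagged)
    {
        uint64_t q = addr >> QW_SHIFT;
        if (tagged) tag_bits[q >> 3] |=  (uint8_t)(1u << (q & 7));
        else        tag_bits[q >> 3] &= (uint8_t)~(1u << (q & 7));
    }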


It's doubtful IBM would drop that support; the PowerPC AS set is required to run the 'i', a.k.a. OS/400, operating system. The original series of processors that implemented PowerPC AS were called RS64. IBM merged this with the POWER (RS/6000) series. All POWER processors since the POWER4 have the tagging instructions.

It would be nice if IBM would document them, so Linux and/or AIX could use them.


To send out a "press release" (of sorts) to announce a new series of machines based around:

- a CPU that does not exist
- a CPU with a tiny niche market
- a company that is mysteriously unknown
- a company that has never produced (designed) a CPU before
- a company that is dedicated to "stealth" mode
- a company that does not yet know how much the CPUs will cost

Seems like an extremely risky strategy.

How much is it likely to cost to have an advanced CPU manufactured in a tiny quantity? Can a fab just "change CPU type/model" and produce them? (None-to-low "setup" fee?)


My guess is that this new CPU is just an IBM-licensed POWER10 that will have the mandatory blob removed or open sourced.


... and big endian???


You're probably going to run Linux on these which means you're going to use the ppc64le port.


According to the blog post, it's bi-endian.



Good for them.


Why was POWER9 open? https://www.devever.net/~hl/backstage-cast

> For Intel and AMD, the demarcation point between them and their customer is the CPU, so if they want to keep secrets and maintain a stable interface between them and their customers, they're effectively pushed to do it at the CPU boundary. With IBM, this historically hasn't been the case; traditionally, IBM has only sold servers containing their own CPUs, not the CPUs alone. This means that traditionally, the only customer for IBM CPUs has been IBM — which means there's far less motivation for IBM to lock things down. Moreover, IBM Power servers have traditionally shipped with a proprietary hypervisor (known as PowerVM) built into the firmware. On these servers, all OSes run under this hypervisor, and you can't run an OS directly on the bare metal. This means that the natural interface between IBM and its customers has naturally fallen at the hypervisor—OS interface, not at the CPU..

> When IBM suddenly decided to open up their POWER CPUs for use by third parties, they made available a platform which for its entire lifespan up until that point, hadn't evolved under the same pressures as the existing x86 CPU market, but instead in an environment in which there was really no natural motive for them to lock things down at the CPU level. Which is almost certainly why IBM POWER CPUs seem to so greatly lack any “curtains” — the curtain had always been shipped with the hypervisor, not the CPU.

HN discussion (2023): https://news.ycombinator.com/item?id=36127543

Why did POWER10 become less open, requiring binary blobs?


My understanding is that COVID happened and schedules slipped, and as a result they decided to license some IP blocks from Synopsys (like a DDR4/5 PHY and a PCIe 5.0 PHY). Synopsys likes including control MCUs in their IP (yes, their DDR PHY has an MCU in it) which are intended to run a binary blob provided by them, and for which they will not give you the source code. For those curious, it's the same DDR PHY blob needed for memory bringup on the i.MX8M and products based on it (e.g. the Librem 5).

Basically, IBM, facing delays, chose to use Synopsys's IP rather than make their own DDR5 and PCIe 5 IP as they usually would (the fact that they traditionally made all their own stuff is one of the reasons they were in a position to open 100% of the POWER9 firmware in the first place), and dropped the ball on making sure it would meet the 100% open-source firmware requirements of Raptor's customers. IBM walled themselves into a corner here, as they're now dependent on Synopsys IP, and it's not like they can persuade Synopsys to open their, ahem, precious IP.

See my article (it was a hypothetical but has now been confirmed): https://www.devever.net/~hl/omi

(Of course the DDR PHY is on the OMI chip not the CPU, but with only one supplier of the required OMI chips this is a moot point. The PCIe5 PHY is on the CPU though and also needs a blob.)


Thanks, super helpful comment & blog posts.


>Why did POWER10 become less open, requiring binary blobs?

Well the article did actually state it: avoiding Power10's notorious binary firmware requirement for OMI RAM and I/O.

It is the requirement for OMI RAM and I/O. Not the CPU itself.


> they made available a platform which for its entire lifespan up until that point, hadn't evolved under the same pressures as the existing x86 CPU market, but instead in an environment in which there was really no natural motive for them to lock things down at the CPU level.

What is the natural motivation for x86 manufacturers to lock their CPUs down?


There are several motivations for locking down x86. One of the biggest ones was DRM, the protected audio video path. The management engine on Intel also hosts TPM functionality, does some stuff with temperature sensing and fan control, and allows for some remote management. Those are all functions that Intel doesn't want end users or their installed software to be able to tamper with.


The "management engine" is a separate low-power CPU that is able to receive commands when the main power is off. This is used for off-hours patching and provisioning of corporate fleets.

The CPU had previously been an ARC RISC core, but was converted to an 80486 a few generations ago.

The ME has had some showstopper bugs.

https://en.m.wikipedia.org/wiki/Intel_Management_Engine


Yes, all that is true, and AMD has their PSP. The ME and PSP represent the "locked-down" part of the x86 platform, running binary blobs that only the CPU makers have the capability to modify (in the absence of exploits). Not unlike the binary blobs that led Raptor to use an alternative to IBM POWER10.



