
I don't think that "open source cores" make sense outside of FPGA contexts. But FPGAs are incredibly closed and are poor performers for "just" CPU-emulation purposes. (FPGAs are best when you truly need custom hardware, and are often paired with premade hard cores, as in Xilinx's Zynq, which combines an FPGA fabric with ARM cores.)

So... yeah. We're all beholden to the factories that make these chips. At best, we can design systems that port between factories, but unless you plan to build your own chip factory, I don't think there's much of a solution (or need) for open source CPUs.

I think there's a push for 180nm (ex: 2001-era) open source chip designs. But with many microprocessors available from 28nm or 40nm processes today (ex: 2012-era), I don't think the 180nm-class chips stand a chance in any practical sense outside of weird space / radiation-hardened situations (180nm apparently holds up better against radiation, for reasons I don't fully understand).

------------

From "Beagleboard" perspective, their version of "open source" means freely available schematics and hardware reference designs, making it easy to build your own board and possibly even custom-boot your own versions of the Beagleboard.

At least... the previous paragraph is based on "reputation" as opposed to true analysis. I'd have to actually go through the documentation and think about PCB design carefully to really know.

For example, the "BeagleBone Black" was cloned by SeeedStudio and turned into the "BeagleBone Green", a ground-up redesign of the board with the same chips, proving that a 2nd company could in fact take these hardware schematics and build a totally different product from them.




> I don't think the 180nm class chips stand a chance in any practical matter outside of weird space / radiation-hardened situations

If it was good enough for the fastest PC processors in 2000, and for the PS2 and GameCube, why wouldn't it be suitable for something like this?


Because chips supporting open-source software are available at the 28nm or 40nm level at much lower prices, higher performance, and lower power usage. I believe that 22nm is expected (in the long term) to become the cost-efficiency king, but it's not an economic reality yet.

But 28nm and 40nm are the cost-efficiency kings today. So what could a 180nm design ever offer the typical consumer outside of radiation hardening?

And similarly, 5nm or 3nm server chips (like Xeon or EPYC) will always be at the forefront of power/performance/cost; you can't beat the physics of shrinking transistors or the economics involved (though the most advanced node will need more and more volume to be cost-effective, as it's become harder to build on these advanced nodes).

There's almost no reason to ever pick an ancient 180nm design for a low-volume run when millions of 40nm or 28nm chips are being made, and/or billions of superior 5nm chips are being made.


Microcontrollers for low-end hobbyist use, household appliances, or industrial use don't necessarily need state-of-the-art nodes. Price and bulk availability matter. And high-end nodes actually have disadvantages in these areas, since their supply is limited, as several car makers quite recently had to learn the hard way.

Also, it doesn't really make sense to use high-end nodes for low-end controllers, because the smaller the die gets, the higher the packaging cost, as it becomes more difficult to handle. Microcontroller designers already throw in tons of extra features because there is now too much space available.

Finally, it is easier to build up and certify a supply chain for an ancient node than for a high-end node, as all the patents have expired by now and the process is much more rugged. In the current political climate, supply chain robustness and auditability might trump pure cost for some applications.


My understanding is that 200mm / 8-inch wafer production on the older nodes is literally a dying technology. It's inefficient, it's costlier, it's just worse in all possible attributes.

180nm, and other similar nodes from the late 90s, run on 200mm wafers. And since area scales with the square of the radius, the modern 300mm wafer has more than double the mm^2 and therefore more than double the chip area.

On top of that, the 28nm process also shrinks the transistors by a huge factor: to a first approximation, (180/28)^2, or roughly 40x more transistors per mm^2.

It's infeasible for 180nm to remain cost-efficient against 40nm, 28nm, or 22nm.
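
To make that arithmetic concrete, here's a rough back-of-envelope sketch (idealized numbers only; it ignores edge exclusion, scribe lines, yield, and the fact that real density gains don't track pure geometric scaling):

    # Back-of-envelope only: ignores edge exclusion, scribe lines, and yield.
    import math

    def wafer_area_mm2(diameter_mm):
        r = diameter_mm / 2.0
        return math.pi * r * r

    area_200 = wafer_area_mm2(200)   # ~31,416 mm^2 (180nm-era wafers)
    area_300 = wafer_area_mm2(300)   # ~70,686 mm^2 (modern wafers)
    print(area_300 / area_200)       # ~2.25x the raw area per wafer

    linear_shrink = 180 / 28         # ~6.4x smaller features
    print(linear_shrink ** 2)        # ~41x more transistors per mm^2, idealized

Multiply the two and an idealized 28nm line on 300mm wafers fits on the order of 90x more transistors per wafer than a 180nm line on 200mm wafers; that's the gap the old node's already-paid-off costs would have to overcome.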

Top-of-the-line 5nm or 3nm is far more expensive. But a 10+ year old 40nm fab has most of its one-time costs already paid for, and overall has a more cost-effective process thanks to the upgrade to 300mm wafers.


Thanks for the reply. I was not necessarily arguing for the ancient processes, but for the existence of tradeoffs between the process generations that don't instantly make the older processes obsolete. However, depending on how highly having domestic chips is valued, seemingly obsolete processes might still have the advantage of lower setup cost.


The leading-edge nodes will have a substantial cost premium, but about a decade behind the leading edge is the cheapest per transistor, because when you go back further, the increase in transistor size makes chips less efficient to manufacture.


The old nodes are cheap because the investment has been fully paid off; only running costs and maintenance remain. However, expanding production on older nodes will be more expensive, especially if the equipment is not available in bulk anymore. Probably not as expensive as a new node, though.


Right. Pre-EUV, the only reason older nodes were cheap is that you didn't have to build a new fab. Now though, EUV machines are expensive enough that it looks likely the pre-EUV nodes may remain cheaper pretty much indefinitely. 22/14nm is pretty power-efficient and a lot simpler to make.


"something like this" i think i would replace with "embedded applications".. too late to edit of course



