An open PDK was the last roadblock to making open silicon chips.
It’s as important as, say, when Linus introduced an open source kernel with Linux after GNU had been bumbling around getting nowhere for years.
A PDK is roughly analogous to what an assembler does for code in the code => compiler => assembler => machine code toolchain. Previously there were open silicon compilers but not open silicon assemblers.
A malicious party could inject all sorts of nastiness into your code if they control the assembler. The same is true for PDKs. A malicious gate placement in just the wrong spot and your entropy source is massively compromised. Every piece of software runs on silicon, so this would allow the entire stack to be auditable for the first time. It lets you verify that the OpenTitan chip in your 2FA token is actually an OpenTitan chip and not some made-in-China clone with a Bluetooth backdoor and Titan badging.
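To make the entropy point concrete, here's a toy Python sketch (illustrative numbers only, not a model of any real TRNG or any real attack) of how even a simple bias in a hardware bit source guts the effective strength of keys drawn from it:

```python
import math

def min_entropy_per_bit(p_one):
    """Min-entropy (the conservative, security-relevant measure) of a
    bit source with P(bit = 1) = p_one."""
    return -math.log2(max(p_one, 1.0 - p_one))

# Hypothetical sources: a fair TRNG vs. one nudged to emit 1s 90% of the time.
for label, p in [("honest TRNG", 0.5), ("backdoored TRNG", 0.9)]:
    h = min_entropy_per_bit(p)
    print(f"{label:16s}: {h:.3f} bits/bit -> a '256-bit' key holds ~{256 * h:.0f} bits")
# honest: 256 bits; backdoored: ~39 bits, which is brute-forceable
```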
I feared SkyWater had given up on releasing this PDK when an earlier initiative with another entity fell through. Glad to see Google step in and push this over the finish line.
> An open source set of EDA tools is actually the last roadblock.
This is not true. As of last year it is possible to design a complete chip with open source EDA tools, as demonstrated on RISC-V with the Raven platform [1]. Though it was previously true that the tools were the missing piece.
There is debate amongst the community over whether open foundry tools are essential - with rms surprisingly coming down on the side of “no”, with many caveats - but to date there has not been a single chip made with both an open source design flow and an open source process technology, because while there were previously open source EDA tools, there were no open source PDKs. Now there are. Hence the watershed moment.
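For a flavor of what the open-tools side of such a flow looks like, here's a minimal sketch driving Yosys (the open source synthesis tool used in flows like Raven's) from Python. It assumes yosys is on your PATH; the place-and-route stages a full flow adds (qflow, OpenROAD, etc.) are omitted:

```python
import subprocess

# A trivial design to push through synthesis.
verilog = """\
module counter(input clk, input rst, output reg [7:0] q);
  always @(posedge clk)
    if (rst) q <= 8'd0;
    else     q <= q + 8'd1;
endmodule
"""

with open("counter.v", "w") as f:
    f.write(verilog)

# Synthesize to a generic gate-level netlist. A real tape-out flow would
# instead map to the PDK's standard cells via a liberty file.
subprocess.run(
    ["yosys", "-p",
     "read_verilog counter.v; synth -top counter; write_verilog counter_net.v"],
    check=True,
)
```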
That design used X-FAB proprietary digital standard cells, I/O cells, analog IP, and SRAM.
Basically, they used X-FAB to do the detailed design and extraction of the blocks, and then assembled the blocks via place and route.
They also didn't push the technology very hard so they could get away with simple static timing analysis. 100MHz in 180nm for a really simple RISC is ridiculously slow--PowerPC chips were 100MHz+ in 500nm.
You will note that for EM/IR drop--nothing. For signal integrity--nothing. Extraction and DRC are done with Magic--that's... laughable is being nice.
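For context on what even the "simple static timing analysis" step above means: STA finds the slowest path through the gate graph, which bounds the clock. A toy sketch with made-up delays (real STA uses the PDK's Liberty timing models, slews, loads, and process corners):

```python
from functools import lru_cache

# netlist: gate -> (delay_ns, fanin gates); primary inputs have no fanins.
# Delay values are invented for illustration.
netlist = {
    "in_a": (0.0, []),
    "in_b": (0.0, []),
    "and1": (0.12, ["in_a", "in_b"]),
    "inv1": (0.05, ["and1"]),
    "xor1": (0.18, ["inv1", "in_b"]),   # the output gate
}

@lru_cache(maxsize=None)
def arrival(gate):
    """Latest signal arrival time at a gate's output (longest-path DP)."""
    delay, fanins = netlist[gate]
    return delay + max((arrival(f) for f in fanins), default=0.0)

print(f"critical path: {arrival('xor1'):.2f} ns")
# -> 0.35 ns; max clock ~1/delay, ignoring setup time and clock skew
```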
Don't get me wrong, this is a great achievement. Pulling all these pieces together is really difficult.
However, we have been able to do this much for a decade or more now. I remember a different European initiative that did similar projects. The difference was that it tried to go after the analog blocks as well. It failed for lack of accurate extraction.
The whole movement flounders on DRC and extraction. Without those, you cannot do the detailed design and analysis to build the fundamental blocks that you need to make interesting chips.
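To give a sense of scale: a production DRC deck is essentially thousands of geometric rules checked over billions of shapes. A minimal single-rule sketch in Python (the spacing value is a placeholder, not an actual SkyWater 130nm rule):

```python
from itertools import combinations

MIN_SPACING_UM = 0.21  # hypothetical single-layer spacing rule

# Rectangles as (x0, y0, x1, y1) in microns, all on one layer.
shapes = [(0.0, 0.0, 1.0, 0.5), (1.1, 0.0, 2.0, 0.5), (0.0, 0.8, 2.0, 1.0)]

def gap(a, b):
    """Euclidean gap between two axis-aligned rectangles (0 if they touch)."""
    dx = max(a[0] - b[2], b[0] - a[2], 0.0)
    dy = max(a[1] - b[3], b[1] - a[3], 0.0)
    return (dx * dx + dy * dy) ** 0.5

for a, b in combinations(shapes, 2):
    g = gap(a, b)
    if 0.0 < g < MIN_SPACING_UM:
        print(f"spacing violation: {a} vs {b}, gap {g:.2f} um")
```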
We've had OpenPDK for some time. Check out the ISCA 2020 presentation on OpenROAD [0]; they mention that they use OpenPDK, but that the last piece of the puzzle is an open source standard cell library.
Isn't this more analogous to the GNU toolchain being made available? Maybe RISC-V is then the "UNIX standard", of which a "Linux" silicon equivalent will yet come to be? ... maybe PULP or otherwise?
Seems they are located in a former Cypress plant near the Minneapolis–Saint Paul International Airport, next to the Mall of America. I don't know why, but I had them bookmarked already:
I'm a little puzzled by it being on Google's GitHub.
Anyway, it can't hurt, can it?
edit: Though 130nm sounds boring, this offering seems to aim to support many interesting and very modern features, so a mix of old and new seems possible.
Fast14 looked amazing, and it's one of those technologies that seemed so promising. It's a tragedy (for them) that their FastMATH missed the machine learning wave - simply too early!
While Intrinsity claimed power advantages, I've heard offhand remarks that it was too power-hungry. I speculate that dynamic logic might have more issues as geometries shrink.
I don't know. I don't want to dunk on the effort, but the world has moved on a lot since then.
The "Northwood" Pentium 4 was a 130nm lithography product.
131mm^2 die size, 35x35mm package and a 54W TDP to implement 55M transistors for the 2GHz version; one core, one thread, 32 bits, 256KB of L2 cache.
Apple's A13 Bionic is based on TSMC's 2nd generation 7nm product (this is what's in an iPhone 11).
98.5mm^2 die size, package is scarcely larger, 6W TDP to implement 8.5B transistors with clock speeds up to 2.66GHz; 6 cores for compute; 4 cores for GPU, 64 bits, 4MB of L2 cache.
The Apple product is a system-on-chip design so the comparison is actually worse than that: there's a whole bunch of stuff living elsewhere on the motherboard for Pentium 4 that's on the same die on the A13.
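A quick back-of-envelope using the die and transistor figures above shows the size of the density gap:

```python
p4_density  = 55e6  / 131.0   # Northwood: transistors per mm^2
a13_density = 8.5e9 / 98.5    # A13 Bionic: transistors per mm^2
print(f"Pentium 4: {p4_density / 1e6:.2f}M transistors/mm^2")
print(f"A13:       {a13_density / 1e6:.1f}M transistors/mm^2")
print(f"ratio:     ~{a13_density / p4_density:.0f}x denser")   # roughly 200x
```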
It's difficult to find comparative benchmarks for technologies so far apart in time and application, but there are some SPEC2006 benchmarks for Pentium 4 (a 90nm version with a much faster clock speed and a lot more L2 cache, in a Fujitsu-Siemens workstation from 2005) and A13 (in an iPhone 11).
The iPhone delivers SPECint_2006 = 52.82 / SPECfp_2006 = 65.27; vs the Pentium 4 with SPECint_2006 = 12.3 / SPECfp_2006 = 12.1.
The point is that you can do useful, practical, modern computing on a 130nm node. It's obviously not going to compete with flagship 7nm processors, but it's not like you are working with Commodore 64-level stuff.
You can run modern operating systems on 130nm processors and they will keep up with most everyday tasks. No machine learning, no compiling Firefox, no gaming, but it should be good enough for basic development, browsing the web, checking email, watching 1080p video, etc.
Perhaps with specialized instructions to support decoding, or a discrete graphics card to do the same.
Many contemporaneous reports show owners of Pentium 4 processors complaining about being unable to manage smooth 720p H.264 playback in software even at 100% CPU utilization; 1080p was completely non-viable.
So now we have a 50+W TDP processor with a graphics accelerator and it will begin to compete with (if we back away from the bleeding edge 7nm stuff as you suggest) a Raspberry Pi Zero, which costs $15 shipped. I mean, OK, but what's the goal we're chasing here?
I’m curious how much it would cost to get something taped out and produced at this node (per wafer) from SkyWater. I have seen groups that do wafer sharing for low-volume runs, which seems like a good idea if you have a couple grand and want your own silicon.
I wonder if this could make it cheap enough to crowd fund custom chips?
FreePDK is not a "real" PDK in the sense that the core logic cells it defines are not real ones that can be manufactured by any fab. It is a mock kit that can be used to "estimate" chip area for your own RTL (for some definition of "estimate") and as a starting point if someone wants to design their own PDK (e.g. VLSI research into new PDKs done by research labs).
The fact that this PDK will eventually be "production ready" for a real fab, so that real chips can be printed with it, is a significant step forward.
Unless I’m missing something, this is jumping the gun slightly. It looks like an empty documentation template with a few placeholder titles; it doesn’t actually contain any design rule information.
The closest I had seen was the OSU PDK (https://vlsiarch.ecen.okstate.edu/flow/), but it certainly wasn't Apache-licensed as this is. The license is interesting because there has been some debate as to whether the ASL is appropriate for hardware -- see some of the discussions at FOSDEM 2019, if my memory serves. That this is being published by Google would (generally) imply that internal legal did their homework and concluded that yes, the ASL is indeed appropriate for hardware. Intriguing.
I think the big dotcoms (Google is doing this with SkyWater) are only now realising how reliant they are on their silicon suppliers.
In the current age of big M&A waves sweeping through the microelectronics industry, a big dotcom can be "deplatformed" overnight by a major IC supplier going to a competitor.