No, these are panels from which interposers will be made. Interposers are now larger than chips and rectangular, so the wasted edge area on a 300mm wafer is high. The proposed panel size is much larger than chip-grade ingots.
They don't need perfect silicon. It can be grown on a continuous ribbon which is sliced into panel sizes like they do for solar cells. If they need a perfect surface they can deposit some pure Si to finish it. Maybe we will eventually see that replace ingots for chip grade.
On a lighter note, this is actually career-threatening news for me. So much of my job is figuring out what to do with the edge of the round wafer (partial dies) that I can't imagine what my whole department will do if we go for rectangular wafers :'). Made me realize how specific my engineering has been for the last 5 years.
I mean even if they do start using rectangular wafers for _some_ things, there is so much supply chain momentum in circular wafers that surely you have some fairly significant job security.
You're right. But I work in lithography, and there are only positives to going to rectangular wafers. We will be able to clamp it better, I think acceleration stresses will be easier to manage, modeling will be easier because every field will be exactly the same, and so on. The litho industry (which basically means ASML at this point) might jump on this.
But I'll reiterate I mean all this on a lighter note, don't think it will happen within my career :).
Edit: I didn't read the question you asked in the first sentence. The leftover parts are usually just taken along through the process till the end and are scrapped when we get to the cutting stage. The problems arise because the structures on the partial dies are not the same as on the full dies in the middle of the wafer. This causes a bunch of weird stresses at the edge, and in my small corner of engineering we optimize the fuck out of the edge dies so their stresses are less weird.
I'm facing an engineering problem that amounts to stresses accumulating on the edges of a layered substrate. Are there any references, process notes, books, or industry approaches that you could point me to? High level and/or general are OK.
Lithography takes a more abstract approach to it, we are not actually calculating stresses, we have pretty powerful sensors which measure the wafer surfaces and then my department models the surface.
Lithography lives in the thin-film approximation anyway. Timoshenko is a good reference. There are papers from Barnett or Nix that are very nice, but edges will probably end up being a FEM-solver problem.
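For intuition, the blanket-film version of that approximation boils down to Stoney's equation, which relates measured wafer curvature to film stress. A minimal sketch, with purely illustrative numbers (not from any real process or tool):

```python
# Stoney's equation: estimate thin-film stress from substrate curvature.
# All numbers below are illustrative only; real litho tools fit full
# wafer-surface maps rather than a single curvature radius.

def stoney_stress(E_s, nu_s, t_s, t_f, R):
    """Film stress (Pa) from curvature radius R (m), valid for t_f << t_s.

    E_s, nu_s : substrate Young's modulus (Pa) and Poisson ratio
    t_s, t_f  : substrate and film thickness (m)
    """
    return (E_s * t_s**2) / (6.0 * (1.0 - nu_s) * t_f * R)

# Example: 775 um silicon substrate, 1 um film, 50 m radius of curvature
sigma = stoney_stress(E_s=130e9, nu_s=0.28, t_s=775e-6, t_f=1e-6, R=50.0)
print(f"film stress ~ {sigma / 1e6:.0f} MPa")
```

That's the uniform blanket-film limit; as soon as the film is patterned or you care about what happens at the wafer edge, the assumptions break down and you're back in FEM territory.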
> It takes the deep pockets of chipmakers like TSMC to push equipment makers to change equipment designs.
I presume Apple invests in a lot of the capital costs? Apple needs to put its cash somewhere, and they can align that with exclusive contractual access to production of leading-edge CPUs.
Note that I haven't actually read anything about Apple's investment - I'm just hypothetically assuming it. We do sometimes hear about the exclusive contracts with TSMC.
Fabs got too expensive: that was why Global Foundries was spun out of AMD. Does Intel now have problems similar to the ones AMD had?
I would phrase that as "fabs got so complex that zero-ish companies had leadership competent enough to manage both competitive fabs and the rest of the finance / design / marketing / sales / support stack."
Even back in the mid-'80s, when fabs were (relatively) dead simple and dirt cheap, Motorola famously bungled fabbing their own 68000-series chips.
For what it's worth, Apple has been doing that for ages. Back in 2007/2008 nobody could get their hands on capacitive touch screens at any meaningful scale. Apple had bought the entire world's manufacturing capacity, 100% of the supply, for about two years ahead. They showed with the iPhone that it was possible to do multi-touch screens well - and the consumers took notice. "Pinch to zoom" was a product differentiator back then.
What the consumers couldn't know was that nobody else could possibly match Apple's offering even with the best engineering workforce on the planet, because it was impossible to get hardware capable of multi-touch beyond tiny lab batch sizes. And you had to fight even for those.[ß]
ß: I had the privilege of working directly for a Nokia fellow from 2007 until 2011, and got a ring-side view into the supply chain problems for high-end mobile devices. I also learned to dislike NXP with a passion, because that company has a funny habit of withholding spec sheets unless you are buying their SoC systems by the millions...
Yeah. That device came with a stylus for a good reason.
I think my main contribution to the N900 software stack was a bug report I dealt with during N800/N810 development cycle. I dove deep into the stack to understand and explain exactly why a certain annoying usability snag (dreadful UI latency in media player) was not possible to fix without ripping up larger parts of the UI toolkit layer. After my dissection the bug was eventually marked as WONTFIX, with a remarkable note: "we do not dare fix this bug".
For N900 that part of the toolkit was rewritten. As a result the latency bug was finally possible to tackle, and the large arrows in N900 media player were actually pretty responsive. My guess is that whoever in their UI team had had the bright idea to specify which exact GTK widgets were to be used for the navigation buttons was either told off or removed from their effective decision chain.
And I actually used N900 for some time as my mobile media terminal. It worked really well. (Coworker got its GPS chip working reliably without a SIM card, but that feature was never released. To the very end, GPS state machine in N900 required AGPS to expose its position, even if the chip itself had managed to get an accurate fix on your location.)
Even if NXP allows you to look at datasheets, the only supported way to read them (on Linux) is an ancient, pirated version of acrobat downloaded from a sketchy Chinese site.
My understanding is that round wafers are cheaper to produce since the silicon purification process produces cylindrical rods that are then cut into circular wafers.
With fabrication becoming more and more advanced I can see that this original cost advantage of round wafers becomes less significant compared to everything else.
Not sure it's that much of an issue, but it's been a long time since I worked in this space. I can imagine a higher rate of spin, with maybe a (large enough) circular plate underneath the square wafer, would do the trick.
Are there any blogs or videos about how these ingots are produced? The cost of blank wafers for a hobbyist is just so steep that I'm looking into making my own silicon blank wafers.
Making large silicon boules is cheap enough that I'm sure what they plan to do is just square off the sides of the boule before sawing into wafers. The scrap from that process, since it is pure silicon, can just go back into the pot the boule was drawn from (it might need some cleaning steps first), so there is effectively no wasted silicon.
I would imagine, as it stands today, that packing rectangular chips into circular wafers has a certain amount of waste that can also be recycled. Actually, I suppose it would be less wasteful to fill the circle with rectangles up to the safe edge than it would be to lop off entire sides of a boule to make a rectangle for filling.
I don’t mean to insinuate you are wrong - I need an education on how this rectangle business is better. Maybe they’re just trying to remove the “lop the sides off” step?
Packing rectangular chips onto circles has waste, but that waste cannot easily be recycled. It has been processed through a lot of different steps that contaminate it. I'm not sure if it gets recycled at all, but it's going to be a lot harder to recycle than large chunks of pure silicon.
Good for tessellation only if chips are also hexagons. Arguably chips themselves could be hexagons for better pin density and potentially thermal properties. Surprised nobody has tried this before.
You cannot cut straight lines through hexagons. Triangles would work, but would be even more inconvenient to design, cut, and process into normal chips. So squares it is.
Water jet cutters or EDM cutters could both be used on silicon and there is no need for either to cut straight lines.
When silicon area is expensive and performance can be maximised by reducing average on-die wire length, hexagons sound like they might make sense.
Obviously current layout tools prefer X-Y area splits, so a lot of tooling would have to be redesigned to make use of a probably rather small performance gain.
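Out of curiosity, one crude way to sanity-check the "average wire length" intuition is to compare the mean distance between random points in a square versus a regular hexagon of equal area. This is purely a geometric proxy I made up for the comparison; real wire length depends on placement and routing:

```python
import math
import random

# Monte Carlo estimate of the mean distance between two random points in a
# unit-area square vs. a unit-area regular hexagon -- a crude proxy for
# "average wire length" if logic were spread uniformly over the die.
random.seed(0)
N = 100_000  # number of random point pairs per shape

def sample_square():
    # Uniform point in a unit-area square centered at the origin.
    return random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)

# Regular hexagon with side s, flat top and bottom, centered at the origin.
HEX_SIDE = math.sqrt(2.0 / (3.0 * math.sqrt(3.0)))  # side length for unit area

def in_hexagon(x, y, s=HEX_SIDE):
    h = s * math.sqrt(3.0) / 2.0  # half-height of the hexagon
    return abs(y) <= h and math.sqrt(3.0) * abs(x) + abs(y) <= math.sqrt(3.0) * s

def sample_hexagon(s=HEX_SIDE):
    # Rejection sampling from the hexagon's bounding box.
    h = s * math.sqrt(3.0) / 2.0
    while True:
        x, y = random.uniform(-s, s), random.uniform(-h, h)
        if in_hexagon(x, y, s):
            return x, y

def mean_distance(sampler):
    total = 0.0
    for _ in range(N):
        x1, y1 = sampler()
        x2, y2 = sampler()
        total += math.hypot(x1 - x2, y1 - y2)
    return total / N

print(f"square : {mean_distance(sample_square):.4f}")
print(f"hexagon: {mean_distance(sample_hexagon):.4f}")
```

The hexagon comes out only about 1-2% shorter, which fits the "probably rather small performance gain" above.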
No you can't. Anyone who's spent any time breaking glass into specific shapes knows how difficult it is. Glass can't handle the force required to break it in one go. Multiple perfect passes have to be made in order to do it with reasonable yield. Having corners inside your fault lines is asking for trouble. Chips aren't that much different.
It isn’t a question of cutting at an angle, it is a question of cutting in a specific direction all the way across the wafer. Hexagons demand that you change direction multiple times while cutting at chip-level precision across a wafer you are trying to hold in place with the same amount of accuracy.
I'm not sure how chips are cut from a wafer, but long straight cuts can be used for a grid. If it's more like laser cutting via CNC then shape might not matter.
I thought wafers were round in part because edges get damaged or contaminated in handling, and one would not want to have a real chip on the edge be damaged that way?
The spin-coating process that applies resist also favors round wafers, but if they've figured that part out, it's a win, because the precision positioning equipment is limited to a square X/Y stage. With round wafers they lose quite a bit of that space, and the wafers are cheap enough that wasting some edges isn't a big deal compared to the reduced overhead per chip.
Sure but cutting down larger ingots only affects the vendor, and these rectangular wafers could be handled by the same (or at least similarly sized) equipment for all the other downstream processes. It's way easier than retooling everything to a larger circular diameter in one go.
The problem with that is that you can't do it by making straight chord slices of the wafer - each cut has to terminate or it will go through the middle of the next hexagon. But it does sound plausible - surely lasers could do this more easily than retooling the whole lithography pipeline for rectangular wafers. But you'd think they would have thought of the idea.
You could do triangular chips with straight cuts. But I think that would divide the area more finely, which is the opposite of what they need.
It varies; if you look at Meteor Lake [1], the different dies range from 27 to 100 sq mm.
(Compared to the H200, which is an insane 814 sq mm.)
But I don't see any obvious advantage to small triangular dies. Once the dies are small, it doesn't make much difference vs rectangular as to how many you can pack into a circle (for a given die area), and rectangles are much more convenient.
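A rough back-of-the-envelope supports that. Here's a sketch, assuming square dies on a naive grid with no scribe lanes or edge exclusion (the die areas are just the ones mentioned above):

```python
import math

# Count square dies that fit entirely inside a circular wafer when placed on a
# naive grid. Ignores scribe lanes, edge exclusion zones, and smarter placement.
def gross_dies(wafer_diameter_mm, die_area_mm2):
    side = math.sqrt(die_area_mm2)           # treat the die as a square
    r = wafer_diameter_mm / 2.0
    n = int(wafer_diameter_mm // side)       # grid cells across the wafer
    count = 0
    for i in range(n):
        for j in range(n):
            x0 = -r + i * side               # lower-left corner of this cell
            y0 = -r + j * side
            corners = [(x0, y0), (x0 + side, y0),
                       (x0, y0 + side), (x0 + side, y0 + side)]
            # the die fits if all four corners are inside the wafer
            if all(math.hypot(x, y) <= r for x, y in corners):
                count += 1
    return count

wafer_area = math.pi * 150.0**2              # 300mm wafer
for area in (27, 100, 814):                  # die areas (sq mm) mentioned above
    dies = gross_dies(300, area)
    print(f"{area:>4} sq mm: {dies:>4} dies, "
          f"{100.0 * dies * area / wafer_area:.0f}% of wafer area used")
```

With these assumptions the two smaller dies use over 90% of the wafer area, while the 814 sq mm die only manages around 75%, which is the kind of edge loss that drives the rectangular-panel argument for big interposer-class parts.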
We’re all just wasting time until someone figures out how to make wafers into shells that nest together into spheres, right? People talk about the end of Moore’s law and such, but we’ve still got a whole other dimension to work with…
Gosh, if they let me handle this chip design stuff, I’d have it figured out in no time! Looks easy.
Sometimes I daydream about what would happen if FTL communications were possible, but only over short millimeters. It’d seem useless right?
Until someone figured out how to put it in chips! Faster processing and memory.
Also there would be a poetry to FTL via quantum entanglement being possible only as a speculative "guess", similar to Spectre but on the quantum hardware of the universe. Sure FTL signals might be impossible, but guessing at FTL signals might not be. ;)
The idea of a time travel loop in the processor reminds me of a talk by Damian Conway, in which he shows how avant-garde Perl code could exploit sci-fi hardware.
"Temporally Quaquaversal Virtual Nanomachine Programming In Multiple Topologically Connected Quantum-Relativistic Parallel Spacetimes... Made Easy!"
Interesting idea, although if true then time travelling even a second earlier/later would likely put you several hundred km in the sky (or the ground). I guess any potential time travelling device would have a reference point located in itself, therefore prohibiting this effect.
Stupid question: could you literally just put a grid of coolant tubes through a cube processor? Think like the shape of control rods for a nuclear reactor. Power supply is also tricky with a cube chip, but could you electrify the coolant flowing through the tubes? Half of the tubes positive, half negative. So the tubes through the cube double up thermal and electrical conductance.
EDIT: Stupid idea #2: what if you also used Peltier cooling to route heat out of hot spots?
You could. Tighter cooling integration for denser ICs is an area of active research but is something that needs to be economical at scale to matter. If a rack full of flat chips does more work per dollar than a complicated-to-manufacture 3d-stacked coolant-permeable IC, there's not a very strong argument for building them.
Peltiers are inefficient as all hell and not likely to be part of such a tightly integrated solution.
And to supply power: some of the crazy powerful AI chips, like Tesla's Dojo and the Cerebras chip, need significant copper under the chip to get enough power in. I think the Cerebras WSI chip is something like 5 kW; at low voltage that's a ton of wires.
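To put a rough number on "a ton of wires": the 5 kW figure is from above, while the core voltage and copper current density below are assumed ballpark values, not actual Dojo or Cerebras specs.

```python
# Back-of-the-envelope current for wafer-scale power delivery.
power_w = 5_000     # total power (W), figure quoted above
core_v = 0.8        # assumed core supply voltage (V), ballpark only
j_cu = 5.0          # assumed allowable copper current density (A/mm^2)

current_a = power_w / core_v          # total supply current
copper_mm2 = current_a / j_cu         # copper cross-section needed at j_cu

print(f"~{current_a:,.0f} A at {core_v} V")
print(f"~{copper_mm2:,.0f} mm^2 of copper cross-section at {j_cu} A/mm^2")
```

Thousands of amps at sub-volt levels, which is why all that copper has to sit directly under the chip.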