> Apple differentiates the majority of their products by generation rather than binning.
This simplifies things for consumers, but how do you make chips without binning? Are they all just that reliable? Do they all have extra cores? Maybe Intel bins more because they can squeeze out a significantly better price for more cores and cache, but Apple's margins are already so high they don't care?
So, there are a few instances where Apple does bin their products:
1. "7-core GPU" M1 Macs, which have one of the eight GPU cores disabled for yield
2. The A12X, which also had one GPU core disabled (which was later shipped in an 8-core GPU configuration for the A12Z)
3. iPod Touch, which uses lower-clocked A10 chips
It's not like Apple is massively overbuilding their chips or has a zero defect rate. It's more that Intel is massively overbinning their chips for product segmentation purposes. Defect rates rarely, if ever, fit a nice product demand curve. You'll wind up producing too many good chips and not enough bad ones, and this will get worse as your yields improve. Meanwhile, the actual demand curve means that you'll sell far more cheap CPUs than expensive ones. So in order to meet demand you have to start turning off or limiting perfectly working hardware just to make a worse product.
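To make that mismatch concrete, here's a minimal sketch using the standard Poisson approximation for random defects; the defect density and die area are illustrative numbers, not figures from Apple or Intel.

```python
# A minimal sketch of the yield-vs-demand mismatch: as a process matures,
# the share of naturally flawed dies shrinks, so the cheap SKU eventually
# has to be filled by fusing off working hardware. Numbers are illustrative.
import math

def perfect_die_fraction(defects_per_cm2: float, die_area_cm2: float) -> float:
    """Fraction of dies with zero random defects (Poisson yield model)."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

DIE_AREA = 1.2  # cm^2, made-up

for density in (0.5, 0.2, 0.1):  # defect density improving over time
    perfect = perfect_die_fraction(density, DIE_AREA)
    print(f"defect density {density:.1f}/cm^2: "
          f"{perfect:.0%} perfect dies, {1 - perfect:.0%} with a defect somewhere")
```

Under these made-up numbers the flawed share drops from roughly 45% to 11% as the process matures, while demand for the cheaper SKUs doesn't shrink to match.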
Apple doesn't have to do this because they're vertically integrated. The chip design part of the business doesn't have to worry about maximizing profit on individual designs - they just have to make the best chip they can within the cost budget of the business units they serve. So the company as a whole can afford to differentiate products by what CPU design you're getting, rather than what hardware has been turned off. Again, they aren't generating that many defects to begin with, and old chips are going to have better yields and cost less to make anyway. It makes more sense for Apple to leave a bit of money on the table at the middle of the product stack and charge more for other, more obviously understandable upgrades (e.g. more RAM or storage) at the high end instead.
I believe that Apple is effectively doing some binning with the M1: The intro MacBook Air has one less GPU core than the higher-config version or the MacBook Pro.
It would make sense if those were identically-made M1s where one GPU core didn't test well and thus had its fuses blown. Between the CPU and the GPU, the GPU cores are almost certainly larger anyway, so a random defect is more likely to land in one of them.
Binning requires more design work for the chip. I would guess the M1 was designed rapidly, and they probably decided that hundreds of different bins for different types of defects weren't worth the complexity if it meant delaying tape-out by a few weeks. It also adds product complexity (customers would be upset if some MacBooks had hardware AES and others didn't, leading to some software being unusably slow seemingly at random).
How rapidly? And then how come it has such spectacular performance? Or were the shortcomings of the x86 architecture so obvious all along, but nobody had the resources to really give a modern architecture a go?
Or maybe the requirements were simply a lot more exact, concrete, and clear? (But the M1 performs well in general, no?)
Apple has always binned; they just don’t publicly announce it all the time. For example, iPod Touch has been historically underclocked compared to the equivalent chip in iPhone or iPad.
That struck me as an extremely odd metric to differentiate products by, given its low relevance to non-technical users who don't know what a GPU core even is (except gamers, but they are not buying iMacs). Additionally, most people are going to think adding one more core is hardly worth the price upgrade, and it's quite strange to see an odd number of cores.
Depends; remember that 5nm processes are still fairly new overall, so we have no idea of yields. But if 1/5th of the chips are flawless, 1/5th have one faulty GPU core, and the rest have more errors (either CPU defects or two or more faulty GPU cores), then having twice the number of chips available to sell (some at a slightly lower price point) might make perfect sense.
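As a rough sketch of that split (the 1/5 figures above are guesses, not real yield data), accepting dies with one bad GPU core doubles the number of sellable chips per wafer:

```python
# Rough arithmetic for the hypothetical yield split above; the 1/5 fractions
# and the dies-per-wafer count are illustrative, not real data.
DIES_PER_WAFER = 1000

flawless = DIES_PER_WAFER * 1 / 5          # all 8 GPU cores work -> 8-core SKU
one_bad_gpu_core = DIES_PER_WAFER * 1 / 5  # one bad GPU core -> 7-core SKU
scrap = DIES_PER_WAFER * 3 / 5             # CPU defects or 2+ bad GPU cores

print(flawless)                    # 200 sellable dies without binning
print(flawless + one_bad_gpu_core) # 400 sellable dies with the 7-core bin
```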
It will have a far greater than 1/8th performance impact.
When the work is split into a power-of-two number of equal chunks, having 7 cores instead of 8 could roughly halve performance: either 8 chunks take two passes on 7 cores, or the work is split into 4 chunks and 3 cores sit idle.
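A quick sketch of that effect, assuming the work is dispatched as equal-sized chunks in synchronized waves (a simplification of what a real GPU scheduler does):

```python
# Why losing one core out of eight can cost far more than 1/8th: with a
# power-of-two number of equal chunks, 7 cores need an extra, mostly-idle
# wave (or leave cores unused if the split stays at a power of two).
import math

def waves(chunks: int, cores: int) -> int:
    """Synchronized waves needed to run `chunks` equal-sized work items."""
    return math.ceil(chunks / cores)

for chunks in (4, 8, 16):
    w8, w7 = waves(chunks, 8), waves(chunks, 7)
    print(f"{chunks:>2} chunks: {w8} wave(s) on 8 cores, "
          f"{w7} wave(s) on 7 cores, slowdown {w7 / w8:.2f}x")
```

With 8 chunks that's one wave on 8 cores versus two on 7, i.e. roughly half the throughput, unless the workload can be rebalanced into uneven chunks.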
Well, GPU data structures aren't always a power of two, right? There's more to a workload than textures. I know for a fact that vertex counts (vertex shaders?) and screen sizes (fragment shaders?) will rarely be exactly a power of two.
Isn't cache aliasing (all your addresses hashing onto the same cache sets) still an issue for power-of-two sizes? You'd have to mess with padding to figure out what's fastest for each chip regardless of core count.
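Here's a toy model of that aliasing concern, with made-up cache parameters (64-byte lines, 128 sets) rather than any particular chip's geometry: a power-of-two row stride maps the start of every row to the same cache set, while padding each row by one line spreads the rows across sets.

```python
# Toy model of power-of-two stride aliasing in a set-indexed cache.
# LINE_BYTES and NUM_SETS are illustrative, not a specific chip's geometry.
LINE_BYTES = 64
NUM_SETS = 128

def cache_set(address: int) -> int:
    """Cache set index for a byte address in a simple set-indexed cache."""
    return (address // LINE_BYTES) % NUM_SETS

def sets_touched(row_stride_bytes: int, rows: int = 16) -> int:
    """How many distinct sets the first byte of each row lands in."""
    return len({cache_set(r * row_stride_bytes) for r in range(rows)})

print(sets_touched(8192))       # power-of-two stride: 1 set, maximal conflicts
print(sets_touched(8192 + 64))  # pad each row by one line: 16 distinct sets
```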