> One of the reasons that Intel is falling so far behind is that they can't keep up with TSMC (and maybe others as well) on the fab side
Actually, it's more that they bit off way more than they could chew when they started the original 10nm node, which would have been incredibly powerful if they had managed to pull it off. But they couldn't, so they stagnated on 14nm and had to keep refining that node forever and ever. They also stagnated on the microarchitecture, because Skylake was amazing compared to everything else (by cutting corners on speculative execution, yes), so all the following Lakes were just rehashes of Skylake.
Those were bad decisions tied to Intel not solving the 10nm node (remember tick-tock? Which then became process-architecture-optimization? And then it was just tick-tock-tock-tock-tock forever and ever), and to insisting on a microarchitecture that, as time went by, started to show its age.
Meanwhile AMD was running from behind, but they had clearly identified their shortcomings and how to tackle them effectively. Having the option to manufacture with either GlobalFoundries or TSMC was just another good decision, but not really a game changer until TSMC showed that 7nm was not just a marketing fad but a clearly superior node to 14nm+++ (and a good competitor to 10nm+, which Intel is still ironing out).
That brings us to 2020, where AMD is about to beat them hard both on mobile (for the first time ever) and yet again on desktop, with "just" a new microarchitecture (Zen 3, coming late 2020). The fact that this new microarchitecture will be manufactured on 7nm+ is just icing on the cake; even if AMD stayed on the 7nm process, they'd still have a clear advantage over Zen 2 (their own previous design, of course) and over anything Intel can put in front of them.
That brings us to Apple. Apple is choosing to build its own chips for notebooks not because there's no good x86 part, but because they can and want to. This is simply further vertical integration for them, and this way they can couple their A-series chips ever more tightly with their software and their needs. Not a bad thing per se, but it will set Macs even further apart from a developer's perspective.
And even though computer science has improved a lot in the fields of emulation, cross compilers, and whatever other clever tricks we can think of to get x86-over-ARM, I think in the end this move will seriously affect software that is developed multiplatform (that is, mac/windows/linux; take two and ignore the other). It's a milder version of the debacle we've already seen with consoles and PC games.
The PC, the Xbox 360, and the PS3 were three very different platforms back in 2005-2006. And while the PS3 had a monster processor that really was a "supercomputer on a chip" (for its time), it was extremely alien. Games developed to be multiplatform cost much more to build, because they could not have an entirely shared code base. Remember Skyrim being optimized by a mod? That was because the PC version was based on the Xbox version, but they had to turn off all compiler optimizations to get it to compile. And it shipped that way because they had to ship.
Now imagine Adobe shipping a non-optimized Mac ARM version of their products because they had to turn off a lot of optimizations to get them to compile. Will the perception be that Adobe suddenly started making bad software, or that Adobe-on-Mac is slow?
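To make that concrete, here is a minimal hypothetical sketch (not Adobe code, just an illustration) of the kind of hand-vectorized kernel image editors lean on. The x86 SSE path simply doesn't exist on ARM: you either maintain a separate NEON path or fall back to plain scalar C and eat the slowdown.

```c
/* Hypothetical example: a hand-vectorized kernel of the kind image editors
 * rely on. The SSE path is x86-only; on ARM you either maintain a separate
 * NEON path or fall back to plain scalar C (and lose the speedup). */
#include <stddef.h>
#include <stdio.h>

#if defined(__SSE__)
#include <xmmintrin.h>
static void add_buffers(float *dst, const float *a, const float *b, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4)   /* 4 floats per 128-bit SSE register */
        _mm_storeu_ps(dst + i, _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
    for (; i < n; ++i) dst[i] = a[i] + b[i];   /* scalar tail */
}
#elif defined(__ARM_NEON)
#include <arm_neon.h>
static void add_buffers(float *dst, const float *a, const float *b, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4)   /* 4 floats per 128-bit NEON register */
        vst1q_f32(dst + i, vaddq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
    for (; i < n; ++i) dst[i] = a[i] + b[i];
}
#else
static void add_buffers(float *dst, const float *a, const float *b, size_t n) {
    for (size_t i = 0; i < n; ++i) dst[i] = a[i] + b[i];   /* portable, but no SIMD */
}
#endif

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8}, b[8] = {8, 7, 6, 5, 4, 3, 2, 1}, out[8];
    add_buffers(out, a, b, 8);
    printf("%.1f\n", out[0]);   /* 9.0 on every architecture */
    return 0;
}
```

Multiply that by hundreds of kernels, plus any hand-written assembly, and the real cost behind "just recompile it for ARM" becomes clearer.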
Maybe I got a little ranty here. In the end, I guess time will tell if this was a good or a bad move from Apple.
All current Macs include a T2 chip, a variant of the A10 that handles tasks like controlling the SSD NAND, Touch ID, the webcam DSP, various security functions, and more.
The scenario you mention — an upgraded "T3" chip based on a newer architecture that would act as a coprocessor to execute ARM code natively on x86 machines — seems possible, but I don't know how likely it is.
Yeah, but what would the rationale be? They want to avoid x86 as the main CPU, so either you'd get an "x86 coprocessor to run Photoshop" (let's go with the PS example here)...
Or you'd need fat binaries for x86/ARM execution, assuming the T3 chip got the chance to run programs. Then either programs would have to be pinned to an x86 or ARM core when they start (maybe some applications could set a preference, like PS always being pinned to x86 cores), or you'd need the magical ability to migrate threads/processes from one arch to the other on the fly while keeping their state consistent... I don't think such a thing has ever even been dreamed of.
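For context on what "fat binaries" actually buy you: a fat (universal) binary is just a small header plus one complete executable per architecture, and the loader commits to a single slice at launch, which is exactly why a running process can't hop between ISAs. A minimal macOS-only sketch, assuming the system Mach-O headers, that lists the slices in such a binary:

```c
/* Minimal sketch (macOS-only): list the architecture slices inside a
 * "fat" (universal) binary. The on-disk fat header is big-endian. */
#include <stdio.h>
#include <arpa/inet.h>      /* ntohl */
#include <mach-o/fat.h>     /* struct fat_header, struct fat_arch, FAT_MAGIC */
#include <mach/machine.h>   /* CPU_TYPE_X86_64, CPU_TYPE_ARM64 */

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    struct fat_header fh;
    if (fread(&fh, sizeof fh, 1, f) != 1 || ntohl(fh.magic) != FAT_MAGIC) {
        fprintf(stderr, "not a fat binary (single-architecture Mach-O?)\n");
        return 1;
    }

    uint32_t n = ntohl(fh.nfat_arch);
    for (uint32_t i = 0; i < n; i++) {
        struct fat_arch fa;
        if (fread(&fa, sizeof fa, 1, f) != 1) { perror("fread"); return 1; }
        cpu_type_t cpu = (cpu_type_t)ntohl((uint32_t)fa.cputype);
        printf("slice %u: cputype 0x%x (%s), %u bytes\n", i, cpu,
               cpu == CPU_TYPE_X86_64 ? "x86_64" :
               cpu == CPU_TYPE_ARM64  ? "arm64"  : "other",
               ntohl(fa.size));
    }
    fclose(f);
    return 0;
}
```

(In practice `lipo -info <binary>` reports the same information.)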
I don't think there's any chance of having ARM and x86 coexist as "main CPUs" in the same computer without it being extremely expensive, and it would arguably defeat the purpose of having a custom-made CPU to begin with.
An x86 coprocessor is not that outlandish. Sun offered this with some of their SPARC workstations multiple decades ago, IIRC.
Doing so definitely would be counterproductive for Apple in the short-term, but at the same time might be a reasonable long-term play to get people exposed to and programming against the ARM processor while still being able to use the x86 processor for tasks that haven't yet been ported. Eventually the x86 processor would get sunsetted (or perhaps relegated to an add-on card or somesuch).
Whether it's for performance, battery life, or cost reasons, it wouldn't really make sense:
a) performance-wise, the move would be driven by having a better-performing A-series chip;
b) if they aimed at a 15W x86 part, battery life would suffer, and 6W parts don't deliver good performance;
c) for cost, they'd have to buy the Intel processor plus the infrastructure to support it (socket, chipset, heatsink, etc.).
Especially for (c), I don't think Intel would accept selling chips as coprocessors (it'd be like admitting their processors aren't good enough to be the main processor), nor would Apple put itself in a position to redesign the internals of its computers just to accommodate something it is trying to get away from.
Apple probably doesn't need the integrated GPU, so an AMD-based coprocessor could trim that off for additional power savings (making room in the power budget to re-add hyperthreading or additional cores and/or to bump up the base or burst clock speeds).
> for cost, they'd have to buy the intel processor
Or AMD.
> and the infrastructure to support it (socket, chipset, heatsink, etc)
Laptops (at least ones as thin as MacBooks) haven't used discrete "sockets"... ever, I'm pretty sure. The vast majority of the time the CPU is soldered directly to the motherboard, and indeed that seems to be the case for the above-linked APU. The heatsink is something that's already needed anyway, and these APUs typically don't need much of one. The chipset is definitely a valid point, but a lot of it can be shaved off by virtue of it being a coprocessor.
Most of it must be ARM compatible already, for the iPad version.
Also, Photoshop was first released in 1990 and has been through all the same CPU transitions as Apple (m68k/ppc/...), so presumably some architecture independence is baked in at some level.
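As a toy illustration (hypothetical code, not Adobe's) of what source-level architecture independence looks like: file data gets decoded byte by byte at an explicit width and byte order rather than by casting raw memory, so the same routine behaves identically on m68k, PowerPC, x86, or ARM.

```c
/* Hypothetical sketch of architecture-neutral I/O: decode a 32-bit
 * big-endian field byte by byte instead of casting a pointer, so the
 * result doesn't depend on the host CPU's endianness or struct layout. */
#include <stdint.h>
#include <stdio.h>

int read_u32_be(FILE *f, uint32_t *out) {
    unsigned char b[4];
    if (fread(b, 1, sizeof b, f) != sizeof b) return -1;   /* short read / EOF */
    *out = ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
    return 0;
}
```

Code written in that style recompiles cleanly for a new ISA; the painful parts are the hand-tuned SIMD and assembly paths discussed above.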