This would explain part of why Apple hasn't been pushing M2 for the data center. Its chips are a better fit for bursty human workloads than for sustained server workloads.
Apple Silicon chips aren’t for sale outside Apple, and Apple hasn’t made any products relevant to data centers since they terminated the Xserve line as part of the PowerPC -> Intel transition.
The Xserve had Intel Xeon versions, so I'm not sure why the termination of that line would have anything to do with the Intel transition: it was discontinued three years after the transition was complete.
IIRC one problem was that Mac OS X Server was not competitive. I recall some benchmarks of applications like MySQL, comparing Mac OS X Server to Linux on similar or even identical hardware, with results that could only be described as abysmal on the Apple side.
They have a CPU that's been labeled some version of fastest or most efficient, they're hungry for more revenue, but somehow have no interest in the data center market? There must be a reason.
I think this link posted above is the precise reason: there are already other ARM vendors in the server market with ballpark-similar performance, better market placement, and better business relationships. Nobody trusts Apple not to abandon servers again after Xserve, and nobody thinks they're a super great partner to work with.
Why would Amazon choose to pay more for an Apple-branded product when they could make a higher margin just doing it themselves? Hyperscalers are almost by definition at the scale where this works, and they have the volume to amortize some basic microarchitecture work. And the result will be optimized for their exact power/performance/area (PPA) targets, giving the lowest total cost of ownership (TCO).
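To make the amortization point concrete, here's a toy model; every number in it is invented purely for illustration:

    # Toy amortization model: buy a vendor chip at margin vs. design
    # in-house and amortize the one-time design cost over hyperscaler
    # volume. All figures below are made up for illustration only.
    foundry_cost = 300.0   # assumed $ per chip to fabricate, either path
    vendor_margin = 0.5    # assumed 50% vendor markup
    design_nre = 500e6     # assumed $500M one-time in-house design cost
    volume = 5e6           # assumed 5M chips deployed

    vendor_price = foundry_cost * (1 + vendor_margin)   # $450 per chip
    inhouse_cost = foundry_cost + design_nre / volume   # $400 per chip

    print(vendor_price, inhouse_cost)  # 450.0 400.0

Past some volume, the NRE term shrinks toward zero, and the in-house part is also tuned to your exact workloads, which the vendor part isn't.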
Beyond hyperscalers, why would anyone else choose Apple over Ampere, given Apple's general lack of commitment to the server market, open ecosystems, etc.? What does the cost look like now, and in the future? Consider what might happen after the relationship ends, friendly or otherwise: can you keep your business operations going in terms of procurement, committed instances, and so on? What if they won't send you any more spares, or won't sign some key artifact for you (drivers, firmware, UEFI keys, whatever)? Surely anyone involved would want to be super duper sure about this. But Apple has gotten bored and left this market before, and they're always the hot one in the relationship; they can find someone else today if they need to.
I just don't think there's much of a market for this that wouldn't rather buy an Ampere or build their own clone like Graviton.
Apple sells its products to its own markets, and is mostly uninterested in the data center and embedded markets. The support costs are large, and the margins are small-ish.
Which is too bad for the industry, but also lucky for the industry.
I mean, Apple shipped the first 64-bit ARM chip, period. Everyone mocked it at the time, but competitors also trembled in fear precisely because it was 64-bit.
Data centres care about different things in CPUs. You also don't see consumer Intel CPUs (e.g. Core i7, or even the high-end gamer CPUs / workstation Xeons) in normal data centres, even though they run x86. You do see ARM in data centres, cf. Graviton on AWS, but those are CPUs designed for data centres (lots of cores, lots of memory, etc.).
The big difference is that consumer CPUs care less about virtualisation, high core counts, or having lots of memory, and more about single-core performance and potentially not getting hot in your lap.
> consumer CPUs care less about virtualisation, high core counts, or having lots of memory, and more about single-core performance and potentially not getting hot in your lap.
This goes with what I said in my original comment.
The biggest thing is probably that data centers wouldn't want an M2, M2 Pro, M2 Max, or M2 Ultra; Apple would need a specialized chip. Right now, Apple Silicon tops out at 12 cores (8 performance and 4 efficiency). If I'm a data center, I'm likely going to prefer an AMD EPYC with 64 cores (and 128 threads) over an Apple M2 with 8 performance cores. I can slice that AMD EPYC into a lot more VMs.
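To put rough numbers on that, here's a back-of-the-envelope sketch; the 4-vCPU guest size and the 1-vCPU-per-hardware-thread mapping are just assumptions for illustration, not vendor guidance:

    # Back-of-the-envelope VM packing, assuming 4-vCPU guests and
    # 1 vCPU per hardware thread (both assumptions, not vendor specs).
    vcpus_per_vm = 4

    epyc_threads = 128   # 64 cores x 2 SMT threads
    m2_threads = 12      # 8 P-cores + 4 E-cores, no SMT

    print(epyc_threads // vcpus_per_vm)  # 32 VMs per EPYC socket
    print(m2_threads // vcpus_per_vm)    # 3 VMs per M2

Roughly an order of magnitude more guests per socket, before you even get into memory capacity.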
Apple would realistically need to make a specialty part for the data center. That would mean pulling people off its regular products and tasking them with opening up a new product line in a market Apple has historically been terrible at. Now Apple has fewer people working on iPhone, Mac, etc., and those products suffer, in order to enter a market that probably isn't a good fit for them. Heck, data centers aren't going to want CPUs soldered to the motherboard.
Not only that, it would mean tasking software people with the project. How much of your software staff is now trying to get Linux stable on M2, taking staff away from iOS/macOS?
Apple doesn't want to do it because it would be a big undertaking. It's not just "print some M2s and sell them to Amazon."
Beyond that, it'd probably be a low margin market compared to what they usually go for. With iPhone and Mac, they have huge product differentiation giving them great margins, but they wouldn't for the datacenter. Even if they're the fastest and most efficient, data center customers are looking for performance-per-dollar. Performance per watt matters in the data center, but not nearly as much as it matters in laptops and phones.
The ARM ISA would mean that they'd need to sell at a discount compared to x64 chips (even if they have better performance) because x64 is the path of least resistance for customers. So Apple would need to accept lower margins than Intel/AMD.
Plus, it would mean taking fab capacity away from iPhones and Macs. We've already seen how constrained that capacity can be. It took AMD 2 years to get to 5nm after Apple launched their 5nm processors. If Apple were to become a large data center player, they'd need to figure out how to prioritize that. For example, only the iPhone 14 Pro got the 4nm A16 processor last year - presumably because TSMC's capacity was really limited. All the rumors on 3nm seem to be similarly constrained. Apple isn't going to risk their cash-cow businesses (iPhone, Mac) for a low margin data center business so that would likely mean shipping data center CPUs that were older nodes. Heck, one of the reasons that AMD hasn't taken over the data center is that they've been a bit supply constrained - and Apple would be too.
There are lots of reasons, but it boils down to the fact that Apple would need to build things they don't currently make: a data center CPU, motherboard, case, an open boot system so people can run other operating systems, driver specs and docs for those operating systems, etc. Apple would be facing a market where the ARM ISA is a negative, margins aren't as good, and customers would be skeptical of a company whose commitment to enterprise and data centers has been terrible. Plus, Apple's performance supremacy wouldn't even be a total positive in the data center, since buyers are looking at performance per dollar and other companies would accept low margins to compete in that space.
EDIT: I'd also note that Intel's total revenue is $63B, AMD's is $24B, and Apple's is $388B. Let's say Apple is wildly successful and builds a server business as large as AMD's. Apple maybe increases its revenue by 3% (assuming that half of AMD's revenue comes from the data center). So when you say that Apple wants revenue, a new server business wouldn't get them much. More likely, Apple's data center business would be 10% the size of AMD's and increase Apple's revenue by 0.3%.
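For what it's worth, here's the arithmetic behind that estimate; the 50% data center share of AMD's revenue is, again, just an assumption:

    # Revenue impact in $B, using the figures above; the 50% data
    # center share of AMD's revenue is the comment's assumption.
    apple_revenue = 388.0
    amd_datacenter = 24.0 / 2   # assumed half of AMD's $24B

    best_case = amd_datacenter / apple_revenue           # match AMD outright
    likely_case = 0.10 * amd_datacenter / apple_revenue  # 10% of AMD's size

    print(f"{best_case:.1%} vs {likely_case:.1%}")  # 3.1% vs 0.3%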
> Right now, Apple Silicon tops out at 12 cores (8 performance and 4 efficiency). If I'm a data center, I'm likely going to prefer an AMD EPYC with 64 cores (and 128 threads) over an Apple M2 with 8 performance cores. I can slice that AMD EPYC into a lot more VMs.
I think it's worth calling out how important this is. Once you get past a certain die size and core count, interconnect or "fabric" latency and bandwidth start to have a much bigger impact on workloads than core speed and throughput, at least for code not optimized for that specific processor. At the sizes M2 ships at, Apple doesn't have to deal with that at all. AMD, on the other hand, has gone all in, hence chiplet designs. But Apple wants nothing to do with that, hence they seem to be going for very wide but limited-core-count designs.
Sure, but Apple's doing pretty well there. Think of an Apple silicon die as a chiplet. Said chiplet has a 400 GB/sec memory bus, 8 performance cores, 4 efficiency cores, and 38 GPU cores (not to mention accelerators for video encoding, matrix multiplication, and AI).
Said chiplet is sold in 1- and 2-chiplet configurations today (the Max and Ultra flavors of the chip) and has a very healthy 2.5 TB/sec chip-to-chip connection. There's no reason Apple couldn't add some glue to allow more than 2 chiplets in a package.
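As a purely hypothetical sketch of that scaling, assuming aggregate memory bandwidth grows linearly with chiplet count, which real multi-die packages rarely achieve cleanly:

    # Hypothetical aggregate memory bandwidth if Apple glued more
    # Max-class chiplets together. Linear scaling is an assumption;
    # only the 1- and 2-chiplet rows correspond to shipping parts.
    per_chiplet_gb_s = 400  # GB/s per chiplet, per the figure above

    for chiplets in (1, 2, 4, 8):
        print(f"{chiplets} chiplet(s): {chiplets * per_chiplet_gb_s} GB/s")
    # 1 -> 400 (Max), 2 -> 800 (Ultra), 4 -> 1600, 8 -> 3200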
The consumer might replace their laptop as often as once a year, a fast cadence, but they don't wake up one day and say "I actually need 3 laptops".
Whereas datacenters are now being built constantly, and any future projection would estimate that we're going to keep building more - and if you were unsure before, then the sudden popularity of training AIs and their massive demand for compute should've convinced you.
Apple aren't going to leave money on the table (otherwise they'd still be shipping iPhones with chargers and including dongles with laptops): if they're not targeting server markets, it's because their internal modelling is telling them that what they've got is at best a peer capability, rather than some kind of vast advantage, or that it would be wiped out easily by another generation of server-grade chip releases.
Apple already failed twice in the server market, with A/UX and Xserve, and decided it wasn't for them.
They would have to offer macOS on top of it to make it relevant, as they certainly aren't going to be offering their hardware to run GNU/Linux.
It happened once, with MkLinux, and that is also something management won't be keen on repeating.
If it's just to be Xserve again, no one cares about it other than companies in the Apple ecosystem; it's not worth the trouble, and they already have Xcode Cloud for that.