Just price, I'd say. AMD/Intel are used to a certain margin on their products, while the low barrier to entry for creating ARM CPUs, plus fierce competition from giants like Broadcom, keeps margins very thin in this market.
The original smartphones, like the Nokia Communicator 9110i, were x86-based.
AMD previously had very impressive low-power CPUs, like the Geode, running at under 1 watt.
Intel took another run at it with Atom and managed to make x86 phones (e.g. the Asus ZenFone) that were slightly better than contemporary ARM-based devices, but the price of their silicon was quite a bit higher than the ARM competition's. And Intel had sunk so much money into Atom, in an attempt to dominate the phone/tablet market, that they couldn't be happy just eking out a small sliver of the market by being only slightly better at a significant price premium.
I don't think it is price. Intel has had a bigger R&D budget for CPU designs than Apple. If you mean manufacturing cost, I also doubt this, since AMD and Intel chips often have a bigger die size than Apple chips yet are still slower and less efficient. See the M4 Pro vs. AMD's Strix Halo as an example where Apple's chip is smaller, faster, and more efficient.
I have not seen any evidence that Apple's chip is smaller, faster and more efficient.
Apple's CPU cores have typically been significantly bigger than any other CPU cores made with the same manufacturing process. This has not mattered for Apple, because they do not sell them to others and because they have always used denser CMOS processes than the others.
Apple's CPUs have much better energy efficiency than any others when running a single-threaded application. This is due to having a much higher IPC, e.g. up to 50% higher, and a correspondingly lower clock frequency.
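A quick back-of-envelope shows why higher IPC at a lower clock wins on energy, under the usual simplifications that dynamic power scales as $CV^2f$ and that voltage scales roughly linearly with frequency in the DVFS range (idealizations, not measurements):

$$P_{\mathrm{dyn}} \propto C V^2 f,\qquad V \propto f \;\Rightarrow\; P_{\mathrm{dyn}} \propto f^3$$

$$\mathrm{perf} = \mathrm{IPC}\cdot f:\qquad f_{\mathrm{Apple}} = \frac{f_{x86}}{1.5} \;\Rightarrow\; \frac{P_{\mathrm{Apple}}}{P_{x86}} \approx \left(\frac{1}{1.5}\right)^{3} \approx 0.3$$

So at matched single-threaded performance, a 50% IPC advantage can plausibly buy roughly 3x lower core power; real cores deviate from the cubic rule, but the direction holds.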
On the other hand, energy efficiency when running multithreaded applications has always been very close to Intel/AMD's, with the differences explained by Apple's earlier access to up-to-date manufacturing processes.
Besides efficiency in single-threaded applications, the other place where Apple wins on efficiency is total system efficiency: Apple devices typically have lower idle power consumption than the competition, due to integrated system design and the use of high-quality components, e.g. efficient displays. This better total system efficiency is what leads to longer battery life, not better CPU efficiency.
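To put numbers on that (purely illustrative figures, not measurements): battery life is capacity divided by average draw, and on a mostly-idle workload the platform floor dominates:

$$t = \frac{E_{\mathrm{battery}}}{P_{\mathrm{avg}}}:\qquad \frac{70\,\mathrm{Wh}}{5\,\mathrm{W}} = 14\,\mathrm{h} \quad\text{vs.}\quad \frac{70\,\mathrm{Wh}}{8\,\mathrm{W}} \approx 8.8\,\mathrm{h}$$

A 3 W difference in display and platform idle power swings battery life far more than a comparable difference in CPU power does at such duty cycles.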
Apple's CPUs are fast for the kinds of applications most home users need, but for applications with greater computational demands, e.g. big-number arithmetic or array operations, they are inferior to AMD/Intel CPUs with AVX-512.
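For what "array operations" means concretely, here is a minimal C sketch of the kind of kernel where AVX-512 shines (the function name and numbers are mine, for illustration); each fused multiply-add processes eight doubles, where a 128-bit NEON instruction processes two:

```c
/* daxpy: y[i] += a * x[i], 8 doubles per AVX-512 instruction.
 * Build with e.g. `gcc -O2 -mavx512f`; needs AVX-512 hardware to run. */
#include <immintrin.h>
#include <stddef.h>
#include <stdio.h>

static void daxpy_avx512(double a, const double *x, double *y, size_t n)
{
    __m512d va = _mm512_set1_pd(a);            /* broadcast a to all 8 lanes */
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m512d vx = _mm512_loadu_pd(x + i);   /* load 8 doubles */
        __m512d vy = _mm512_loadu_pd(y + i);
        vy = _mm512_fmadd_pd(va, vx, vy);      /* vy = va * vx + vy, fused */
        _mm512_storeu_pd(y + i, vy);
    }
    for (; i < n; i++)                         /* scalar tail */
        y[i] += a * x[i];
}

int main(void)
{
    double x[16], y[16];
    for (int i = 0; i < 16; i++) { x[i] = i; y[i] = 1.0; }
    daxpy_avx512(2.0, x, y, 16);
    printf("%f %f\n", y[0], y[15]);            /* prints 1.000000 31.000000 */
    return 0;
}
```

The per-instruction width gap is 4x; actual throughput depends on execution-port counts and clocks, so treat this as an illustration rather than a benchmark.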
You say you've never seen evidence that Apple's chips are smaller, faster, more efficient but you confidently proclaim that Apple CPU cores are typically bigger on the same node.
Where is your source?
There are plenty of die shots showing that Apple's P-cores are either smaller than or around the same size as AMD's and Intel's P-cores. Plenty of people on Reddit have done the analysis as well.
Qualcomm has a massive "value add" because they own the modem, as well as a doom stack of patents on all things cellular.
You need a modem if you want to make a smartphone. And Qualcomm makes sure to, first, make some parts of the modem a part of their SoC, and second, never give a better deal on a standalone modem than on a modem and SoC combo.
Sure, AMD could make their own modem, but it took Apple ages to develop a modem in-house. And AMD could partner with someone like MediaTek and use their hardware, but again, that would require MediaTek to prop up its competition in the SoC space, so don't expect good deals.
But there's not a huge market (and therefore FOSS dev time spent on it) for emulating AArch64 on x86 the way there is for x86 on AArch64. So if your options are building your own AArch64 emulation for x86 (or dropping a fortune into an existing FOSS option), or building something based on AArch64 and using the existing x86 emulation implementations, one of these has much more predictable costs and outcomes today.
If an ARM device both suits the goals and has lower risk, there's little upside other than forcing the project to exist.
And since there are very few pieces of AArch64-exclusive software that Valve is trying to support, that's not a goal that benefits the project.
(If I were guessing without doing much research: Switch emulators might represent the largest open-source investment in running AArch64 code performantly on x86 systems, but that's certainly not a market segment Valve is targeting, so...)
To have flexibility, and to not be tied to a CPU choice simply because the software side is lacking. Besides, this kind of project would benefit more scenarios than their immediate need.
They didn't need to back a bunch of the projects they backed (radv/aco is a big example, where you could claim there was redundancy, so they weren't strictly necessary), but the results paid off, even to the point where AMD is dropping its own amdvlk in favor of radv/aco.
Strictly speaking, they didn't need to back zink (OpenGL on Vulkan) either, but they decided to, and it gives them the long-term ability to keep OpenGL even if native drivers go away, as well as the upside of bringing up OpenGL on any Vulkan-capable target out of the box.
It's just something in their style to do when there seems to be an obvious gap.
OPNsense is decent too. The problem is that running anything open on those APs will still be a mess unless they support something like OpenWrt ;)
Separating the router from the AP was something I considered too when building a 10 Gbps network, since I haven't found any Wi-Fi router that could also handle 10 Gbps wired without some accelerator chip that needs a non-upstream mess to work.
Meanwhile, Wine fixed the 32-bit OpenGL path performance problem in the new WoW64 mode, so you no longer need 32-bit Linux dependencies to run 32-bit games in Wine (this affects, for example, DX7 games that run through OpenGL via WineD3D).