> I don’t understand how a demographic as technically intelligent as HN could make the flawed assumption that GBs of RAM in isolation of the entire system is all that matters
I didn't claim it was all that matters, and I haven't seen anyone else do that either.
I do take the point of the rest of your comment though, and it may well be the case that Apple does some clever stuff. But realistically there is only so far that optimisations can take it - DDR4 is DDR4, and it's the workload that makes the most difference.
> I wouldn’t be surprised if Apple used telemetry to make an intelligent bet around the amount of RAM they’d need.
Your average Apple user is likely not a developer though (as others are very often pointing out on HN, whenever they make non-dev-friendly hardware choices). Furthermore, I would think such telemetry would be a self-fulfilling prophecy; if you have a pitiful 8GB of RAM, you're not going to punish yourself by trying to run workloads you know it wouldn't support.
> But realistically there is only so far that optimizations can take it - DDR4 is DDR4, and it's the workload that makes the most difference.
Except the M1 is a novel UMA architecture where the GPU & CPU share RAM. There are all sorts of architectural improvements you get out of that, where you can avoid memory transfers wholesale: there's no "texture upload" phase, and reading data back from the GPU is just as fast as sending data to the GPU. It wouldn't surprise me if they leveraged that heavily to get improvements across the SW stack. The CPU cache architecture also plays a big role in the actual performance of your RAM. Admittedly I haven't seen evidence that the M1 has any special sauce there; I'm just responding to your claim that "DDR4 is DDR4" (relatedly, DDR4 comes in different speed SKUs).
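To make the "no transfer" point concrete, here's a minimal sketch of what that looks like with Metal on Apple Silicon (the buffer size and elided GPU dispatch are illustrative, not a real workload): a `.storageModeShared` buffer is a single allocation that both the CPU and GPU address directly, so there's no upload step and no readback copy.

```swift
import Metal

// Minimal sketch, assuming a Metal-capable Apple Silicon machine.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

let count = 1024

// On unified memory, a shared buffer is visible to CPU and GPU alike.
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// CPU writes go straight into memory the GPU can read; no staging copy.
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { ptr[i] = Float(i) }

// ... a compute encoder would dispatch GPU work against `buffer` here ...

// "Readback" is just dereferencing the same pointer, not a transfer.
print(ptr[0], ptr[count - 1])
```

On a discrete GPU the equivalent path would need a staging buffer and a blit in each direction; here both directions collapse to plain pointer access.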
> Your average Apple user is likely not a developer though (as others are very often pointing out on HN, whenever they make non-dev-friendly hardware choices). Furthermore, I would think such telemetry would be a self-fulfilling prophecy; if you have a pitiful 8GB of RAM, you're not going to punish yourself by trying to run workloads you know it wouldn't support.
No one is going to model things as "well users aren't using that much yet". You're going to look at RAM usage growth over the past 12 years & blend that with known industry movements to get a prediction of where you'll need to target (a rough sketch of that kind of projection is below).

It's also important to remember that RAM isn't free (even setting aside the $). I don't know if it matters as much for laptop use-cases, but for mobile phones you 100% care about having as little RAM as you can get away with, since it dominates your idle power. For laptop/iMac use-cases I would imagine they're more concerned with heat dissipation, since this RAM is part of the CPU package. RAM size does matter for the iPad's battery life, & I bet the limited number of configs has to do with making sure they only have to build a limited set of M1 SKUs that they can shove into almost all devices, to really crank down the per-unit costs of these "accessory" product lines (accessory in the sense that their volumes are a fraction of what even AirPods ships).
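For illustration, the back-of-envelope version of that projection might look like this. The 6 GB baseline, 15%/year growth rate, and 5-year horizon are invented numbers for the sketch, not Apple data or telemetry:

```swift
import Foundation

// Hypothetical inputs: a made-up median working set and growth rate.
let baselineGB = 6.0     // assumed median RAM working set today
let annualGrowth = 0.15  // assumed year-over-year growth
let supportYears = 5.0   // how long the machine should stay viable

// Compound the growth rate over the support window.
let projected = baselineGB * pow(1 + annualGrowth, supportYears)
print(String(format: "Projected working set in %.0f years: %.1f GB",
             supportYears, projected))
// ~12.1 GB: an 8 GB SKU would lean on swap/compression by end of life,
// while a 16 GB SKU clears it with headroom.
```

The point isn't the specific numbers; it's that the bet gets made off a projected curve plus cost/power constraints, not off what today's users happen to be doing on today's hardware.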