You joke, but I think this is actually true. If companies gave the developers of client-facing software slower computers, the resulting software would end up faster.
And it's why some of my past upgrades were driven by the web browser over-taxing the machine on certain sites, while nearly everything else remained perfectly performant, with absolutely no complaints.
Yep, if you're running big IDEs (e.g. Rider/IntelliJ, Visual Studio), containers, and VMs, 32GB is really a must. There always seem to be people in these threads claiming that 16GB or even 8GB is enough - I just don't understand how that could possibly be true for most of the HN demographic.
Do you think most of the HN demographic is actually running big IDEs, containers, and VMs at the same time? I'm personally a CS student and have never had to run more than a few lightweight Docker containers + maybe one running some kind of database + VS Code, and that has been working fine on a laptop with 8GB and Pop!_OS. I could imagine that a lot of other people on HN are also just generally interested in tech but not necessarily running things that require a lot of memory locally.
CS PhD student here, running a laptop with 16GB of RAM. I don't train ML models on my machine, but whenever I have to debug stuff locally, I realize precisely how starved for RAM my computer is. I start by closing down FF. Just plain killing the browser. RAM down from 12GB to 7. Then I close the other IDE (I'm usually working on two parallel repos). 7GB to 5. I squeeze out the last few megabytes by killing Spotify, Signal, and other forgotten terminal windows. Then I start to load my model into memory. Five times out of seven, it's over 12-13GB, at which point my OS stops responding and I have to force-reboot the system, cursing, arms flailing.
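(Not the commenter's actual tooling, but a minimal sketch of the kind of guard that avoids the force-reboot: check installed RAM against the model's estimated footprint before loading. The ~12GB figure comes from the anecdote above; the threshold and names are hypothetical.)

    import Foundation

    // Hypothetical guard (not the commenter's code): refuse to load a model
    // whose estimated footprint would push the machine into swap and lock it up.
    let estimatedModelBytes: UInt64 = 12 * 1024 * 1024 * 1024   // ~12GB, per the anecdote
    let physicalBytes = ProcessInfo.processInfo.physicalMemory  // total installed RAM

    // Leave headroom for the OS and whatever survived the app-killing spree;
    // the 3/4 threshold is an arbitrary illustrative choice.
    if estimatedModelBytes > physicalBytes * 3 / 4 {
        print("~12GB model vs \(physicalBytes >> 30)GB of RAM - aborting instead of freezing.")
    } else {
        print("Enough headroom, loading model...")
    }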
If you're on macOS, there's no such thing as a “lightweight Docker container”. The container itself might be small, but Docker itself is running a full Linux virtual machine. In no world is that “lightweight”.
I was going to say: I'm on a 16GB 2015 MacBook Pro (not sure what to upgrade to), and Docker for Mac is _brutal_ on my machine. I can't even run it and work on other things at the same time without frustration.
I run something like 50 Chrome tabs, a half dozen IntelliJ panes, YouTube, Slack, and a bunch of heavy containers at the same time - and that's just my dev machine.
My desktop is an ML container train/test machine. I also SSH into two similar machines and a 5-machine, 20-GPU k8s cluster. I pretty much continuously have dozens of things building/running at once.
Yeah. I suspect most people here are software engineers (or related), and IDEs, Docker, and VMs are all standard tools in the SE toolbox. If they aren't using Docker or VMs, then they're probably doing application development, which is also pretty damn RAM-hungry.
I do most of my development in Chrome, bash, and Sublime Text, and I'm still using 28GB of RAM.
Depending on the requirements of your job - just a single VS instance with Chrome, Postman, and Slack open takes around 5GB. Teams adds another GB or so. The rest probably another 2GB (SSMS and the like).
On my particular team we also run a Docker setup that includes Elasticsearch, SQL Server, RabbitMQ, and Consul - I had to upgrade my work laptop from 16GB to 24GB to make it livable.
Wouldn't you just have all the heavy stuff on a server? I don't understand the goal of running server-type apps like SQL Server on a desktop/laptop.
I don’t understand how a demographic as technically intelligent as HN could make the flawed assumption that GBs of RAM, in isolation from the rest of the system, is all that matters. Consider the fact that iOS devices ship with half the RAM of Android devices yet feel as responsive, have better battery life, and have better performance.
The Apple stack is better optimized to take advantage of the hardware they have. One reason is that having so few SKUs to worry about focuses the engineering team (for example, in the past, engineers would complain internally about architectural design missteps that couldn’t be fixed because 32-bit support hadn’t been dropped yet and was pushed out yet another year).

Now, obviously the laptop use case is trickier, since the source code is the same as the x86 version. It’s possible that the ARM code generation is much more space-efficient (built with -Oz where it was previously likely -O3). It’s also possible that they have migrated over to iOS frameworks to an even greater extent than they were able to in the past, leveraging RAM optimizations that hadn’t been ported to macOS. There could also be RAM-usage optimizations built around knowing you will always have a blazing-fast NVMe drive: you may not even need to keep data cached around and can just load straight from disk.

Sure, not all workloads will fit (and if running x86 emulation, the RAM hit might be worse). But for a lot of use cases, even many dev ones, it’s clearly enough. I wouldn’t be surprised if Apple used telemetry to make an intelligent bet around the amount of RAM they’d need.
> I don’t understand how a demographic as technically intelligent as HN could make the flawed assumption that GBs of RAM, in isolation from the rest of the system, is all that matters
I didn't claim it was all that matters, and I haven't seen anyone else do that either.
I do take the point of the rest of your comment though, and it may well be the case that Apple does some clever stuff. But realistically there is only so far that optimisations can take it - DDR4 is DDR4, and it's the workload that makes the most difference.
> I wouldn’t be surprised if Apple used telemetry to make an intelligent bet around the amount of RAM they’d need.
Your average Apple user is likely not a developer though (as others are very often pointing out on HN, whenever they make non-dev-friendly hardware choices). Furthermore, I would think such telemetry would be a self-fulfilling prophecy; if you have a pitiful 8GB of RAM, you're not going to punish yourself by trying to run workloads you know it wouldn't support.
> But realistically there is only so far that optimisations can take it - DDR4 is DDR4, and it's the workload that makes the most difference.
Except the M1 is a novel UMA architecture where the GPU and CPU share RAM. There are all sorts of architectural improvements you get out of that, where you can avoid memory transfers wholesale: there's no "texture upload" phase, and reading data back from the GPU is just as fast as sending data to the GPU. It wouldn't surprise me if they leveraged that heavily to get improvements across the SW stack. The CPU cache architecture also plays a big role in the actual performance of your RAM. Admittedly, the M1 may not have any special sauce there that I've seen; I'm just responding to your claim that "DDR4 is DDR4" (relatedly, DDR4 comes in different speed SKUs).
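(To make the UMA point concrete, here's a minimal Metal sketch - my illustration, not anything Apple has published - of why there's no "texture upload" phase: a .storageModeShared buffer is a single allocation that both the CPU and GPU address directly, so CPU writes are GPU-visible without a copy to dedicated VRAM.)

    import Metal

    // Minimal sketch: on a UMA machine like the M1, the CPU and GPU share one
    // pool of physical memory, so a shared-mode buffer needs no upload step.
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("No Metal device available")
    }

    let count = 1024
    let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                                   options: .storageModeShared)!

    // CPU-side writes land in the same physical memory the GPU will read.
    // On a discrete-GPU system this data would need a blit into VRAM instead.
    let ptr = buffer.contents().bindMemory(to: Float.self, capacity: count)
    for i in 0..<count { ptr[i] = Float(i) }

    // Reading results back after a GPU kernel runs is likewise a plain memory
    // read rather than a transfer across a bus.
    print("Shared CPU/GPU buffer: \(buffer.length) bytes, zero-copy")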
> Your average Apple user is likely not a developer though (as others are very often pointing out on HN, whenever they make non-dev-friendly hardware choices). Furthermore, I would think such telemetry would be a self-fulfilling prophecy; if you have a pitiful 8GB of RAM, you're not going to punish yourself by trying to run workloads you know it wouldn't support.
No one is going to model things as "well, users aren't using that much yet". You're going to look at RAM usage growth over the past 12 years and blend that with known industry movements to get a prediction of where you'll need to target.

It's also important to remember that RAM isn't free (even setting aside the dollar cost). I don't know if it matters as much for laptops, but for mobile phones you 100% want as little RAM as you can get away with, since it dominates your idle power draw. For the laptop/iMac use cases, I'd imagine they're more concerned with heat dissipation, since this RAM is part of the CPU package. RAM size does matter for the iPad's battery life, and I bet the limited number of configs is about building a limited set of M1 SKUs they can shove into almost all devices, to really crank down the per-unit costs of these "accessory" product lines (accessory in the sense that their volumes are a fraction of what even AirPods ships).
Anecdotal: I write client software for bioinformatics workflows, usually web apps or CLIs. Right now, with my mock DB, Emacs, browser, and tooling, I’m using ~5GB of RAM. At most I’ll use ~8GB by the end of the day.
I also shut down at the end of the day and make judicious use of browser history and bookmarks. If I were compiling binaries regularly I guess I could see the use in having more RAM, but as far as I’m concerned 8GB is enough, and so far people find what I put out performant.
Yeah, 32GB is my baseline now. I could probably work on a 16GB machine, but the last time I was using an 8GB machine the memory was nearly constantly maxed out.
(Curious) Why? VS Code, Chrome, and a terminal running a local server will usually do fine with 16GB or less. Are you testing with or querying massive data sets locally or something?
I'm typing this response on an 8GB M1. It's great, but it's no magic. Its limitations do start to show in memory-intensive and heavily multi-threaded workloads.
Getting some downvotes, which I attribute to reasonable skepticism, so hopefully this will allay your concerns.
One example: I was trying to download all the dependencies for the Kafka project via Gradle with IntelliJ, while watching a video on YouTube and working on another project in Visual Studio Code. The video started to stutter, then stopped, and Visual Studio Code became unresponsive. I basically had to shut a bunch of stuff down and go to lunch.
I haven't seen a modern computer struggle with that kind of workload before.
At the end of the day, the Intel MacBooks of the last few years have had terrible, low-performance processors that get thermally constrained and have abysmal, inconsistent battery life. So if all you use is Macs, the M1 is going to feel amazing.