
So my sister does professional photography, and went from a 16GB MacBook Pro (2016) to an 8GB Air, and reports no decrease in productivity - quite the opposite: in her experience all Adobe programs run much faster, and the machine is quieter and lighter to boot. So yeah, I'm not sure - maybe the RAM amount isn't as big a deal as people make it out to be. On the other hand I'm a C++ programmer and my workstation has 128GB of RAM and I wouldn't accept any less... so obviously it varies.


> I'm a C++ programmer and my workstation has 128GB of RAM and I wouldn't accept any less

What on earth are you programming?


He won't know until all the templates are instantiated...


That's a fantastic joke, thank you :D


Absolute madlad


Lol monster


AAA video games do that for you :P Well, it isn't the programming part that uses the RAM (although yes, building our codebase takes about 40 minutes and uses gigabytes of RAM without a distributed build), but just starting up local server + client + editor easily uses 80-100GB of RAM, since ALL of the assets are loaded in.


Did you have the chance to try your setup on an M1? It worked for your sister, and although you seem to have way higher requirements, is there anything to say it wouldn't work for you?

I'm asking because I read a lot of comments when it was released that it just doesn't need as much RAM because $REASONS. I wouldn't put my money on this, but I'm curious if this assumption holds water now that people have had time to try it out.

Edit: there are such comments further down the thread where it seems to still be a mystery: https://news.ycombinator.com/item?id=26913643

So I'd really like to know where this magic breaks down: if you're used to 128GB of RAM, will the M1 feel sluggish?


I doubt it's possible to try an AAA dev setup on OSX at all. And, for whatever it's worth, 64GB workstations were "hand-me-downs" at my previous gig (AAA gamedev); I doubt there's much magic that can take "64GB is not nearly enough" to "16GB is fine".


I also have doubts but that's what the marketing hype has been claiming for some time now, so I'm really curious about real-world experiences and where the hype breaks down. The debate is often "I need way more RAM!" vs "But this is a new paradigm and old constraints don't apply!".

AAA gamedev might be the wrong demographic though, since it's mostly done on Windows (I think?).


Well, 100% of my development tools are Windows only so I can't really give it a try, sorry :-)


I'd suggest people just get the larger RAM unless they're tight on budget. I know Apple's trying to argue otherwise and people will agree with them, but I can't hear it as anything other than thinking molded to fit a prior decision. For what it's worth, and not scientific, but the reported "percentage used" statistics from the SMART utilities seem to grow more slowly for the 16GB models than for the 8GB models.
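
If you want to check your own machine, this is roughly what I mean (smartmontools; the device path is machine-specific, so treat it as a sketch):

    # install smartmontools, e.g. via Homebrew on macOS
    brew install smartmontools
    # "Percentage Used" is the NVMe wear estimate;
    # "Data Units Written" tracks lifetime writes
    smartctl -a /dev/disk0 | grep -iE 'percentage used|data units written'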


When VS decides to disk-bomb you, or literally uses 120GB of RAM because its auto-scaling trips up.


I'm equal parts happy and terrified that MS recently announced an x64 version of VS, because I know it will just mean VS can now scale infinitely. At least right now the core process has to stay within the 4GB limit :P


So instead of attacking Microsoft over VS, we attack Apple for not having enough RAM lol


> What on earth are you programming?

Earth. He is programming the earth.


One example is linking QtWebEngine (basically Chromium) which can sometimes take upwards of 64 GB of RAM.
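
One mitigation, if you're stuck building it: cap the number of concurrent link jobs so compiles can still run wide while only a couple of linkers hog memory. A rough sketch with CMake + Ninja (the source/build paths here are hypothetical):

    # allow at most 2 link jobs at a time, independent of compile parallelism
    cmake -G Ninja -S qtwebengine -B build \
      -DCMAKE_JOB_POOLS="link_jobs=2" \
      -DCMAKE_JOB_POOL_LINK=link_jobs
    ninja -C build

Switching to a leaner linker like lld also tends to cut peak link memory considerably.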


How did Chrome eat Safari's lunch when they were both based on WebKit? Amazing!


Large code bases in an IDE, program dumps, large applications (the software I write will gladly use 10-20GB in some use cases), VMs, large ML training sets, etc.

128GB is likely overkill, but I can see a use case depending on what you're doing.


I think everyone should get more RAM than they think they'll ever need. 32GB is that number for me, but if I thought I'd get even close to using 64GB, I might as well go for 128.


Same setup as him; I'm working on LLVM. It's very nice to be able to test the compiler by running and simulating on a Threadripper 3990X - it means I don't have to run everything past the build server.


Web developer here. I had an 8GB M1 MacBook Air, and if I ran VS Code with my 50k-ish TypeScript codebase and dev servers for the API and frontend, plus native Postgres and Redis at the same time, I’d be right at the limit of my machine slowing to unusable levels. Switched it for a 16GB one and I’ve never had any noticeable performance issues since.


I think the efficiency of the new M1 is what blows everyone away.

When the M1 first came out I was super, super skeptical; it seemed like an underpowered chip compared to what's in the x86 world.

Now I'm convinced that it's got some magic inside it. Everyone I've talked to says it chews through whatever they throw at it.

I'm consistently floored by Apple's ability to innovate.


It’s funny you say “innovate.” They certainly innovated with the M1, but with one key difference from their usual approach: they did so in public, step by step, over the course of a decade.

We have greater visibility into the process behind the M1 than perhaps any other Apple innovation.


I consider the ability to develop future products in the open a key differentiator for Apple.

Like Jobs said in his Stanford speech, “You can't connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future.”

It seems like Apple uniquely combines open development of technology in known products with secrecy around new product development.

This is how people could be so surprised by the M1, even though the recent AX processors were obviously pointing toward massive capability.

I believe there are other examples of this happening - specifically with the Apple Watch.

On that product, the size limitations combined with increasing expectations of performance and functionality have allowed Apple to learn and improve production capability in many areas that will carry over into any forthcoming AR/VR products.


The sweet spot is 8GB for most people. I’d guess 80% are fine with 8GB, 15% with 16GB, and 5% need more. Even many types of developers are fine with 8.

Apple collects a lot of metrics and I’m sure they know this well.


The M1 also means that Apple regains full stack control of its desktops and laptops. Their phones and tablets prove that when they develop both the main chips and operating systems themselves, they are able to eke out greater performance from lesser specs.

Flagship iPhones always have less RAM than flagship Androids, but match or exceed their performance.


That may help a bit, but unless they adopt the draconian policies that govern the iOS runtime, that impact is limited. Safari is an example of improved battery life when Apple owns the stack. It’s a great feature, but not a game changer.

Most business users were fine with 4GB 5-6 years ago. Electron apps like Teams and Slack pushed it up to 8. The next tier is folks with IDEs, Docker, etc., who usually need 16-32GB.


I agree. Honestly, Electron use is the main thing pushing up RAM usage these days for most people.


It's the Gigabyte Myth! Less is more! Up is down! In is out!


Running a few Google Docs tabs, I can easily bring my 16GB Linux laptop down...


Web Developer here. Wouldn't accept anything less than 32GB for a work machine.


Now I know why all the websites I use are so slow :D


You joke, but I think this is actually true. If companies gave their developers of client-facing software slower computers, the resulting software would end up being faster.


Yep, because they would not get anything done ;)

In my experience as a web dev, all the performance goes out the window as soon as the tracking scripts are added.


I had nearly a perfect Lighthouse score until the marketing team added GTM


I don't think it's the developers who are the problem. In my experience, they're usually the ones advocating to spend time improving performance.

PMs and executives are the ones you should give slow machines to if you want more focus on performance.


Agreed, IME it's nearly always the devs arguing to improve performance.


But it would take them an order of magnitude longer to ship it. `npm install` even on a fast machine is a slog.


> `npm install` even on a fast machine is a slog

It may not be, with fewer and lighter dependencies!
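
Step one is knowing what you're actually pulling in; a couple of quick checks from a project root (standard npm/du, nothing exotic):

    # total on-disk weight of the dependency tree
    du -sh node_modules
    # rough count of installed packages, transitive deps included
    # (npm 7+; older versions want --depth=Infinity instead of --all)
    npm ls --all 2>/dev/null | wc -l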


It's not the end product itself that eats huge amounts of RAM or CPU. It's the dev tooling.


We must be using very different websites.


Obviously a lot of the modern web is crazy and many sites are pretty heavy on resource usage.

It's still not comparable with your average developer tooling in terms of footprint, was my point.


And it's why some of my past upgrades were driven by the web browser over-taxing the machine on certain sites, while nearly everything else was perfectly performant with absolutely no complaints.


haha so true


Yep, if you're running big IDEs (e.g. Rider/IntelliJ, Visual Studio), containers and VMs, 32GB is really a must. There always seem to be people in these threads claiming that 16GB or even 8GB is enough - I just don't understand how that could possibly be for most of the HN demographic.


Do you think most of the HN demographic is actually running big IDEs, containers, and VMs at the same time? I'm personally a CS student and have never had to run more than a few lightweight Docker containers + maybe one running some kind of database + VS Code, and that has been working fine on a laptop with 8GB and Pop!_OS. I could imagine that a lot of other people on HN are also just generally interested in tech but not necessarily running things that require a lot of memory locally.


CS PhD student here, running a laptop with 16GB of RAM. I don't train ML models on my machine, but whenever I have to debug stuff locally, I realize precisely how starved for RAM my computer is. I start by closing down FF - just plain killing the browser. RAM down from 12GB to 7. Then I close the other IDE (usually working on two parallel repos): 7GB to 5. Squeeze out the last few megabytes by killing Spotify, Signal, and other forgotten terminal windows. Then I start to load my model into memory. 5 times out of 7, it's over 12-13GB, at which point my OS stops responding and I have to force-reboot my system, cursing and arms flailing.


> lightweight docker containers

If you're on macOS, there's no such thing as a “lightweight Docker container”. The container itself might be small, but Docker itself is running a full Linux virtual machine. In no world is that “lightweight”.
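
You can see it directly from the CLI; these are standard Docker commands (the alpine image is just an example):

    # total memory of Docker Desktop's Linux VM, not your Mac
    docker info --format '{{.MemTotal}}'
    # the kernel here is the VM's Linux kernel, not Darwin
    docker info --format '{{.KernelVersion}}'
    # containers can still be capped within that VM
    docker run --rm --memory=256m alpine free -m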


I was going to say: I'm on a 16GB 2015 MacBook Pro (not sure what to upgrade to) and Docker for Mac is _brutal_ on my machine. I can't even run it and work on other things at the same time without frustration.


I run like 50 Chrome tabs, a half dozen IntelliJ panes, YouTube, Slack, and a bunch of heavy containers at the same time, and that's just my dev machine.

My desktop is an ML container train/test machine. I also ssh into two similar machines, and a 5-machine, 20-GPU k8s cluster. I pretty much continuously have dozens of things building/running at once.

Maybe I'm an outlier though?


Yeah. I suspect most people here are software engineers (or related) and IDEs, Docker, and VMs are all standard tools in the SE toolbox. If they aren't using Docker or VMs, then they are probably doing application development, which is also pretty damn RAM hungry.

I do most of my development in Chrome, bash, and Sublime Text, and I'm still using 28GB of RAM.


Depending on the requirements of your job: just a single VS instance with Chrome, Postman, and Slack open takes around 5GB. Teams adds another GB or so. The rest is probably another 2GB (SSMS and the like).

On my particular team we also run a Dockerfile that includes Elasticsearch, SQL Server, RabbitMQ, and Consul; I had to upgrade my work laptop from 16GB to 24GB to make it livable.


Wouldn't you just have all the heavy stuff on a server? I don't understand the goal of running something like SQL Server and other server-type apps on a desktop/laptop.


Having a local environment speeds up development significantly.


When I’m developing I don’t want any network dependency. I love coding with no WiFi on planes, or on a mountain, etc.


If you are on Linux, try out zram.


I could believe that for students because students are usually working on projects that are tiny by industry standards.

10 kLOC is huge for a student project but tiny for a real-world project.


I use IntelliJ and similarly hungry IDEs all the time, with many other resource-hungry processes, without trouble on 8GB of RAM.

Though the truth is that I use zram, which everyone who isn't fortunate enough to have plenty of RAM, but does have a decent CPU, should try.
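
For anyone who wants to try it, a minimal by-hand sketch (assumes a kernel with the zram module; the size and algorithm are just illustrative, and many distros now ship a zram generator service that does this for you):

    sudo modprobe zram
    # the compression algorithm must be set before the size
    echo lz4 | sudo tee /sys/block/zram0/comp_algorithm
    echo 8G  | sudo tee /sys/block/zram0/disksize
    # format as swap and enable at high priority, so it's used before disk swap
    sudo mkswap /dev/zram0
    sudo swapon -p 100 /dev/zram0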


I don’t understand how a demographic as technically intelligent as HN could make the flawed assumption that GBs of RAM in isolation of the entire system is all that matters. Consider the fact that iOS devices ship with half the RAM of Android devices and feel as responsive, have better battery life, and have better performance.

The Apple stack is better optimized to take advantage of the hardware they have. Indeed, one of the reasons is that they have so few SKUs to worry about, which focuses the engineering team (for example, in the past, engineers would complain internally about architectural design missteps that couldn’t be fixed because 32-bit support wasn’t dropped yet and was pushed out yet another year).

Now, obviously in a laptop use case this is trickier, since the source code is the same as the x86 version. It’s possible that the ARM code generation was much more space-efficient (using -Oz where -O3 was previously the likely setting). It’s also possible that they have migrated over to iOS frameworks to an even greater extent than they were able to in the past, leveraging RAM optimizations that hadn’t been ported to macOS. There could also be RAM usage optimizations built around knowing you will always have a blazing fast NVMe drive: now you may not even need to keep data cached around and can just load straight from disk.

Sure, not all workloads might fit (and if running x86 emulation the RAM hit might be worse). For a lot of use cases though, even many dev ones, it’s clearly enough. I wouldn’t be surprised if Apple used telemetry to make an intelligent bet around the amount of RAM they’d need.


> I don’t understand how a demographic as technically intelligent as HN could make the flawed assumption that GBs of RAM in isolation of the entire system is all that matters

I didn't claim it was all that matters, and I haven't seen anyone else do that either.

I do take the point of the rest of your comment though, and it may well be the case that Apple does some clever stuff. But realistically there is only so far that optimisations can take it - DDR4 is DDR4, and it's the workload that makes the most difference.

> I wouldn’t be surprised if Apple used telemetry to make an intelligent bet around the amount of RAM they’d need.

Your average Apple user is likely not a developer though (as others are very often pointing out on HN, whenever they make non-dev-friendly hardware choices). Furthermore, I would think such telemetry would be a self-fulfilling prophecy; if you have a pitiful 8GB of RAM, you're not going to punish yourself by trying to run workloads you know it wouldn't support.


> But realistically there is only so far that optimizations can take it - DDR4 is DDR4, and it's the workload that makes the most difference.

Except the M1 is a novel UMA architecture where the GPU & CPU share RAM. There are all sorts of architectural improvements you get out of that, where you can avoid memory transfers wholesale: there's no "texture upload" phase, and reading back data from the GPU is just as fast as sending data to the GPU. It wouldn't surprise me if they leveraged that heavily to get improvements across the SW stack. The CPU cache architecture also plays a big role in the actual performance of your RAM. Although admittedly the M1 may not have any special sauce here that I've seen; I'm just responding to your claim that "DDR4 is DDR4" (relatedly, DDR4 comes in different speed SKUs).

> Your average Apple user is likely not a developer though (as others are very often pointing out on HN, whenever they make non-dev-friendly hardware choices). Furthermore, I would think such telemetry would be a self-fulfilling prophecy; if you have a pitiful 8GB of RAM, you're not going to punish yourself by trying to run workloads you know it wouldn't support.

No one is going to model things as "well, users aren't using that much yet". You're going to look at RAM usage growth over the past 12 years & blend that with known industry movements to get a prediction of where you'll need to target. It's also important to remember that RAM isn't free (even setting aside the $). I don't know if it matters as much for laptop use cases, but for mobile phones you 100% care about having as little RAM as you can get away with, since it dominates your idle power. For laptop/iMac use cases I would imagine they're more concerned with heat dissipation, since this RAM is part of the CPU package. RAM size does matter for the iPad's battery life, & I bet the limited number of configs has to do with making sure they only have to build a limited set of M1 SKUs that they can shove into almost all devices, to really crank down the per-unit costs of these "accessory" product lines (accessory in the sense that their volumes are a fraction of what even AirPods ships).


Anecdotal: I write client software for bioinformatics workflows, usually web apps or CLIs. Right now, with my mock DB, Emacs, browser, and tooling, I’m using ~5GB of RAM. At most I’ll use ~8GB by the end of the day.

I also shut down at the end of the day and make judicious use of browser history and bookmarks. If I were compiling binaries regularly I guess I could see the use in having more RAM, but as far as I’m concerned 8 is enough, and so far people find what I put out performant.


Yeah, 32GB is my baseline now. I could probably work on a 16GB machine, but last time I was using an 8GB machine the memory was nearly constantly maxed out.


(Curious) why? VS Code, Chrome, and a terminal running a local server usually do fine with 16GB or less. Are you testing with or querying massive data sets locally or something?


Chrome can fill all your RAM; just add more tabs (I'm usually at 100-200+, unfortunately).

With Firefox and an auto-tab-suspend add-on, it's manageable.


Another webdev here. My second dev machine has 8 GB RAM and works fine for my purposes. JetBrains IDE, MariaDB, Puma, browser.


Throwing in some more anecdata:

- 16GB MBP (Intel, not M1) - Running MacVim w/ coc.nvim & coc-tsserver (so it's running partial TS compiles on save, much like VSCode)

- One image running in Docker

- Slack, Zoom (at idle), Safari (with a handful of tabs), and Mail.app running as well

Per Activity Monitor, 8.97GB of 16.00GB is used with 4.97GB marked as "Cached Files" and another 2.04GB of swap used.
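
For what it's worth, the same numbers are available from the command line with stock macOS tools:

    # swap usage, matching Activity Monitor's swap figure
    sysctl vm.swapusage
    # raw VM page statistics; the page size is given in the header line
    vm_stat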


Does anyone remember the times when "web development" was something you could do on pretty much any computer available?


But have you benchmarked your workflow on x86+32GB vs M1+8GB?


I'm typing this response on an 8GB M1. It's great, but it's no magic. Its limitations do start to show in memory-intensive and heavily multi-threaded workloads.

I'm getting some downvotes, which I attribute to reasonable skepticism, so hopefully this will allay your concerns.

https://ibb.co/VM4Z1DY


> memory-intensive and heavily multi-threaded workloads

Such as?


One example: I was trying to download all the dependencies for the Kafka project via Gradle in IntelliJ, while watching a video on YouTube and working on another project in Visual Studio Code. The video started to stutter, then stopped, and Visual Studio Code became unresponsive. I basically had to shut a bunch of stuff down and go to lunch.

I haven't seen a modern computer struggle with that kind of workload before.


At the end of the day, the Intel MacBooks of the last few years have had terrible, low-performance processors that get thermally constrained and have abysmal, inconsistent battery life. So if all you use is Macs, the M1 is going to feel amazing.


Working on a very enterprisey web app (100+ services, 30+ containers), my 16GB MBP is handling it fine.


That machine is using its swap file a lot; in a couple of years the SSD will die.


[flagged]


Funny. Except there are fully supported MacBooks from 2013 running the latest version of macOS, still on their first battery, still bringing people joy and productivity.


Or in my case, a 2012 MacBook Air. I hope they'll support Catalina for another three or four years, at which point it's more than a decade old and can finally mature into a laptop's final stage of life as a Linux machine.


They tend to support OS releases for three years. Catalina EOL is likely to be sometime in 2022.


I stand corrected. I'd attribute my false confidence to the fact that even the previous version (macOS 10.14 Mojave) is still supported. But now that I think about it, that'll likely change at Apple's WWDC in June.

Still, decently satisfied with ~10 years of software support for a laptop.


Mojave support ends this year, right on time after 3 years. I finally left Mojave behind myself, as I really wasn't using the older apps Catalina dropped support for anyway.


Late-2013 MBPs still came with replaceable storage, unlike the M1 Macs, no?


Yup. My personal machine is a base Late 2013 15” MacBook Pro (8GB RAM). Original battery, original storage; it saw very heavy usage up until a couple of years ago. Still fast, decent battery.


In ~2018 I briefly used an old 4GB MacBook Pro for work. It was only untenable longer-term because I sometimes needed to run two Electron apps or several many-hundreds-of-MB tabs at a time.


But why not 16GB? It's a small price increase compared to the base model, and it would basically make the machine much more usable over any extended amount of time, especially given the SSD write "bug" that was exposed.



