Mountain Lion seems to have addressed the memory management issues in OS X (workstuff.tumblr.com)
135 points by fields on Aug 2, 2012 | 119 comments



It's important to get the terminology correct when discussing VM behavior. Page-outs are not identical to swap-outs. And unfortunately, due to the age and architecture of the Mach kernel, there's no way I'm aware of on OS X to measure the rate of the latter.

Page-outs refer to a file-backed page being committed to disk; these are perfectly normal and expected (program writes to a file, kernel eventually commits to disk). Every OS guarantees that file-backed pages be committed within some reasonably short period of time to reduce the risk of lost data in the event of a power outage (the sync interval).

A high page-out rate doesn't necessarily imply memory pressure; it may simply mean some program running on the system is writing to files frequently. However, memory pressure may temporarily increase the page-out rate if the sync interval hasn't yet kicked in.

Swap-outs, on the other hand, relate to anonymous memory pages (i.e. the heap). Swap-outs, in contrast to page-outs, are generally bad and indicate severe memory pressure.


That's not what pageouts as reported by OS X refer to. Try the following experiment:

  1) Open two Terminal windows.
  2) Run top in one of them.
  3) Note the pageout number.
  4) In the other, type echo foo > bar.txt
  5) Refer again to pageout number.
You will see that it did not increase, even though the system just wrote to a file. This will be the case even if you wait a while.

On OS X, the reported pageouts are dirty memory pages being tossed, not writes of files to the filesystem.

Your definition also does not match the historical distinction between swapping and paging and the distinction you draw is idiosyncratic. Originally, swapping referred to swapping out all of a process's memory, while paging is done in chunks. No system has true "swap" in this original sense any more, so the terms "swap" and "page" are now basically interchangeable.


Everything I said is true at least for Linux, where files are memory-mapped into the process space, and swapped page counts relate to anonymous pages (look at /proc/vmstat for the gory details; note that pgpgin/out and pswpin/out increment under different conditions).
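For a concrete illustration on Linux (just a sketch - these are cumulative counters, so watch how they change rather than their absolute values):

    # pgpgin/pgpgout count file-backed page I/O; pswpin/pswpout count anonymous
    # pages moving to/from swap. Writing to a file bumps pgpgout; only real
    # memory pressure bumps pswpout.
    $ grep -E '^(pgpgin|pgpgout|pswpin|pswpout)' /proc/vmstat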

OS X may admittedly be different. It's unfortunate that it doesn't expose more counters to help show what's really going on.

As for "reported pageouts are dirty memory pages being tossed" - where are they being tossed to, if not the filesystem?


They are usually being tossed to the pagefile (except in the case of read-write non-anonymous file mappings, which are not super common on OS X and which are functionally equivalent to paging to the page file anyway).


On OS X files aren't memory-mapped by default.


Where did all of that come from? It's fantasy as far as I can tell.

I have never heard "Page-outs refer to a file-backed page being committed to disk."

I think you are confusing it with a file cache or buffer cache commit interval.

A page out is when a page of memory is written to disk ("paged out") so you can use more live memory than you have physical RAM. Fun fact: the kernel is smart enough not to page out the page-in code itself.
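If you want to watch the counter OS X is actually reporting (the same number top shows), vm_stat works from the command line; a rough sketch:

    $ vm_stat | grep -E 'Pageins|Pageouts'    # cumulative counts since boot
    $ vm_stat 5                               # or sample every 5 seconds and watch the deltas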


> I think you are confusing it with a file cache or buffer cache commit interval.

That's a distinction without a difference - the file/buffer cache stores memory pages. They are simply file-backed pages.


Dredging up my knowledge from OS class, there are three sections of memory for a (von Neumann architecture) program: the text section (the program/code section), the stack, and the heap.

The text section is the actual compiled bits of the program: the machine instructions themselves. The stack and heap are memory used by the program at run-time.

Completely separate from that, on a Linux system, there is the VFS buffer-cache system. When you write to a "file", you are actually writing to VFS buffer cache memory, and the OS then flushes that dirty page of VFS cache to disk, at which point the write is committed.

Your description of page-outs sounds as though you are describing the operation of the VFS buffer cache flushing its dirty pages to disk.
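On Linux you can actually watch that dirty buffer cache drain (a small illustration; sync forces the flush):

    $ grep -E '^(Dirty|Writeback):' /proc/meminfo   # data sitting in cache, not yet on disk
    $ sync                                          # force the flush
    $ grep -E '^(Dirty|Writeback):' /proc/meminfo   # Dirty should now be near zero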

In my college education (1998 to 2003), we used the terms swap-out and page-out interchangeably.

However, I have heard some people make the distinction of using page-outs/ins to refer specifically to the text section of an app being written to / read from disk, with the term swap-outs/ins referring to the same operation on the stack and heap.

But I've never encountered the term "page-outs" being used to refer to dirty file-backed storage pages being flushed to disk. Is this an old-school hacker thing? Or did I misunderstand and create a straw man?


"to reduce the risk of lost data in the event of a power outage"??

Doesn't sound right to me; if the power goes, you're rebooting into a fresh memory image, not recovering to the memory state in the current pagefile.

Eagerly syncing back to disk is surely more about keeping the number of dirty pages in RAM to a minimum at any time, so that if you suddenly need a lot of RAM for something else you don't have to wait for the backing store to update before those pages can be freed... no?

Edit: Oh, I see - you're saying that 'page out' refers to writing a page of data back to disk through a memory-mapped file - it's basically just a disk write, and has nothing to do with memory pages or pagefiles. Hmm... is that terminology distinction universal?


> Every OS guarantees that file-backed pages be committed within some reasonably short period of time

Just curious: is that "short period of time" usually on the order of seconds or minutes? (I know different OSes have different defaults; I just want a ballpark figure!)


ReiserFS on Linux was somewhat unusual in that it had a 5 second timer used to flush all of the dirty VFS blocks to disk. If I recall correctly, EXT2 (the previous popular filesystem) just flushed everything as soon as possible.

The first time I installed a box with ReiserFS, I recall being alarmed by my box "ticking" every 5 seconds.
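For what it's worth, on current Linux kernels the interval is a VM tunable rather than a filesystem quirk; a quick sketch (the values in the comments are the usual defaults, distros can change them):

    # vm.dirty_writeback_centisecs: how often the flusher thread wakes (500 = 5 seconds)
    # vm.dirty_expire_centisecs: how old dirty data must be before it's written out (3000 = 30 seconds)
    $ sysctl vm.dirty_writeback_centisecs vm.dirty_expire_centisecs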


It's there to keep the disk from thrashing the head around, so I believe it's not more than a few spins of the platters.


:) Actually, I wanted to ask whether it's on the order of milliseconds or seconds! Don't know how the "minute" found its way into my comment. A minute is reaaaly long.


I still ended up with an 8 GB page file after a few days of regular usage (nothing too hard like VMs or anything - just your typical Xcode dev workflow, including opening up Photoshop once or twice to export graphics).

This is on a MBP with 8 GB RAM.

I never had any memory-related performance issues in Lion (maybe because I have an SSD), but I always end up with a big pagefile of stale data.


Most modern operating systems try to write the contents of physical memory to the page file at the first good opportunity (well before it's needed) in order to save time when the page actually has to be paged out, so it's perfectly normal to have a page file that is at least as large as your physical memory.


AFAIK VMS and NT behave that way but Linux is much lazier. Especially if your swap is dynamically allocated (as in OS X) you don't want to swap out unless there is actual memory pressure.


The size of your page file isn't a sign of a performance problem. What matters is how often you're paging out and in between disk and memory; frequent paging is what causes a huge slowdown in human-perceived performance.

The OP (and others, including myself) have observed a noticeable drop in the delays caused by frequent paging in and out. I checked and my pagefile is the same size as on Lion. Perhaps ML is still paging frequently, but if so, they've found a way to do it so that I'm not noticing it. Everything just feels faster compared to Lion.


Are you sure that's not your sleep image?


I'm going by the stat in iStatmenus. It slowly grows from 0 the first 2-3 days.


You sure it's the page file? The page file, or the hibernation (safe sleep) file? Memory swaps to /var/vm/swapfile0 etc....

Unless you disabled it deliberately through a 3rd-party tool or the command line, your MacBook will dump all of RAM to disk as soon as it sleeps - this is your "safe sleep" recovery if power drops too low. It doesn't wait around for low power to dump it out; it does it as soon as you close the lid.
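The two are easy to tell apart from the command line (a rough sketch, assuming the stock file locations):

    $ ls -lh /var/vm/                  # swapfile0, swapfile1, ... vs. the single large sleepimage
    $ pmset -g | grep hibernatemode    # 3 = safe sleep (laptop default), 0 = RAM-only sleep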


It seems as though a lot of people are noticing some speed enhancements. I haven't quantified any of it, so my experience is all anecdotal, but I definitely notice the boost as well.

The changes in Mountain Lion have been subtle, but I really love the upgrade. I performed a full disk backup which took the longest amount of time. The actual install (2012 MacBook Air) took like 15-20 minutes. If anyone is in doubt, definitely do yourself the favor and upgrade. Typically I am hesitant to do major OS upgrades due to things like python or ruby breaking most of my local websites ... everything works great!

I had to modify some apache2 settings (I use the built-in webserver for PHP development) but that was about it. Oh also, apache is still there. A lot of people think that it's gone because the preferences pane is missing, but it's still hiding within the belly of the beast.

I feel like I can throw anything at this. I've never multitasked like this before.


One of the things I've noticed is that the new Xcode 4.4 (released at the same time as Mountain Lion) seems an awful lot snappier to me. My theory is that they may have switched Xcode from garbage collection to ARC (it has been GC ever since Snow Leopard came out, but GC is now deprecated).


Does anyone know if there is a way to inspect Xcode and find out whether it's GC or ARC?


You can just use otool to dump the Objective-C information out of the Mach-O executable:

     otool -o /Applications/Xcode.app/Contents/MacOS/Xcode
     .....
     Contents of (__DATA,__objc_imageinfo) section
       version 0
         flags 0x6 OBJC_IMAGE_SUPPORTS_GC
That's for Xcode 4.4. So yes, it's still being built for Garbage Collection.


A recent post on the Apple developer forums indicated that it is still using gc. But it's a very large, old codebase at this point so that's not surprising.

I imagine that as they drop support for older platforms and have the time, they'll slowly remove reliance on gc.


We've been hearing this for a long time:

“free memory is wasted memory” vs “inactive memory is never released”

“Purge is the worst thing you could do” vs “Purge solves the issue”

We have the kernel's source[1] and DTrace. Let's get some real data?

[1]http://opensource.apple.com/source/xnu/xnu-2050.7.9/
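As a starting point (just a sketch - this only lists what's available; the actual tracing script is left as an exercise):

    $ sudo dtrace -l | grep -i vminfo    # see which VM statistics probes this kernel exposes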


Haven't seen the paging issues Adam was noticing, but I'm still gobsmacked by the amount of memory some applications seem to take. The OS (kernel_task, mds, WindowServer, and friends) regularly eats up 0.75 of a gig, and Firefox consumes 0.5-1 gig without much effort (other browsers seem to have a similar profile).

Quick poll, what are you running, and what's your memory consumption? Here's me:

Currently running: Firefox, Spotify, Mail, Calendar, Terminal, Mongo (mongo using 100mb).

Memory Used: 3.75 GB


It is definitely better overall with ML in that it is no longer ridiculously slow, but memory consumption for a typical set of tasks is still way higher than on either Linux or Windows.

For instance, I have KDE4 running on Linux with Firefox, Amarok, Kopete, Konsole and all the other default services (ssh etc.). Used memory is 1.1GB.

I've got ML running with Firefox, Terminal.app, and Activity Monitor in addition to the defaults. And my used memory is 3.92GB. Wired is 1.75GB and swap used is 11MB!

My experience with Windows 7 is also a lot better than ML - it tends to be a bit more than Linux but in the same ballpark.

Edit: To put this in perspective - I just realized that my work laptop (which I RDP into and never reboot), with all possible bloatware loaded and running - Word, Outlook, RDP, Lync 2010, a Java backup process, enterprisey stuff, Firefox with a couple of tabs, etc. - is taking up only 2.83 GB! Apple really has a memory usage problem with OS X.


Unused memory is wasted. You should be complaining about the system that doesn't use all available memory.


Windows and Linux don't leave unused memory around - both use it heavily for page cache. The problem here, however, is active/resident memory use and/or the VM subsystem making not-so-smart decisions (this part seems to have gotten better with ML).

So the end result is that while Windows and Linux can be quite usable with 2GB of RAM and a rotational disk, OS X slows down quite a bit under a similar workload.


Unused memory is wasted memory. As a user, you shouldn't care about memory usage if there's plenty available.

It's far more interesting to see what happens in a low-memory situation.


Except... when apps are using a lot more memory than they do on other OSes, you tend to get a low-memory situation anyway.

If I've got a 4-gig machine and running 5 apps takes up 3.8 gig, trying to run another app will necessitate swapping/paging some of the existing apps around to accommodate my new app. If those initial 5 apps only took up, say, 2.8 gig, I'd have less chance of hitting 'low-memory' situations in the first place.

I say this as someone with a 16-gig MBP and SSD who still hits beach balls and unexplained pauses on a daily basis. Fewer than I used to, but it gets annoying. Not quite as annoying as the win95/2k blue screens of 12+ years ago, but it gets my ire up.


The beach ball is not necessarily a low-memory warning (or memory related). It's shown (only) when a GUI application fails to respond to UI events. Sometimes it's because of bad threading in applications or things like that.


Understood, they're not always the same thing - I often see a beach ball with 6 gig free, which used to perplex me a bit; now it just bugs me. :/


If you're interested in seeing what's happening, fire up Activity Monitor and hit Sample on the app when you see the beach ball. It records the call frames of all threads a couple of thousand times over a few seconds, so you can see what that app is waiting for.
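There's also a command-line equivalent if Activity Monitor itself is sluggish; roughly (the process name here is just a placeholder):

    $ sample Safari 5    # sample the process named Safari for 5 seconds; it prints where it wrote the report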


If the entire system has ground to a halt such that it's difficult to actually get to Activity Monitor to do the sample, then it's likely to be a memory problem. If it's just one app, it's probably something else.


If the system is hung such that you can't get Activity Monitor to sample, use command-option-control-shift-period to run sysdiagnose. Attach the resulting /var/tmp/sysdiagnose_$(TIME).tar.gz to your bug report.


And inside that tar.gz file is a spindump.txt file, which will have the samples of whatever application is spinning at the moment, just like if you used Activity Monitor to sample it.


I had never heard about it... Wonderful.


... and report the bug to the app's developer! Be sure to include the sample.


good idea - upvoted! :)


Odd - I've been using an 8 gig MBP with swap disabled and I've yet to crash due to running out of RAM. That's running a few browsers, editors, VM instances, and possibly Photoshop for some tweaking. Beach balls are very rare and would be due to software errors, not out-of-memory conditions - those would just crash.


A lot of that memory is shared memory.


What's the problem with the OS and apps using as much available memory as possible?


The problem is that OS X appears to favor swapping over evicting file caches. This ends up creating a situation where no matter how much physical RAM you add to your system, you still end up seeing frequent swap-induced beach balls.


That's child's play. Open Dashboard and watch the horror.


Funny you should say that. Exactly my experience. Just left a comment on Adam's blog, but let me add it here as well.

I turned off Dashboard, where I had a couple of unused widgets running in the background (using the Dashboardkiller app from MacUpdate), because I haven't used any of those widgets in months. Suddenly all of my mysterious memory/paging issues on Lion disappeared. Swap and page-outs decreased significantly. The machine is running like a fresh install. Give it a try.
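For anyone who'd rather skip the third-party app, the commonly cited defaults command does the same thing (re-enable by writing NO and restarting the Dock again):

    $ defaults write com.apple.dashboard mcx-disabled -boolean YES
    $ killall Dock    # restart the Dock so the change takes effect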

Edit: I'm running Lion 10.7.4


I've never really tried to understand Dashboard from the O/S side, but it seems like it's running a separate WebKit process for each widget.

Edit: I'm on Mountain Lion, and Dashboard is still a memory hog. I just disable it now.


I believe it actually is a separate WebKit process per widget.

Could you quantify the Dashboard memory usage? I'm using the Weather and Delivery Status widgets, and they're only using 16.7 MB and 28.9 MB. This is on Mountain Lion.

edit: most of that memory is shared, so the real memory usage is even lower.


Note: I did not shut down Dashboard because of reported memory usage; nothing was obvious from Activity Monitor either, i.e. it wasn't listed separately as a memory hog.

However, once I did shut it down (not just remove all the widgets, I had like 10 running, but actually prevent it from starting) my machine was finally back to normal.

I'd be curious to see if anyone here who does experience the memory problems can replicate this.

Addition: since shutting it down, the reported memory usage for the kernel task has dropped a lot, does that make sense?


My dashboard has 10 widgets, two of those are web clips. I'm seeing 90MB real memory and 220MB virtual. Maybe it's not as bad as I thought. Or maybe it's just improved since 10.6.


Is that Virtual Memory? Or Private / Reserved / Shared?


On a new rMBP with 8GB RAM, running Eclipse, iTunes, Weka (ML software), Kindle reader, OpenOffice, Safari, Preview with a half dozen PDFs open, and Terminal. 5.5 GB used, 0 page-outs :)


OpenOffice -> try LibreOffice; it is improving much faster.


I'm running programs of similar functionality but coming in at 1.95GB.

Chrome (only 8 tabs right now), Rdio, Sparrow, Coda, Terminal, Notational Velocity.


Quick thing on Terminal. I noticed that I had to limit the scrollback history in Terminal or else it eats up everything - for example, if you have a number of term windows open and they are tailing log files or running verbose processes. I put it at 10k lines or so and things are fine now.


I can concur with the OP. My MacBook Pro suffered severe paging issues under Lion, but they seem to be (mostly) gone in Mountain Lion. Running Eclipse, Photoshop, Chrome, and Parallels at the same time is possible once again without major slowdown on my machine. As always, YMMV.


Odd. I had this problem in 10.6 (Snow Leopard) and it went away when I upgraded to 10.7 (Lion). The most obvious symptom is the machine does not seem to be able to make use of the inactive memory (the blue slice of the pie in Activity Monitor). I did not notice the problem at all when I first started using 10.6. When I hunted around on the help forums a few months ago I found lots of users with the symptoms, but nobody acknowledging the "validity" of the problem. In fact a lot of deniers out there. I wonder if it was quietly fixed or will recur after the system gets used for a while.


I don't suppose anyone can comment on whether Mountain Lion has done anything for the mouse lag issue[1]?

I can't upgrade yet because I disabled HFS+ journaling and nothing seems to be able to re-enable it (the installer requires journaling to be enabled).

[1] http://d43.me/blog/1205/the-cause-for-all-your-mac-os-x-mous...


I'm sure someone will be along to correct me quick enough if I'm wrong, but last year when I was upgrading to Lion I had the same issue (journaling turned off) and was able to turn it back on with minimal pain by doing it from the Snow Leopard boot disc (Disk Utility, of course).


Unfortunately that just gets me a helpful "Journaling could not be enabled". I guess something got messed up when I was writing to the volume from Linux.


What about:

    $ diskutil enableJournal /dev/diskXsY
ML's diskutil(8) says:

     enableJournal device
                Enable journaling on an HFS+ volume.  This works whether or not
                the volume is currently mounted (the volume is temporarily
                mounted if necessary).  Ownership of the affected disk is
                required.

     disableJournal [force] device
                Disable journaling on an HFS+ volume.  This normally works
                whether or not the volume is currently mounted (the volume is
                temporarily mounted if necessary).  If the force option is speci-
                fied, then journaling is disabled directly on disk; in this case,
                the volume must not be mounted.  Ownership of the affected disk
                is required.

     moveJournal external | internal [journalDevice] device
                external will create a 512MB Apple_Journal partition out of
                journalDevice and an HFS+ partition will be created out of the
                remaining space if available; journalDevice must be a partition,
                not a whole-disk. The journal for device will then be moved
                externally onto the newly created Apple_Journal partition.

                internal will move the journal for device back locally.
I don't know if it can work on the mounted root filesystem so you may want to try that from the recovery partition.


Just curious: why did you disable journaling?


Linux doesn't currently support writing to journaled HFS+ volumes.


Since the GM, I've also noticed this on my production machine. Previously I had custom settings for the dynamic pager, but it seems that is no longer necessary.

Subtle changes, and I'm happy with them. Probably next year we'll see some subtle improvements on the backend.


This is the reason I installed Mountain Lion. I've only used it a bit in the last few days, but I haven't seen crazy amounts of mds RAM usage since installing either.


The only thing holding me back from upgrading is a complete lack of desire to hex edit my ATI kext again... heck, it might not even be compatible with ML.

Hackintoshes :/


What kind of card do you have? Hackintoshing has gotten a lot friendlier in the last few years. I upgraded a hackintosh I built for a friend from 10.6 to 10.8 and was pleasantly surprised by how everything "just worked" with minimal kext juggling.


Gigabyte Radeon 6850. What'll happen is, you have to choose a specific framebuffer configuration to even boot. 2 of these work with my card, only one of which allows hardware acceleration to work right and allows both DVI ports to work.

The one that allows hardware accel on a single screen is actually the right one for the card. The hex editing is to change the display outputs to enable the second head.


I'm using an XFX 6870, two-headed, on my Hackintosh, and didn't have to edit anything. I'm pretty sure hardware acceleration is working, too.


Those are sufficiently different that you likely don't have the problem. The 6870 uses a different framebuffer personality than the 6850, and likely the fact that it's a different manufacturer might ameliorate the problem altogether.


Which hackintosh tools did you use? I used Unibeast/Multibeast.


Multibeast from CD. I don't think this is an install problem though, there's an absurdly large thread about this on insanelymac: http://www.insanelymac.com/forum/index.php?showtopic=273937


My battery life seems to last as long as it did on Snow Leopard, too.


I'm seeing the exact opposite, but for a different reason. On Lion I could put my MBP to sleep and only lose about 5-10% of the battery over 8 hours. Now on ML I lose ~40%, with nothing running. I turned off location services today to see if that's the culprit.

As for actual on-battery use - yes, the battery life is noticeably improved over Lion.


Are you running Power Nap on battery? Also are you leaving USB items plugged in?

On Lion at least I noticed that if I left my USB HD in, and closed the lid on battery it killed it, every time. There was an Apple Support thread confirming that.


Apparently, having an external disk mounted prevents the system from going into some sleep modes, which will kill the battery:

http://support.apple.com/kb/HT1776


That could quite possibly be it, as I don't always unmount my fileservers when I leave home. I'll make sure to do that from now on, and might just write a script to handle it. Thanks.

I wonder if this same issue would crop up with the Amazon Cloud Drive.... It's not much more than a CIFS/NFS mount.


Nope, nothing plugged in. No power nap. WiFi is turned off.

I've got some time next week to look at it, and intend to get to the bottom of it. It's annoying to not be able to carry around a sleeping laptop for a day without it dying.


The way I found that was happening for me was by looking in Console for Wake and Sleep strings - best of luck.


Is Power Nap on? (The switch is in the Energy Saver pane)


13" MacBookPro - Power Nap is not supported. Though it is certainly behaving as though it's turned on (power-wise).


How much did Lion affect it?


With Lion, I got only about 2 hours...then went back to SL. Now with the upgrade, I seem to be back in the 3.5 hour range.


Yup, memory management seemed to be fixed as of 10.7.4 even.

I've found a new bug in ML: whenever I switch users, Mountain Lion seems to think I have a desktop much smaller than I actually do. And sometimes the "virtual" desktop isn't even positioned at 0,0 on my actual desktop. :-/


OK, that's enough reason to do the upgrade.

Parallels gets abysmally slow for me when Time Machine starts up.


This is slightly off topic, but make sure you are on the latest version of Parallels before doing the upgrade. For some reason, Parallels 6, which is less than 2 years old, does not support Mountain Lion. Personally, I do not care about the new features in Parallels 7, but was a bit annoyed when forced to upgrade.


I got bit by the same thing with VMware. You must be on Fusion 4 for Mountain Lion. Sucks because I had no plans on upgrading.


There's a free Technology Preview of Fusion 5 that's time-limited, but probably not so much that it wouldn't work until Fusion 5 ships. Mountain Lion is officially supported as a host and guest. I've been using it daily for a couple months to run Windows (7 and Server 2008 R2, plus the various previews of 8 and Server 2012), FreeBSD, and OS X (Lion, Snow Leopard Server, and Leopard Server), and it seems stable.


I was surprised to see myself being forced to upgrade to the next version of Parallels. Pretty annoying.


Virtualizers need as much RAM as you can possibly throw at them. Fill 'er up!


Is that memory consumption or i/o contention?

If your VM image is being backed up that could be a problem. I keep my VM images on an external drive mainly for overall i/o performance reasons. They're not backed up but then all the data I care about is still on my main drive and safe. If you can tolerate the risk, you could try excluding the VM image directory from time machine and see if that makes a difference.
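On Lion and later you can add the exclusion from the command line with tmutil (a sketch; the path is just a placeholder for wherever your VM images live):

    $ sudo tmutil addexclusion -p ~/Documents/Parallels    # fixed-path Time Machine exclusion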


I would not describe this as "the memory management issues in OS X". It should be obvious to anyone how much of an oversimplification that is.


I'm not so sure. When someone says "the memory management issues in OS X", I feel pretty confident in assuming specifically which problem they are referring to.

There are some memory models which appear to behave in a way which is relatively psychologically pleasing, and some which do not.

When users are using a program, there are certain times when they have a reasonable expectation that the machine is going to need some time to complete a task ("click and wait"). There are other times when they have no such expectation, and if the machine makes them wait, they get annoyed.

It isn't about the total time they have to wait; it's about whether that waiting occurs when they expect it to or not.

Further, when they start to experience these unexpected, annoying pauses, they have a certain expectation that adding more RAM to the system should solve the problem, and when it doesn't, their annoyance only intensifies.

Having used both Linux and OS X as a desktop, I'm going to claim that the Linux VM behavior is pleasing in the ways described above, and the OS X behavior is not.

My understanding is that this is due to just one underlying behavior of the VM: When experiencing memory pressure, Linux (generally) favors evicting filesystem cache over swapping out stack/heap memory to disk, and OS X does not.
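On Linux that preference is even exposed as a tunable (a rough sketch; 60 is the usual default, and lower values bias the kernel toward dropping cache rather than swapping out anonymous pages):

    $ sysctl vm.swappiness                 # check the current setting
    $ sudo sysctl -w vm.swappiness=10      # temporarily bias toward evicting cache first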


It's pretty hilarious that this is one area in which Linux put the user first, while OS X has been plagued by engineers saying "there is no problem" over and over again.


Question: (2009 15" MBP, 8GB 1333MHz RAM, 7200rpm HDD, Mountain Lion)

I have zillions of apps open.

    1GB wired, 3.25GB active, 1GB inactive, 2.55 free

    757,197 Page Ins

    0 Page Outs
It's confusing... If I haven't "swapped out" (page out) anything to disk, then how come I've "swapped in" (page in) 750,000 pages from the disk?


If you mmap a file, and then access the mmap'ed region, pulling in the pages of the file from disk is considered paging the file in. Watch the output of fs_usage to see examples of this.
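For example (fs_usage needs root; the pid is just a placeholder for whatever process you're curious about):

    $ sudo fs_usage -f filesys <pid>    # watch the per-file activity, including page-ins, as the mapped region is touched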


It's much, much worse for me. Things go into Inactive memory, which is fine, but then it doesn't get freed when it's needed, or else it does so very slowly. So I have 20 MB of Free memory and 4 GB of Inactive and everything slows to an unusable crawl and I have to run `purge` to get my computer to start behaving properly again.


Does anyone still have a 4gb MBP? I'm still on SL and wondering if I should switch.


Mountain Lion uses like 300mb more on my MacBook Air (probably some of the new stuff running in the background, like Notifications) but I don't feel the difference in performance. It's not much slower or heavier than SL during real-world usage. It does, on the other hand, feel better than Lion (I really had regrets when I first switched to Lion, particularly before the bug fixes). But I don't think Lion's problems had anything to do with RAM management. Some of the animations just felt particularly sluggish, even if the computer had lots of RAM free, and lots of other little things like that.

By not switching to Lion and waiting for ML, you dodged a bullet.


I had 4GB up until I upgraded to ML. After 2-3 days on ML I noticed my free memory was around 300MB. I upgraded to 16GB for $65 (cheap Komputerbay RAM from amazon.com). An 8GB upgrade is ~$45 for Crucial 1.35V RAM. I think it's worth it.


Cool, thanks for the advice. I didn't even realize upgrading memory was so easy. I have a 2010 unibody, and Apple claims the maximum memory it supports is 8GB. Is that not really "true"?


Dig around - the 16 gig aftermarket upgrade may have been confined to some early-2011 MBPs - the 13" for sure (mine's on the way).


I am running Mountain Lion on a 24 GB ExpressCard (about 3 GB free) with 4 GB of RAM. After upgrading to Mountain Lion I am getting a "Hard disk is nearly full" message much more frequently. I almost never saw that on Lion. Could this memory management change be the culprit?


Has anyone done the upgrade from SL? Is a clean install always recommended?


Upgraded one machine from Snow Leopard (10.6) and another from Lion (10.7). No issues whatsoever. Both still work perfectly, and all my stuff from Homebrew transferred over without any issues either (which is fantastic - I did not want to have to recompile all of that).

The upgrade was fast and easy, definitely not something I expected and I had backed up my machine to do an install from scratch, but not required at all.


Did you lose the Xcode Command Line Tools? I had to install them last week (after I had upgraded) and I swear on my life I had them before. (I must have had them in order to have used brew and built as many of my projects as I have.)


I did indeed lose those. It leaves your /usr/local intact, but removes your /usr/<everything else>.

Just re-installed them, all is well again.


Which is really as it should be. /usr/bin is typically considered to be the property of the OS.


Clean install is always recommended no matter which OS you're coming from.


Recommended by whom? The Mac pundits I listen to - Gruber, Siracusa - certainly don't seem to have a problem with upgrading in place on OS X.

And "no matter which OS" is definitely incorrect. I've done in-place upgrades of an OpenBSD system for the better part of six years (twelve upgrades, twice a year). The _recommended_ approach on that platform is to upgrade _in place_ - not to do a clean install. It's how you maintain library compatibility with all your legacy apps.


I've had no more problems with the Lion machine I upgraded to Mountain Lion than I have with the two machines I did clean (USB stick, NetInstall) installs on, and the same was true when I moved from Snow Leopard to Lion.

With that said, there are fewer things that can automatically go wrong in any clean install (as opposed to the things that can go wrong manually with a clean install, like forgetting to deactivate Adobe CS, or that scripts accustomed to sshing in to your server don't take kindly to unannounced host key changes).


What makes you say that? I've been running the same install since 10.3. It's changed machine several times, and I have run every version of OS X along the way, all without any problems.


I did an upgrade from SL with no issues.


> addressed the memory

Pun intended?


No, but that's funny.


This guy's initial advice was terrible and should never have been paraded around as a solution.


Really? Because it solved my problem completely. I have to reboot when my system runs out of RAM, but I'd much rather do that once every two weeks than be constantly bombarded with beach balls.


In spite of the fact that it worked for many people?



