Help, Linux ate my RAM (linuxatemyram.com)
102 points by ez77 on March 13, 2012 | 102 comments



I've tried to educate Oracle DBAs on why top is misleading and their memory really isn't all being used. It's painful, and they often refuse to believe that I know what I'm talking about and that they should use the free -m command to see what memory is actually available for use.

Is there any particular reason why Oracle DBAs are less likely to believe this? Perhaps it's because most of them grew up in legacy UNIX environments rather than Linux.


I think this is a pretty common misconception overall. Working at startups, I often find devs wearing multiple hats, sometimes doing ops tasks. I've seen many who hop on a system trying to diagnose some issue, fire up top, and proclaim "OMG, the problem is we're running out of memory!"

The second one is explaining virtual vs. resident set size.


Okay, I'm one of those devs. What is this virtual/resident set size thing you're talking about?

EDIT: Thank you for all the helpful responses!


Short and overly simple answer: if a program allocates 2GB of RAM, it might not need to use it all right away. The kernel pretends it just gave the program 2GB of RAM, but it doesn't actually have to back all those untouched pages with physical memory yet. It can keep using that physical memory for other things, like disk cache, and when the program does access those pages, the kernel will page them in (and out) as needed. This explains why a program might have 2GB virtual while only 128MB is resident in actual physical memory pages.

This is overly simple and there are a lot of nuances such as shared memory segments, etc.
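
A minimal sketch you can compile to watch this yourself (assuming a 64-bit Linux box with overcommit enabled; the 2GB/128MB numbers are just illustrative):

  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void) {
      /* Ask for 2GB. With overcommit, the kernel only promises it. */
      char *buf = malloc(2UL * 1024 * 1024 * 1024);
      if (buf == NULL)
          return 1;
      /* Touch only 128MB; only these pages actually become resident. */
      memset(buf, 0, 128UL * 1024 * 1024);
      pause(); /* now compare VIRT vs RES for this process in top */
      return 0;
  }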


then there's over-commit, e.g. "you want to malloc 200% of the ram? here you go! enjoy! oh wait, you wrote to the second half? have fun finding the power button, sucker!"

can someone explain to me why so many distros, including "enterprise" server stuff, ship with /proc/sys/vm/overcommit_memory set to 0 (heuristic overcommit) rather than 2 (strict accounting)?


Most software cannot cope with malloc failing. A little swapping is at least recoverable. A failed malloc almost guarantees that your application will crash--sometimes in a very unpleasant way.


ime the main alternative is the box crashing, frequently without leaving behind enough information to know what went wrong. at least if the app crashes you have a pretty good idea who was incompetent.
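
For the curious, here's a rough sketch of overcommit in action. It only reserves address space without touching it, so it's mostly harmless; where it stops depends on your vm.overcommit_memory setting and address-space limits:

  #include <stdio.h>
  #include <stdlib.h>

  int main(void) {
      /* Reserve 1GB chunks until malloc finally reports failure.
         With overcommit this runs far past physical RAM; with
         vm.overcommit_memory=2 it stops near the commit limit. */
      long gb = 0;
      while (malloc(1UL << 30) != NULL)
          gb++;
      printf("malloc failed after reserving %ld GB\n", gb);
      return 0;
  }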


I'm a developer too and this is what I understand (might be wrong):

- Virtual memory is basically "abstract memory" that is linked (mapped) to RAM or the HDD; this means that by accessing this "memory" you might actually be accessing the HDD, and because of this, virtual memory can be larger than physical RAM.

- Virtual set size is the amount of virtual memory (the above) allocated to the process.

- Resident set size is the amount of physical memory (RAM) allocated to the process.

- Shared memory is memory that is shared among multiple processes, meaning that if you have 10 processes each using 10MB of resident memory, of which 2MB each is shared, the total resident memory used is not 100MB but 82MB.


Virtual memory is actually more subtle than that.

At a mechanical level, virtual memory is permission from the operating system to use addresses in your address space. It is so called because, as you point out, it allows us to separate the concept of "memory for a process" from "physical memory on a chip." The reason I further refine the concept is that allocating virtual memory does not allocate actual memory. Let's look at an example:

  void* addr = mmap(NULL, 10 * 1024 * 1024, PROT_READ | PROT_WRITE, 
                    MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
That call allocates 10 MB of virtual memory. But there is no physical memory backing any of it. All that has happened is that the operating system has now said "Okay, starting at the address I return to you, you can now access 10 MB of memory. I will do all of the work of making sure physical memory backs the virtual memory when you access it." That is, once I try to access the memory, it will trigger a page fault, and the OS will find a page in physical memory to back my virtual memory. But until that happens, no memory - not in RAM, not on disk - backs the virtual memory.
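
One way to watch this deferred backing is to count minor page faults around the first touch (a sketch using getrusage's ru_minflt counter):

  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/resource.h>

  int main(void) {
      size_t len = 10 * 1024 * 1024;
      char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
      if (p == MAP_FAILED)
          return 1;
      struct rusage before, after;
      getrusage(RUSAGE_SELF, &before);
      memset(p, 1, len); /* first touch: the OS faults pages in now */
      getrusage(RUSAGE_SELF, &after);
      printf("minor faults from first touch: %ld\n",
             after.ru_minflt - before.ru_minflt);
      return 0;
  }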


The virtual size of a program includes all the shared objects (shared libraries, shared copy-on-write memory pages, the read-only executable pages, and other shared memory) that a process uses, in addition to the memory that only that particular process is using (its resident set size). Shared objects like shared libraries can be mapped into memory by multiple processes, and thus don't use additional physical memory for each additional process that maps them.

The RSS thus usually indicates the amount of heap and stack a process is using that is unique to it.


RSS also includes shared memory pages (those that are currently resident).


On Windows it's called committed memory

http://blogs.technet.com/b/markrussinovich/archive/2008/11/1...

(the article doesn't have an anchor there... but you know what I mean)


There's a good overview of this in the first four pages of http://www.atoptool.nl/download/case_leakage.pdf

The rest of that document shows how to use atop to monitor per-process memory usage and identify a memory leak.


Well, if you don't understand it, you should read a basic book on modern computer architecture. Short answer: it is possible to reserve memory address space without assigning actual physical memory. As your program runs, it can dynamically assign physical memory pages to its address space as it needs them. Again - you must read a book if you want to understand it.


Why downvoting? It may not be the best explanation, but IMO it's the closest to the way a "developer" should understand it. On Linux, virtual memory is committed with the mmap call. Check this for details: http://stackoverflow.com/questions/2782628/any-way-to-reserv... This is basically how shared libraries and other shared objects are attached to your process.


> Why downvoting?

> you must read a book if you want to understand it.

"Go read a book" is not a very constructive or helpful answer to a question that - as it turns out - could be answered with a brief comment.


It can be, when a short description can state what's going on completely and accurately, but the person writing it needed to read a few books on the subject to actually understand both what's happening and why.

E.g.: How does virtual memory get mapped to L1 cache?


to henrikschroder: I expected that this reaction was caused by mentioning a book. It is my deepest belief that questions like memory management cannot be answered with a brief comment. You must read at least a basic book if you want to understand it in depth.


That's depressing. Developers are the ones who have to understand this. In IT it's common to find people (even "DBAs") who have made a career of following procedures someone else wrote.


I don't think that is necessarily true. Most modern programming languages, as well as the relatively cheap nature of RAM, have rendered a true understanding of the inner workings of RAM less important.

Would they be better at their job if they did? Probably. But do they have to? I'm not so sure anymore.


Sure, but we're not talking about the "narrow, career day-job programmers" here. We're talking about the "wear multiple hats and diagnose problems wherever they lie" set. Those people shouldn't be diagnosing problems with their own ignorance. At the very least they should understand what they don't know.


Agreed. It can sometimes be a side effect of small companies, or young developers living the startup lifestyle. Not saying it's bad or the root of the problem, just an observation.

Moreover, I think it comes with modern languages that don't force you to manage memory anymore. When you don't need to alloc/free everything you're using, it is easy to get lazy. Factor in a newer generation who've never worked outside a managed-memory language. There are still subtle details to trip over: retaining references, keeping multiple copies of the same data, or other leaks (file descriptors?).


2 minute fix: get people to use htop, not top.


Having tried both, I'd recommend atop over htop. There's a great overview of its capabilities at https://lwn.net/Articles/387202/


While more powerful, atop is rather confusing. htop is friendly.


I'm not sure what about using atop is confusing, but I'll admit to printing out and referring to the manpage when I discovered it. That was as much because of the great explanations atop's manual provided for the metrics it could display as it was because I wanted to learn the interactive commands.

Some things about htop that I could see as friendlier:

* Htop scrolls with the arrow keys, while atop uses ^F and ^B.

* Htop displays a reminder for some commonly-used commands at the bottom of the screen (e.g. that you need to press 'F1' for help), whereas in atop you have to remember the commands or look them up by pressing 'h' or '?'.

* Htop displays gauges for system-level activity. I think this is a bad tradeoff, though:

    CPU | sys      0% | user      1% | irq       0% | idle    799% | wait      0% |
    MEM | tot    7.8G | free    5.7G | cache 735.6M | buff  351.2M | slab  248.1M 

is much more useful than

    Avg[|                                        0.2%]
    Mem[|||||||||||||||||                 1008/7967MB]
There are features of htop that I wish atop had. For example, the toggleable display of threads, the tree view, and integration with strace and lsof. Even without those I find atop more useful, but YMMV.

The killer feature for atop is logging per-process performance data and reviewing it after the fact.


Thanks! htop is MUCH better than plain ol' top.


You'd be surprised how good and feature-rich plain ol' top actually is. My guess is you rarely deviate from the default options. I made a video which shows how you can get top to show you some pretty interesting stuff.

http://www.youtube.com/watch?v=yFKRsLj_Jhg


thanks for sharing!


Teach them to interpret and understand cat /proc/meminfo .. one of the most interesting things you can do with this is pipe it into gnuplot and watch it over time as things happen. Try it sometime .. you might get through to one or two.
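
If you'd rather not shell-script it, a tiny sampler works too. A sketch that just pulls two fields out of /proc/meminfo, ready to be appended to a file for gnuplot:

  #include <stdio.h>

  int main(void) {
      char line[256];
      long memfree = 0, cached = 0;
      FILE *f = fopen("/proc/meminfo", "r");
      if (f == NULL)
          return 1;
      while (fgets(line, sizeof line, f) != NULL) {
          /* each line looks like "MemFree:  5851234 kB" */
          sscanf(line, "MemFree: %ld", &memfree);
          sscanf(line, "Cached: %ld", &cached);
      }
      fclose(f);
      printf("%ld %ld\n", memfree, cached);
      return 0;
  }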


If you want to be more enterprisey, you can pipe it to an SNMP counter and then draw a graph with your Network Monitoring System. It is much more convenient if you have more than a few servers.


Maybe that's more 'enterprisey', but it's more 'system administrator'-ey to just use an onboard gnuplot and look at real graphs without putting much SNMP/traffic load on the system of interest .. the purpose is to educate the user on how the system works, not use it as a monitoring device for lazy sysadmins.


I've been an Oracle DBA for 15 years, and no-one uses top for that, as it's well known not to account in any sort of meaningful way for the way Oracle uses shared memory. The only thing it's useful for is seeing which of the sysadmin's Perl scripts is chewing the CPU.


As a longtime Linux user/admin and beginning Oracle DBA, I now know that all memory should be occupied by Oracle, not by the OS. :-) Serious mode on: I'm sure it's a big problem to be a narrow expert. They can be brilliant specialists in their field, but one step aside and they are absolutely helpless.


Thanks for the great comment. I really wish more people like yourself at least understood Linux memory management. I think it will truly help you to be an exceptional DBA. I can also understand how an expert DBA might not know or care too much about the OS underneath his software, although I would argue that if you understand the OS fundamentals, you will be that much better at whatever specialty you have.


Thank you. I'm on my way to becoming an exceptional DBA. Oracle is only 2 years younger than me, but unfortunately it has been developing much faster than I have. It literally takes years just to become familiar with all these layers of Oracle technologies. So, it's really hard to blame DBAs for not knowing how the underlying OS works. :-)


I've had the same discussion with Websphere administrators who can't grasp the concept of caching and the role of swapping. I even had one admin think that modifying the swappiness setting to keep memory free would be a good idea...



Looks like HN ate all their RAM...


It's cute that this pops up every few years, and I think it points to a steady (if slow) attraction of new users to Linux.


People say the same things on Mac: http://news.ycombinator.com/item?id=3584609

It represents a fundamental misunderstanding of how modern OSes work. That misunderstanding is not the problem; modern OSes are complex pieces of software, and most people shouldn't have to understand them. OSes should just work. The problem comes in when people who don't understand how they work get the itch to "improve" their system.


I'm actually continually amazed that even some technically inclined people misunderstand how memory works in /all/ modern OSes: recently I had a discussion on Twitter that went something like this: "OS X is horrible, it uses nearly all of my 8 gigs of RAM and my browser is horribly slow!" to which I replied, "it's perfectly normal for the OS to appropriate the RAM in the fashion you see in Activity Monitor; this doesn't mean your machine is slow due to lack of RAM." Unfortunately, even after a lengthy discussion and several forwarded links, I seem to have failed to make the case.

It would be interesting if someone more knowledgeable than I am were to do a write-up explaining memory usage in OS X, Windows, and Linux; it would be an awesome resource to share with curious tinkerers who may be slightly misguided in their understanding of the inner workings of their computers.


Probably more technical than what you wanted, but here's a bit about how memory works from the "how much memory is the app really using" perspective.

http://neugierig.org/software/blog/2011/05/memory.html


On a few occasions, Safari has claimed gigabytes of memory and caused my machine to start thrashing the swap. In that case, the correct answer is to restart Safari, not take a refresher course on virtual memory allocation.

Windows Vista had an issue where copying large files would set off such a huge swap-storm that OS became completely unresponsive for several minutes. People gave all the same VM excuses then as well, but there obviously was something wrong.


Those aren't excuses, those are reasons: copying large files is the pathological case for just about all caching techniques. You've now kicked out everything useful from your cache for something you will never use again. It has little to do with virtual memory itself.


As for why the browser is horribly slow, try blocking all the unnecessary (read: tracking) Javascript and Flash. Suddenly, the browser is a joy to use again.


Programmers need to remember that users are like this: (from an Apple discussion thread)

… [periodically]'installd' begins using 100% of all 4 CPU cores, my fan goes full speed and the whole computer gets very hot. Seems to happen before Software Update checks for updates. I usually go to the terminal and kill the 'installd' process, which reduces the fan speed and heat to normal within a minute.

I wonder if this guy is also one that says you need to reinstall from scratch every few months to keep things working?


The reaction of the user is not surprising. No background daemon should use all 4 cores and be that noticeable. Users don't feel good when the computer starts doing things on its own that they don't expect.

Ubuntu had (or has) the same issue with a daemon used by the graphical package-administration GUIs to rebuild an index (I forget the name). They tried to mitigate it with appropriate nice settings, which defused the issue, but on old machines you still need to move the cron job to monthly to have a usable system.

You simply can't use the system properly when a background process uses all resources. And one normally wants to do something other than wait for the system. I consider such behaviour a bug.


"You simply can't use the system properly when a background-process uses all ressources. And one normally want do something else than wait for the system. I consider such behaviour a bug."

Agreed, but to add to your point: if a background process is taking up all the resources in the system, that defeats the purpose of being a background process.


You are probably talking about apt-xapian-index. It rebuilds its indexes once a week, consuming all system resources. I had an old notebook which became unusable every weekend. I did not want to buy a new machine, so I simply uninstalled the apt-xapian-index package. :-)


I'm a programmer but not a Mac user. I fail to see what is wrong with the bug report above. Care to explain?


"installd" is a completely undocumented (outside of Apple) daemon that is somehow involved in installing your software (it runs while installing App store programs) and checking to see what updates you might need. It has no user visible existence. No one would ever know about it unless they ran "ps" or something similar. They will find no hint at what it does, or if killing it is safe.


Thanks for the explanation. I still don't think it's a good example in the context of this thread. My impression is similar to onli's:

http://news.ycombinator.com/item?id=3699158


installd is part of the framework that handles installed packages. It's fired up as part of the normal software update process (although it really shouldn't be taking a core for itself). Killing it would probably abort the software update process.


I have seen installd peg my CPU to 100% and have killed it. What would you suggest I do?

One of the most annoying features of modern OSes is when some system process just decides to start going wild, eating memory and CPU. Often I find reinstalling is the only way to fix such things.


Let it finish. It has some thinking to do, maybe crypto checksums? Maybe just a really inefficient corner case in an algorithm. There's nothing wrong with letting your CPU work. If running at 100% makes your machine flaky, it is broken.

If your foreground performance is being impacted too severely (and I haven't seen this from installd, I just noticed and researched installd while removing Mac Keeper (malware) from my wife's laptop) then reboot. It's extreme, but it has the best chance of getting your processes shut down cleanly as opposed to a kill where you could nail a process in the middle of a state that really does not want to persist. Programmers are a lazy sort. They won't consider the effect of termination at each point in their program. You are hunting for bugs using your live system as bait if you kill a program.


More than weekly (but fairly randomly) I used to find installd would peg my CPU to 100% and just stay there for hours. It doesn't make my machine flaky, but it makes it slow, and I have other things to do.

In the end I just reinstalled my OS, restored files from Time Machine, and everything was fine. I never did figure out why it was misbehaving. I have had (once) a similar problem with Spotlight. Fortunately there I knew enough to run lsof to find it had gotten stuck in an infinite loop on one particular mp3 file, which I just deleted.

However, my point (which I should probably have been clearer about) is that bits of OSes are known to just start going wild for no reason, and often killing them, and eventually reinstalling, is the only option.


"then reboot. It's extreme, but it has the best chance of getting your processes shut down cleanly as opposed to a kill"

so... how do reboots work on OS X ? on every *nix flavor I know, there's one command that just halts the damn machine and damn the torpedoes, and there's one command that does it more gracefully, by sending progressively harder-to-ignore signals to ~every process except init, ending up on SIGKILL (which is not trappable).


A standard reboot from the GUI should be fine. If you boot up in verbose mode there are some amusing messages that are displayed at the top of the screen during reboot/shut down when a program has to be force quit by the OS.


It makes me feel old. As soon as I saw the headline I knew what the story was going to be. Another fresh wave of young faces learning about memory...I just wish they'd stay off the lawn.


It makes me feel a little embarrassed for HN actually. This article might be a revelation for the noobs on the Linux subreddit, but I'd expect the HN crowd to find it pretty pedestrian.

Not only that, but in some cases it is flat-out wrong:

No, disk caching only borrows the ram that applications don't currently want. It will not use swap. If applications want more memory, they just take it back from the disk cache. They will not start swapping.

Try again. You can tune this to some extent with /proc/sys/vm/swappiness, but Linux is loath to abandon buffer cache, and will often choose to swap old pages instead.

I have learned this the hard way. For example, on a database machine (where > 80% of the memory is allocated to the DB's buffer pool) try to take a consistent filesystem snapshot of the db's data directory and then rsync it to another machine. The rsync process will read a ton of data, and Linux will dutifully (and needlessly) try to jam this into the already full buffer cache. Instead of ejecting the current contents of the buffer cache, Linux will madly start swapping out database pages trying to preserve buffer cache.

Some versions of rsync support direct I/O on read to avoid this, but they're not mainstream or readily available on Linux. You can also use iflag=direct with dd to get around this problem.
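
If a patched rsync isn't an option, a related trick is available to any program: read the data normally, then tell the kernel to drop those pages from the cache. A rough sketch using posix_fadvise (a different mechanism than O_DIRECT, but aimed at the same problem):

  #include <fcntl.h>
  #include <unistd.h>

  /* Stream through a file without leaving it in the page cache. */
  int main(int argc, char **argv) {
      char buf[1 << 16];
      if (argc < 2)
          return 1;
      int fd = open(argv[1], O_RDONLY);
      if (fd < 0)
          return 1;
      while (read(fd, buf, sizeof buf) > 0)
          ; /* consume the data here */
      /* Drop this file's pages from the cache instead of letting
         them push out the database's working set. */
      posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
      close(fd);
      return 0;
  }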


The cache can be cleared via `/proc/sys/vm/drop_caches`.

http://linux-mm.org/Drop_Caches


Yes it can, but you probably don't want to do that. You definitely don't want to do it in an automated way when the machine is experiencing memory pressure.

There are very good reasons that Linux (and most other modern operating systems) makes aggressive use of page caches and buffers. For the vast majority of applications dropping these caches is going to reduce performance considerably (disk is really really slow) and most applications for which this isn't true are probably using O_DIRECT anyway.

The arguments in favor of page caching are: (a) disks have very high latency, (b) disks have relatively low bandwidth, (c) for hot data RAM is cheaper than disk IO, both in dollars and in watts [1], and (d) it's basically free because the memory would have been unused anyway.

The arguments against page caching are: (a) occasionally the kernel will make poor choices and do something sub-optimal and (b) high numbers in 'free' make me feel better.

Too many inexperienced operators (or those experienced on other OSs) cite disadvantage (a) when their real motivation is (b), and decide to drop caches using a cron job.

[1] Old but good: ftp://ftp.research.microsoft.com/pub/tr/tr-97-33.pdf


Yes, it was actually sort of a response to that webpage, which said it was not possible to free this cached memory.

The cache dropping is actually useful when you are doing benchmarking...


> The cache dropping is actually useful when you are doing benchmarking...

Agreed.

My response was more to the "Let's Put 'echo 3 > /proc/sys/vm/drop_caches' In Cron and Get Free RAM!!!!" thinking, which sadly seems to be widespread.


firefoxatemyram.com is still available. Perhaps we can put a site up there as well.


Yep, firefox is currently 555 MB resident and using 1.5 GIGABYTES of virtual memory space. Goddamn, firefox, you are a pig. Saddest thing? I've got gmail, github, and maybe 10 static pages open.

Linux just looks like it ate your RAM. Firefox straight up does eat it.


Am I missing something? Does Firefox not use unused RAM for cache in a similar manner as Linux uses unused RAM?


555 MB is still a lot. That's the actual memory usage.


Modern web browsers are complicated pieces of software that are also the nexus of most people's interactions with their computer. tiles' point is correct: Firefox will tend to cache things much like the OS itself. We should expect it to use a lot of memory. If your system performance has not suffered, there is no problem.


Performance does suffer, because the operating system doesn't know that all that memory being used by firefox is just cache. And firefox doesn't know when the operating system would like to use memory for something other than cache. So when you start a second large program, it's thrashing time.

VMs like VMware have the exact same problem, where the host might want memory used by the guest, and you end up with weird scenarios where the guest's swap is in the host's disk cache, but the guest's memory is in the host's swap. One of the things guest tools are supposed to do is communicate with the host regarding memory pressure. Firefox lacks this feedback mechanism.


I'm sorry, but this reply doesn't make any sense to me. If applications have to cache something, they should use the file system, which would not affect their resident memory at all. Maybe I'm just old school, but where I come from operating systems handle the memory hierarchy, not applications.


They cache it in memory to avoid latency. Writing to and reading from disk increases latency. Even crossing the kernel boundary will increase latency.

Modern web browsers have architectures similar to OSes at this point - because they have requirements similar to OSes. I think it's natural that they will take on some of the same responsibilities.


If it's cached to the filesystem, it'll be handled by the file cache, under the kernel's control with the rest of RAM. Maybe browser makers have a good reason for thinking they can do better than the OS, I don't know, but having two systems trying to do the same job with the same resources sounds like a recipe for instability and inefficiency to me.


If you have to cross the kernel boundary every time you want to access something in your cache, your "cache" is now much, much slower. Note that this applies even if the OS keeps the file in memory, and doesn't require going to disk.


Sure, but if accepting that cache slowdown makes the rest of the system more responsive and more useful for background tasks, it may be worthwhile. It's a trade-off, and the browser makers have every incentive to be as selfish as possible.


Where I come from, what you're describing is called pre-fetching, not caching. That's why your earlier comment confused me.


No, I'm talking about caching: keeping data around that you have previously used, assuming you will use it again. However, in order to support caching, you must pre-allocate memory.


I suspect you don't know what virtual memory is: http://news.ycombinator.com/item?id=3699481


Yes, I know what virtual memory is; I'm a computer engineer and I work on operating systems. I'm just extremely impressed that they needed 1.5 GB of address space, for Christ's sake.

Insert here some pithy comment about Apollo missions and Twitter, or whatever.


The reason I suspected that is grabbing 1.5 GB of virtual memory doesn't mean much. It just means you asked the OS to change its book-keeping to allow you to access that much. Applications and libraries that aggressively cache and prefetch routinely do that sort of thing.


The reason for acquiring a lot of virtual address space is to save memory and improve performance: the reservation costs nothing until pages are actually touched.


Why is Firefox always mentioned in this context? Sure, it has a history of it, but lately it's been fine, and if you compare the combined memory of all of Chrome's spawned processes, Chrome typically consumes more.


I keep my Firefox up to date with release versions, have few plugins, and still find it absorbing memory at a tremendous rate. Chrome can also use memory, but somehow manages to let it go sometimes, and manages not to drag my entire (Windows) system to a stutter.

I guess so many of us mention FF in this context because it still happens to us. But we still love it, and that's why we gamely continue to use it. Though admittedly, most of us have a Chrome on the side...


The ongoing efforts of the MemShrink project keep finding memory leaks -- usually in Firefox addons. You can read more about it at http://blog.mozilla.com/nnethercote/category/memshrink/


Remember that when you are accounting for those, you also need to subtract all of the stuff that is paged into every process and is the same across them, such as shared libraries and the like. Most likely Chrome is using less RAM than you think.


Chrome has a built-in task manager that takes this into account, as well as a slightly more hidden "Stats for nerds" link in the task manager that gives total memory usage. It's just genuinely very memory hungry once you add up all its processes.


HN's pro-Google bias.


What does about:memory say?


Basically the same thing. It's been several hours, right now it says it has 644 MB of explicit allocations, it is 675 MB resident, and has 1,633 MB of virtual address space. When I looked at about:memory right after composing my original post, the numbers were within a few megabytes of what htop reported.


Putting that in Chrome gets me to a page that was new to me, chrome://memory-redirect/


I never knew this existed...thanks for pointing this out.


Is there any way to have top display this information?


Use `htop` instead. It displays the information in a bit more verbose way. You'll get each section of "used" memory colour-coded, so that the last yellow area can be ignored as cache.


yep +1 for htop.


free does the job of taking cache into account (mentioned in the original post). If you've read the neugierig post, and want a better per-process monitor:

gnome-system-monitor has a top-like monitor as well as graphs, and measures memory properly (including a discount for shared maps); smem works in the console; it doesn't have a term interface like top, but it can be combined with watch.


atop is what I use these days, very flexible.


I found this page because I was pretty confused about these issues, and still am...

Question for the crowd: the example on the site says that in reality 869MB of RAM are used. I'm comparing this with my VPS values, and would like to know if this is the sum of some column in top. Is it? It looks like it's pretty close to the sum of the SHR column. Does this make sense? Thanks in advance.


You can't really do sums of top columns, because some memory is shared and you'll end up double-counting it.

And you can't just subtract the shared memory numbers, because different sets of pages are shared between different sets of processes, and top doesn't give enough information to figure out what's actually happening where.

Running pmap (or reading /proc/<pid>/smaps) on all pids and summing the Pss numbers is perhaps the closest you can get to the actual memory use.
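
The Pss numbers ultimately come from /proc/<pid>/smaps, so you can sum them yourself. A sketch for a single process (run as ./pss <pid>):

  #include <stdio.h>

  int main(int argc, char **argv) {
      char path[64], line[256];
      long pss, total = 0;
      if (argc < 2)
          return 1;
      snprintf(path, sizeof path, "/proc/%s/smaps", argv[1]);
      FILE *f = fopen(path, "r");
      if (f == NULL)
          return 1;
      while (fgets(line, sizeof line, f) != NULL)
          if (sscanf(line, "Pss: %ld", &pss) == 1)
              total += pss; /* each mapping's proportional share, in kB */
      fclose(f);
      printf("Pss total: %ld kB\n", total);
      return 0;
  }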


I found a good introduction to UNIX memory caching in chapter 3 (The Buffer Cache) of 'Design of the UNIX Operating System', http://www.amazon.com/Design-Operating-System-Prentice-Hall-... At least it was good for me (mathematician by training, programmer by profession).


Does the Linux disk cache push out pages that are used by running applications? I believe Windows does it, though I can't state that for a fact.


And Hacker News ate your website :-/



