
Swapping should have disappeared years ago. At best, it gives the effect of twice as much memory, in exchange for much slower speed. It was invented when memory cost a million dollars a megabyte. Costs have declined since then. How much does doubling the memory cost today?

What seems to keep swap alive is that asking for more memory ("malloc") is a request that can't be refused. Very few application programs handle an out of memory condition well. Many modern languages don't handle it at all. Nor is it customary to check for a "memory tight" condition and have programs restrain themselves, perhaps by starting fewer tasks in parallel, opening fewer connections, keeping fewer browser tabs in memory, or something similar.
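To be fair, on a typical Linux box with overcommit enabled malloc() rarely reports failure at the moment you'd expect (more on that further down the thread), but where the failure is visible, handling it looks roughly like this C sketch -- the halve-the-request fallback is just an illustrative policy, not what any particular program does:

    #include <stdio.h>
    #include <stdlib.h>

    /* Try to allocate a working buffer, falling back to smaller sizes
     * instead of aborting when malloc() reports failure. */
    static void *alloc_with_fallback(size_t want, size_t min, size_t *got)
    {
        for (size_t n = want; n >= min; n /= 2) {
            void *p = malloc(n);
            if (p != NULL) {
                *got = n;
                return p;
            }
        }
        return NULL;  /* genuinely out of memory: degrade or shut down cleanly */
    }

    int main(void)
    {
        size_t got = 0;
        void *buf = alloc_with_fallback((size_t)1 << 30, (size_t)1 << 20, &got);
        if (buf == NULL) {
            fputs("out of memory, refusing to start\n", stderr);
            return 1;
        }
        printf("working with a %zu byte buffer\n", got);
        free(buf);
        return 0;
    }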

I've used QNX, the real time OS, as a desktop system. It doesn't swap. This makes for very consistent performance. Real-time programs are usually written to be aware of their memory limits.

Most mobile devices don't swap. So, in that sense, swapping is on the way out.




> Nor is it customary to check for a "memory tight" condition and have programs restrain themselves, perhaps by starting fewer tasks in parallel, opening fewer connections, keeping fewer browser tabs in memory, or something similar.

These aren't mutually exclusive and are actually complementary with swap.

If you have more than enough memory then swap is unused and therefore harmless. The question is, what do you do when you run out? Making the system run slower is almost always better than killing processes at random.

And it gives processes more time to react to a low memory notification before low turns into none and the killing begins, because it's fine for "low memory" to mean low physical memory rather than low virtual memory.

It also does the same thing for the user. "Hmm, my system is running slow, maybe I should close some of these 917 browser tabs" is clearly better than having the OS kill the browser and then kill it again if you try to restore the previous session.
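For the "low memory notification" part: a rough sketch of one way a Linux process could watch physical-memory pressure, assuming a kernel new enough (4.20+) to expose pressure-stall information under /proc/pressure/memory; the 10% threshold below is an arbitrary illustration, not a recommendation:

    #include <stdio.h>

    /* Read the kernel's memory pressure-stall info (Linux 4.20+).
     * avg10 is the share of the last 10 seconds in which at least one
     * task was stalled waiting on memory. */
    static double memory_pressure_avg10(void)
    {
        FILE *f = fopen("/proc/pressure/memory", "r");
        double avg10 = -1.0;
        if (f != NULL) {
            /* first line: "some avg10=0.00 avg60=0.00 avg300=0.00 total=0" */
            if (fscanf(f, "some avg10=%lf", &avg10) != 1)
                avg10 = -1.0;
            fclose(f);
        }
        return avg10;
    }

    int main(void)
    {
        double p = memory_pressure_avg10();
        if (p < 0.0)
            puts("no PSI support here; use another signal");
        else if (p > 10.0)
            puts("memory is tight: drop caches, close idle connections, etc.");
        else
            printf("memory pressure looks fine (avg10=%.2f%%)\n", p);
        return 0;
    }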


> Making the system run slower is almost always better than killing processes at random.

In practice, heavy swapping (back and forth) makes it impossible to even kill the culprit manually (because I can't open an xterm or whatever), while there is often no benefit to having the processes continue running that slowly.

Also, ideally programs should be written with the assumption that the machine could go down at any instant. Having a few more cases where the program is killed means that code path gets better tested and debugged.


I cannot remember a single occasion where my desktop recovered once it started swapping. Every time, the whole system locks up and I need to reboot. So better to kill some random processes than, effectively, all of them.


Always, really? Perhaps I'm lucky but this happens quite frequently with my system (dev workstation, so browser with lots of tabs, IDE, my own app/server stuff, other "power/mem-hungry" dev tools...), and I always manage to keep it sane/healthy:

- notice the system starts swapping (not monitoring that seems as careless as driving on the highway in 2nd gear and ignoring the engine noise -- ideally the OS could proactively help here, but unfortunately I don't know a good "automated" tool)

- find out which process/app uses the most memory (Linux can even tell you which ones use the most swap space [1]; see the sketch below the list)

- decide which one you want to (gently|forcefully)-(quit|restart|whatever). Exercise judgment.

[1] http://stackoverflow.com/questions/479953/how-to-find-out-wh...
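For reference, a rough C version of what [1] describes -- reading the VmSwap field from each /proc/<pid>/status, which reasonably recent Linux kernels report per process:

    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>

    /* Report per-process swap usage from the VmSwap field of
     * /proc/<pid>/status. */
    int main(void)
    {
        DIR *proc = opendir("/proc");
        struct dirent *de;
        if (proc == NULL) {
            perror("opendir /proc");
            return 1;
        }
        while ((de = readdir(proc)) != NULL) {
            char path[300], line[256], name[128] = "?";
            long swap_kb = 0;
            FILE *f;
            if (!isdigit((unsigned char)de->d_name[0]))
                continue;                      /* not a PID directory */
            snprintf(path, sizeof path, "/proc/%s/status", de->d_name);
            f = fopen(path, "r");
            if (f == NULL)
                continue;                      /* process may have exited */
            while (fgets(line, sizeof line, f) != NULL) {
                if (sscanf(line, "Name: %127s", name) == 1)
                    continue;
                sscanf(line, "VmSwap: %ld kB", &swap_kb);
            }
            fclose(f);
            if (swap_kb > 0)
                printf("%8ld kB  pid %-7s %s\n", swap_kb, de->d_name, name);
        }
        closedir(proc);
        return 0;
    }

Pipe the output through sort -n to see the biggest swap users at the bottom.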


> I cannot remember a single occasion, where my desktop recovered when it started swapping.

..which operating system is that?


Ubuntu


Sounds to me like your swap is not swapon'd. I get the same behaviour when I'm not running swap and memory is depleted.
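If in doubt, it's easy to check: /proc/swaps lists every active swap device or file (the same information swapon -s prints). A trivial sketch:

    #include <stdio.h>

    /* Print the active swap areas; an empty list (header only) means
     * nothing is currently swapon'd. */
    int main(void)
    {
        FILE *f = fopen("/proc/swaps", "r");
        char line[256];
        int entries = -1;                /* don't count the header line */
        if (f == NULL) {
            perror("/proc/swaps");
            return 1;
        }
        while (fgets(line, sizeof line, f) != NULL) {
            fputs(line, stdout);
            entries++;
        }
        fclose(f);
        if (entries <= 0)
            puts("no active swap -- nothing is swapon'd");
        return 0;
    }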


Swap space is only partially related to virtual memory overcommit, and virtual memory overcommit is extremely common and almost unavoidable on most Unix machines. Part of this is a product of a deliberate trade-off in libraries between virtual address space and speed (for example, internally rounding up memory allocation sizes to powers of two), and part of this is due to Unix features that mean a process's theoretical peak RAM usage is often much higher than it will ever be in reality.

(For example, if a process forks, a great deal of memory is shared between the parent and child. In theory one process could dirty all of their writeable pages, forcing the kernel to allocate a second copy of each page. In practice, almost no process that forks will do that and reserving RAM (or swap) for that eventuality would require you to run significantly oversized systems.)
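A hedged illustration of that overcommit behaviour, assuming a 64-bit Linux machine with the default vm.overcommit_memory=0 heuristic (exact behaviour varies by kernel and sysctl settings):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* 64 GiB of address space -- far more than most machines'
         * RAM + swap.  With heuristic overcommit the kernel usually
         * grants it anyway, because it's only handing out addresses,
         * not pages. */
        size_t size = (size_t)64 << 30;
        char *p = malloc(size);
        if (p == NULL) {
            puts("malloc refused (strict overcommit mode, perhaps)");
            return 1;
        }
        puts("malloc succeeded; nothing is committed yet");

        /* Physical memory (or swap) is consumed only as pages are
         * dirtied.  Keep dirtying them and eventually the OOM killer,
         * not malloc, delivers the bad news.  Touch just 1 GiB here. */
        for (size_t i = 0; i < ((size_t)1 << 30); i += 4096)
            p[i] = 1;
        puts("dirtied 1 GiB of the 64 GiB reservation");

        free(p);
        return 0;
    }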


Plus mobile apps do get, and usually handle, a low-memory notification from the OS.


On iOS, too many low-memory warnings in a set amount of time (Apple won't tell developers how many, or in what time frame, to prevent them from gaming the system) will result in your app getting killed.


Until Apple stops soldering on memory, swap will still be alive on the desktop.


Years ago, about 80% of desktop machines were never opened during their life. It's probably higher today.


... for a small fraction of users.


Memory allocation is a non-market operation on (most? all?) operating systems. There's effectively no cost to processes allocating memory, and a fair cost to them not doing so.

I'm not sure whether turning this into a market-analogous operation (bidding some ... other scarce resource -- say, killability?) would make the situation better or worse. And the problem ultimately resides with developers. But as a thought experiment this might be an interesting place to go.


This idea was implemented in EROS, and we've been exploring it for Robigalia as well. Storage is a finite resource which can be transferred between processes, including an "auction" mechanism which allows two processes to examine a trade before agreeing to it.


Doesn't this already exist for processor scheduling?


There's a weighting in many such systems, but ultimately it's still just a queue, usually a FIFO one.

Niceness allows for higher-priority processes to preempt others, but doesn't address the problem of an overwhelmed queue.

And processor scheduling isn't memory allocation. Time is ultimately some percentage of wall-clock (and/or overcommitment). Memory is ... different.

There's also the question of such stuff as garbage collection and scheduling of that. I had the opportunity to do some JVM tuning "ergonomics" (horrible name) a few years back. Turns out that you get far better behaviour in most cases by decreasing the sweep frequency and increasing the allocation chunks (terminology is escaping me), because natural attrition deallocates memory, and running sweeps too frequently simply chews up massive amounts of CPU time with no return in freed memory.

We also identified processes which genuinely did require very large memory allocations, and allocated hardware specific to those.

Specific workflow and process understanding (always idiosyncratic to a particular work assignment) was necessary, and took time to acquire.



