
Something seems to be seriously wrong with the swap implementation on modern systems.

20 years ago on Windows 98 it just started swapping, but it was no big deal. If something became too slow to be usable, you could just press ctrl+alt+del and kill that swapped program and everything worked fine afterwards.

On the other hand, on my modern Linux laptop, once it starts swapping it swaps and swaps and you can do nothing, not even move the mouse, until 30 minutes later something crashes.




> on Windows 98 it just started swapping, but it was no big deal.

At that time, swapping out a single 4 KiB page reclaimed a meaningful share of memory: 4 KiB of 16 MiB is 1/4096 of the machine. Each page swapped out got back a noticeable chunk of the memory programs needed. Swap still works in 4 KiB pages, but memory has grown a thousandfold, so each page swapped does roughly a thousand times less good. Basically swap is a thousand times worse today than it was in the time of Windows 98.
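To put rough numbers on it (assuming 16 MiB then and 16 GiB now):

  4 KiB / 16 MiB = 1/4,096        (one page ≈ 0.02% of RAM)
  4 KiB / 16 GiB = 1/4,194,304    (one page ≈ 0.00002% of RAM)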

For hard drives, swap isn't used now to expand memory; it's used to page out initialization code and other 'dead' memory. Swap should be set to only a tiny fraction of memory size for that reason, to keep it from being used to handle actual out-of-memory conditions. Realistically, for most users it's not even worth enabling at all, because of the occasional stall when swapped-out memory has to be read back in from disk.

For SSDs, seek speed has improved enough to keep up with the extra memory, so swap can still be used like in the old days to expand the effective memory size. But memory is so large that a swap file that's a small fraction of memory size, used only to offload 'dead' memory, is enough unless there's a specific reason to actually rely on swap for out-of-memory situations.
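For example, a deliberately small swap file on Linux looks something like this (just a sketch; the 2 GiB size and the /swapfile path are illustrative, not recommendations):

  sudo fallocate -l 2G /swapfile
  sudo chmod 600 /swapfile
  sudo mkswap /swapfile
  sudo swapon /swapfile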


I have been using various operating systems for a while.

I feel like Linux has, in general, from a UX point of view, the worst behaviour when swapping and the worst behaviour in general under memory pressure.

I feel like it has gotten worse over time, which might not be just the kernel but the general desktop ecosystem. If you require much more memory to move the mouse or show the task manager equivalent, then the system will be much less responsive when it thrashes itself.

Honestly, I'd much rather have Linux just crash and reboot; that'd be faster than its thrashing tantrums.

Luckily, there's earlyoom, which just rampages through town quickly when memory pressure approaches. Like a reboot (i.e. damage was done), just faster.
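Something like this, if I remember the flags right (thresholds are only illustrative, check earlyoom's docs):

  earlyoom -m 5 -s 5    # start killing once available RAM and free swap both drop below 5%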

In any case, it makes me sad (in a bad way) to see how bad the state of things is when it comes to the basics of computing, like managing memory.


Not an excuse for bad implementations, but since I started running i3wm, my happiness has increased rapidly. To such an extent that I never want to run anything else: stability, speed, memory use... It solves (for me) the issues you have.


i3 is magnificent. The same display seems 10x bigger when using i3. As true for netbooks as for big desktops. My old x120 dual boots win7, which is unusably slow and unstable on it. Arch with i3 is still snappy. Unless I'm running a web browser. Web browsers have gone insane.


Use Noscript for browsing on old or resource limited hardware. The problem is the amount of code running on modern websites.


I couldn't figure out how to scale i3 to the high DPI on my Yoga 900, with Wayland on F25.


If you're on Wayland, use Sway instead. It feels so much like i3 that I often forget it's not i3. Hidpi works pretty well: https://github.com/SirCmpwn/sway/issues/797. I use this on a Dell Precision with the 4k display.
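For what it's worth, scaling there is a single config line (a sketch; the output name and factor are just examples for a 4k panel):

  output eDP-1 scale 2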


Thanks!


Use xrandr --dpi 192 (or whichever value you’d like to use) before starting i3, i.e. typically in your ~/.Xsession.

i3 ≥ v4.13 will pick up this value from the Xft.dpi resource in ~/.Xresources as well, which is the more common way of configuring DPI.

edit: haven’t tested this within Xwayland, though. Note that i3 is only supported on X11.
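Concretely, that looks something like this (a sketch; 192 assumes a 2x display):

  In ~/.Xresources:
    Xft.dpi: 192

  In ~/.Xsession, before exec'ing i3:
    xrdb -merge ~/.Xresources
    xrandr --dpi 192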


What do you mean by "scale i3"? Just the text drawn by i3 or also the managed windows?


I meant scale the entirety of the interface. I'll try Sway, as apparently i3 and Wayland isn't really a supported combination.


> Web browsers have gone insane.

Yep. I bought a second computer for full browsers: one machine for dev, another for 'full' browsing (JavaScript on). On my i3 dev machine I only browse with NoScript enabled, for dev stuff.


That's what happens when you run everything through 100 layers of abstraction. Windows, for better or for worse, runs most things closer to the metal.


Because Windows 98 always kept enough resources available to show you the Ctrl+Alt+Del dialog. On Linux, however, there is no "the shell must remain interactive at all times" requirement, so a daemon that gobbles memory and your rescue shell have the exact same priority. Modern Windows even has a graphics card watchdog: if any application issues a command to the GPU that takes too long, it's suspended and the user is asked whether it should be killed. Probably not what you want on an HPC box doing deep learning, but exactly what you want on an interactive desktop.

I suppose it might be possible to whip something up with cgroups and policy that will keep the VT, bash, X and a few select programs always resident in memory and give them ultimate I/O priority, but I haven't tried.
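Something along these lines, perhaps (a rough cgroup v2 sketch; the group name and 512M reserve are made up, controllers may need enabling first, and you'd still have to launch your shell/X inside the group):

  sudo mkdir /sys/fs/cgroup/rescue
  echo 512M | sudo tee /sys/fs/cgroup/rescue/memory.low       # soft guarantee of physical memory
  echo 0    | sudo tee /sys/fs/cgroup/rescue/memory.swap.max  # never swap this group out
  echo $$   | sudo tee /sys/fs/cgroup/rescue/cgroup.procs     # move the current shell into it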


This is the exact opposite of my experience. Back in the Windows 9x days it was a fairly routine experience for the system to soft-lock with the HD grinding away and I'd sometimes end up just hard rebooting the computer after waiting a few minutes for the ctrl-alt-delete dialog to appear. On macOS with a SSD I don't even notice when my system is swapping heavily.


Isn't this related to this change on kernel 4.10? https://kernelnewbies.org/Linux_4.10#head-f6ecae920c0660b7f4...


Possibly; however, since the writeback behavior is configurable, I expect you could test that thesis by changing the aggressiveness of the writeback draining.
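If I recall correctly, the new throttling exposes a per-device latency target you could tweak for such a test (sketch; sda is an example device and the knob only exists on 4.10+ kernels):

  cat /sys/block/sda/queue/wbt_lat_usec                    # current target, in microseconds
  echo 2000 | sudo tee /sys/block/sda/queue/wbt_lat_usec   # lower target = more aggressive throttling of background writeback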


Could this be a reflection of the increasing gulf between RAM speed and HD speed? Even with NVMe drives, which one probably shouldn't be swapping to anyway, RAM is orders of magnitude faster.


I think, among other things, it has to do with the size of the swap space relative to the speed of the swap device. IME high disk i/o combined with large swap space means swap never fills up and the OOM killer doesn't kick in. On systems with less RAM and swap, OOM conditions were hit much sooner, even with slower disks.

Default settings for dirty ratio and dirty background ratio exacerbate the issue: more data is held onto before it is written, and once the background ratio is hit, any application writing to disk will block.
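E.g., to make writeback kick in earlier (a sketch; the values are illustrative, not recommendations):

  sudo sysctl vm.dirty_background_ratio=5   # start background writeback at 5% of RAM dirty
  sudo sysctl vm.dirty_ratio=10             # block writers once 10% of RAM is dirty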


With SSDs, disk is not that slow.


SSDs are only ~4x faster than magnetic last I checked. If RAM is 100ns per access, and hd access is down from say, 1ms to 0.25ms, that's still a huge huge gap. 4x isn't even an order of magnitude.

EDIT: see comment below for more accurate numbers.


From the article:

> A typical reference to RAM is in the area of 100ns, accessing data on a SSD 150μs (so 1500 times of the RAM) and accessing data on a rotating disk 10ms (so 100,000 times the RAM).


reminded me of this...

Latency Numbers Every Programmer Should Know

https://gist.github.com/jboner/2841832


Thank you for the correction. I should have read more carefully. Still, we're talking 3 orders of magnitude for SSD vs RAM.


0. Possibly not true in all cases.
1. Modern systems are much more aggressive about enormous disk caches, which can ironically lead to I/O storms when the kernel swaps out your application to buffer writes, then has to flush the cache to swap the app back in.
2. Differences in working set size and in the number of background programs waking up.


I think that's more related to Linux and its prioritization of I/O than anything else. Note that the latest kernel release (4.10) contains an I/O throttle that should improve this experience.

https://kernelnewbies.org/Linux_4.10#head-f6ecae920c0660b7f4...


I feel you. X and other recovery-critical software should have their own reserved memory cgroup with some guaranteed, safe amount of physical memory and zero swappiness. I speculate that on Windows it works so well because most of this stuff is in kernel space anyway.


If you have an SSD, try setting vm.swappiness to 1 (not 0).
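E.g. (takes effect immediately; the sysctl.d file name is just a convention and may differ per distro):

  sudo sysctl vm.swappiness=1
  echo 'vm.swappiness=1' | sudo tee /etc/sysctl.d/99-swappiness.conf   # persist across reboots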


Just type

  sudo swapoff -a
  sudo swapon -a
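(swapoff -a forces everything currently in swap back into RAM, which only works if there's enough free memory; swapon -a then re-enables the swap devices.)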


Can't type while it is thrashing. Otherwise the offending program could just be killed.



