
No it isn't, and doing that will chew up your SSD. Which on that MacBook Air is soldered.

The answer is it depends, I think…

If your SSD is near its max capacity, then any extra wear has a bad effect on its longevity. But modern SSDs handle excess writes very well if they are not near capacity.

A few extra GB written to disk daily is a drop in the bucket against an SSD's TBW rating, no?

I’d say for a casual user with low storage needs, it’s perfectly fine. Otherwise it’s a bad idea imo.


What's telling for me is that SSDs have been a readily available consumer part for around 15 years, and a default option in PCs for quite a while now, and to my knowledge there haven't been many tales of SSDs dying (from write endurance specifically, or otherwise) beyond occasional bad models like the old OCZ Vertex 2s. Even early torture tests were finding that you'd need to push around 2PB of writes (on smaller drives than we have now) to get failures, and that was with a sample size of 1 for each model. I wouldn't expect an SSD to die more than any other electronics.

I've got a few dozen tales of SSDs dying in machines I've managed. Some dying slow deaths with lots of bad reads, some locking themselves in a read only mode, some just disappearing from the system.

Wear leveling spreads the wear out. If there is no free space, it can't do that, and you're completely screwed.

The problem with swapping is that SSDs are fast. If you have 8GB of RAM and manage to pick up any workload with a 10GB working set size, you're short 2GB, so the OS will have to put 2GB on the SSD. But your working set is 10GB and now only 8GB is in RAM, so it needs that 2GB back immediately. To do that it has to swap out some other 2GB, which it also needs to have back immediately. The result is that your SSD is the bottleneck and ends up maxed out doing writes.

NVMe SSDs will do something like 4GB/sec. Not a few GB a day, a few GB a second. A 256GB consumer SSD that can handle 100 full drive writes over its lifetime can thereby hit its lifetime wear rating in just two hours. Under ordinary storage use that doesn't happen because you're not maxing out the drive for hours on end -- after all, if you were storing ordinary data, writing at 4GB/s would cause the drive to be completely full after only 64 seconds.
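As a back-of-the-envelope check on those numbers (using the illustrative figures above, 256GB capacity, a 100-drive-write endurance rating, and 4GB/s sustained writes, not the specs of any particular drive):

```python
# Back-of-the-envelope wear math for a hypothetical consumer SSD.
# All figures are illustrative assumptions, not specs of any real drive.
capacity_gb = 256       # drive capacity
drive_writes = 100      # endurance rating: full drive writes over lifetime
write_speed_gbs = 4     # sustained sequential write speed, GB/s

tbw_gb = capacity_gb * drive_writes         # total-bytes-written rating
seconds_to_burn = tbw_gb / write_speed_gbs  # time to exhaust it at full speed

print(tbw_gb)                          # 25600 GB, i.e. 25.6 TB
print(seconds_to_burn / 3600)          # ~1.78 hours
print(capacity_gb / write_speed_gbs)   # 64 seconds to fill the drive once
```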

But swap is deleting stuff and overwriting it and deleting it again. In a pathological case it could burn out a brand new drive in an afternoon and in more realistic cases could plausibly do it over a few months.


> But swap is deleting stuff and overwriting it and deleting it again

I'd imagine modern systems wouldn't bother deleting a page in pagefile unless modified, but I don't actually know. Why would one bother deleting pages from the page file unless you have to? After all, if you're swapping a lot, there's a good chance that page will be evicted again and need to be copied back anyway. Leaving it there gives you a chance of not having to actually rewrite the page, assuming it was unmodified. You then also don't have to spend the time deleting all those unmodified pages.

Obviously, your percentage of modified pages to unmodified pages during swapping would be highly dependent on workload. But I imagine a good number of workloads have a lot of stuff in RAM that is somewhat static.


> I'd imagine modern systems wouldn't bother deleting a page in pagefile unless modified, but I don't actually know. Why would one bother deleting pages from the page file unless you have to?

They presumably use TRIM/UNMAP on SSDs when they're done with it because you'd otherwise have pages in the pagefile which are unused but the SSD can't erase to reallocate. Also, to keep track of whether a page has been modified after it has been swapped back in, memory writes would have to generate page faults just to allow the OS to mark the page as dirty, which is slow.

> Obviously, your percentage of modified pages to unmodified pages during swapping would be highly dependent on workload. But I imagine a good number of workloads have a lot of stuff in RAM that is somewhat static.

Memory reads still cause swap writes because a page has to be evicted in order to make room in RAM for the one being swapped back in, even if neither is being modified.

Also consider what would happen if pages were kept in the pagefile even after being swapped back in. You have 64GB of RAM and a 64.2GB working set. If pagefile pages are reused after being swapped back in, your pagefile is 0.2GB. If pages are left in the pagefile after being swapped back in, after one pass over the working set your pagefile is 64.2GB.
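That difference can be sketched with a toy LRU paging model. The numbers are scaled down (640 pages of "RAM", a 642-page working set, standing in for 64GB vs 64.2GB) and the model is purely illustrative, not how any real kernel is implemented:

```python
from collections import OrderedDict

def simulate(ram_pages, workset_pages, passes, free_slot_on_swap_in):
    """Toy LRU paging model: count pagefile slots in use after cyclic scans."""
    ram = OrderedDict()   # page -> None, ordered by recency (LRU first)
    slots = set()         # pages currently occupying a pagefile slot
    for _ in range(passes):
        for page in range(workset_pages):
            if page in ram:
                ram.move_to_end(page)        # hit: mark most recently used
                continue
            if free_slot_on_swap_in:
                slots.discard(page)          # slot reused once page is back in RAM
            if len(ram) >= ram_pages:
                victim, _ = ram.popitem(last=False)  # evict least recently used
                slots.add(victim)                    # victim written to pagefile
            ram[page] = None
    return len(slots)

# 640 pages of RAM, 642-page working set (scaled-down 64GB vs 64.2GB):
print(simulate(640, 642, passes=5, free_slot_on_swap_in=True))   # stays at 2
print(simulate(640, 642, passes=5, free_slot_on_swap_in=False))  # grows to 642
```

With slot reuse, the pagefile only ever holds the pages currently evicted (working set minus RAM); with retention, every page that is ever evicted keeps a slot, so the whole working set ends up in the pagefile.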


> because you'd otherwise have pages in the pagefile which are unused but the SSD can't erase to reallocate

> Also consider what would happen if pages were kept in the pagefile even after being swapped back in

In Windows, the pagefile doesn't grow and shrink based on you reading and writing pages. pagefile.sys is usually a fixed size managed by the OS, and it will typically be several gigs even if you have, say, 100MB of "active" pages in the pagefile. On the Windows machine I'm using right now, the pagefile is set to ~25GB. pagefile.sys stays that size regardless of whether there's only 5MB of active pages in it or 20GB, until I go and modify the setting.

In Linux, swap is often a dedicated partition. It isn't going to shrink or grow based on its usage. And generally, a swapfile cannot be dynamically shrunk while online.

In fact, while there is an option to enable discards on swapon, it seems it doesn't always actually improve performance. I've seen a lot of debate, with many suggestions not to enable it.

https://www.man7.org/linux/man-pages/man8/swapon.8.html

> memory writes would have to generate page faults just to allow the OS to mark the page as dirty

The OS already knows the page is dirty and would need to be rewritten to properly swap it out. It wouldn't have to update the page in the swap/pagefile immediately on its being marked dirty, only when it wants to swap that page out. Windows reports this memory as "Modified", with the description "memory whose contents must be written to disk before it can be used for another purpose". Meaning, it knows these pages are dirty, but it hasn't bothered touching pagefile.sys yet to sync them, largely because it doesn't have to while my machine still has plenty of free memory.

> Memory reads still cause swap writes because a page has to be evicted in order to make room in RAM for the one being swapped back in

That assumes it always deletes the swap copy when a page gets read in. If a page has previously been swapped out, isn't dirty, and isn't automatically deleted from swap on being paged back in, it wouldn't have to be written again, which once more significantly improves performance. This is precisely why you wouldn't want to immediately delete the page from swap: if the page is going to bounce back and forth between active memory and swap, you might as well just leave it there.
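A sketch of the write savings, using a toy LRU model with illustrative scaled-down numbers (not how any real kernel behaves): count pagefile writes for a read-only cyclic scan, with and without retaining clean pages' swap slots.

```python
from collections import OrderedDict

def swap_writes(ram_pages, workset_pages, passes, keep_clean_slots):
    """Count pagefile writes for a read-only cyclic scan under LRU eviction.

    keep_clean_slots: if True, a clean page whose copy is still valid in the
    pagefile can be dropped from RAM without being rewritten.
    """
    ram = OrderedDict()   # page -> None, ordered by recency (LRU first)
    has_slot = set()      # pages with a still-valid copy in the pagefile
    writes = 0
    for _ in range(passes):
        for page in range(workset_pages):
            if page in ram:
                ram.move_to_end(page)
                continue
            if not keep_clean_slots:
                has_slot.discard(page)   # slot freed as soon as page is read back
            if len(ram) >= ram_pages:
                victim, _ = ram.popitem(last=False)
                if victim not in has_slot:   # clean page w/ valid slot: no write
                    writes += 1
                    has_slot.add(victim)
            ram[page] = None
    return writes

# Same scaled-down setup: 640-page RAM, 642-page read-only working set.
print(swap_writes(640, 642, 5, keep_clean_slots=True))   # each page written once
print(swap_writes(640, 642, 5, keep_clean_slots=False))  # every eviction writes
```

In this model, retaining clean slots means each page is written to swap at most once, while freeing slots on swap-in forces a write on every eviction even though nothing was modified.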

You're also assuming that reading the page back into memory from swap inherently means another page has to be moving to swap. But this isn't always true; maybe some page was swapped out during a period of memory pressure, but now the system has more available memory. In this case (which probably happens often), reading that page back wouldn't require pages moving to swap.

Reading more about swap, it sounds like modern Linux uses an LRU caching strategy for memory pages in the swap cache. It doesn't explicitly go around deleting pages from swap unless it is going to reuse the page, or one has enabled discard=pages.

https://www.kernel.org/doc/gorman/html/understand/understand...

> When the reference count to the page finally reaches 0, the page is eligible to be dropped from the page cache and the swap map count will have the count of the number of PTEs the on-disk slot belongs to so that the slot will not be freed prematurely. It is laundered and finally dropped with the same LRU aging and logic described in Chapter 10.


Neither of those assertions is correct. You personally may have a workload which requires more RAM, but there are many people – even developers – who have direct experience otherwise. macOS is notably more memory efficient than Windows and the M series hardware has efficient compression, and that configuration holds up fine for the usual browser+editor+Slack+normal app usage which a lot of developers have.

SSD wear is a concern, but they aren’t using low-end components so you’re looking at 5+ years of daily usage. I used an 8GB M1 for years and when I upgraded to an M3 there was no indication of SSD wear either in measured performance or the diagnostic counters.


> You personally may have a workload which requires more RAM, but there are many people – even developers – who have direct experience otherwise. macOS is notably more memory efficient than Windows and the M series hardware has efficient compression, and that configuration holds up fine for the usual browser+editor+Slack+normal app usage which a lot of developers have.

Sure, it's physically possible to use a machine with 8GB of RAM without running out. If all you do is open some terminals and a single-digit number of browser tabs to well-behaved websites, 8GB is an ocean.

But that use case is the exception, not the rule. Worse, ordinary people don't know what causes it. If you're a developer and your machine is sluggish, you know enough to realize it's because it's swapping, and in turn to know that it's swapping because you opened up some ultra-high-res NASA images in an image viewer and forgot to close them, or because you have the tab open for that awful news website that will suck up 20GB of RAM all by itself with its ridiculous JS, or simply because you have 10 different apps running.

For most people, all they know is that their computer is slow -- which it wouldn't be if it had an adequate amount of RAM.

Meanwhile, because they don't know what causes it, they don't know what to do about it, so they just suffer through it. Which has the machine continuously swapping, which is what wears out the SSD.


But in your example the OS should be smart enough to realize "hey, these pages belonging to the image viewer haven't been touched much in the last 12 hours, so they should be high priority to swap out". So when the user switches back to the image viewer with the 100-gigapixel-or-whatever image, they'll take that slowness hit once, but otherwise won't experience it. Then the machine isn't constantly swapping; it just quietly pages out the apps nobody is really touching.

The OS shouldn't be swapping hot pages when there's lots of things sitting around not doing much.


> Which on that MacBook Air is soldered.

And has insufficient storage to begin with...


