It sometimes feels like a bonus these days if apps bother to clean up their memory at runtime, so maybe that's why the parent poster thought it was something good and special that the OS frees the memory of an ended process.
Btw., many people don't seem to know that you can create impressive memory leaks even in languages with a garbage collector, like JavaScript. And I would bet most websites actually do: things only work as well as they do because tabs get closed regularly, and because RAM keeps increasing. But browse on an older smartphone and you hit the RAM limit very quickly.
I have 32GB of RAM and it sits unused most of the time. Right now I am at 4GB/32GB. The browser simply isn't a significant source of memory consumption. Open Atom and you can easily get to 500MB for a single application, which is completely wasteful. A browser can run dozens of apps in 4GB.
On the other hand, my browser (Firefox) blows through 8GB of RAM a few times a day. Sometimes I wish people programmed like 512MB was a luxury machine.
I also have 32 GB RAM and right now am at 25 GB + 2.4 GB in swap. I'm at around 20 GB most of the time but always have at least 3 Firefox tabs open. Sometimes a buggy process (looking at you, Apple…) decides to go haywire and use 30-60 GB of virtual memory. I don't even notice that until I have a look into the activity monitor. Handling RAM spikes seems to be no issue at least on macOS.
In Firefox and Chrome, tabs are processes. So aside from resources allocated on behalf of that process by other processes not being cleaned up, the OS will clean up all the memory the tab asked it to allocate when the tab is closed.
Firefox user here, plenty of tabs, Win10 as OS. 3.7 GB before opening the GIF in new tab, 3.8 after opening, 3.7 after closing. Reopening and closing it several times in a row yields the same results, consistently. At least for my setup (Win10 heavily crippled to my own liking) closing tab == closing app in terms of memory gained back.
It would be fair for a browser to assume that if you’ve just visited one page that you might return soon, and so keep assets in cache for a little while.
A process which doesn't exist cannot hold memory. But the OS can certainly choose to defer the erasure as long as there's no better use for that memory. This is often done to speed up processes which are frequently quit and reopened.
> A process which doesn't exist cannot hold memory
Not quite. Some leaks are across processes. If your process talks to a local daemon and causes it to hold memory, then quitting the client process won't necessarily free it. Similarly, some applications are multi-process and keep a background process active even when you quit, to "start faster next time" (or to act as spyware). This includes some infamous things like the Apple updater that came with iTunes on Windows. It's also quite easy to make SHM-enabled (shared-memory) caches leak. Finally, the kernel caches as much as it can (file system content, libraries, etc.) in case it is reused. That caching can push "real process memory" into swap.
So quitting a process does not always restore the total amount of available memory.
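As a sketch of the client/daemon case (hypothetical names, with a single Node process standing in for two separate ones): a long-lived service that caches data on behalf of short-lived clients keeps that data alive after every client is gone.

```javascript
// A long-lived "daemon" that caches per-client data and never evicts.
class Daemon {
  constructor() { this.cache = new Map(); }
  handleRequest(clientId, payload) {
    this.cache.set(clientId, payload); // kept "for next time", no eviction
  }
}

const daemon = new Daemon();

function runClient(id) {
  const payload = Buffer.alloc(10 * 1024 * 1024); // ~10 MiB of client data
  daemon.handleRequest(id, payload);
  // The client "quits" here; its own references die with it...
}

for (let i = 0; i < 5; i++) runClient(i);

// ...but the daemon still holds every payload: ~50 MiB that quitting
// the clients did not give back.
console.log(daemon.cache.size); // → 5
```

In the real cross-process case the effect is the same, just spread over two address spaces: the OS reclaims everything the client owned, but the daemon's cache is the daemon's memory, and only the daemon can release it.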