This comment: https://news.ycombinator.com/item?id=34655095 got me thinking: what are the genuine changes in software over the last 20 years that require more RAM nowadays?
That is: if you waved a magic wand and transplanted such a change into software written 20 years ago, its RAM usage would increase.
Only changes pertinent to the end users of software count, not ones pertinent to its developers, and only changes that increase RAM usage. There are also changes that make software CPU-heavier, but that is a matter for another post.
I have come up with the following:
- Large screens and ubiquitous 24-bit and 30-bit color. This change requires larger framebuffers, and is obviously useful to users.
- Universal UI compositing and double- or triple-buffering. This change requires more framebuffers, and gets rid of the annoying redrawing artefacts that are mostly unavoidable with single buffering. (The combined cost of these two is sketched after the list.)
- Unicode everywhere. Internal text representation requires more memory (sometimes as much as 4 times more, plus more memory for temporary buffers during de/serialization), and it is obviously useful to users (see the encoding-size sketch below).
- Sandboxing untrusted components. This change obviously requires memory to be set aside for each sandbox, and it prevents the use of various shared caches (a rough cost model is sketched below). The usefulness is obvious; nobody laments the past when a rogue file could easily own your PC.
- 64-bit address space. This change increases the sizes of pointer-heavy structures (see the last sketch below). While not directly useful to end users, it is an important change that makes it economical to build software that works on large data sets. While niche, it is still useful.
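
To put rough numbers on the first two items, here is a minimal sketch; the resolutions, color depths, and buffer counts are illustrative assumptions, not measurements:

```c
#include <stdio.h>

/* Bytes needed for one framebuffer. */
static long fb_bytes(long width, long height, long bytes_per_pixel) {
    return width * height * bytes_per_pixel;
}

int main(void) {
    /* Assumed 2003-era setup: 1024x768 at 16 bits per pixel,
       drawn directly into a single framebuffer. */
    long then = fb_bytes(1024, 768, 2);

    /* Assumed modern setup: 3840x2160 at 32 bits per pixel (24-bit color
       plus alpha), with a compositor keeping three screen-sized buffers.
       Per-window backing buffers would add more on top of this. */
    long now = fb_bytes(3840, 2160, 4) * 3;

    printf("then: %ld KiB\n", then / 1024);         /* 1536 KiB, about 1.5 MiB */
    printf("now:  %ld MiB\n", now / (1024 * 1024)); /* about 94 MiB */
    return 0;
}
```

And since a compositor also gives every window its own off-screen surface, the real multiplier on a busy desktop is larger than the bare 3x shown here.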
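
For the Unicode item, the "as much as 4 times" figure falls out of fixed-width UTF-32 versus a one-byte-per-character legacy encoding. A small C11 sketch showing the in-memory size of the same string in three encodings:

```c
#include <stdio.h>
#include <uchar.h>  /* char16_t, char32_t */

int main(void) {
    /* The same five-character string in three Unicode encodings. */
    char     utf8[]  = u8"hello";
    char16_t utf16[] = u"hello";
    char32_t utf32[] = U"hello";

    /* sizeof counts the terminating zero, so each array holds 6 code units. */
    printf("UTF-8:  %zu bytes\n", sizeof utf8);  /* 6 bytes  */
    printf("UTF-16: %zu bytes\n", sizeof utf16); /* 12 bytes */
    printf("UTF-32: %zu bytes\n", sizeof utf32); /* 24 bytes */
    return 0;
}
```

Platforms that standardized on UTF-16 internally, such as Windows and Java, pay roughly 2x for ASCII-heavy text; software that stores text as UTF-32 pays the full 4x.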
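
The sandboxing cost is harder to pin down, but its shape is: a per-process baseline times the number of sandboxes, plus caches that used to exist once and now exist per sandbox. A back-of-the-envelope sketch; every number in it is a made-up placeholder, and it pessimistically assumes no sharing between sandboxes at all:

```c
#include <stdio.h>

#define MIB (1024L * 1024L)

int main(void) {
    /* Placeholder assumptions, not measurements: */
    long baseline_per_sandbox = 10 * MIB; /* heap, stacks, per-process runtime */
    long cache_size           = 30 * MIB; /* e.g. a glyph or resource cache */
    long sandboxes            = 20;       /* e.g. 20 browser tabs, one process each */

    /* One trusted process: a single cache shared by everything. */
    long shared_model = cache_size;

    /* Sandboxed: each isolated process pays the baseline and, unable to
       share, keeps its own copy of the cache. */
    long sandboxed_model = sandboxes * (baseline_per_sandbox + cache_size);

    printf("shared:    %ld MiB\n", shared_model / MIB);    /* 30 MiB  */
    printf("sandboxed: %ld MiB\n", sandboxed_model / MIB); /* 800 MiB */
    return 0;
}
```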
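
Finally, the 64-bit item in code: a doubly-linked list node is the classic pointer-heavy structure. The 32-bit figure in the comment is computed from a typical ILP32 ABI rather than measured:

```c
#include <stdio.h>

/* A classic pointer-heavy structure: a doubly-linked list node. */
struct node {
    struct node *prev;
    struct node *next;
    int value;
};

int main(void) {
    /* On a common 64-bit ABI this is 8 + 8 + 4, padded to 24 bytes.
       On a typical 32-bit ABI the same node is 4 + 4 + 4 = 12 bytes,
       so a ten-million-node list grows from ~120 MB to ~240 MB. */
    printf("node size on this build: %zu bytes\n", sizeof(struct node));
    return 0;
}
```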
Have I missed anything obvious?
One addition that occurred to me later: images themselves have grown along with screens and cameras, both in resolution and in color depth. Opening such images requires more memory.