
> It's trivial to implement and much easier to use.

...unless you opened the document from a network share or removable media. Or serialization takes a long time. Or the storage device is slow. Or you don't have write permission in the file's original location, so persisting changes means first copying a potentially large file down to some local location. Or the local storage device is full.

A "always save" mechanism would be best on a system that supported copy-on-write, network-aware links, and automatic file versioning to make writes super cheap. On actual real world systems that currently exist these mechanisms don't really exist or aren't universal so "always save" is fraught with difficult to handle edge cases.



You don't have to write in the same directory as the original file (and you definitely don't want to overwrite the original file). Original vi (and nvi) keeps the working copy below /var/tmp. If you can't write there, you have bigger problems.

(just be consistent about which host you modify a file from ;-}
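
The essence of the /var/tmp scheme is just mapping each edited path to a stable scratch name on local storage (which is also why the host matters). A rough sketch, with an invented naming scheme rather than nvi's real one:

    import hashlib, os

    RECOVER_DIR = "/var/tmp/vi.recover"   # nvi's default recovery dir on BSDs

    def working_copy_path(original):
        # Map the absolute path of the edited file to a stable, flat
        # name under RECOVER_DIR. (Illustrative scheme; real nvi uses
        # mkstemp-style names and keeps metadata inside the file.)
        key = hashlib.sha256(os.path.abspath(original).encode()).hexdigest()[:16]
        return os.path.join(RECOVER_DIR, "recover." + key)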


Writing to /tmp is great if your file is loaded entirely in memory. Not all programs do that for a variety of reasons. Like I said, it's the tons of edge cases that make "constantly save" problematic and far from a trivial feature.


/tmp is often a RAM-based file system, so it's not the best destination for data intended to be permanently saved. (n)vi is primitive enough to insist on copying the whole file (usually not a problem for files meant to be edited interactively), but nothing stops a more sophisticated program from storing only the changes (transactions) and consolidating them on request. Most software, certainly anything used interactively, ought to be "crash-only" software [1].

[1] https://www.usenix.org/conference/hotos-ix/crash-only-softwa...
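
A toy version of the "store only changes" idea, assuming an append-only journal of (offset, text) records that is fsynced per edit and folded into the base file on request (all names here are invented for illustration):

    import json, os

    class JournaledFile:
        def __init__(self, path):
            self.path = path
            self.log = open(path + ".journal", "a+")

        def record_edit(self, offset, text):
            # Append one change record and fsync it; after a crash the
            # journal, not the base file, is the source of truth.
            self.log.write(json.dumps({"off": offset, "data": text}) + "\n")
            self.log.flush()
            os.fsync(self.log.fileno())

        def consolidate(self):
            # Replay all recorded edits into the base file on request,
            # then empty the journal -- the explicit checkpoint/"save".
            self.log.seek(0)
            mode = "r+b" if os.path.exists(self.path) else "w+b"
            with open(self.path, mode) as f:
                for line in self.log:
                    rec = json.loads(line)
                    f.seek(rec["off"])
                    f.write(rec["data"].encode())
                f.flush()
                os.fsync(f.fileno())
            self.log.seek(0)
            self.log.truncate()

The win over rewriting the whole file is that each edit costs one small append rather than a full serialization; the cost is that opening a file now means checking for a leftover journal, which is essentially the crash-only recovery path the paper argues for.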



