What he's saying is that everything should be idempotent, which may be possible for local-only calls and filesystem snapshots, but anything doing a network call is outside the realm of possibility. Such a system would need to spin up a local, accurate backend for any network call, execute the call against it, verify the results aren't catastrophic, and only then retry with a real call. But that introduces time-induced uncertainty, since the real system may drift far enough from the expected state during the local validation. A fun thought experiment, but science fiction IMHO.
dang, I think you're right, my mind branched off somewhere it seems. I was thinking of how operations can be executed multiple times (a verification run plus the actual run) with the effect being applied only once.
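(For context, "idempotent" here means that repeating an operation leaves the system in the same end state as running it once. A couple of everyday shell examples, with made-up paths:)

```
# Not idempotent: succeeds the first time, then fails because the file is gone.
rm ./scratch.txt

# Idempotent: the end state is identical no matter how many times you run them.
rm -f ./scratch.txt
mkdir -p ./build
```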
Some ~20 years ago someone gave me access to their server and I typed `rm -rf something *` instead of `rm -rf something*`. I have been hyper-paranoid about destructive commands ever since. Yesterday I wanted to set up a boot USB for bazzite on a machine with two NVMe drives, but I kept checking multiple times that the USB drive was indeed at /dev/sda and that nothing else could possibly be that drive, even though the SSDs were all on /dev/nvme0. Some hard lessons you never forget.
In my experience, that tends to just make the approval of specific file deletions reflexive.
The worst situation I've been in was running the classic 'rm -rf' from the root filesystem, several decades ago.
I was running a bootable distro and had mounted all filesystems read-only except the one I was actually attempting to reformat and repurpose, and the upshot was that I enjoyed the experience of seeing just what a system that has shell built-ins (not sure it was even full bash) and little else functions like. (I found that "echo *" is a good poor-man's 'ls'.) Then, having removed the filesystem I'd intended to remove in the first place (and a few more ... memory-only ... filesystems), I rebooted and continued.
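(The reason that trick works: globbing is done by the shell itself rather than by an external binary, so it survives even when /bin/ls is gone. A few variations:)

```
echo *       # rough substitute for ls in the current directory
echo */      # directories only (the trailing slash restricts the glob)
echo .* *    # include dotfiles as well
```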
What saved me was safing all parts of the system save that which I was specifically acting on. Where I've had to perform similarly destructive commands elsewhere and since, I've made a habit of doing the same: ensuring I had backups where necessary, and triple-checking that what I wanted to annihilate was in fact what I was going to annihilate.
Among those practices (a rough shell sketch follows at the end of this list):
I'll often move files or directories to a dedicated "DELETE_ME" directory, which 1) adds a few non-destructive checkpoints before the destructive action, and 2) takes no meaningful time or space (moves within the same filesystem rewrite only filesystem metadata, not the data itself). I then review and finally delete those files.
I'll set all filesystems other than those I'm specifically performing surgery on to read-only. This suffices for almost any file-oriented action, though of course not for filesystem or partition operations. ('dd' is the exception among file-oriented commands, though you'd have to be writing to a partition to cause problems.)
Rather than using dynamically-generated file lists (e.g., shell globs, 'find | xargs', $(shell expansions), or similar techniques), I'll generate a one-off shell script to perform complex operations. This makes every expansion explicit and lets me review the operations before committing to them.
I'll often log the output of complex operations so that I can review them and see whether they ran as intended.
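A rough sketch of how that can look in practice; the paths and filenames here are made up for illustration:

```
# 1. Stage deletions instead of deleting. A move within the same filesystem
#    only touches metadata, so it's instant and gives a review checkpoint.
mkdir -p /data/DELETE_ME
mv /data/old-project /data/DELETE_ME/

# 2. Remount everything you are NOT operating on read-only.
#    (This doesn't help against partition-level tools like dd.)
mount -o remount,ro /home

# 3. Expand the dynamic file list into a one-off script instead of piping it
#    straight into rm, so every path is visible before anything runs.
#    (Naive quoting; assumes paths without spaces.)
find /data/DELETE_ME -mindepth 1 -maxdepth 1 |
    sed 's/^/rm -rf /' > cleanup.sh
less cleanup.sh                       # review the expanded commands

# 4. Run it with tracing and keep a log, so the operation can be audited.
sh -x cleanup.sh 2>&1 | tee cleanup.log
```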
Return 0, but don’t do anything yet. Fire a cron with an N-minute sleep that destroys the FS on expiry. Also, rewrite various ZFS tooling to lie about the consumed space, and confound the user with random errors if they try to use the still-allocated space.
It should be unsafe execution, but with an easy undo, like git or zfs.
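For the ZFS side of that, the easy undo more or less already exists: snapshot before the risky step, roll back if it goes wrong. The dataset name below is made up:

```
# Cheap, near-instant snapshot before doing anything destructive.
zfs snapshot tank/home@pre-cleanup

# ...run the risky commands...

# Undo: roll the dataset back to the snapshot (discards later changes).
zfs rollback tank/home@pre-cleanup

# Or, once you're confident nothing is missing, drop the safety net.
zfs destroy tank/home@pre-cleanup
```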