Linux Horror Stories and Protection Spells (Volume I) (2021) (blopig.com)
37 points by marcodiego 4 months ago | 18 comments



The title is misleading: I went for horror stories and all I got was unsolicited advice. Here are a couple of classic horror stories for anyone like me:

https://www.ecb.torontomu.ca/~elf/hack/recovery.html

https://www.toddheberlein.com/blog/2014/4/16/how-rm-rf-almos...


Relying on aliases is the worst solution, in my opinion. If it protects you too much, you forget the risk, and the moment you end up in a shell with a different profile you will run a risky command without thinking.


Exactly! I believe every user should make that kind of mistake at least once. That's how we become responsible users.


Instead of `alias rm="rm -I"` for a confirmation prompt before removing files, I prefer to use trash-cli[0], since I can restore trashed files.

However, I haven't found a similar solution that works on Termux.

[0] https://github.com/andreafrancia/trash-cli
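
A minimal sketch of the workflow (command names are from the trash-cli README; the filename is made up):

    $ trash-put big-mistake.txt   # goes to the XDG trash, not oblivion
    $ trash-list                  # shows trashed files with original paths and dates
    $ trash-restore               # interactively pick a file to put back
    $ trash-empty 30              # purge items trashed more than 30 days ago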


I do the same. My alias for rm just echoes "use trash", so I don't replace its function, just disable it.

When I really need to use rm I type /bin/rm.
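
Presumably something along these lines (not the commenter's exact alias):

    # any arguments you pass just get echoed after the message;
    # /bin/rm (or \rm) still works when you really mean it
    alias rm='echo "use trash instead:"'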


Bash also lets you use \rm to bypass the alias. Comes in handy at times.


On a per directory basis...

     % touch -- -i
     % touch foo
     % touch bar
     % rm *
    zsh: sure you want to delete all 3 files in /Users/shagie/test [yn]? y
    remove bar? y
    remove foo? y
     % ls 
    -i
     %


You should, of course, have backups. And one form of backup you should have is point-in-time snapshots; every ten minutes sounds good.

I’ve lost count of the number of times I’ve deleted something I shouldn’t, but it’s never a big deal.


For those that run systems with zfs, sanoid makes this fairly trivial to configure on a per-dataset basis. It's saved me more than once.

https://github.com/jimsalterjrs/sanoid
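
For reference, a per-dataset policy in /etc/sanoid/sanoid.conf looks roughly like this (the dataset name is made up; the values follow the project's example config):

    [tank/home]
        use_template = production

    [template_production]
        frequently = 4    # sub-hourly snapshots to keep (sanoid's "frequent" period)
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes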


Linux's huge advantage: it gives you huge flexibility and very sharp knives.

Linux drawback: Well, see above.

It takes some getting used to. As I keep joking, nuking your first production system is a bit of a rite of passage as a Linux admin. My first was a "mv $WORKSPACE/* $WORKSPACE/bin". I also once caused a ridiculous mess by screwing up NTP. It doesn't count 100% imo, but one of our juniors recently wanted to reset a DB on a newly installed system and did a "rm -rf /var /lib/foobar" (note the stray space) to delete the system state. Another pretty green colleague wanted to clean up a bunch of db dumps and ran "rm -rf /var/lib/barqux/*". It certainly removed the database dumps, and the database too.

Both are now educated in the arts of `find -delete` and `xargs -p`, as well as generating scripts full of individual `rm` or other calls for review. Something as simple as `for f in bla*; do echo "mv $f ${f/blub/blob}" >> script; done` is a powerful technique. And I don't think they will forget this anytime soon, without anyone getting angry or anything.
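
A rough sketch of that generate-then-review pattern (the filenames and the rename are made up):

    # write a reviewable script instead of renaming anything in place
    for f in dump-*.sql; do
        echo "mv -- '$f' '${f/dump-/archive-}'" >> rename.sh
    done
    less rename.sh    # read every line before trusting it
    sh rename.sh      # only then run it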

I'm mostly this chill about it because we have enough redundancy, automation and backups to let them run free and wild like this so they can learn quickly. Oh and yeah, they also learned how to reinit and/or rebuild database nodes, hah.


For the "Too many files for a common directory" section, the xargs utility exists to address this problem.

Modern (GNU) versions of xargs will also give you the ability to run operations in parallel: https://www.linuxjournal.com/content/parallel-shells-xargs-u...
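
For example, when `rm *.tmp` dies with "Argument list too long", something like this works (the path and pattern are made up):

    # xargs batches the names so no single command line exceeds ARG_MAX
    find /srv/cache -name '*.tmp' -print0 | xargs -0 rm --
    # GNU xargs can also fan the batches out over several processes
    find /srv/cache -name '*.tmp' -print0 | xargs -0 -P 4 rm --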

For the file removal problem, using a file system that supports (read-only) snapshots removes much of the pain. This can be done with zfs or btrfs, and the LVM subsystem can also do it for less advanced filesystems.

I have never used the LVM snapshot features.

https://docs.redhat.com/en/documentation/red_hat_enterprise_...
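
From the documentation, the manual version looks roughly like this (the volume group, size, and mount point are made up):

    # copy-on-write snapshot of an ext4 logical volume
    lvcreate --size 1G --snapshot --name home_snap /dev/vg0/home
    mount -o ro /dev/vg0/home_snap /mnt/snap    # browse and copy files back
    umount /mnt/snap && lvremove /dev/vg0/home_snap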


This is from 2021¹, so it should probably have a (2021)

¹See bottom of article under who the author is


> What makes find so powerful is that you can execute commands involving the found files using the option -exec.

Or -print0 and pipe to xargs -0
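
Roughly equivalent spellings (the cleanup itself is made up):

    find /tmp/build -name '*.o' -exec rm -- {} \;          # one rm per file
    find /tmp/build -name '*.o' -exec rm -- {} +           # batched, like xargs
    find /tmp/build -name '*.o' -print0 | xargs -0 rm --   # handles odd filenames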


GNU find on rhel9 has explicit warnings about the -exec option:

"There are unavoidable security problems surrounding use of the -exec action; you should use the -execdir option instead."

"If you are using find in an environment where security is important (for example if you are using it to search directories that are writable by other users), you should read the `Security Considerations' chapter of the findutils documentation, which is called Finding Files and comes with findutils. That document also includes a lot more detail and discussion than this manual page, so you may find it a more useful source of information."


[flagged]


Well, that's reductive and dismissive. These are perfectly understandable mistakes, or scenarios one might find oneself in, along with simple methods to avoid or rectify them.

Find is a very powerful and useful tool, and the author gave a clear example of when and where it might be useful. They also called out how to prevent accidents with `rm`.


Would it make you feel better if it had the title "Linux Horror Stories and Protection Spells (from the perspective of a spoiled Windows user)"? Does it really matter?


As far as I can tell they didn't mention Windows at all, unless this author is notorious for something I'm not aware of. The only places Windows turns up on that blog seem to be instructions that also apply to Windows.


Hubris, followed by Nemesis when you accidentally delete that important config file and cause an outage. Asking "what if?" and having backups are always a good idea when deleting via the shell.



