'vinceguidry actually makes a pretty good point. It's one thing to cover potential stupid mistakes with safety features. But beyond some point, safety starts to oppose utility - i.e. a perfectly safe car would be a simple chair. Perfectly safe software is likewise software that is totally useless for anything.
It's important to consider when designing software that safety should be about gracefully handling mistakes, not about luring the user into a false sense of not having to know what they're doing. Unfortunately, the latter attitude is what drives today's UX patterns and software design in general, which is a big part of why tech-illiterate people remain tech-illiterate, and modern programs and devices are mostly shiny toys, not actual tools.
It's true that safety, and security as well, can impair the usefulness of something past a certain point. It's also irrelevant to our current topic, given the existence of systems that don't self-nuke easily. This is a UNIX-specific problem that they've fought to keep for over 20 years, with admittedly some improvement. There were alternatives, both UNIX setups and non-UNIX OS's, that protected critical files or kept backups from being deleted [at all] without very specific action from an administrator. And nobody complained that they couldn't get work done on, or maintain, a VMS box.
So, this isn't some theoretical, abstract, extreme thing some are making it out to be. It's a situation where there are a number of ways to handle a few routine tasks with inherent risk. Some OS's chose safer methods, with unsafe methods available where absolutely necessary. UNIX decided on unsafe all around. Many UNIX boxes were lost as a result, whereas the alternatives rarely were. It wasn't a necessity: merely an avoidable design decision.
I'm glad we have the same opinion then - as I said, it's not very useful to reason about "perfect safety".
It's certainly possible to make a product "safer" than necessary and hinder utility (though I think "safety" is the wrong concept to look at here - see below), but if the common opinion of your product among tech-illiterate people is "complicated and scary", I think you can be pretty sure that you are still a long way away from that point.
In fact, some versions of rm do add protection against deleting the root directory: GNU rm makes --preserve-root the default and requires an explicit --no-preserve-root before it will recursively remove /. What utility did that protection destroy?
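For reference, this is roughly what that guard looks like in practice (a sketch - the exact message wording varies by coreutils version):

    $ rm -rf /
    rm: it is dangerous to operate recursively on '/'
    rm: use --no-preserve-root to override this failsafe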
I believe if you really want to make people more tech-literate (which today's apps are doing a horrible job of, I agree), you have to give them an honest and consistent view of their system, yes.
But you also have to design the system such that they can learn and experiment as safely as possible and can quickly deduce what a certain action would do before they do it.
Cryptic commands, which are only understandable after extensive study of documentation, and which, oh by the way, become deadly in very specific circumstances, don't help at all here.
"Cryptic commands, which are only understandable after extensive study of documentation, and which oh by the way become deadly in very specific circumstances don't help at all here."
Exactly. That's another problem that was repeatedly mentioned in the UNIX-Haters Handbook. It still exists. Fortunately, there are distros improving on aspects of organization, configuration, command shells, and so on. I'm particularly impressed with NixOS doing simple things that should've been done a long time ago.