Interesting points. I think systems like the Burroughs counter that concept, in that a lot of safety can be baked into a system. Here's what they did in 1961:

http://www.smecc.org/The%20Architecture%20%20of%20the%20Burr...

Notice that's good UI design for the time, hardware elimination of the worst problems, interface checks on functions, limits on what apps can do to the system, and plenty of recovery. Systems like NonStop, KeyKOS, OpenVMS, XTS-400, JX, and so on added to these ideas. You can certainly bake a strong foundation of safety into a system while still allowing plenty of flexibility.

So, for example, critical files should be write-protected except when used by specific software involved in updates or administrative action. Many of the above systems did that. Then, one can use a VMS-style, versioned filesystem that leaves the originals in place in case a rollback is needed, so long as there's free space for them. Such a system handling backups and restores with modern-sized HDs wouldn't have nuked everything. It might even have left everything intact with a lean setup, though I can't say for this specific case.
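
For anyone who wants a rough Linux approximation of that today, here's a sketch (it's not what VMS actually does, and the paths are made up): mark critical files immutable so even root has to deliberately lift the flag, and keep read-only snapshots around for rollback.

    # Write-protect a critical file; even root must run chattr -i
    # before it can be modified or deleted (ext4, needs e2fsprogs):
    chattr +i /etc/critical.conf

    # Poor man's file versioning: a read-only btrfs snapshot that a
    # later rollback can restore from (assumes /data is a subvolume):
    btrfs subvolume snapshot -r /data /snapshots/data-$(date +%F-%H%M)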

"You can't design a sword that can be used safely by the untrained."

A sword is designed to do damage. A better example would be a saw: designed to be constructive, but with the risk of cutting your hand off. Even that can be designed to minimize risk to the user.

https://www.youtube.com/watch?v=esnQwVZOrUU

"If you've picked the former right, (backing up human-readable information rather than data only readable by software programs that might go away in a crash) then risk is minimized."

That's orthogonal. A machine-readable format just needs a program to read it. The risk is whether the data is actually there, in whole or in part. This leads to mechanisms like append-only storage or periodic, read-only backups that ensure it's there. Or the clustered, replicated filesystems on machines with RAID arrays that lots of HPC or cloud applications use. Also, multiple geographic locations for the data.
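
One cheap way to get the "it's actually there" property on ordinary Linux boxes is rotating hard-link snapshots: each backup is a full, independently browsable tree, but unchanged files cost no extra space. A minimal sketch with made-up paths (the first run warns because the "latest" link doesn't exist yet):

    # rsync hard-link snapshots: today's backup hard-links unchanged
    # files to yesterday's, so older copies survive a nuked source.
    SRC=/data
    DEST=/backups/$(date +%F)
    LAST=/backups/latest

    rsync -a --delete --link-dest="$LAST" "$SRC/" "$DEST/"
    ln -sfn "$DEST" "$LAST"

Pair that with copies shipped to another site and you've covered most of the list above.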

People doing the above with proven protocols/tools rarely lose their data. Then there's this guy.



Table saws should never be used on flesh. rm(1) should always be used on files. How in FSM's noodly universe is the command supposed to intuit which files it should safely delete versus those it shouldn't?

> ...or administrative action.

You mean like, "sudo rm -rf ${undefined_value}/${other_undefined_value}"? D'oh!


Two different people here have already pointed out that this wouldn't have happened on OpenVMS due to its versioned filesystem with rollback. People also claim it has saner commands for this stuff, but I can't recall whether its delete command was smarter.

Anyway, pertaining to rm, here you go:

https://launchpad.net/safe-rm
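
For anyone curious how that class of tool works: safe-rm is essentially a wrapper that checks its arguments against a list of protected paths before handing off to the real rm. Something in this spirit, as an illustrative sketch rather than safe-rm's actual code:

    #!/bin/sh
    # Toy rm wrapper: refuse to touch a hard-coded list of protected
    # paths, pass everything else through to the real rm.
    PROTECTED="/ /bin /boot /etc /home /lib /usr /var"

    for arg in "$@"; do
        for p in $PROTECTED; do
            if [ "$arg" = "$p" ] || [ "$arg" = "$p/" ]; then
                echo "refusing to remove protected path: $arg" >&2
                exit 1
            fi
        done
    done

    exec /bin/rm "$@"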


Make `--one-file-system` the default!
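
For the unfamiliar: GNU rm already has that flag, it just isn't the default. When recursing, it skips any directory that sits on a different filesystem than the argument you named, which is exactly the mounted-backup-volume case. Hypothetical paths:

    # A backup volume is mounted at /mnt/work/backups. Plain rm -rf
    # would recurse into it; with the flag, rm skips the mount point
    # and prints a warning instead:
    rm -rf --one-file-system /mnt/work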


He really should not have made the first element of the path a variable. Doing an "rm -rf /folder/${undefined_value}/${other_undefined_value}" would have made his day much better.
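
And if the path has to be built from variables at all, the shell can be told to blow up on unset ones instead of silently expanding them to nothing. A sketch reusing the made-up variable names from above:

    #!/bin/bash
    set -u   # abort on any reference to an unset variable

    # ${var:?msg} also catches set-but-empty values, so the worst
    # case is an error message rather than "rm -rf /":
    rm -rf "/folder/${undefined_value:?not set}/${other_undefined_value:?not set}"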

Also, never having all backup disk volumes mounted at the same time is good practice.



