HFS+ is not fundamentally unreliable. The potential exists for data corruption to occur in the window between a kernel panic starting and the machine halting. Even then, it requires the stars to align: HFS+ must be caught writing data to the wrong place with exactly the right bits of memory flipped. Even then, the chance of those bits landing on in-use user data stored in a format with no built-in error correction further reduces the odds of real damage. Even then, many file formats that do suffer corruption can be salvaged. And if you have a backup of any kind, the odds of the exact same file / bits being corrupted on both drives, causing permanent data loss, are astronomically small. Add a third offsite / cloud backup and it'd be more worthwhile to worry about being eaten by sharks.
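To make the compounding explicit, here is a minimal back-of-envelope sketch in Python. Every probability below is an invented illustrative figure, not a measured rate for HFS+ or any real system:

    # Back-of-envelope: independent conditions that must ALL hold for a panic
    # to turn into permanent data loss. All figures are invented placeholders.
    p_write_in_flight_at_panic = 1e-3   # a write is mid-flight when the kernel panics
    p_bits_corrupt_on_disk     = 1e-2   # the flipped bits actually reach the disk wrong
    p_hits_live_user_data      = 1e-1   # corruption lands on allocated, valuable data
    p_format_cannot_recover    = 1e-1   # no checksum/redundancy in the file format saves it
    p_backup_also_corrupt      = 1e-6   # the same file is also bad on the backup copy

    p_loss = (p_write_in_flight_at_panic
              * p_bits_corrupt_on_disk
              * p_hits_live_user_data
              * p_format_cannot_recover
              * p_backup_also_corrupt)

    print(f"Chance of permanent loss per panic (with these assumptions): {p_loss:.1e}")

The multiplication assumes the conditions are independent, which is the optimistic part of the argument.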



It's not fundamentally reliable either.

From: http://en.wikipedia.org/wiki/Hierarchical_File_System "The Catalog File, which stores all the file and directory records in a single data structure, results in performance problems when the system allows multitasking, as only one program can write to this structure at a time, meaning that many programs may be waiting in queue due to one program "hogging" the system.[2] It is also a serious reliability concern, as damage to this file can destroy the entire file system."

See also: https://news.ycombinator.com/item?id=7876217
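As a rough illustration of the single-writer bottleneck described in that quote, here is a simplified model (not actual HFS code): with one lock guarding the whole catalog, every metadata update serializes behind it, no matter which file it touches.

    import threading
    import time

    # Simplified model of a single shared catalog structure guarded by one lock.
    # A real filesystem is far more involved; this only shows the serialization.
    catalog_lock = threading.Lock()
    catalog = {}  # path -> metadata record

    def create_file(path: str) -> None:
        """Every writer, regardless of which file it touches, queues on the same lock."""
        with catalog_lock:
            time.sleep(0.01)           # stand-in for the on-disk B-tree update
            catalog[path] = {"size": 0}

    start = time.time()
    threads = [threading.Thread(target=create_file, args=(f"/tmp/file{i}",))
               for i in range(20)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # 20 "concurrent" writers finish in roughly 20 * 0.01s because they serialize.
    print(f"elapsed: {time.time() - start:.2f}s for {len(catalog)} entries")

Damage to that one shared structure also takes the whole namespace with it, which is the reliability half of the complaint.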


As the great teacher of my Monte Carlo course taught us, "If something can happen, it will."
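In that spirit, a tiny Monte Carlo sketch (again with purely illustrative numbers): an event that is vanishingly unlikely per panic becomes a near-certainty across enough panics.

    import numpy as np

    # Illustrative numbers only: per-event loss probability p, n events
    # (e.g. panics across a large fleet over years), repeated over many trials.
    rng = np.random.default_rng(0)
    p, n, trials = 1e-6, 5_000_000, 10_000

    losses_per_trial = rng.binomial(n, p, size=trials)   # sample loss counts per trial
    frac_with_loss = np.mean(losses_per_trial > 0)

    print(f"P(at least one loss) ~= {frac_with_loss:.3f}")  # analytically about 0.993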



