
Is there a specific reason why the drives die at the same time? Electricity spike?


Buying two identical drives means there's a high chance they come from a single batch, which makes them physically almost identical. It's a pretty well-known RAID-related fact, but some people aren't aware of it or don't take it seriously.


Identical twins may both die of a heart attack, but not usually at the same time.

Normally, failures come from some amount of non-repeatability or randomness that the systems weren't robust to.

The drive industry is special (in a bad way) in that they can exactly reproduce their flaws, and most people's intuition isn't prepared for that.


If they're bought together, like mine were, and they have close serials, they'll be almost identical; if you then run them in a ZFS mirror like I was, they'll receive identical "load" as well.

Since mine had ~43,000 hours, they didn't fail prematurely; they just aged out, and since they appear to have been built pretty well, they both aged out at the same time. Annoying for a ZFS mirror, but it indicates good quality control in my opinion.
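For scale, ~43,000 power-on hours works out to roughly five years of continuous operation, which is consistent with "aged out" rather than premature failure. A quick back-of-the-envelope conversion (the function name is just illustrative):

```python
# Rough conversion of SMART power-on hours to years of continuous operation.
# 8,760 = 24 hours * 365 days (ignoring leap years).
HOURS_PER_YEAR = 24 * 365

def power_on_years(hours: int) -> float:
    """Convert power-on hours to approximate years of uptime."""
    return hours / HOURS_PER_YEAR

print(round(power_on_years(43_000), 1))  # roughly 4.9 years
```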


If they're ~identical construction and being mirrored so that they have the same write/read pattern history, it could trigger the same failure mode simultaneously.


More likely to be from the same bad batch too. There was a post with very detailed comments about this just a few days ago.


Why bad? What's considered a good/bad lifetime for these? Mine had ~43,000 power-on hours; I don't know if that's good or bad for a WD Red (CMR) drive, but they weren't particularly heavily loaded, and their temps were good, so I'm fairly happy with how long they lasted (though longer would have been nice).


You're right, it might be a natural end of life that happens to coincide, too.




