
I'm on a similar RAID1 setup -

For me, I have 2x extra drives and a USB caddy, and rsync the array onto the caddy automatically, keeping the unused drive offsite.
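
For anyone curious, the sync itself is nothing fancy - roughly something like this (device name and paths are placeholders, not my exact script):

    # rough sketch of the caddy sync - device and paths are made up
    mount /dev/sd2a /mnt/caddy                 # external drive in the USB caddy
    rsync -a --delete /data/ /mnt/caddy/data/  # mirror the array onto the caddy
    umount /mnt/caddy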

This does mean that the wear on the live array is higher than on the offsite drives, and I have to have 4 drives total. But since RAID1 with a traditional filesystem doesn't provide integrity protection (e.g. bit errors on one drive can cause silent corruption), keeping independent offline copies means I don't have to worry about subtle RAID rebuild issues gradually propagating through the entire set.

The 'cheaper' version would be to only have one offsite drive, but that means my data on the RAID array is only protected from severe failures up to the last time I ran the sync.

Longer term, I'm looking at moving up to something with integrity protection, but since my server runs OpenBSD (fewer storage configuration options), that means RAID5, which was only recently OK'd for rebuilds, and softraid RAID5 rebuilds take forever on spinning drives. I'll probably wait and upgrade to 4x SSDs first, or get a hardware RAID card (my data set is fairly small).
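
For reference, softraid on OpenBSD is driven through bioctl; building the volume looks roughly like this (disk names are placeholders and I'm going from memory, so check bioctl(8)/softraid(4)):

    # each disk first needs a RAID-type partition via disklabel
    bioctl -c 1 -l /dev/sd1a,/dev/sd2a softraid0                        # RAID1 mirror
    # bioctl -c 5 -l /dev/sd1a,/dev/sd2a,/dev/sd3a,/dev/sd4a softraid0  # RAID5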

Another thought I had was to set up a Raspberry Pi at a friend/relative's place and have an rsync run nightly to it, and offer the same to them... but I haven't gotten around to it.
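
The rough plan there was just a nightly cron entry pushing over ssh, something like this (hostname, user and paths are hypothetical):

    # crontab entry, runs at 03:00 nightly - names/paths made up
    0 3 * * * rsync -a --delete -e ssh /data/ backup@pi.example.net:/backup/data/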


