
I had a disk fail in a RAID 6 array a couple weeks ago... 8 x 3TB drives, 18TB usable. It took 7 hours to rebuild.

The disks were connected via SATA 3 and I was using mdadm.




Assuming it is reading and writing almost simultaneously, that is ~125 MByte/sec on each physical drive, which isn't bad considering the latency of the read-from-three-or-four-then-checksum-then-write-to-the-other-one-or-two process, especially if the array was in use at the time.
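Quick sanity check on that number, a rough Python sketch using only the figures from the post above (8 x 3TB, 7 hours); everything else is plain arithmetic:

    # rough check of the RAID 6 capacity and per-drive rebuild rate
    drives = 8
    drive_tb = 3
    usable_tb = (drives - 2) * drive_tb        # RAID 6 gives up two drives to parity -> 18 TB
    rebuild_hours = 7

    # the rebuild has to regenerate one 3 TB drive's worth of data
    rate_mb_s = (drive_tb * 1e12) / (rebuild_hours * 3600) / 1e6
    print(usable_tb, "TB usable")
    print(f"{rate_mb_s:.0f} MB/s written to the replacement drive")   # ~119 MB/s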

There are a few ways to improve RAID rebuild time (see http://www.cyberciti.biz/tips/linux-raid-increase-resync-reb... for example), but be aware that they tend to negatively affect performance for other use of the array while the rebuild is happening (that is, you are essentially telling the kernel to give the rebuild more priority than it normally would when other processes are accessing the data too).
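For reference, the knobs that article covers are the md resync speed limits under /proc. A minimal sketch of bumping them (values are in KiB/s, the numbers below are only examples, and it needs root):

    # raise the floor/ceiling the kernel allows the md resync/rebuild to use;
    # equivalent to `sysctl -w dev.raid.speed_limit_min=... dev.raid.speed_limit_max=...`
    def set_md_resync_limits(min_kib_s=50000, max_kib_s=200000):
        with open("/proc/sys/dev/raid/speed_limit_min", "w") as f:
            f.write(str(min_kib_s))
        with open("/proc/sys/dev/raid/speed_limit_max", "w") as f:
            f.write(str(max_kib_s))

    # set_md_resync_limits()  # a higher floor takes I/O away from everything else using the array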


In my experience working with Debian, the rebuild is sensitive to disk usage. If you start using the disks, the rebuild will slow down intentionally so as not to choke the drives. This could explain why it was fast for you but not for the parent.

Edit: To clarify, I was using mdadm, not hardware RAID.
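An easy way to see that throttling in action is to watch the recovery speed reported in /proc/mdstat while you generate other I/O on the array. A small sketch; the parsing assumes the usual mdstat "speed=..." format, so treat it as illustrative:

    import re

    # print the recovery/resync lines from /proc/mdstat along with the current speed
    with open("/proc/mdstat") as f:
        for line in f:
            if "recovery" in line or "resync" in line:
                m = re.search(r"speed=(\S+)", line)
                print(line.strip())
                if m:
                    print("current rebuild speed:", m.group(1))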


Yeah, I question that rebuild. If the array is being used during the rebuild, the rebuild process typically gets put at a lower priority. I've seen IBM enterprise RAID 5 storage arrays take forever to rebuild when they were the main storage for a high-transaction database.


You can tune the speed when the disks are in use; I usually bump it to around 60MB/s as long as there isn't a live database on the disk. The default is something insanely slow, and it impacts reads/writes almost as much as the higher value does.
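If it helps, 60 MB/s is roughly 60000 in the units speed_limit_min uses (KiB/s), and the stock minimum is 1000 (about 1 MB/s), which is the "insanely slow" default being described. A minimal one-off version of that bump (needs root; the number is just the one from this comment):

    # raise the md resync floor to roughly 60 MB/s (value is in KiB/s; default is 1000)
    with open("/proc/sys/dev/raid/speed_limit_min", "w") as f:
        f.write("60000")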



