I was simplifying... dump backs up inodes, not blocks. Some inodes point to file data and some to directory data. Hard links are multiple directory entries referencing the same inode, so when you run xfsrestore, the link count increments as the FS hierarchy is restored.
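
You can see this on any Linux box (a minimal illustration; the file names are made up):

  touch a
  ln a b        # second directory entry for the same inode
  ls -li a b    # both names show the same inode number and a link count of 2

dump only stores the file data once; the restore tool just recreates both directory entries pointing at it.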

xfsdump/zfs send are filesystem-aware, unlike dd, and can detect FS corruption (ZFS especially, with its extensive checksums). In fact, any info cp sees about corruption comes from the FS code parsing the FS tree.
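
For example (dump destinations and pool/dataset names here are placeholders):

  # filesystem-aware backup of a mounted XFS filesystem (level 0 = full dump)
  xfsdump -l 0 -f /backup/root.dump /

  # ZFS: every block read during send is verified against its checksum
  zfs snapshot tank/data@backup
  zfs send tank/data@backup | zfs recv backuppool/data

Running zpool scrub tank first walks the whole pool and reports any checksum failures before you trust the copy.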

However, except on zfs/btrfs, data block corruption will pass unnoticed. And in my experience, when you have bad blocks, you have millions of them -- too many to fix by hand. Since bad blocks make reads hang, it is usually better to dd copy the FS to a clean disk with bad blocks replaced by zeros, run fsck/xfs_repair before mounting the copy, then xfsdump it:

dd conv=noerror,sync,notrunc bs=512 if=/dev/disk of=diskimg
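
With conv=noerror,sync a failed read is replaced by a block of zeros, so keeping bs at the 512-byte sector size limits each error to a single sector. From there the rest of the sequence is roughly this (paths are placeholders, and xfs_repair has to run on the unmounted image):

  xfs_repair diskimg                 # repair metadata in the copied image
  mkdir -p /mnt/rescue
  mount -o loop diskimg /mnt/rescue  # mount the repaired copy
  xfsdump -l 0 -f /backup/rescue.dump /mnt/rescue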

See Also:

http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US...

http://xfs.org/index.php/Reliable_Detection_and_Repair_of_Me...

If the risk of keeping the system running while the array rebuilt was deemed too high, I would have just gone with a dd/ddrescue of the remaining disks onto new disks, something like the sketch below, and then moved on from there.
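
A rough version of that, with hypothetical device names (ddrescue keeps a map file, so interrupted or retried runs resume where they left off):

  ddrescue -f /dev/sdb /dev/sde sdb.map   # first surviving member onto a fresh disk
  ddrescue -f /dev/sdc /dev/sdf sdc.map   # -f is required when writing to a block device

Then reassemble the array from the copies and let it rebuild offline.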

+1 for mentioning ZFS. It's really quite amazing. Almost like futuristic alien technology compared to the other freely available file systems.