I am sure the author will appreciate ditching the proprietary Synology for a custom ZFS server, as the reliability, recoverability, and feature set of ZFS are quite frankly hard to beat. I have been using ZFS to build my custom NASes for the last... *checks notes* ...17 years, starting back when ZFS was only available on Solaris/OpenSolaris. My builds usually have between 5 and 7 drives (raidz2).
However, I do not recommend his choice of 4 x 8TB drives in a raidz1. It doesn't make sense financially or technically: he spent $733 for 24TB usable ($30.5/TB).
He should have bought fewer, larger drives. For example, 14TB drives sell for $240, so a config of 3 x 14TB in a raidz1 would total $720 for 28TB usable ($25.7/TB). Lower cost, more storage, and one less drive (= better reliability). It's win-win-win.
This matters especially because his hope is to add an extra drive in a couple of years and expand the raidz1 to gain usable space; by then a 14TB drive will be significantly cheaper per TB than an 8TB drive (today they cost about the same per TB).
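For reference, that kind of in-place raidz growth is what the raidz expansion feature in recent OpenZFS (2.3+) provides. A minimal sketch, with hypothetical pool and device names:

```shell
# Assumes a pool "tank" whose raidz1 vdev is named "raidz1-0"
# (check the actual vdev name with `zpool status tank`).

# Attach a new disk to the existing raidz1 vdev, widening it
# from 3 drives to 4; requires OpenZFS 2.3 or later.
zpool attach tank raidz1-0 /dev/disk/by-id/NEW_DISK

# The expansion runs in the background; watch progress with:
zpool status tank
```

Note that expansion keeps the same parity level (raidz1 stays raidz1), and previously written data retains its old data-to-parity ratio until rewritten.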
Actually, with only 8.5TB of data to store at present, if I were him I would go one step further and use a simple ZFS mirror of 2 x 18TB drives. At $320 per drive, that's only $640 total for 18TB usable ($35.6/TB). The cost per TB is slightly higher (+17%), but reliability is much improved with only 2 drives instead of 4, so it's totally worth it in my eyes. And as a bonus: in a few years he can swap them out for 2 bigger-capacity drives, since ZFS already supports growing mirrors.
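Growing a mirror by swapping in bigger drives is a standard ZFS operation. A minimal sketch, with hypothetical pool and device names:

```shell
# Assumes a 2-way mirror pool named "tank"; device paths are placeholders.

# Let the pool grow automatically once all drives in a vdev are larger.
zpool set autoexpand=on tank

# Replace the first old drive and wait for resilver to finish
# before touching the second one (never remove both at once).
zpool replace tank /dev/disk/by-id/OLD_DISK_1 /dev/disk/by-id/NEW_DISK_1
zpool status tank   # wait until resilver completes

# Then replace the second drive the same way.
zpool replace tank /dev/disk/by-id/OLD_DISK_2 /dev/disk/by-id/NEW_DISK_2
```

Once both replacements have resilvered, the extra capacity shows up automatically (or run `zpool online -e` on the devices if autoexpand was off).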
Where? Also, is it worthwhile to buy hard drives marketed explicitly for NAS use when you're running ZFS? For example, Seagate's IronWolf product line is aimed explicitly at NAS and costs more.
Drives branded for NAS applications differ only slightly from mainstream drives. For example, Seagate claims the IronWolf is "designed to reduce vibration, accelerate error recovery and control power consumption". In practice this means the drive head actuators are operated more gently (reduced vibration), which slightly increases latency and slightly reduces power consumption, and the firmware is configured to do fewer retries on I/O errors, so disk commands time out more quickly and pass the error to the RAID/ZFS layer sooner (why wait through a minute of hardware retries when the RAID can just rebuild the sector from parity or mirror disks?).

IMHO, for home use none of this matters. Vibration is only an issue in flimsy chassis, in extreme situations like dozens of disks packed tightly together, or under extreme noise as found in a dense data center (see the video of a Sun employee shouting at a server). And whether you wait a few seconds versus a few minutes for an I/O operation to time out when a disk starts failing is completely unimportant in a non-business-critical environment like a home NAS.
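For what it's worth, on many drives that firmware-level error-recovery timeout (SCT ERC, sometimes called TLER) is visible and tunable yourself, so you don't strictly need the NAS SKU to get the short-timeout behavior. A sketch using smartctl; the device path is a placeholder, and not all drives support the setting:

```shell
# Show the drive's current SCT error recovery control timeouts.
smartctl -l scterc /dev/sdX

# Set read and write recovery timeouts to 7.0 seconds
# (values are in units of 100 ms). May not persist across
# power cycles, so NAS setups often reapply it at boot.
smartctl -l scterc,70,70 /dev/sdX
```

If the drive reports that SCT ERC is unsupported, it will simply retry internally for as long as its firmware wants, which is the behavior described above.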