Sounds like you're really living on the edge. :-)
My typical setup is:
2.5" 15K-RPM HDDs making up the storage array. Generally in 4 or 5 drive, single parity sets with a spare for every set up to 3 spares (more just seems silly). I build out the array with 2 to 4 sets generally. The more sets, the more capacity and performance while keeping your parity stripes a reasonable size and ensuring a single drive failure doesn't drastically degrade the performance of the whole array.
So with 300GB drives, that gives me (300GB * (5 - 1 for parity)) * 2 == ~2.4TB usable. Then throw on a couple of Crucial M4s or Samsung 830s (I've had very bad luck with most SandForce-based drives) as striped L2ARC, and another pair as mirrored ZIL.
Then use a caching RAID controller with a BBU, running in JBOD mode.
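In zpool terms, that layout comes out to something like this (a rough sketch; the device names are made up and will vary by OS):

    # Two 5-drive single-parity (raidz1) sets plus two hot spares.
    # da0..da11 and ada0..ada3 are hypothetical device names.
    zpool create tank \
      raidz1 da0 da1 da2 da3 da4 \
      raidz1 da5 da6 da7 da8 da9 \
      spare da10 da11

    # One SSD pair as a mirrored ZIL (SLOG), the other as striped L2ARC
    # (cache devices can't be mirrored, so listing two stripes them).
    zpool add tank log mirror ada0 ada1
    zpool add tank cache ada2 ada3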
The reason you want 15K-RPM drives is that things like replication and volume creation/deletion bypass the ZIL and caches, so while most of the time you'll be able to push ~30K 8KB blocks around in either direction, sustained, with ease, for the low-level administrative things you'll be limited by raw disk speed. In the 10-drive (plus spares) 2.4TB example above, that means you'll have about 1200 IOPS worth of performance to tap into.

If you'd instead used a three-drive single-parity set of 2TB SATA drives, you'd be storing a lot more data on each drive, meaning failures would take much longer to recover from, not to mention the drives themselves would be about half the speed. On top of that, for the lower-level operations you'd be limited to about 1/10th the IOPS. Which is not fun at all. Especially if you've made the mistake of attempting to use deduplication. :-(
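Back-of-the-envelope, the spindle math works out roughly like this (the per-drive IOPS figures are my own rough assumptions, not benchmarks):

    # ~150 IOPS per 15K SAS spindle, ~75 per 7200-RPM SATA spindle;
    # spares sit idle, so only the data spindles count here.
    echo $(( 2 * 4 * 150 ))   # two raidz1 sets, 4 data drives each -> 1200
    echo $(( 1 * 2 * 75 ))    # one 3-drive SATA raidz1 set -> 150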
This is last-generation stuff, though. Those HDDs offer less storage and much less performance, for about twice the price of a good 512GB SSD today. Any new arrays I build will forgo the L2ARC and ZIL and go full SSD.
This way you can cut out the 4 SSDs used in the example and, instead of 10 drives, have 10 SSDs. Instead of ~30K 8KB IOPS for reads, you'll have access to 120K. Instead of about 15K for writes, you'll have about the same 120K. And all your lower-level administrative tasks will run at the same speed as everything else.
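Same envelope math for the all-SSD case (again, the per-drive figure is my assumption for SATA SSDs of that generation):

    # ~12K sustained random IOPS per SATA SSD, 10 SSDs in the pool:
    echo $(( 10 * 12000 ))    # -> 120000, for reads and writes alike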
So you'll end up with about 4X the performance at less than half the price, and with ~40% more storage as well. Even if you need the array to have a 4-year lifespan, you could replace all the SSDs every couple of years, continue to expand the performance envelope with whatever the current generation offers, and still end up paying less. Not to mention drive bays aren't cheap: the all-SSD option means a Dell R820 with 16 bays is now an option, whereas before, with 10 bays for the array, 4 bays for SSDs, and your mirrored boot devices... you'd be looking at buying a pretty pricey MD1220 in addition to the server, and now you've given up another 2U as well.
For high-performance IT, HDDs are all but dead. We've passed a price/GB/performance milestone where, for 98% of use cases, HDDs should only be considered for archival storage, not operational storage. IMO.