You're not missing anything. They're completely wrong.
In RAID-Z, you can lose one drive or have one drive with 'bit rot' (corruption of either the parity or the data) and ZFS will still be able to return valid data. In the bit-rot case it self-heals: ZFS "plays out" both scenarios, checking each against the block checksum it stores separately from the data. If trusting one drive over another yields a valid checksum, it overwrites the untrusted drive's data.
Regular RAID controllers cannot resolve a situation where on-disk data doesn't match parity because there's no way to tell which is correct: the data or parity.
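To make that concrete, here's a minimal Python sketch of the idea (not ZFS's actual code; the layout and checksum are simplified): single-parity reconstruction plus an independent checksum lets you test each "distrust this column" hypothesis, which is exactly what a plain parity RAID cannot do.

```python
import hashlib

def xor_blocks(*blocks):
    # XOR equal-sized byte blocks together: the single-parity primitive.
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

def checksum(data):
    # Stand-in for the block checksum ZFS keeps separately from the data.
    return hashlib.sha256(data).digest()

def self_heal(columns, parity, expected_sum):
    """Try every 'distrust one column' hypothesis and keep the reconstruction
    that matches the independent checksum. Returns healed columns or None."""
    if checksum(b"".join(columns)) == expected_sum:
        return list(columns)                     # data is fine; the parity itself was stale/corrupt
    for i in range(len(columns)):
        others = columns[:i] + columns[i + 1:]
        rebuilt = xor_blocks(parity, *others)    # recompute column i from parity + the rest
        candidate = columns[:i] + [rebuilt] + columns[i + 1:]
        if checksum(b"".join(candidate)) == expected_sum:
            return candidate                     # valid: overwrite the untrusted column
    return None                                  # no single-column hypothesis checks out

# A conventional RAID controller only knows that data XOR parity != 0; with no
# expected_sum to arbitrate, it cannot tell whether the data or the parity is wrong.
```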
The situation I laid out was a degraded Z1 array with the total loss of a single disk (not recognized at all by the system), plus bit rot on at least one remaining disk during resilver. Parity is gone; you have the checksum to tell you that the read was invalid, but even multiple re-reads don't give a valid checksum.
How does Z1 recover the data in this case other than alerting you of which files it cannot repair so that you can overwrite them?
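Running the sketch from upthread against exactly that scenario (hypothetical stripe contents) shows why: the parity equation is already spent on the missing disk, so no remaining hypothesis can satisfy the checksum and all ZFS can do is flag the affected block/file.

```python
# Hypothetical 3-data + 1-parity stripe, reusing xor_blocks/checksum/self_heal from above.
data = [b"A" * 8, b"B" * 8, b"C" * 8]
expected = checksum(b"".join(data))
parity = xor_blocks(*data)

survivors = list(data)
survivors[1] = b"\x00" * 8        # disk 1 is gone entirely (placeholder for the missing column)
survivors[2] = b"C" * 7 + b"X"    # ...and disk 2 returns rotten data during the resilver

print(self_heal(survivors, parity, expected))   # -> None: two bad columns, one parity equation
```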
Why do you have bit rot to begin with? That's what scheduled scrubbing is for. You could of course be very unlucky and have a drive fail and another get corruption on the same day, but check how often you find issues with scrubbing and tell me how likely that scenario is.
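For a rough sense of that argument, here's a back-of-envelope calculation where every rate is a made-up placeholder (the real numbers depend entirely on your drives and workload):

```python
# All rates below are hypothetical, for illustration only.
annual_failure_rate = 0.03        # hypothetical per-drive AFR
p_latent_corruption = 0.01        # hypothetical chance a drive picks up unscrubbed corruption in one interval
scrub_interval_days = 30

p_fail_in_window = annual_failure_rate * scrub_interval_days / 365
p_both_in_window = p_fail_in_window * p_latent_corruption
print(f"{p_both_in_window:.5%} per drive pair per scrub interval")   # ~0.00247%
```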
I've had hundreds of drives in hundreds of terabytes of appliances over the years. UREs during scrubs and resilvers are a common occurrence, as in every monthly scrub across 200+ drives turns them up. This isn't 200 drives in a single array; they're spread over 4 geographically distributed appliances.
The drives have been champs overall; they're approaching an average runtime of about 8 years. Over those 8 years we've lost about 20% of the drives in various ways.
It is almost guaranteed that when a drive fails, another drive will have a URE during the resilver process. This is a non-issue as we run RAID-Z3 with multiple online hot spares.
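A quick way to see why that combination is a non-issue under Z3 (the parity count is real; the incident counts are just the scenario above):

```python
# RAID-Z3 fault budget for the failure-plus-URE scenario described above.
parity_columns = 3        # RAID-Z3 can reconstruct through 3 bad columns per stripe
whole_disk_failures = 1   # the drive loss that triggered the resilver
ure_columns = 1           # sectors another drive can't return during the resilver
margin = parity_columns - whole_disk_failures - ure_columns
print(margin)             # 1 -> still reconstructable, with one column of margin to spare
```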
We could do weekly. The volume of data is large enough that even sequential scrubbing when idle is about an 18-hour operation. As it is, we're happy with monthly scrubbing on the Z3 arrays. We don't bother pulling drives until they run out of reallocatable sectors, which extends the service lifetime by a year in most cases.
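For scale: since the drives scrub in parallel, the wall-clock time is roughly per-drive used capacity over per-drive sequential throughput. With hypothetical numbers in the right ballpark:

```python
# Hypothetical figures; scrub wall-clock time is bounded by the busiest single drive.
used_per_drive_tb = 9            # hypothetical allocated data per drive
seq_read_mb_s = 150              # hypothetical sustained sequential read rate
hours = used_per_drive_tb * 1e6 / seq_read_mb_s / 3600
print(f"{hours:.0f} h")          # ~17 h, consistent with the ~18 hour figure above
```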
I intentionally provisioned one of the long-term, archive-only appliances with 12 hot spares. This was to prevent the need for another site visit before we lifecycle the appliance. Currently down to seven hot spares.
That replacement will probably happen later this year. The reduced power requirement should cut the colo cost enough that the replacement 200TB appliance pays for itself in 18 months.
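A hypothetical version of that payback math (every figure below is a placeholder, not the actual budget):

```python
# Placeholder numbers only -- illustrating the payback-period arithmetic.
old_draw_w, new_draw_w = 1600, 700     # hypothetical power draw of old vs. replacement appliance
colo_cost_per_w_month = 0.50           # hypothetical colo billing for power, $/W/month
replacement_cost = 8000                # hypothetical cost of the 200TB appliance

monthly_savings = (old_draw_w - new_draw_w) * colo_cost_per_w_month
print(replacement_cost / monthly_savings)   # ~17.8 months to break even under these assumptions
```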