You are correct that the failure rate for contracts is somewhere around 9%, and that you need over 10 contracts to get 11 9's of reliability. However, this doesn't translate to needing 10x redundancy on the Sia network to get high reliability.
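For context, a quick sketch of the arithmetic behind those nines, assuming independent 9% failures and full replication (each host holds a complete copy):

```python
import math

# With a 9% independent failure rate, full replication to n hosts
# loses data only when every copy fails, so the loss probability
# is 0.09**n.
FAIL_RATE = 0.09

def nines(n):
    """Number of nines of reliability from n full replicas."""
    return math.floor(-math.log10(FAIL_RATE ** n))

print(nines(10), nines(11))  # 10 replicas give 10 nines, 11 give 11
```

So pure replication does need more than 10 full copies (10x+ overhead) to reach 11 nines, which is exactly the cost that erasure coding avoids.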
Data is uploaded to Sia (by default; it's configurable through the API) in a 10-of-30 scheme using Reed-Solomon coding, which means each piece of data is held by 30 hosts, and any 10 of those 30 are sufficient to recover the original data. This works out to a total overhead of 3x, and the algorithms behind it are, in my opinion, super fascinating.
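To make the 10-of-30 property concrete, here's a toy sketch of the underlying idea: treat the data as defining a polynomial of degree less than 10, and hand out 30 evaluations of it as shares. This is illustration only, done over a prime field for simplicity; Sia's actual Reed-Solomon implementation differs in its field and codec details:

```python
import random

P = 2**31 - 1  # a Mersenne prime; all arithmetic is done mod P

def lagrange_eval(points, x):
    """Evaluate the unique degree < len(points) polynomial passing
    through `points` at x, via Lagrange interpolation mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # Divide by `den` using Fermat's little theorem (P is prime)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    """Produce n shares from k data values; the data values define the
    polynomial through its values at x = 1..k (a systematic code)."""
    points = list(enumerate(data, start=1))
    return [(x, lagrange_eval(points, x)) for x in range(1, n + 1)]

def decode(shares, k):
    """Recover the original k values from any k of the n shares."""
    return [lagrange_eval(list(shares)[:k], x) for x in range(1, k + 1)]

data = [11, 22, 33, 44, 55, 66, 77, 88, 99, 100]  # a 10-value "segment"
shares = encode(data, 30)                          # held by 30 hosts
survivors = random.sample(shares, 10)              # any 10 hosts suffice
assert decode(survivors, 10) == data
```

Any 10 of the 30 shares pin down the degree-&lt;10 polynomial uniquely, which is why the overhead stays at 3x while tolerating the loss of 20 hosts.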
> Data is uploaded to Sia (by default; it's configurable through the API) in a 10-of-30 scheme using Reed-Solomon coding, which means each piece of data is held by 30 hosts, and any 10 of those 30 are sufficient to recover the original data.
This sounds good - I'm just trying to understand how this is counted - are you establishing 30 separate contracts to achieve this?
Sia maintains 50 contracts with hosts at all times, and uses 30 of them for each file segment that gets uploaded. We use state channels, so we can use the same contracts each time you upload a new file, minimizing the total amount of on-chain activity.
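As a rough sketch of that bookkeeping (all names here are hypothetical, not Sia's actual code): a long-lived pool of 50 contracts, with 30 picked for each segment, and uploads handled as off-chain revisions of the existing contracts:

```python
import random

TOTAL_CONTRACTS = 50    # contracts kept open with hosts at all times
HOSTS_PER_SEGMENT = 30  # hosts receiving shares of each segment

class ContractPool:
    """Hypothetical sketch of a reusable contract pool."""
    def __init__(self, host_ids):
        # Contracts are formed once; later uploads reuse them via
        # state-channel updates rather than new on-chain transactions.
        self.contracts = list(host_ids)[:TOTAL_CONTRACTS]

    def hosts_for_segment(self):
        """Pick 30 distinct contracts out of the pool of 50."""
        return random.sample(self.contracts, HOSTS_PER_SEGMENT)

pool = ContractPool(f"host-{i}" for i in range(TOTAL_CONTRACTS))
chosen = pool.hosts_for_segment()
assert len(set(chosen)) == HOSTS_PER_SEGMENT
```

The point of the pool is that the expensive on-chain step (contract formation) happens once, while per-file uploads only touch the channels.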
If you assume each host independently has 91% uptime, the resulting downtime works out to: https://www.wolframalpha.com/input/?i=sum+of+(30+choose+x)+*...
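That linked sum is just a binomial tail: with 10-of-30 coding, the file is unreadable only when fewer than 10 of the 30 hosts are up. A quick sketch of the same calculation, assuming independent 91% uptime per host:

```python
from math import comb

def unavailability(n=30, k=10, p_up=0.91):
    """Probability that fewer than k of n independent hosts are up,
    i.e. too few shares remain to reconstruct the file."""
    return sum(comb(n, x) * p_up**x * (1 - p_up)**(n - x)
               for x in range(k))

print(unavailability())  # vanishingly small (well below 1e-14)
```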
In practice, software reliability is a bigger factor in downtime than host reliability.