I think Borg and Tarsnap take the right approach here: a map of blocks, so updating a file uploads only the changed block(s). It balances the efficiency of updates with the completeness of the copy. Sort of like a FAT filesystem, only with block-level deduplication built in.
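A minimal sketch of the idea, with some simplifying assumptions: fixed-size blocks instead of the content-defined chunking Borg actually uses, and a plain dict standing in for the remote block store:

    import hashlib

    BLOCK_SIZE = 4 * 1024 * 1024  # fixed-size blocks; real tools chunk on content boundaries

    def backup(path, store):
        """Split a file into blocks, storing each under its content hash.
        Blocks already present in `store` are skipped, so a small edit
        uploads only the block(s) it touched."""
        block_map = []
        with open(path, "rb") as f:
            while block := f.read(BLOCK_SIZE):
                digest = hashlib.sha256(block).hexdigest()
                if digest not in store:   # dedup: identical blocks stored once
                    store[digest] = block
                block_map.append(digest)
        return block_map                  # the file is just an ordered list of hashes

    def restore(block_map, store):
        """Reassemble the file by looking up each block hash in order."""
        return b"".join(store[h] for h in block_map)

The block map is the FAT-like part: the file's identity lives in the ordered list of hashes, while the blocks themselves are shared across every file and every snapshot that references them.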
Of course, you don't get a nice mirror of your files right in the cloud unless you run a separate server that reconstructs it and makes it available as traditional buckets.
I use a Rubrik appliance, which does block-level dedupe and extends to the cloud. I was able to instantiate a multi-TB database from the backup onto a physical server in minutes. Extremely impressed.
I decided against a block-level system for Zero because I'm trying to make predictions about which files will be needed next locally, and that's hard to do at the block level, I think.