The path to really big distributed datastores goes through the CAP theorem.
I'm not fully familiar with the POSIX requirements for filesystems, but providing atomic changes, good performance, and the host of other guarantees needed to keep up the illusion of available, consistent files in a distributed store runs straight into the CAP theorem.
The problem isn't requiring atomic changes per se, but which changes are required to be atomic and which are not. The trade-offs POSIX picked here are just horrible for distribution.
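To make that concrete: about the only atomic multi-step update POSIX hands you is replacing a single file via rename(2). Here's a minimal Python sketch (the helper name is mine); note the limit it illustrates, that there is no POSIX call to update two files atomically:

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Atomically replace the contents of a single file.

    POSIX guarantees rename() is atomic within one filesystem, so a
    reader sees either the old contents or the new, never a torn
    write. Nothing comparable exists for changes spanning two files.
    """
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make the new data durable first
        os.replace(tmp, path)      # the atomic rename(2) step
    except BaseException:
        os.unlink(tmp)
        raise
```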
As an example of how to do better, squint a bit and look at Git as (part of) a distributed data store.
Git doesn't pretend that you can make a change in one location, and have it atomically show up in all locations. On the other hand, Git does allow you to make changes to multiple files, and only have them show up together or not at all.
Git also has explicit mechanisms for resolving conflicts when multiple people have made changes to the same files.
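Both points in one Python sketch, driving Git through subprocess (the repository path and file names are made up for illustration): the commit shows the all-or-nothing multi-file update, and the merge shows how conflicts surface explicitly instead of being papered over.

```python
import subprocess

REPO = "/path/to/clone"  # hypothetical local clone

def git(*args: str) -> subprocess.CompletedProcess:
    """Run a git command in REPO and capture its output."""
    return subprocess.run(["git", *args], cwd=REPO,
                          capture_output=True, text=True)

# Multi-file atomicity: both files land in one commit, so any other
# clone that fetches it sees both changes or neither.
git("add", "schema.sql", "migration.sql")
git("commit", "-m", "update schema and migration together")

# Explicit conflict handling: a conflicting merge does not silently
# pick a winner; it stops with a nonzero exit status and leaves
# conflict markers for a human (or tool) to resolve.
merge = git("merge", "other-branch")
if merge.returncode != 0:
    unmerged = git("diff", "--name-only", "--diff-filter=U")
    print("conflicts to resolve in:", unmerged.stdout.strip())
```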
Git doesn't get around the CAP theorem; it just has much better trade-offs than POSIX, because Git was designed explicitly for distributed use.
POSIX also doesn't let you pick different trade-offs for read-only and read-write data, even though read-only data can be replicated freely without any consistency cost.
He basically restated the CAP theorem, which has been proven mathematically (Gilbert and Lynch, 2002).
Essentially, if you distribute your data across nodes that can be partitioned from each other, you have to trade some degree of consistency against availability.
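A toy sketch of that choice (all names are illustrative, not any real system's API): a replica that notices it is partitioned must either refuse writes to stay consistent, or accept them and diverge.

```python
from enum import Enum

class Mode(Enum):
    CP = "consistent-under-partition"  # refuse writes, stay consistent
    AP = "available-under-partition"   # accept writes, allow divergence

class Replica:
    def __init__(self, mode: Mode) -> None:
        self.mode = mode
        self.value = None
        self.partitioned = False  # set True when peers are unreachable

    def write(self, value) -> None:
        if self.partitioned and self.mode is Mode.CP:
            # Consistent but unavailable: the client gets an error.
            raise RuntimeError("partitioned: rejecting write")
        # Available but possibly inconsistent: replicas may now hold
        # different values and need reconciling after the partition heals.
        self.value = value
```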