Agreed, it's very hard; that's why GFS and HDFS gave up some parts of POSIX compatibility.
In CAP terms, JuiceFS splits the problem between the meta engine (a CP system: Redis, MySQL, or TiKV) and the object store (an AP system). When the meta engine is unavailable, operations on JuiceFS block for a while and eventually return EIO. When the object store returns 404 (object not found), meaning it's inconsistent with the meta engine, the request is retried for a while and may still return EIO if the store doesn't recover.
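To make that failure mode concrete, here's a minimal Go sketch of how a client might cope with it, assuming a file under a hypothetical /jfs mount path; it treats EIO as potentially transient and retries with backoff. This is plain POSIX file I/O against the mount, not any JuiceFS-specific API:

    // Hedged sketch: retry reads from a JuiceFS mount on EIO, since the
    // backend (meta engine or object store) may recover. /jfs is an
    // illustrative mount path, not anything JuiceFS prescribes.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "syscall"
        "time"
    )

    func readWithRetry(path string, attempts int) ([]byte, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            data, err := os.ReadFile(path)
            if err == nil {
                return data, nil
            }
            // EIO here may mean the meta engine or object store is
            // temporarily unavailable; back off and try again.
            if errors.Is(err, syscall.EIO) {
                lastErr = err
                time.Sleep(time.Second << i) // exponential backoff
                continue
            }
            return nil, err // other errors are not retried
        }
        return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
    }

    func main() {
        data, err := readWithRetry("/jfs/data.bin", 4)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("read %d bytes\n", len(data))
    }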
The file format is carefully designed to work around the consistency issues of the object store and local cache. Every piece of data is written to the object store and local cache under a unique ID, so you will never get stale data as long as the metadata is correct [1].
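For illustration, here's a toy Go sketch of that immutable-unique-ID pattern (all names are invented, not JuiceFS's real internals): data always lands under a fresh key, and the metadata commit is the only step that makes it visible, so a correct metadata pointer can never reference stale bytes:

    // Toy model of write-new-key-then-commit-metadata. Object keys are
    // never reused, so a reader that holds a metadata pointer always
    // fetches exactly the bytes that pointer was committed with.
    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    type store struct {
        mu      sync.Mutex
        objects map[uint64][]byte // "object store": immutable blobs keyed by ID
        meta    atomic.Uint64     // "meta engine": current object ID for the file
        nextID  atomic.Uint64
    }

    // write uploads data under a fresh, never-reused ID, then commits
    // the metadata. Readers with the new metadata find the new object;
    // readers with old metadata still find the old, intact object.
    func (s *store) write(data []byte) {
        id := s.nextID.Add(1)
        s.mu.Lock()
        s.objects[id] = data
        s.mu.Unlock()
        s.meta.Store(id) // commit: metadata now points at the new object
    }

    func (s *store) read() []byte {
        id := s.meta.Load()
        s.mu.Lock()
        defer s.mu.Unlock()
        return s.objects[id]
    }

    func main() {
        s := &store{objects: map[uint64][]byte{}}
        s.write([]byte("v1"))
        s.write([]byte("v2"))
        fmt.Printf("%s\n", s.read()) // the committed version, never stale
    }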
Within a mount point, JuiceFS provides read-after-write consistency. Across clusters, JuiceFS provides open-after-close consistency, which should be enough for most applications and strikes a good balance between consistency and performance.
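In practice that means a writer must close the file before a reader on another mount is guaranteed to observe the new contents. A small sketch of the pattern, again assuming a hypothetical /jfs mount:

    // Close-to-open usage pattern: Close() on the writer side is the
    // point after which another client's Open() sees the new data.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const path = "/jfs/shared/result.txt"

        // Writer side: contents are only guaranteed visible to other
        // mounts after Close() returns successfully.
        f, err := os.Create(path)
        if err != nil {
            panic(err)
        }
        if _, err := f.WriteString("done\n"); err != nil {
            panic(err)
        }
        if err := f.Close(); err != nil { // flush data, commit metadata
            panic(err)
        }

        // Reader side (normally a different client): an open that
        // happens after the writer's close sees the latest contents.
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(data))
    }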
[1] https://juicefs.com/docs/community/architecture/#how-juicefs...