Hmmm, a p2p version that uses local storage on each device you have could have some interesting applications. Compared to a client-server architecture it's a tad complex for a lot of users to set up, but with a good UI it could become a viable alternative storage method, as well as allowing for potential app cross-compatibility. You would have to make sure all devices had a chance to sync - perhaps a web service could help there? - and I'm unsure how you would deal with conflict resolution, but the idea definitely seems to have a lot of potential.
The main problem is garbage collection - to guarantee that you can sync across all devices for all time, your data structure must be append-only, so your document can only grow. A long-lived document will eventually get very, very large. You could let the user decide when to collect garbage and establish a new baseline doc; say, every 5 years. But that means if you edited the document on a laptop, didn't sync it with another client, and then closed the laptop for 5 years, you would lose the ability to have the changes you made on that laptop resolved automatically.
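Roughly what I mean by a baseline, as a sketch (the shapes and names here are made up for illustration, not taken from any real CRDT library):

```typescript
// Append-only op log plus a user-triggered baseline/compaction step.
// All types and names are invented for illustration.

type Op = { id: string; clock: number; payload: string };

interface Doc {
  baselineClock: number; // ops older than this have been garbage-collected
  snapshot: string;      // document state folded in at the baseline
  ops: Op[];             // append-only tail since the baseline
}

// Normal syncing just appends ops we haven't seen yet.
function merge(doc: Doc, incoming: Op[]): Op[] {
  const known = new Set(doc.ops.map(o => o.id));
  const rejected: Op[] = [];
  for (const op of incoming) {
    if (op.clock < doc.baselineClock) {
      // The history this op depends on was collected, so it can no longer be
      // merged automatically -- the "laptop closed for 5 years" case.
      rejected.push(op);
    } else if (!known.has(op.id)) {
      doc.ops.push(op);
    }
  }
  return rejected;
}

// "Establish a new baseline": fold old ops into the snapshot and drop them,
// so the document stops growing without bound.
function compact(doc: Doc, upToClock: number,
                 apply: (snapshot: string, ops: Op[]) => string): void {
  const old = doc.ops.filter(o => o.clock < upToClock);
  doc.snapshot = apply(doc.snapshot, old);
  doc.ops = doc.ops.filter(o => o.clock >= upToClock);
  doc.baselineClock = upToClock;
}
```

Anything older than baselineClock becomes unmergeable, which is exactly the trade-off above.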
That's what I'm suggesting, but I'm pointing out that CRDTs are generally append-only data structures, and some users could eventually run into space issues with them.
A CRDT which models edits of arbitrary text with essentially unbounded revision history (required for merging) almost certainly can only grow (monotonically?) in size. Maybe you can be smart about compression, but I don't think it can ever shrink. Someone smarter than me can probably formally prove this with an argument about entropy while considering a series of particularly pathological edits.
But you're right that I'm wrong when I say "[all] CRDTs are generally append-only structures." I meant to say "text-revision CRDTs are generally append-only data structures."
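To make the text-revision case concrete, here's a toy sketch of why such a structure never shrinks: deletions have to be kept as tombstones, because a peer that hasn't seen the delete yet may still refer to that character. Names are invented and the positional ordering is hand-waved (a real sequence CRDT orders by id):

```typescript
// Toy sequence-of-characters structure: deletes only set a tombstone flag,
// so the underlying array grows monotonically with the edit history.
type CharId = string; // e.g. "<peerId>:<counter>"

interface Char {
  id: CharId;
  value: string;
  deleted: boolean; // tombstone -- the entry itself is never removed
}

const chars: Char[] = [];

function insertAfter(prevIndex: number, id: CharId, value: string): void {
  // Real CRDTs derive the position from ids; an index is used here only to
  // keep the example short.
  chars.splice(prevIndex + 1, 0, { id, value, deleted: false });
}

function remove(id: CharId): void {
  const c = chars.find(ch => ch.id === id);
  if (c) c.deleted = true; // mark, don't delete: other peers may still reference it
}

function render(): string {
  return chars.filter(c => !c.deleted).map(c => c.value).join("");
}
```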
But let's say, for example, your phone and your computer are clients, and your computer has the latest version of your data. When you open your phone, if your computer is not on the internet, you won't be able to get the latest. So would this be practically useful even for a trivial use case like this?
Have a server be just another peer, not privileged in any way, but always online and available.
Maybe do the routing through a distributed hash table like the one we use for finding torrent peers (perhaps even the exact "mainline" DHT that torrents use), which would mean that even with the server down you could still sync with your other online clients.
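Something like this is what I'm picturing, with the DHT interface invented purely for illustration (not the actual mainline wire protocol):

```typescript
// Every device, including the always-on server, announces and looks itself up
// the same way; nobody is privileged.
interface Dht {
  announce(infoHash: string, addr: string): Promise<void>;
  lookup(infoHash: string): Promise<string[]>; // addresses of announced peers
}

async function syncWithAnyPeer(
  dht: Dht,
  docHash: string,
  myAddr: string,
  connect: (addr: string) => Promise<boolean>
): Promise<void> {
  await dht.announce(docHash, myAddr);

  // Try whoever is currently reachable; if the server is down, any other
  // online client found through the DHT works just as well.
  for (const addr of await dht.lookup(docHash)) {
    if (addr === myAddr) continue;
    if (await connect(addr)) return; // synced with at least one peer
  }
}
```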
I'll need to think about this.