
Hi, author here! The scenario was meant to be high level — I guess I should have gotten more into the various architectures and tradeoffs, but the article is already pretty long.

The way I see it there are a couple of ways this can shake out:

1. If you have a sync server that only relays the updates between peers, then you can of course have it work asynchronously: just store the encrypted updates and send them when a peer comes back online. The problem is that there's no way for the server to compress any of the updates; if a peer is offline for an extended period of time, they might need to download a ton of data. (A minimal sketch of this option follows the list.)

2. If your sync server can merge updates, it can send compressed updates to each peer when it comes online. The downside, of course, is that the server can see everything.
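
To make the contrast with option 1 concrete, here's a minimal sketch in TypeScript. All names here (RelayServer, updatesSince, the seq scheme) are hypothetical, not any real sync server's API:

    // The server treats updates as opaque ciphertext: it can append
    // and replay them, but it can never compact them.
    type EncryptedUpdate = { seq: number; blob: Uint8Array };

    class RelayServer {
      private log: EncryptedUpdate[] = [];

      append(blob: Uint8Array): number {
        const seq = this.log.length;
        this.log.push({ seq, blob });
        return seq;
      }

      // A returning peer asks for everything after the last seq it saw.
      // If it missed 100k updates, it downloads 100k blobs.
      updatesSince(lastSeen: number): EncryptedUpdate[] {
        return this.log.filter((u) => u.seq > lastSeen);
      }
    }

Option 2 would merge the log server-side before sending, which is exactly what forces the server to see plaintext.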

Ink & Switch's Keyhive (which I link to at the end) proposes a method for each peer to independently agree on how updates should be compressed [1] which attempts to solve the problems with #1.

[1] https://github.com/inkandswitch/keyhive/blob/main/design/sed...






> The problem is that there's no way for the server to compress any of the updates; if a peer is offline for an extended period of time, they might need to download a ton of data.

There are ways to solve this, using deterministic message chunking. Essentially clients compress and encrypt “chunks” of messages. You can use metadata tags to tell the server which chunks are being superseded. This is fast and efficient.
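
Roughly, and with entirely made-up names (this is a sketch of the idea, not Automerge's actual implementation):

    // Clients upload encrypted, compacted chunks. The only plaintext the
    // server sees is metadata saying which older chunks are replaced, so
    // it can garbage-collect ciphertext it cannot read.
    type ChunkId = string;

    interface StoredChunk {
      id: ChunkId;           // deterministic, e.g. a hash of the op range
      supersedes: ChunkId[]; // plaintext metadata, visible to the server
      ciphertext: Uint8Array;
    }

    class ChunkStore {
      private chunks = new Map<ChunkId, StoredChunk>();

      put(chunk: StoredChunk): void {
        for (const old of chunk.supersedes) this.chunks.delete(old);
        this.chunks.set(chunk.id, chunk);
      }

      // A returning peer downloads only the live chunks, not every
      // individual update since it went offline.
      all(): StoredChunk[] {
        return [...this.chunks.values()];
      }
    }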

Alex Good gave a talk about how he's implementing this in Automerge at the Local-First Conference a few weeks ago:

https://www.youtube.com/watch?v=neRuBAPAsE0


I would really love to see this added to the article (or a followup), since it was my conclusion as well, and most readers are going to be thinking the same thing.

I'd also love to know how the trade-off between FHE compute time and the bloat of storing large change sets affects latency in the online and offline cases.

Perhaps, as with many things, a hybrid approach would be best suited to balancing online responsiveness with offline network and storage use?

Admittedly, I haven't read the linked research papers at the end. Perhaps they have nice solutions. Thanks for that.


Okay, I updated it to hopefully clarify!

There's another option: let the clients do the compression. I.e. a client would sign & encrypt a message saying "I applied messages 0..1001 and got document X". This can then become the new starting point, perhaps after it's been signed by multiple clients.

That introduces a communication overhead, but it is still likely to be orders of magnitude cheaper than homomorphic encryption.
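
A rough sketch of what such a checkpoint message could look like (hypothetical shape; a real system would use actual signatures, e.g. Ed25519, rather than this stub):

    // "I applied messages 0..upToSeq and got document X", signed by the
    // client that produced it and countersigned by others.
    interface Checkpoint {
      upToSeq: number;                     // covered message range
      encryptedDoc: Uint8Array;            // resulting state, encrypted
      signatures: Map<string, Uint8Array>; // peerId -> signature bytes
    }

    // Once enough peers have countersigned the same checkpoint, it can
    // serve as the new starting point and the server can discard the
    // messages it summarizes.
    function isTrustedStartingPoint(c: Checkpoint, quorum: number): boolean {
      return c.signatures.size >= quorum;
    }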


Now you need a consensus algorithm.

You don’t. Instead of sending the resulting state after compressing, you can compress a chunk of operations, which the receiving client merges as a block.

It works fine.
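
To see why this sidesteps consensus, consider a toy grow-only-set CRDT (a deliberately simplified sketch; real CRDTs like Automerge's are much richer). A chunk is just a batch of operations, so merging it is the same commutative merge as applying the operations one at a time; two peers that compact overlapping ranges produce redundant chunks, never conflicting ones:

    type Op = string;       // toy operation: "add this element"
    type Doc = Set<string>;

    function mergeOps(doc: Doc, ops: Op[]): Doc {
      const out = new Set(doc);
      for (const op of ops) out.add(op); // commutative and idempotent
      return out;
    }

    // Order and duplication don't matter, so no one has to agree on
    // which chunk "wins":
    const a = mergeOps(mergeOps(new Set<string>(), ["x", "y"]), ["y", "z"]);
    const b = mergeOps(mergeOps(new Set<string>(), ["y", "z"]), ["x", "y"]);
    // a and b both equal {"x", "y", "z"}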



