Willow Protocol (willowprotocol.org)
429 points by todsacerdoti 8 months ago | 121 comments



> Some questions in protocol design have no clear-cut answer. Should namespaces be identified via human-readable strings, or via the public keys of some digital signature scheme? That depends entirely on the use-case. To sidestep such questions, the Willow data model is generic over certain choices of parameters. You can instantiate Willow to use strings as the identifiers of namespaces, or you could have it use 256 bit integers, or urls, or iris scans, etc.

> This makes Willow a higher-order protocol: you supply a set of specific choices for its parameters, and in return you get a concrete protocol that you can then use. If different systems instantiate Willow with non-equal parameters, the results will not be interoperable, even though both systems use Willow.

Help me out here - isn't the point of a protocol that two independently developed systems don't have to agree on how to implement the protocol? What value does Willow have if two systems that both purport to be "Willow-compatible" aren't compatible with each other?


By having a more generic protocol on top, it allows the same tools to be used for different specific end results. So shared libraries and common debugging tools can benefit more use cases. You might even make a "higher-order" tool that can work with any willow data, at the cost of specific UI affordances that can be used when you know more about the underlying data.

This is sort of how ActivityPub is a thing, but it underpins multiple, sorta-but-not-really interoperable systems like Lemmy and Mastodon.


> What value does Willow have if two systems that both purport to be "Willow-compatible" aren't compatible with each another?

That you can claim to support Willow, or be Willow-compatible, without actually having to interoperate with your competitors. See e.g. the usage history of X.509 in the nineties.


Protocols can have parameters. The protocol will be interoperable between parties who choose compatible parameters.

For example, SSH only works if both sides support and can agree on the same cryptographic algorithms, which is something that SSH is parametrized over.
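A toy sketch of what that agreement looks like in practice (hypothetical code, not the actual SSH state machine): each side sends its preference list, and the first client preference the server also supports wins.

    // Toy SSH-style negotiation: take the first algorithm in the
    // client's preference list that the server also offers.
    fn negotiate<'a>(client: &'a [&'a str], server: &[&str]) -> Option<&'a str> {
        client.iter().copied().find(|alg| server.contains(alg))
    }

    fn main() {
        let client = ["chacha20-poly1305", "aes256-gcm", "aes128-ctr"];
        let server = ["aes256-gcm", "aes128-ctr"];
        // Both ends speak "the same protocol", but they only talk if this is Some.
        assert_eq!(negotiate(&client, &server), Some("aes256-gcm"));
    }

If the intersection is empty, the connection fails even though both sides implement SSH, which is the point about parametrized protocols.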


Unless I missed it somewhere, Willow has no specified handshake procedure. So there's no standardized way for the two sides to come to an agreement on how to communicate. (Willow appears to be completely agnostic even of the data encoding used to communicate in the first place.)

In that sense it is even more high level and abstract than parameterized protocols like SSH.


True, it was an example of how “the same protocol” doesn’t necessarily imply compatibility. The algorithm negotiation procedure doesn’t guarantee that there will be an agreement, so it is somewhat secondary to the argument.


Not all protocols have to have a handshake to negotiate how they will communicate. Sometimes you just need to have both sides configured the same way. E.g. PTP clocks being in the same domain, having the same delay request mechanism etc.


> Not all protocols have to have a handshake to negotiate how they will communicate. Sometimes you just need to have both sides configured the same way.

True, but the industry has been moving away from "you have to configure both ends" towards autonegotiation.

E.g. with RS-232 you had to configure data rates (e.g. 9600 baud) and encoding (e.g. 8N1: 8 data bits, no parity bit, 1 stop bit), and if you didn't have the same config at both ends, communication wouldn't happen. USB, the primary successor, determines stuff like that by negotiation. Similarly, with Ethernet autonegotiation, you no longer have to worry about manually configuring speed/duplex on each end, which I can remember being a big drama 20 years ago.


Yeah, that is true. And even as I typed that I thought: PTP can be fairly hands-off and autonegotiates most stuff. I guess it can't really autonegotiate E2E vs. P2P delay mechanisms, because those can represent different physical infrastructure.

Probably not the best example, I just happened to have the PTP spec open at the time!


And I think it's a good idea. Take HTTP for example: it is much more than its wire syntax. Freeing it from those details means that what is valid today:

    GET / HTTP/1.1
    Host: example.com
Could have the same semantics in any format:

    { 
      "method": "GET",
      "version": "1.1",
      "headers": [{"name": "Host", "value": "example.com"}]
    }

Of course a server accepting the former won't be able to communicate with the latter, but that's just an implementation detail that Willow does not want to commit to at this stage, and does not make it any less complete. Just a bit impractical.


The history of cryptography failures suggests that it’s better for a protocol to be opinionated and complete rather than open ended.


Format-agnostic layers are a good thing. HTTP/2 depends on this: different encoding, same meaning. But you have to choose some encoding for communication to happen; otherwise it is a meta-protocol.


Willow appears closer to a "Protocol Construction Kit" than a protocol itself.

As a construction kit, it has value for people who want to make protocols where they'll control both ends, but don't have to re-implement basic table stakes.


FWIW, HTTP allows subprotocol / backwards-compatible negotiation. Perhaps that could be similar to how implementations of this might need to cooperate.


I think the idea here is that Willow is meant to handle "higher order" issues like encryption, and the especially prickly problem of sharing encrypted data within a more cooperative environment, so that application builders can focus on their more specific applications.

Say that I want to implement something Figma-like for designing drug-runner operations; Willow seems to be an excellent building block. (Yes, the example is kinda out there, but it's meant to indicate the genericity intended here.)


I wish the web page came out and said what Willow actually provides and what is up to the developer.

As far as I can tell, this is primarily a cryptographic specification, like Noise (http://www.noiseprotocol.org), except for a stateful key-value store instead of stateless connections.


Is IP useless because the other end might not support UDP or TCP?

More practically/less absurdly: is SSL useless because side A doesn't support the same ciphers as side B?

Perhaps a Willow negotiation protocol would be needed to reconcile, but it's a bad idea from a security perspective because it enables downgrade attacks.


Comparing apples and oranges.

A protocol is allowed to have presumptions, and then provide an interface.

It's not allowed to have no interface at all and no presumptions (a void).


Think of it like const generics in languages like Rust or C++.

You can make two data structures with a const parameter.

If the parameter is not the same, they are not compatible (not the same type). The parameter can be tuned according to the specific needs of the application.
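A minimal Rust sketch of that analogy:

    // Two instantiations of one generic type with different const
    // parameters are distinct, incompatible types, much like two
    // Willow instantiations with different parameter choices.
    struct Buffer<const N: usize> {
        data: [u8; N],
    }

    fn main() {
        let a = Buffer::<16> { data: [0; 16] };
        let b = Buffer::<32> { data: [0; 32] };
        // let c: Buffer<16> = b; // compile error: expected `Buffer<16>`, found `Buffer<32>`
        let _ = (a, b);
    }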


No, I think you are right, this isn't a protocol. It's a protocol generator...

"higher order" is some nonsense, and would make me shy away from using it...


It is an unusual term in the area of protocols, but it seems understandable: it tries to draw a parallel to higher-order functions. So "some nonsense" might be a bit strong...


Protocols are useful, and as such the costs are high to using a bad one. Confused verbiage does not instill confidence that the authors know what they are doing, or that there is real benefit.

So, perhaps it is strong language, but I think it is a reasonable reaction.


Huh? I'd understand your phrase "protocol generator" to be the equivalent of "higher order protocol" in the same way that a "higher order function" can be seen as a "function generator"...


The difference would be that a function, order notwithstanding, is (a) callable (site). "Higher order" merely means one that takes/returns other functions, but they are still, fundamentally, functions (callable sites).

A protocol has the property that it is implementation independent, but that it has a defined interface (i.e. it is immediately usable).

This is neither (no defined interface, and implementation dependent). If it shares neither property with a protocol, then you can't claim that it is truly a protocol, "higher order" or otherwise.

This confused verbiage is what should be cause for concern - note that I can claim a "higher order" protocol with JSON or gRPC - it's all the basic building blocks for a protocol; both sides just need to implement the same stuff!

Except, neither JSON nor gRPC are crazy enough to claim to be a "higher order" protocol, which to me puts this in the rubbish bin of over-complicated technologies looking for a problem, like SOAP, JavaBeans, OSGi - all of these could also be claimed to be "higher order" protocols as well.

The term is meaningless, and so I assume, is this project.


It's wild to me that you would dismiss a project as meaningless because you can't immediately understand it.


Oh I might read their specs: "Willow is a family of specifications:" https://willowprotocol.org/specs/index.html#specifications

This part looks useful.

But is it useful to be a "willow" family of protocols? Probably not.

Their claims on the front page are extraordinary. Extraordinary claims require extraordinary evidence, and heading the page with nonsense is not a good start.


Pretty rude to dismiss a project without reading any of its documentation.


Another webpage compares Willow to other protocols like IPFS: https://willowprotocol.org/more/compare/index.html#willow_co...

According to them, data on IPFS is immutable, stateless, and globally-namespaced, whereas data on Willow is mutable, stateful, and conditionally-namespaced. I interpret Willow as an authenticated, permissioned, content-based, globally-addressed, distributed database system, where an address has the hierarchy and expressiveness of a URL.
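For flavor, here is a rough Rust sketch of the shape of a Willow entry as I read the data-model spec (field names approximate; the ID and digest types are exactly the parameters Willow leaves open):

    // Approximate shape of a Willow entry. NamespaceId, SubspaceId and
    // PayloadDigest are the "higher-order" parameters: choose concrete
    // types for them and you get a concrete protocol.
    struct Entry<NamespaceId, SubspaceId, PayloadDigest> {
        namespace_id: NamespaceId,     // which shared data space
        subspace_id: SubspaceId,       // which author/area within it
        path: Vec<Vec<u8>>,            // hierarchical key, like URL path segments
        timestamp: u64,                // microseconds; newer entries win
        payload_length: u64,
        payload_digest: PayloadDigest, // hash of the actual payload
    }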

One particularly nice feature about the documentation: if you hover over an underlined word (https://willowprotocol.org/specs/data-model/index.html#data_...), a pop-up box provides a definition or explanation. Importantly, some terms in the pop-up are underlined themselves, so you can dig down into the terminology with great ease. More projects should implement this functionality.


I love the style of the documentation. That hover feature is just great.

_Surely_, they didn't write this from scratch, did they? _Surely_, there is a tool that they used that I can use, too. Right?


I asked. They made their own generator which they want to release once it is polished up a bit: https://github.com/earthstar-project/willowprotocol.org


How does this compare to IPFS?

I personally found IPFS very disappointing in practice, so I'm very hopeful for a successor.

(The promise of IPFS is great, but it is excruciatingly slow, clunky, and buggy. IPFS has a lot of big ideas but suffers from a lack of polish that would make the Augean stables look clean. And as soon as you scale to larger collections of files, it quickly crumbles under its own weight. You can throw more resources at it, but past some point it just falls over. It just doesn't work outside of small-scale tests.)



If you are looking for something similar to IPFS but a bit more minimalistic and performance-oriented, check out iroh: https://github.com/n0-computer/iroh

It is a set of open source libraries for peer-to-peer networking and content-addressed storage. It is written in Rust, but we have bindings to many languages.

One part of iroh is a work in progress implementation of the willow spec. The lower layers include a networking library similar to libp2p and a library for content-addressed storage and replication based on blake3 verified streaming.

Most iroh developers have been active in the ipfs community for many years and have shared similar frustrations... See this talk from me in 2019 :-)

https://youtu.be/Qzu0xtCT-R0?t=169


https://veilid.com/ should also be a great alternative. I haven't had time to use it yet, but it was built to address performance issues with IPFS and to allow both DHT-style content discovery and direct websocket connections for streaming (and doing that in an anonymous fashion).


This looks very interesting. They made very similar choices to the ones we (iroh) did: Rust, ed keys, blake3.

They seem to do their own streams, while we are adapting QUIC to a more p2p approach. Also the holepunching approach seems to be different. But I would love to get more details.


https://yewtu.be/watch?v=Kb1lKscAMDQ

This was the presentation at DC'31. I will also check out iroh! Thanks for working on building something in this space, it is much, much needed!


Thanks. This is awesome. I think they are doing more work themselves in terms of crypto, whereas we rely on QUIC+TLS more.

Regarding holepunching, our approach is a bit less pure p2p, but has quite good success rates. We copy the DERP protocol from tailscale.

I am confident that we have a better story regarding handling of large blobs. We don't just use blake3, but blake3 verified streaming to allow for range requests.

Also I wrote my own rust library for blake3 verified streaming that reduces the overhead of the verification data. https://crates.io/crates/bao-tree
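To give a flavor of why a tree hash enables verified range requests, here is a toy sketch (nothing like bao-tree's actual layout, which uses blake3's internal tree mode): hash fixed-size chunks as leaves and combine pairs upward. A receiver can then verify any single chunk against the root plus a logarithmic number of sibling hashes, without downloading the whole file.

    // Toy Merkle root over 1 KiB chunks using the blake3 crate.
    // Illustrative only; real verified streaming does not work this way.
    fn merkle_root(data: &[u8]) -> blake3::Hash {
        let mut level: Vec<blake3::Hash> =
            data.chunks(1024).map(blake3::hash).collect();
        if level.is_empty() {
            level.push(blake3::hash(b""));
        }
        while level.len() > 1 {
            level = level
                .chunks(2)
                .map(|pair| {
                    // Hash the concatenation of the (one or two) children.
                    let mut h = blake3::Hasher::new();
                    for child in pair {
                        h.update(child.as_bytes());
                    }
                    h.finalize()
                })
                .collect();
        }
        level[0]
    }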

I tried to get on their discord at https://veilid.com/discord, but I get an invalid invite. You know a better way to get in touch?


Hmm, this is strange, I tried the invite and it worked for me. If you are on fedi, @thegibson@hackers.town is part of the team.

Thanks for the links, I will get in touch personally when I try iroh :)


Hi, I'm super intrigued by Willow and your work on iroh. Do you have any kind of documentation on how iroh deviates from Willow, or what parts of Willow are planned to be implemented vs omitted?


Not yet. We have been busy with other stuff, also the willow spec has been a bit of a moving target until now.

We would like to take our rust willow impl and separate it a bit more from our code base, so that iroh documents are just users of the willow crate.


That makes sense. I think I might try to really jam through the Willow docs and get a good understanding. If it all looks good, I might be able to help out splitting these things out =].


iroh seems to have a couple of "killer tools" already, known as dumbpipe[0] and sendme[1].

Although I am concerned that while dumbpipe does mention cryptography, sendme's webpage makes no mention of it (?).

0. https://www.dumbpipe.dev/

1. https://iroh.computer/sendme


It's using the same transport. Basically sendme is like dumbpipe, but adds blake3 verified streaming from the iroh-bytes crate for content-addressed data transport.


I imagined as much, but the website still does not mention encryption.

Which works against it. E2EE is a requirement today.


Thanks for letting us know. We will add a section about encryption.

These tiny tools are basically one-week projects to show off the tech, but they try to be useful on their own as well.


Is iroh suitable for use as a collaborative decentralized function cache? Had a quick skim but can't quite see how to get at the get/put etc in the python api yet. Will spend more time later.


Interesting — does the Rust crate export a C API then?


Not officially. We currently have bindings for rust, python, golang and swift.

These were the most asked for bindings (python for ml, golang for networking and swift for ios apps).

We are using uniffi https://mozilla.github.io/uniffi-rs/

Would you need C or C++ bindings?


Ah, I see. Hm. I might be interested in a C API since that could be used in C, C++, and Lua equally well. I really was just wondering what the common implementation between the bindings was, since it struck me as unusual that there would be a number of bindings but not C (which is, AIUI, the only interface besides Rust itself that Rust can really export).


So, I am not the one that is doing the bindings, so take this with a grain of salt.

It seems uniffi does create C-compatible bindings in order to make bindings for all these other languages. But those are internal bindings that are ugly and not intended to be used externally.


Perhaps if an issue were opened describing the necessary steps to provide a fluent and stable C API, staying consistent with the uniffi approach you're using for the other wrappers, then someone enterprising could pick up the ball and run with it. :)


Willow solves the biggest problem I have always had with IPFS: it’s content addressable, which is nice for specific things but not generic enough to make the protocol actually practical and usable in the real world outside of specific use cases. (Namely you can’t update resources or even track related updates to resources.)

(Mathematically, a name-addressable system is actually a superset of content-addressable systems as you can always use the hash of the content as the name itself.)
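A quick Rust sketch of that superset claim (blake3 chosen arbitrarily as the hash): a name-addressed store where the naming convention happens to be the content hash behaves exactly like a content-addressed one.

    use std::collections::HashMap;

    fn main() {
        // A plain name-addressed store: arbitrary string names to bytes.
        let mut store: HashMap<String, Vec<u8>> = HashMap::new();
        let content = b"hello world".to_vec();
        // Choose the name to be the hash of the content...
        let name = blake3::hash(&content).to_hex().to_string();
        store.insert(name.clone(), content);
        // ...and lookups by name are now lookups by content hash.
        assert!(store.contains_key(&name));
    }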


> a name-addressable system is actually a superset of content-addressable systems as you can always use the hash of the content as the name itself

It's a superset in that sense but not a superset in another sense.

In a content-addressable system, if I post a link to another piece of content by hash, then no one can ever substitute a different piece of content. Like, if I reference the hash of a news article, no one can edit that article after the fact without being detected. This is a super-useful feature of CAS that is not a feature of NAS. Other implications:

* I can review a piece of software, deem that it's not malware in my opinion, and link to the software by hash. No one can substitute a piece of malware without detection.

* Suppose you get a link from a trusted source. Now you can download a copy of the underlying content from any untrusted source, without a care about authentication or trusted identities. This describes BitTorrent.


It’s a protocol, not an implementation. If your implementation of the protocol says objects are defined by their hash, then it can also assert that the hash of the retrieved file matches the hash you looked it up with^ (which you should do in your library/app regardless of what the protocol says will be returned, though ideally your protocol would define some sort of merkle tree or whatever to verify the hash piecemeal).

^ which you, by definition, already have
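Concretely, the assertion being described is tiny (a sketch; any collision-resistant hash works, blake3 used here for continuity with the rest of the thread):

    // Verify bytes fetched from an untrusted peer against the hash
    // we looked them up with; a mismatch means substituted content.
    fn verify(expected: &blake3::Hash, fetched: &[u8]) -> bool {
        &blake3::hash(fetched) == expected
    }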


To be fair, IPFS does offer not just content addressing but also a mechanism for mutability: IPNS. You can think of a willow namespace (or iroh document) as a key-value store of IPNS entries.

The problem with IPNS is that the performance is... not great... to put it politely, so it is not really a useful primitive to build mutability on.

You end up building your own thing using gossip, at which point you are not really getting a giant benefit anymore.


Is the performance a critical design flaw or just an implementation issue?


Difficult to answer.

IPNS uses the IPFS kademlia DHT, which has some performance problems that you can argue are fundamental.

For solving a similar problem with iroh, we would use the bittorrent mainline DHT, which is the largest DHT in existence and has stood the test of time - it still exists despite lots of powerful entities wanting it to go away.

It also generally has very good performance and a very minimalist design.

There is a rust crate to interact with the mainline DHT, https://crates.io/crates/mainline , and a more high level idea to use DHTs as a kind of p2p DNS, https://github.com/nuhvi/pkarr


Design flaw. In IPFS every piece of data (even every chunk of large files) is globally indexable in the same namespace. You need namespaces and/or a path to restrict yourself to just the subset of peers that might actually have the data.

It would be possible to add a layer on top of IPFS to include some context with every hash lookup so the search can be more focused, but the original design just assumed it was ok to do an O(log n) DHT search for every chunk.


That is not a problem specific to IPNS though. Using a DHT for something like IPNS is fine. Publishing roots of large data sets is also fine(ish).

Using it to publish every tiny chunk of a large file is a horrible idea. It leads to overwhelming traffic.

If you publish a few TB of data, due to the randomness of the DHT xor metric you have to basically talk to every node on the network. Add to that the fact that establishing a tcp libp2p connection is much more heavyweight than sending a single UDP packet like in the bittorrent mainline DHT, and you are basically screwed.

In iroh we don't publish at all by default. But if you have to use a DHT, the fact that we have a single hash for arbitrarily large files due to blake3 verified streaming helps a lot.

You still get verified range access.


Imagine if DNS supplied every URL and not just domain names. You need some mechanism to propagate resource changes. IPNS has two practical mechanisms: a global DHT that takes time to propagate, and a pub/sub that requires peers to actively subscribe to your changes.


btlink does DNS per domain name, which you could argue is a sweet spot between too many queries and being too broad. At least in the case of the web, it works nicely.


It's a design flaw.


Being written in Go may have made development of the reference client fast (that was the creators' contention when I asked, anyway), but it killed its growth as a standard. The inability to have a portable lib-ipfs that could quickly, easily, and completely give almost any language ecosystem or daemon IPFS capabilities is a real drag.


Can you quantify the breaking point of IPFS in terms of the number of files? I was considering it for a project that has fewer than 200,000 entries.


It depends on how many files you have, but also the file size. My understanding is that IPFS splits files into 256kB chunks, each with a content ID (CID), and then when you expose your project, it tries to advertise every CID of every file to peers.

200,000 files could take a while to advertise, but from memory it should work, hanging for less than 15 minutes, depending on your hardware, file size, quality of connection to your peers, alignment of planets, etc.

If you add one order of magnitude above that, it starts to become tricky. Manageable if you shard over several nodes and look for workarounds for perf issues. But if you keep growing a bit past that point, it can't keep up with publishing every small chunk of every file one by one fast enough.

But it's also very possible perf has improved since the last time I tried it, so definitely take this with a grain of salt, you might want to try installing and running the publish command and see what happens.


"hang" sounds pretty bad.

Even if there's a lot of sharding and propagating and whatever to do, it should happen in the background, and never interfere with user experience.

From your description, it seems their implementation has serious issues.


I'd be very curious what project you have for which IPFS is a good solution.


I'm not sold on IPFS but the idea of using a file system as a top-level global index is attractive to me. I find the two best references for human information are global location and time. I think an operating system structured around those constants could be a winner.

I'm not sold on IPFS and will look at Willow and iroh.


A global hash-based index is literally an undergraduate project to do well. You could even ride atop bittorrent if you really had to.


Consider https://github.com/anacrolix/btlink. It's a proof of concept, and has all the basics. I designed it and I worked for IPFS, and I am the maintainer of a popular DHT and BitTorrent client implementation.


Still confused here. What is the actual, concrete ‘as-a-user-I-want-to’ application for which this is meant to be an ideal fit? Sorry if a dumb question.


Yeah, it would have helped me if they walked through what it actually means to "use" Willow. Do I install something like Dropbox on my computer? Do I write code that calls Willow as a library?


It should be able to underpin most apps. The sky is the limit. Or your imagination is the limit; whichever comes first.

This is a protocol for generic shared information spaces, where each person still owns & can manage permissions for their pieces of data in the space. It's a general idea that's present & implicit in most existing online spaces.


Same here, no clue what it does. It could be a syncthing/dropbox type thing, it could be some sharing protocol. I dunno.


It’s like Dropbox, including sharing, but without a centralized service, instead peer-to-peer.


So something like https://syncthing.net/ ?


It's more generic than that.

Syncthing is designed specifically for file system sync (and does a very good job). Willow could be used for file system tasks, but also for storing app data that is unrelated to file systems, like a KV store database.

You should be able to write a good syncthing like app using the willow protocol, especially if you choose blake3 as the hash function.


Syncthing doesn’t have sharing support AFAICS, but yes, Willow could be used as the underlying protocol for something like Syncthing.


Aljoscha and gwil are excellent people, I’m excited to see them working together. Looks to me like they’re solving some of the biggest problems with Secure Scuttlebutt.


So this is pure spec? No implementations at all?


iroh documents are a work-in-progress implementation of willow: https://github.com/n0-computer/iroh

We've been working with the willow team as we go & giving feedback on the spec.

disclosure: I work on iroh.


The iroh name somewhat resembles iRODS, another system for distributed file sharing and fine-grained permissions.

Quick googling did not give me a proper grasp of the use cases for iroh/IPFS vs iRODS.

Would you be willing to list the benefits of iroh vs iRODS?


I was not aware of iRODS.

Iroh is named after a certain fictional character who likes tea. Any similarity is a coincidence.

But it seems like iRODS is much more high level than iroh. E.g. iroh certainly does not contain anything for workflow automation. You could probably implement something like iRODS using iroh-net and iroh-bytes.


> Iroh is named by a certain fictional character that likes tea.

"The file was in my sleeve the whole time!"


Wow, thank you for pointing me to iRODS, I was not aware of the project! The big difference I'm seeing as I read the iRODS docs is that it is a datacenter-grade data management _service_, whereas iroh is a multiplatform SDK for building your own applications.

Seems like one would want iRODS if they have massive amounts of highly sensitive data that needs fine-grained access control. You would want iroh if you're building an app that uses direct connections between end-user devices to scale data sync.



Still confused to be honest...

What does it mean that Earthstar will become a Willow protocol? Isn't it an implementation of Willow?


There are at least two implementations there on that page:

- One in TypeScript
- One in Rust


A spec without an implementation is a lovely idea at best.


iroh developer here.

willow was not developed in a vacuum.

the willow folks have worked with us while we have implemented many ideas from willow, starting with range based set reconciliation ( https://arxiv.org/abs/2212.13567 )

they have been open to removing parts that have turned out to add too much complexity to implementations.


What's the purpose of subspaces, given that there are namespaces?

What's the purpose of having separators in the keys?


If you are looking for more on Willow, we had gwil over for a chat at one of our tech talks: https://www.youtube.com/watch?v=yx5T7Z5rHGc


>Willow specification published 17/01/2024

Please try to follow RFC3339 when writing dates.

E.g. 11/01/2024 is ambiguous, as it could be January 11th or November 1st, whereas 2024-01-11 is RFC3339-compliant and does not exhibit this problem.
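If you generate these strings in code, most date libraries will do it for you; e.g. a sketch with the chrono crate in Rust:

    use chrono::{SecondsFormat, Utc};

    fn main() {
        // Prints e.g. "2024-01-17T12:34:56Z": unambiguous, and it even
        // sorts correctly as a plain string.
        println!("{}", Utc::now().to_rfc3339_opts(SecondsFormat::Secs, true));
    }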


My first question (given that IPFS doesn't seem to do this well) is "does it scale?"

Still useful otherwise if not… assuming there’s an actual client/server for it on mac/linux…


This is a good question. But it is worth noting that not everything has to scale globally.

E.g. in iroh-sync (which is an experimental impl of the willow protocol) you are not concerned with global scaling. You care only about nodes that are in the same document.

So if you request hash QmciUVE1BqKPXMSvTTGwHZo1ywYdZRm9FfBvEJkB6J4USb via IPFS, you are trying to globally find anybody that has this hash, which is a very difficult task.

If you ask for some content-addressed data in an iroh document, you know to only ask nodes that participate in this particular document, which makes the task much easier.

Edit: regarding clients, iroh is released for macOS, windows and linux. Iroh as a library also works on iOS. Download instructions are here: https://iroh.computer/docs/install


Other commenters have mentioned IPFS, Dropbox, Syncthing, etc., but this most closely resembles http://upspin.io/, with the caveat that willow is p2p and upspin uses a centralized key server.

https://www.youtube.com/watch?v=ENLWEfi0Tkg


> http://upspin.com/

That does not go anywhere.


Oops, should be io tld. Fixed!


This is kind of exactly what I've been looking for. I've been trying to weave stuff together with libp2p, but this looks very promising as a way to handle a lot of the lower-level junk I don't care about. While I didn't go in-depth on the docs, I can see that this would be able to model a lot of different applications right off the bat. Very cool.


Total erasure of data.

This is disappointing. What's been read can never be un-read; to say otherwise is deceptive.


In the same way, once an attacker can exploit some weakness in a system, it's game over. Yet defense in depth is a thing and makes it much less likely that bad things happen.

In this case, yes, it's impossible to guarantee that some malicious peer doesn't ignore my "plea to delete". But combined with the fact that my data will only be replicated to/by peers I already have a trust relationship with (as opposed to e.g. on a blockchain) it provides another layer of protection that a system without deletion simply doesn't have. Not perfect, but not useless either.


Yes, but that's not what "Total erasure of data" means.

The project's goals are hard and noble. It would be better to under-promise and over-deliver than to make everyone question their claims. Maybe I'm just a grumpy old man at this point, but there are already too many caveat emptors in computing. They could have said "better" erasure of data.


Prefix pruning is a very different approach from tombstones. An update will actually remove data, not just mark the data as removed.

Maybe "total erasure of data" is a too strong promise, but the fact that you can not force nodes that you don't control to unsee things is common knowledge, so in my opinion this does not need a qualifier.


It's a handy feature to have in the protocol if you're operating detached networks using it, and can control all the clients. If you're using it as internal infrastructure. Which, personally, is the only way I've ever been interested in using these sorts of things.

You're right that it's nigh-meaningless for a public cluster.


Ok, but where exactly does that leave us when the FBI is pulling Bitcoin private keys out of files accidentally synchronized to iCloud Files? These are hard problems.


I don't understand your concern. Guaranteed (if you control the clients, for ordinary values of "guaranteed", not, like, mathematically-rigorous ones) deletion is a handy feature if you need to be able to comply with regulations, or just want to be sure you're not wasting disk space on stuff you intended to delete, without having to do extra work.

Attackers are a whole other matter, and their existence doesn't make the feature pointless, for the above reasons.


> deletion is a handy feature if you need to be able to comply with regulations

This is a good point, and "GDPR compliant erasure of data" would be a great way to explain it. As a user I can guess what that means, and as an engineer it doesn't sound like magic.


What is the point of not trying to accomplish hard and noble things? I'm sure there are plenty of people willing to take shortcuts.

I appreciate people trying to do something hard and noble.


Me too! But let's be honest about how things work.


I thought I was. I don't think it has much to do with the way the world works; I think it might have more to do with how one works the world. There are plenty of people who don't want to try the impossible, but the impossible should be explored.


I can appreciate the spirit of the comment. I'm more pedantic than is typical -- perhaps even more than is healthy! Even so, I don't think the claim is deceptive.

Willow's claim has to do with erasure of the _networked_ data. It doesn't claim that copies people make are destroyed. Almost everyone understands and expects that if you can view data, you usually can somehow make some kind of copy of it. The question usually comes down to: how good of a copy?

Perhaps the best way to prevent perfect copying of data is to prevent someone from viewing it on a device they control.


> Almost everyone understands and expects that if you can view data, you usually can somehow make some kind of copy of it.

This is true for the target audience of the article, but certainly not for people in general. It might be true for people in their 20s, but I strongly doubt it's true for any other age range.


I take your point. How well this is understood is an empirical question. I retract my claim that it holds for "almost everyone" from the broader population.


I don't really get what your nitpick is here. There is no conceivable way in the universe to unshare information that has been shared. The idea here is that you can stop further sharing of that information. I think that's a fairly reasonable and obvious interpretation.


"Wrangling the complexity of distributed systems shouldn’t mean we trade away basic features like deletion, or accept data structures which can only grow without limit."

The CALM theorem would like a word with you.

You simply can't have consistent, coordination-free non-monotonic systems.

Forgetting is ok, deleting is not.
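The intuition, as a toy Rust sketch: a grow-only set is monotonic, so merges commute and replicas converge without coordination. Add a "remove" operation and the final state depends on the order in which you learned things, which is exactly the non-monotonicity CALM is about.

    use std::collections::HashSet;

    // Union is commutative, associative and idempotent, so replicas can
    // merge grow-only sets in any order and still converge.
    fn merge(a: &HashSet<String>, b: &HashSet<String>) -> HashSet<String> {
        a.union(b).cloned().collect()
    }

    fn main() {
        let r1: HashSet<_> = ["x".to_string()].into();
        let r2: HashSet<_> = ["y".to_string()].into();
        assert_eq!(merge(&r1, &r2), merge(&r2, &r1)); // order never matters
    }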


So stoked about this. Lower level than holepunch, and it sounds like it has everything I need to get going.


Decentralized and no ICO needed


I really like the illustrations


Why would anyone use this over libp2p?



> Unreviewed Content

> This community has not been reviewed and might contain content inappropriate for certain viewers. View in the Reddit app to continue.

Wow, what absolute horseshit. The march continues to acquire marketing signals at any cost.



