Hacker News

Freenet's Content Hash Key URIs are one example of this idea in practice.

https://wiki.freenetproject.org/Content_Hash_Key
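The core idea of a Content Hash Key — the address of a block *is* the hash of its data, so anything you retrieve can be verified without trusting the node that served it — can be sketched in a few lines. This is a simplified, in-memory toy: real Freenet CHKs also encode a decryption key and other metadata.

```python
import hashlib

class ContentHashStore:
    """Minimal content-addressed store in the spirit of Freenet's CHKs:
    the key is the SHA-256 hash of the data, so any retrieved block can
    be verified without trusting whoever served it. (Real CHKs also
    carry a decryption key; omitted here.)"""

    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._blocks[key] = data
        return key

    def get(self, key: str) -> bytes:
        data = self._blocks[key]
        # Verify on the way out: a corrupt or forged block can't match its key.
        if hashlib.sha256(data).hexdigest() != key:
            raise ValueError("corrupt or forged block")
        return data

store = ContentHashStore()
key = store.put(b"an immutable document")
assert store.get(key) == b"an immutable document"
```

Because the key is derived from the content, the same document always gets the same address, and a changed document necessarily gets a new one — which is exactly what makes such URIs immutable.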

BitTorrent's "magnet" URIs could be seen as another. I always liked the idea of using torrents to host static web content. There are downsides, but they would be worth it in many cases.

If you used the torrent info hash as the primary identifier of the web content, but also embedded an HTTP URL that the data could be served directly from, you could have secure immutable content with almost the same performance as a regular website. The torrent data could be used to verify the HTTP data, and the browser would fall back to downloading from the torrent network if the website was unavailable or served invalid content.

(This would probably require a bit of original design, since I don't think there's an existing convention for getting the actual torrent data over HTTP instead of from peers (DHT), but that's minor.)
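The verify-then-fall-back step could look roughly like this. It's a hypothetical scheme for illustration: it hashes the whole payload with SHA-256 and assumes an `xt=urn:sha256:` parameter plus a `ws=` web-seed URL in a magnet-style URI, whereas a real BitTorrent info hash covers the bencoded info dict and verification would happen per piece.

```python
import hashlib
from urllib.parse import urlparse, parse_qs

def verify_http_copy(uri: str, payload: bytes) -> bool:
    """Check bytes fetched from the embedded HTTP mirror against the
    content hash carried in a magnet-style URI. Hypothetical layout:
    xt=urn:sha256:<hex digest>, ws=<HTTP mirror URL>."""
    params = parse_qs(urlparse(uri).query)
    expected = params["xt"][0].rsplit(":", 1)[1]
    return hashlib.sha256(payload).hexdigest() == expected

payload = b"<html>static, immutable page</html>"
digest = hashlib.sha256(payload).hexdigest()
uri = f"magnet:?xt=urn:sha256:{digest}&ws=https://example.org/page.html"

assert verify_http_copy(uri, payload)          # mirror served valid data
assert not verify_http_copy(uri, b"tampered")  # reject, fall back to the swarm
```

The browser would try the `ws=` URL first for speed, run a check like this on the response, and only hit the DHT if verification failed or the mirror was down.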




Yes, the article is basically talking about turning the web into something it currently isn't. In a strange way, this is what the web was when it was very young: a bunch of interlinked documents written in static HTML that rarely moved around.

But now we have something of a hodgepodge bazaar. For URLs to truly stop moving around and survive their creator's circumstances, there needs to be a distributed repository. I don't know if Freenet will be that repository (the one time I tried it, it was glacially slow). Maybe BitTorrent's Sync project will pave the way to a truly universal, persistent content repository with permanent URIs.


> I don't know if Freenet will be that repository (the one time I tried it, it was glacially slow).

Freenet is only slow because of all the indirection it needs to do to guarantee anonymity. You could get the same distributed-data-store semantics "in the clear" for a much lower cost, and then layer something like Tor on top of them if you wanted the anonymity back.


As you say, the early web came close to this ideal. What happened was almost entirely political and social: censorship, copyright claims, DMCA takedowns, and so on (only occasionally would a "webmaster" die or a system fall into disrepair). Freenet-style anonymity (and the early web had the appearance of anonymity) is one approach: prevent pointer breakage by making censorship impractical. Another would be to accept that the web should be an append-only distributed database, like Bitcoin's blockchain or a de-duplicating filesystem, in which additions depend on prior content, making censorship all-or-nothing (and hopefully we wouldn't throw out the baby with the bathwater). BitTorrent sits somewhere in between: anonymity through numbers, and high availability through independent mirroring (though without interdependence between torrents to discourage censorship).
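The "additions depend on prior content" property can be illustrated with a toy hash chain — each entry's hash commits to the previous entry, so you can't quietly alter one document without the change being detectable, and recomputing a hash to hide the edit breaks every later link. (Illustrative only; real blockchains add signatures, consensus, proof-of-work, etc.)

```python
import hashlib

GENESIS = "0" * 64

def append(chain, document: bytes):
    """Append-only log: each entry's hash covers the previous entry's
    hash plus the new document, chaining all entries together."""
    prev = chain[-1][0] if chain else GENESIS
    digest = hashlib.sha256(prev.encode() + document).hexdigest()
    chain.append((digest, document))

def valid(chain) -> bool:
    """Recompute every link; any tampered document fails its own check,
    and any recomputed hash fails the next entry's check."""
    prev = GENESIS
    for entry_hash, document in chain:
        if entry_hash != hashlib.sha256(prev.encode() + document).hexdigest():
            return False
        prev = entry_hash
    return True

chain = []
for page in (b"page one", b"page two", b"page three"):
    append(chain, page)
assert valid(chain)

# Censoring the middle document is immediately detectable:
chain[1] = (chain[1][0], b"[removed]")
assert not valid(chain)
```

That all-or-nothing interdependence is what raises the cost of selective censorship: you can't excise one document without either failing validation or rewriting everything downstream.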


While magnet URIs come close, I think this would actually be a better match for a purely functional data store, like Datomic for example.

If you namespaced each Datomic database and added a transaction ID, you would get a reference to an immutable snapshot of that entire data store, or of pieces of it, like datomic://myhost:<transaction-id>/path-or-query-into-db

The disadvantage is that it's data, not a website; however, it's possible to use Functional Reactive Programming to auto-generate the site from that data store, giving you the 'website view' again.

That of course still allows your program to be lost, but if you added the program to that purely functional data store itself, thereby versioning your program too, then that is also no longer a problem.
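The transaction-addressed snapshot idea sketches easily in a few lines. This is a toy in-memory model in the spirit of the comment above — the names are illustrative, not Datomic's actual API — but it shows why a URI carrying a transaction ID resolves to the same value forever.

```python
class ImmutableStore:
    """Toy value database: every transaction yields an id, and reading
    at that id is a pure function of (id, path) -- later writes never
    change what an earlier id resolves to."""

    def __init__(self):
        self._log = []  # list of {path: value} snapshots, one per transaction

    def transact(self, updates: dict) -> int:
        snapshot = dict(self._log[-1]) if self._log else {}
        snapshot.update(updates)
        self._log.append(snapshot)
        return len(self._log) - 1  # transaction id

    def read(self, tx_id: int, path: str):
        return self._log[tx_id].get(path)

db = ImmutableStore()
t1 = db.transact({"/index.html": "v1"})
t2 = db.transact({"/index.html": "v2"})

# A URI like datomic://myhost:<t1>/index.html always resolves to "v1",
# even after later transactions overwrite the path:
assert db.read(t1, "/index.html") == "v1"
assert db.read(t2, "/index.html") == "v2"
```

A real implementation would share structure between snapshots (persistent data structures) rather than copying, but the addressing semantics are the same.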

And once you've done that, call me, since you'll have built what I've been dreaming of for the past decade.


Hah! I am working on something similar to that. I don't really think of it as a 'website view', though--it's more like a distributed database of versioned hypercard stacks that can contain hyperlinks. It also runs the stacks on a distributed system of mutually-untrustworthy resource-competing agents, built as a multitenancy patch to Erlang's BEAM VM, and each stack-instance is by default accessed "collaboratively", mediated between users transparently using Operational Transformations.
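For the curious, the "mediated transparently using Operational Transformations" part boils down to a transform function that shifts one peer's operation past another's so both converge on the same document. Here's a minimal insert-vs-insert sketch (my own illustration, not the platform's code; real OT systems also handle deletes, cursors, and more than two peers):

```python
def transform(op, other, op_wins_ties):
    """Transform insert op = (pos, text) against a concurrent insert
    `other`, shifting its position so both peers converge.
    op_wins_ties breaks equal-position conflicts deterministically."""
    pos, text = op
    opos, otext = other
    if pos < opos or (pos == opos and op_wins_ties):
        return op
    return (pos + len(otext), text)

def apply(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

doc = "stack"
a, b = (0, "hyper"), (5, "-card")

# Peer 1 applies a, then b transformed against a; peer 2 does the reverse.
d1 = apply(apply(doc, a), transform(b, a, op_wins_ties=False))
d2 = apply(apply(doc, b), transform(a, b, op_wins_ties=True))
assert d1 == d2 == "hyperstack-card"
```

The interesting property is the convergence at the end: each peer applies operations in a different order, yet the transforms guarantee both arrive at the same stack contents.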

I'm calling the platform Alph--after the river that runs through Xanadu ;)




