
I totally agree with the "abolish names and places" idea. Why can't I just write:

    $ cp hash://<somehash> .
and have my computer do whatever it takes to retrieve the file with this hash and copy it to my disk?



I'm not sure if this is sarcasm or not. The usability of such an approach is terrible: humans like names, and like hierarchy. This is the same reason we use DNS instead of IP addresses.

There were URNs (https://en.m.wikipedia.org/wiki/Uniform_Resource_Name) many moons ago, and they're still used. A URN resolver is a piece of software that converts such an identifier to a URL.

URLs aren't much different from URNs, but URLs actually specified a default resolution algorithm that everyone could fall back on. They were more successful because there was less need to separate identifiers from locators than originally thought, though it's still debatable whether the results are intuitive (e.g. HTTP URLs used as XML namespace identifiers, which may or may not be dereferenceable).

HTTP URLs took advantage of DNS as an existing, globally deployed resolver; coupled with a universally deployed path resolver (the web server), the rest was history. You could create a URL scheme called "hash", but it's hard to see how you could design a standard resolver unless it was one big centralized hash table in the sky - you would still need, at the very least, to map objects to IP addresses.


> humans like names, and like hierarchy.

They do, but that does not mean there should not be other ways to access data. Hashes are universal and unambiguous. There should be a way to retrieve a file given its hash.

> You could create a URL scheme called "hash" but it would be hard to see how you could design a standard resolver unless it was one big centralized hash table in the sky - you still would need to, at the very least, map objects to IP addresses.

There would be an underlying P2P protocol that cp would use. On the other hand, cp doesn't even use FTP or HTTP, so maybe that's too much to ask.

Maybe with curl or wget, then.


> Hashes are universal and unambiguous. There should be a way to retrieve a file given its hash.

I'm not sure you've thought through the complexity of what you're asking for.

Hashes require (a) a hash function everyone agrees to, (b) a way to resolve them to an IP address.
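As an entirely hypothetical sketch of what (a) would look like: if everyone agreed on, say, SHA-256, a file's "address" would just be the digest of its bytes, computable locally on any machine (the file name here is made up for illustration):

```shell
# Hypothetical sketch, assuming SHA-256 is the globally agreed hash function.
# A file's "address" is then just the digest of its bytes - no name, no location.
printf 'hello world\n' > example.txt
sha256sum example.txt        # prints the content address, identical on every machine
```

This only solves (a); the digest by itself tells you nothing about (b), i.e. which machines actually hold the bytes.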

Unless you synchronized all global hashes across the Internet onto everyone's computer (the git model of hashing a project -- which we know doesn't scale beyond a certain point unless you bucket things into independent sets of hashes you care about), you'd basically have to do something like hash://ip_address/bucket/hash, or hash://bucket/hash if you want to give a monopoly to one IP address that manages the giant hash table in the sky.

Which is back to URLs and HTTP, and no different from, say, Amazon S3.


Why should there be that? You're talking about an enormous, complicated system. What's the use case that justifies the effort?


BitTorrent magnet links already kind of do this.

Theoretically speaking, isn't it possible to create a virtual BitTorrent FUSE filesystem?


Because this blocks the very human needs for error-checking and maintaining awareness of context.

It's not like you're going to type that in. You're going to copy and paste it from somewhere. So it's just as good to use

http://releases.ubuntu.com/14.04.1/ubuntu-14.04.1-server-amd...

as

hash://b4ed952f6693c42133f73936abcf86b8

In either case, your computer can do whatever it takes to get the file. With a useful URL, you'll have a reasonable notion about what's coming down and whether it matches your intentions.

Without that, the very natural question is, "Did I get the thing I wanted?" For example, it would be easy to paste the wrong hash code.

There are other benefits, like real-time binding. A hash is going to point to one particular sequence of bits. But you may not want a particular file, but rather the best current mapping from an idea to a file. E.g., if Ubuntu discovers an issue with their released ISO, they can make a new one and replace what gets served up by the URL.
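To be fair, the "did I get the thing I wanted?" check is at least mechanical with hashes. A minimal sketch, assuming SHA-256, with the file name and the source of the expected digest invented for illustration:

```shell
# Sketch of the verification step a hash:// fetcher would run after download.
# The file contents and where 'expected' comes from are made up for illustration.
verify() {  # verify <file> <expected-sha256>
    [ "$(sha256sum "$1" | cut -d' ' -f1)" = "$2" ]
}
printf 'some bits\n' > download.bin
expected=$(sha256sum download.bin | cut -d' ' -f1)  # normally copied from hash://<somehash>
verify download.bin "$expected" && echo "got the thing I wanted"
```

Of course this only proves the bits match the hash you pasted; it can't tell you whether you pasted the right hash in the first place, which is the point above.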


How would you remember the hash? I guess one could have some sort of directory-like system for mapping human-memorable names to hashes...


> How would you remember the hash?

I wouldn't. I'd make a symbolic link.

Basically the current directory/names structure would be an abstract layer above the hash-based system.
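A minimal sketch of that layering, with the store layout and file names invented for illustration: objects live under their own hash, and names are just symlinks into the store:

```shell
# Sketch: a human-name layer on top of a content-addressed store.
mkdir -p store
printf 'v1 of my notes\n' > notes.tmp
h=$(sha256sum notes.tmp | cut -d' ' -f1)
mv notes.tmp "store/$h"      # the object is filed under its own hash
ln -sfn "store/$h" notes     # "notes" is the human-memorable name
cat notes                    # resolves name -> hash -> bytes
```

This is roughly what git does with refs: branch names are mutable pointers to immutable, hash-addressed objects.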


Plan 9 already did that in its file system...


And of course it would have to be tree structured to avoid naming collisions and bloat. Oh, wait...


You can.

    $ aria2c 'magnet:?xt=urn:btih:1e99d95f....'


Didn't know about this. Thanks :-)



