I'm not sure if this is sarcasm or not. The usability of such an approach is terrible: humans like names, and they like hierarchy. This is the same reason we use DNS instead of raw IP addresses.
URLs aren't much different from URNs, but they actually specified a default resolution algorithm that everyone could fall back on. They were more successful because there was less need to separate identifiers from locators than originally thought, though it's still debatable whether the results are intuitive (e.g., HTTP URLs as XML namespace identifiers, which may or may not be dereferenceable).
HTTP URLs took advantage of DNS as an existing, globally deployed resolver; coupled with a universally deployed path resolver (the web server), the rest was history. You could create a URL scheme called "hash", but it's hard to see how you would design a standard resolver for it unless it was one big centralized hash table in the sky - at the very least, you would still need to map objects to IP addresses.
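To make that gap concrete, here's a minimal sketch of what such a centralized resolver would have to provide; the table contents and address are entirely hypothetical:

```python
import hashlib

# Hypothetical "one big hash table in the sky": a global mapping from
# content digest to a host that can serve the bytes. The entry below is
# made up (it's just sha256(b"hello") pointing at a documentation IP).
GLOBAL_HASH_TABLE = {
    hashlib.sha256(b"hello").hexdigest(): "203.0.113.7",
}

def resolve(digest: str) -> str:
    """Map a content hash to an IP address, the way DNS maps names to IPs."""
    try:
        return GLOBAL_HASH_TABLE[digest]
    except KeyError:
        raise LookupError(f"no host known for {digest}") from None
```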
They do, but that does not mean there should not be other ways to access data. Hashes are universal and unambiguous. There should be a way to retrieve a file given its hash.
> You could create a URL scheme called "hash" but it would be hard to see how you could design a standard resolver unless it was one big centralized hash table in the sky - you still would need to, at the very least, map objects to IP addresses.
There would be an underlying P2P protocol that cp would use. On the other hand, cp doesn't even use FTP or HTTP, so maybe that's too much to ask.
> Hashes are universal and unambiguous. There should be a way to retrieve a file given its hash.
I'm not sure you've thought through the complexity of what you're asking for.
Hashes require (a) a hash function everyone agrees on, and (b) a way to resolve a hash to an IP address.
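Requirement (a) is the easy half. If everyone settled on, say, SHA-256, computing a file's identifier is straightforward; a sketch using Python's standard library:

```python
import hashlib

def file_digest(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. file_digest("some.iso") -> a 64-character hex string
```

Agreeing on the function is an ordinary standards problem; it's (b), the resolution step, that has no obvious decentralized answer.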
Unless you synchronized all global hashes across the Internet onto everyone's computer (the Git model of hashing a whole project -- which we know doesn't scale beyond a certain point unless you bucket things into independent sets of hashes you care about), you'd basically have to do something like
hash://ip_address/bucket/hash, or hash://bucket/hash if you want to give a monopoly to the one IP address that manages the giant hash table in the sky.
Which is back to URLs and HTTP, and no different from, say, Amazon S3.
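To illustrate why that's "back to URLs and HTTP": resolving hash://bucket/hash would in practice mean rewriting it into an ordinary HTTPS request against whoever runs the table, structurally the same as fetching an S3 object by key. A sketch; the gateway host is invented:

```python
import urllib.request

GATEWAY = "https://hash-gateway.example.com"  # hypothetical monopoly host

def fetch(bucket: str, digest: str) -> bytes:
    """Fetch content by hash via a plain HTTP GET -- i.e., just a URL."""
    url = f"{GATEWAY}/{bucket}/{digest}"
    with urllib.request.urlopen(url) as resp:
        return resp.read()
```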
In either case, your computer can do whatever it takes to get the file. With a useful URL, you'll have a reasonable notion of what's coming down and whether it matches your intentions.
Without that, the very natural question is, "Did I get the thing I wanted?" For example, it would be easy to paste the wrong hash code.
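To be fair, a hash does make one check mechanical: whatever comes down the wire can be verified against the identifier you asked for. A sketch:

```python
import hashlib

def verify(data: bytes, expected_digest: str) -> bool:
    """Confirm the retrieved bytes match the requested hash.

    This catches corruption or substitution in transit, but it can't
    catch the mistake above: if you pasted the wrong digest, you'll
    faithfully retrieve and verify the wrong file.
    """
    return hashlib.sha256(data).hexdigest() == expected_digest
```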
There are other benefits, like real-time binding. A hash is going to point to one particular sequence of bits. But you may not want a particular file so much as the best current mapping from an idea to a file. E.g., if Ubuntu discovers an issue with their released ISO, they can make a new one and replace what gets served up by the URL.
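That late binding is just another level of indirection: a stable, human-meaningful name maps to whatever digest is current, and the publisher repoints it. A sketch, with made-up names and digests:

```python
import hashlib

# Hypothetical mutable binding from a stable name to the current digest.
CURRENT = {
    "ubuntu-desktop.iso": hashlib.sha256(b"old build").hexdigest(),
}

def publish(name: str, new_digest: str) -> None:
    """Repoint the name, e.g. after a flawed ISO is replaced."""
    CURRENT[name] = new_digest

def resolve_name(name: str) -> str:
    """Look up the digest currently bound to a name, then fetch by hash."""
    return CURRENT[name]
```

Which is essentially what a URL already gives you: the name stays stable while the bytes behind it change.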