
It'd be a huge downgrade in privacy. As much as we don't like it, when I'm watching cat videos only Google knows; with bittorrent, everybody knows what you're watching. As much as I love bittorrent (I really do), this is an aspect for which I don't see easy solutions.



Indeed. Maybe it should include Tor-like features as well :)


I realize this wouldn't work so well: Tor routes traffic through exit nodes, so it would basically cancel out the advantages of p2p distribution.

The privacy concern makes me realize something else: there's no incentive for users here. They lose privacy; what do they gain?

It doesn't entirely rule out using p2p as a low-level mechanism for distributing content, but since we (the ones who manage servers) are the only ones who benefit from it, we must first find a way for it to have no impact on users.

The biggest privacy problem comes from how p2p currently works: we have a list of IPs associated with a resource. How can we obfuscate this without going through proxies?


Not "Tor". "Tor-like". Onion routing. Something that would make sure only your closest neighbor knows what you requested, and even they can't tell whether the request is for you or for someone behind you.
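
Roughly, the layering looks like this. A toy Python sketch (I'm using symmetric Fernet keys as a stand-in for brevity; real onion routing negotiates per-hop keys with asymmetric handshakes, so everything here is illustrative):

    from cryptography.fernet import Fernet

    # One key per relay on the path; in reality these are negotiated per hop.
    relays = [Fernet(Fernet.generate_key()) for _ in range(3)]

    # The sender wraps the request innermost-first, so relay 0 peels the
    # outermost layer and only the last relay ever sees the actual request.
    payload = b"GET /chunk/abc123"
    for relay in reversed(relays):
        payload = relay.encrypt(payload)

    # Each relay strips exactly one layer. No relay can tell whether the
    # node it received the message from originated it or merely relayed it.
    for relay in relays:
        payload = relay.decrypt(payload)

    assert payload == b"GET /chunk/abc123"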


I don't know. It's already hard enough to incentivize people to share content they already have (in current bittorrent); I believe it would be even harder to incentivize them to download content they're not interested in just for a neighbor's sake.

It works if we don't do naive P2P but rather friend-to-friend; this is what RetroShare does: your downloads can go through your friends, so only they know what you download. But that requires a lot more setup than traditional bittorrent, so I'm not sure it could work in general.


Such networks where you "download a bunch of stuff you're not interested in for the sake of" the network already exist.

Perfect Dark (a Japanese P2P system) is a direct implementation of that concept: you automatically "maintain ratio", as on a private torrent tracker, by your client just grabbing a bunch of (opaque, encrypted) blocks from the network and then serving them.

A more friendly example, I think (and probably closer to what the parent poster is picturing) is Freenet, which is literally an onion-routed P2P caching CDN/DHT. Peers select you as someone to query for the existence of a chunk; rather than just referring them to where you think it is (as in Kademlia DHTs), you go get it yourself (by re-querying the DHT in Kademlia's "one step closer" fashion), cache it, and serve it. So a query for a chunk results in each node along the onion-routed path (log2(network size) people) caching the chunk and then taking responsibility for it, as if they had it all along.
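
The fetch-and-cache behavior is easy to sketch. A toy Python version (the Node class, the XOR-distance routing, and the TTL are simplifications I'm assuming, not Freenet's actual wire protocol):

    import hashlib

    def chunk_key(chunk_id: str) -> int:
        return int.from_bytes(hashlib.sha1(chunk_id.encode()).digest(), "big")

    class Node:
        def __init__(self, node_id: int):
            self.node_id = node_id
            self.peers = []   # neighbors, wired up after construction
            self.cache = {}   # chunk key -> bytes

        def get(self, chunk_id: str, ttl: int = 10):
            k = chunk_key(chunk_id)
            if k in self.cache:
                return self.cache[k]
            if ttl == 0 or not self.peers:
                return None
            # Rather than referring the requester onward (Kademlia-style),
            # we query the "one step closer" peer ourselves...
            closer = min(self.peers, key=lambda p: p.node_id ^ k)
            chunk = closer.get(chunk_id, ttl - 1)
            if chunk is not None:
                # ...and cache the result, so every hop on the query path
                # ends up holding the chunk as if it had it all along.
                self.cache[k] = chunk
            return chunk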


For traffic that consists of many small, relatively rare files (which is most HTTP traffic) you would have to do some proactive caching anyway. I want jQuery 1.2.3. I ask your computer. Your computer doesn't have it, either because it's a rarely used version or because you cleared your cache. Instead of returning some error code, your computer asks another node for it, caches the file, then returns it to my computer. This kind of thing will be necessary to ensure high availability, and incidentally it makes it hard to see who the original requester is; it should actually improve privacy. This scheme could also be used to detect cheating nodes that refuse to return any content: they usually wouldn't have the excuse of not having the content you requested, so if a node consistently refuses to share, it can eventually be blacklisted.
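
The blacklisting part is simple to express. Since any node can fetch a chunk on your behalf, "I don't have it" stops being an excuse, so you can just count refusals per peer. A toy sketch (the thresholds and the peer-address keying are assumptions for illustration):

    from collections import defaultdict

    class ReputationTable:
        def __init__(self, min_requests=20, max_refusal_rate=0.9):
            self.requests = defaultdict(int)   # peer_addr -> total requests
            self.refusals = defaultdict(int)   # peer_addr -> refused requests
            self.min_requests = min_requests
            self.max_refusal_rate = max_refusal_rate
            self.blacklist = set()

        def record(self, peer_addr, served):
            self.requests[peer_addr] += 1
            if not served:
                self.refusals[peer_addr] += 1
            n = self.requests[peer_addr]
            # Only judge a peer once we have a meaningful sample size.
            if n >= self.min_requests:
                if self.refusals[peer_addr] / n > self.max_refusal_rate:
                    self.blacklist.add(peer_addr)

        def usable(self, peer_addr):
            return peer_addr not in self.blacklist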


> I want jQuery 1.2.3. I ask your computer [...] returns it to my computer

Good! Now I know you have jQuery 1.2.3, which is vulnerable to exploit XYZ, which I can now use to target you. This is one reason why apt-p2p and things like it can't be deployed at scale; it's way too easy to find out which versions of which packages are installed on your machine.


Indeed, keeping disclosure local is a neat idea for mitigating the history leak.

This makes me think that another feature could be to not disclose all available peers, but to randomly select a subset. Someone wanting to check whether a given IP is in a swarm would then have to download the same peer list a potentially large number of times, instead of pulling it once per resource and knowing for a fact.
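
Something like this on the tracker side (a toy sketch; the in-memory dict and the subset size are assumptions):

    import random

    swarm = {}   # info_hash -> set of (ip, port) peers

    def announce(info_hash, peer, subset_size=50):
        peers = swarm.setdefault(info_hash, set())
        peers.add(peer)
        others = [p for p in peers if p != peer]
        # Hand back a bounded random sample instead of the full swarm,
        # so a single announce never reveals complete membership.
        return random.sample(others, min(subset_size, len(others)))

By a rough coupon-collector argument, enumerating a swarm of N peers then takes on the order of (N / subset_size) * ln(N) announces instead of one, which at least raises the cost of surveillance.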

Neither idea is exactly a privacy shield, but both are steps toward mitigating the problem.



