This looks very cool, but I'm wondering about abuse. I watched the video and did some follow-up research on Google, and couldn't find anything covering how abuse is handled within IPFS. While I'm sure there will be a lot of photos of cats and wildlife videos, this solution seems ripe for abuse by the seedier regions of the web. If I'm participating in IPFS, how do I filter for or stop illegal content from being saved and hosted by my node?
The only content saved and hosted from your node is content that you have requested yourself. In that regard, it is no different from BitTorrent. Any content you request is purged from your node over time unless you have explicitly pinned it to keep it around forever. So as long as you stay away from the seedier regions of the web, you will be safe.
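The request-then-purge behavior described above can be sketched in a few lines. This is an illustrative model of the semantics, not the actual IPFS implementation; the `Node` class and its methods are made up for the example:

```python
# Minimal sketch of IPFS-style cache/pin semantics (illustrative only,
# not the real implementation): a node stores whatever it requests,
# and garbage collection purges everything that isn't pinned.
import hashlib

class Node:
    def __init__(self):
        self.blocks = {}    # content hash -> data currently stored locally
        self.pinned = set() # content the user explicitly chose to keep

    def request(self, data: bytes) -> str:
        """Fetching content stores it locally, keyed by its hash."""
        cid = hashlib.sha256(data).hexdigest()
        self.blocks[cid] = data
        return cid

    def pin(self, cid: str):
        """Pinned content survives garbage collection indefinitely."""
        self.pinned.add(cid)

    def gc(self):
        """Purge everything the user hasn't explicitly pinned."""
        self.blocks = {c: d for c, d in self.blocks.items() if c in self.pinned}

node = Node()
cat = node.request(b"cat photo")
spam = node.request(b"seo spam")
node.pin(cat)   # the user keeps only what they chose to keep
node.gc()
print(spam in node.blocks)  # False: unpinned content is purged
```

The point of the model: your node never holds content you didn't request, and over time it holds only content you actively chose to pin.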
Also solves the problem of not wanting to donate your CPU, bandwidth, or storage to things like kidporn. That's a serious issue with things like Freenet, where you have no choice about what you cache/host.
Yes, that's true. Files are only guaranteed to be available indefinitely if someone has decided the file is noteworthy enough to pin and their system is online. Ideally, systems like http://filecoin.io/ will be built on top of IPFS so that those who care enough about their files can pay a little money to incentivize others to host them.
I'm less concerned about illegal content per se than I am about spam. I created Neocities (https://neocities.org) and we get quite a bit of SEO and pharmaceutical spam that we need to remove. There would be no way to prevent that with IPFS.
The one thing I could see this being useful for is storage with a private network. If you control all the nodes, then you could filter what goes into them. But I recognize that the point here is to make everybody run on the same network.
Bitcoin has a similar problem. If transactions were free, it would be very easy for someone to flood the network with garbage transactions and essentially make the blockchain useless for legitimate use. This is prevented by charging a transaction fee (around 7 cents) for transactions that don't meet certain criteria. It's not an ideal solution (because it's fixed in the code and doesn't fluctuate with market demand), but regardless, it's the only way they've found to prevent this sort of spam.
Without a solution like this, IPFS couldn't be used in a decentralized, trustless manner. The fix is probably some sort of monetary restriction, similar to Bitcoin's fees, but I have no idea how you could implement that here.
That said, if a solution is discovered, Neocities will be front row center in implementing it for our sites. I think this is a genius idea; I've been following it for a while, and I love it.
It sounds like pull instead of push, though, so the user is presumably only seeding things they have looked at. A flag/delete/block button that lets them remove and block something from their node might be good enough.
I'd say this one falls into the "sort of" category. In web security we are constantly fighting cross-site request forgery (CSRF) bugs: a bad actor places a GET or POST request in a page you visit, which makes a request to another site you are likely to have visited while authenticated, like Facebook. That request then triggers some action on your behalf without your knowledge, such as making a wall post in your name. Similarly, within the context of this solution, a bad actor could embed links to objects they want you to cache without your knowledge by not rendering those objects visibly in the DOM. Yes, this can happen today, and yes, the technique could be used to frame someone.

In fact, there was supposedly once a court case where the defense team claimed their defendant didn't browse child porn. To show the judge that it's possible to have such content on a machine without ever knowingly visiting it, they had the judge browse the defense team's website, where they loaded up a bunch of images that went into his browser cache but which he never saw because they weren't visible in the DOM. They then asked the forensics investigator to check whether the judge had browsed the particular content, and of course he reported that the judge had.
In the case of this solution, a bad actor could use unsuspecting visitors to a site or users of a rogue WiFi network to load up the particular bad content they want distributed.
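The core of the attack above is that a cache records every resource a page references, with no notion of whether the user ever saw it. A minimal sketch (the `BrowserCache` class and URLs are made up for illustration):

```python
# Sketch of the hidden-load problem: a fetcher caches every resource a
# page references, whether or not it is ever rendered visibly.
class BrowserCache:
    def __init__(self):
        self.cached = []

    def load_page(self, resources):
        # The cache records every fetch; visibility is irrelevant to it.
        for url, visible in resources:
            self.cached.append(url)

page = [
    ("http://example.com/cat.jpg", True),      # rendered, user sees it
    ("http://example.com/planted.jpg", False), # display:none, never seen
]
cache = BrowserCache()
cache.load_page(page)
print("http://example.com/planted.jpg" in cache.cached)  # True
```

A forensic examination of the cache can't distinguish the planted resource from one the user deliberately viewed, which is exactly the framing risk described above, and the same mechanism would let an attacker make an unsuspecting IPFS node seed content it never displayed.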
The block exchange protocol allows for that to be built in: refuse exchanges relating to block X. There's also potential for a web of trust built on top of that. So rather than a centralized flagging system, if enough of the people you've marked as trusted flag something, it's probably not something you want to exchange blocks for.
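The web-of-trust idea can be sketched as a simple threshold check. This is a hypothetical API, not part of IPFS; the function name, peer IDs, and threshold are all assumptions:

```python
# Sketch of web-of-trust flagging (hypothetical, not an IPFS API):
# refuse block exchanges when enough peers *you* trust have flagged a block.

def should_exchange(block_cid, flags, trusted_peers, threshold=2):
    """flags: mapping of peer id -> set of CIDs that peer has flagged."""
    trusted_flags = sum(
        1 for peer in trusted_peers if block_cid in flags.get(peer, set())
    )
    return trusted_flags < threshold

flags = {"alice": {"QmBad"}, "bob": {"QmBad"}, "mallory": {"QmCat"}}
trusted = {"alice", "bob"}  # mallory's flags carry no weight for this user

print(should_exchange("QmBad", flags, trusted))  # False: two trusted flags
print(should_exchange("QmCat", flags, trusted))  # True: only an untrusted flag
```

Because each user picks their own trusted set and threshold, no central authority decides what gets refused.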
All content is self hosted until another party is interested enough to pull it locally. While spam will still be something that needs to be dealt with, spammers will be using their own disk space and bandwidth, and not that of other users.
As dylankpowers says here, by default you only host content on your node that you choose to. We will have a feature later on that allows the network to adapt to which objects are most popular and rehost them automatically, but this process will respect a blacklist that we will curate, and it is, of course, optional per user.
It might just be my opinion, but I think this is a problem with the legal system rather than with the protocol. NTFS doesn't have mechanisms for dealing with "abuse"; why should IPFS?
You typically don't open NTFS to the world, and if you do, don't be surprised if law enforcement comes knocking. The fact is, there's a lot of child porn on the web, and if it's found on a server you host, there's a good chance you will be taken through "legal hell" in many countries as a result.