
This is basically what killfiles were in the days of Usenet. They worked well for the demographic that was on Usenet at the time (tech-savvy and dedicated).

I think the problem is that by and large, users are not willing to do this work themselves. When faced with a social platform that has a lot of jackasses on it, rather than individually curate their experience to remove the jackasses, most of them just leave the platform and find another one where this work is done for them already.

And this is why social networks have abuse teams. Left to their own devices, the platforms would rather save themselves the expense, but users have shown that they will leave a platform that doesn't moderate, and so all social platforms are eventually forced to.




If I'm hosting an IPFS node and I'm accidentally hosting some content I'd rather not host, I should be able to remove that content from my node and let other nodes know: 'hey, this stuff seems illegal/unethical/unwanted'. Other nodes could then configure their node to automatically listen to mine and remove the tagged content, with parameters like 'at least x people tagged this content' and 'of those people, y must have a trust level of at least z', where the trust level is calculated from how many others listen to that specific node, plus blacklist/whitelist behaviour for specific nodes. That should do the trick, but maybe I'm missing something.
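
To make the thresholds concrete, here's a minimal sketch of that rule in Python. Everything here (the Node class, the trust calculation, the parameter names) is my own illustration rather than any existing IPFS API: remove content once at least x non-blacklisted taggers reported it and at least y of them have trust >= z.

```python
from dataclasses import dataclass, field

# Hypothetical policy parameters, matching the x/y/z knobs described above.
MIN_TAGS = 5           # 'at least x amount of people tagged this content'
MIN_TRUSTED = 2        # 'of those people, y amount ...'
TRUST_THRESHOLD = 0.8  # '... should have at least a trust level of z'

@dataclass
class Node:
    node_id: str
    listeners: set = field(default_factory=set)  # nodes that listen to this one

    def trust(self, total_nodes: int) -> float:
        # Trust derived from how many other nodes listen to this node.
        return len(self.listeners) / max(total_nodes - 1, 1)

def should_remove(taggers: list[Node], total_nodes: int,
                  blacklist: set[str], whitelist: set[str]) -> bool:
    """Apply blacklist/whitelist first, then the x/y/z thresholds."""
    considered = [n for n in taggers
                  if n.node_id not in blacklist
                  and (not whitelist or n.node_id in whitelist)]
    trusted = [n for n in considered if n.trust(total_nodes) >= TRUST_THRESHOLD]
    return len(considered) >= MIN_TAGS and len(trusted) >= MIN_TRUSTED
```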


Sure, it works if your starting point is "If I'm hosting an IPFS node." There's a level of baseline tech-savvy that's implied by even knowing what that is.

Understand that most of the general population operates on the level of "Somebody said something on the Internet that offends me; how could this have happened?" And that the maximum amount of effort they're willing to put in to rectify this situation is clicking a button. The realistic amount is that they wish it never happened in the first place. That's the level of user-friendliness needed to run a mass-market consumer service.


You bring up a good point; I was only looking at it from a technical perspective. The method I describe handles moderation 'after' content is already available. It seems to me there should also be a system in place for moderating content 'before' it's available, to handle the cases you mentioned.

That being said, I don't think it's impossible to have such a system in a decentralized way; there are incentive structures you could build to handle this.


This is a naïve and dismissive perspective on the issue.

Moderation is a fundamental requirement of any social system, and it's one of the two things that Web3 can't yet address meaningfully -- the other being addressability.


A type of shared killfile might work, kind of like how some people or groups curate the lists of ad domains used by ad blockers.
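
For example, a client could subscribe to one or more curated blocklists and hide anything on them, roughly like ad-block filter subscriptions. The URLs and the one-entry-per-line format below are placeholders, not a real service:

```python
import urllib.request

SUBSCRIBED_LISTS = [
    "https://example.com/killfile-spam.txt",   # hypothetical curated lists,
    "https://example.com/killfile-abuse.txt",  # one blocked author ID per line
]

def load_blocklist(urls=SUBSCRIBED_LISTS) -> set[str]:
    blocked: set[str] = set()
    for url in urls:
        with urllib.request.urlopen(url) as resp:
            for line in resp.read().decode().splitlines():
                line = line.strip()
                if line and not line.startswith("#"):  # skip comments/blanks
                    blocked.add(line)
    return blocked

def visible(posts: list[dict], blocked: set[str]) -> list[dict]:
    # Hide anything whose author appears on any subscribed list.
    return [p for p in posts if p["author"] not in blocked]
```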



