
Their stackable moderation system might actually allow one to implement this relatively easily.

Add a moderation channel per country and let clients apply it depending on location/settings. It's naturally not perfect, but since one can just travel to another country and get its (potentially less restricted) view, or, even simpler, use a VPN, it's about as good as basically any other such censorship measure.
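A minimal sketch of what that could look like client-side, in TypeScript. All the names here (the labeler DIDs, label values, and the post/label shapes) are invented for illustration and are not the real atproto API:

    // Hypothetical per-country moderation channels: one labeler per
    // jurisdiction, applied by the client based on the user's
    // location/settings. Identifiers below are made up, not real DIDs.
    type Label = { src: string; val: string };
    type Post = { uri: string; text: string; labels: Label[] };

    const COUNTRY_LABELERS: Record<string, string> = {
      DE: "did:example:labeler-de",
      JP: "did:example:labeler-jp",
    };

    // Label values each jurisdiction's channel hides (also invented).
    const HIDDEN_VALUES: Record<string, Set<string>> = {
      DE: new Set(["banned-symbols"]),
      JP: new Set<string>(),
    };

    function visibleIn(post: Post, country: string): boolean {
      const labeler = COUNTRY_LABELERS[country];
      if (!labeler) return true; // no channel configured for this country
      const hidden = HIDDEN_VALUES[country] ?? new Set<string>();
      return !post.labels.some((l) => l.src === labeler && hidden.has(l.val));
    }

    // The filter runs in the client, keyed on location or a user setting:
    function renderFeed(feed: Post[], userCountry: string): Post[] {
      return feed.filter((p) => visibleIn(p, userCountry));
    }

The key property is that enforcement lives in the client: the same firehose serves every country, and only the presentation changes.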




This still wouldn’t work, though. If someone uploads CSAM and it’s distributed to multiple users in a jurisdiction where it’s banned (which is virtually all of them), but only hidden by the moderation filters, then Bluesky would still be in a lot of legal pain for distributing said material.

Also, filters which are optional on the user’s part can’t really be counted as moderation.


From the root comment:

> There are also "infrastructure takedowns" for illegal content and network abuse, which we execute at the services layer (ie the relay).

My understanding here is that any app has the ability to shut down entire accounts from providing content to that app. And my expectation is that states will have laws saying "operators of an app must ensure that they don't serve illegal material" - at least to the extent of CSAM. So you have state motivation for app operators to moderate illegal content on their app, and you have app-level mechanisms for shutting down content.

And while the content can still be hosted on whatever relay was hosting it to start with (if that isn't the one that shut it down), I would be surprised if sharing that content to another relay didn't give away a ton of information that a person doing illegal things wouldn't want published. Put more simply: if I have to shut down some CSAM coming from your relay, I can almost certainly also turn that relay data over to the authorities. Meaning you have a pretty strong incentive not to share your CSAM content with any law-abiding apps.
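To make the layering concrete, here's a rough TypeScript sketch of the difference between a label (advisory, client-enforced) and a services-layer takedown (enforced before content ever reaches an app). The types and names are hypothetical, not the real relay implementation:

    // Hypothetical relay-side takedown list: events from these accounts
    // are dropped at the services layer, so no downstream app ever
    // receives them - unlike a label, which a client may choose to ignore.
    const takedowns = new Set<string>(); // DIDs taken down for illegal content

    type RepoEvent = { did: string /* commit payload elided */ };

    function relayAdmit(evt: RepoEvent): boolean {
      return !takedowns.has(evt.did);
    }

    function broadcast(events: RepoEvent[]): RepoEvent[] {
      // Only admitted events are re-emitted to subscribed apps.
      return events.filter(relayAdmit);
    }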


So, to cut off all the sugarcoating: the problem isn't criminals knowingly doing criminal things. It's Japanese users disproportionately dominating Twitter-style social media and absolutely hammering the system with a bunch of risqué selfies that don't look adult to Europeans, and anime-style art that involves no child in its making. Neither qualifies as CSAM under local Japanese law, so the potential legal consequences elsewhere aren't legible to the offending Japanese users in a way that would change majority behavior. To them it's simply legal, like drinking at the legal age.

This Japanese flood casually nears or exceeds 50% of content by volume, and it is a specifically Japanese phenomenon; it does not generalize to other Asian cultures or Sinosphere languages[1] - all the others are easily at 1% or less, or at least proportionate relative to English. It also isn't happening on Facebook, but it is on Mastodon.

To be even more frank, everyone should just set up a Japanese containment server with an isolated IdP, get Yahoo! Japan or NTT Corp to fund it, monetize it via phone contracts or something, and that could solve a huge bulk of the problems with microblogging moderation. Then everyone could go back to weeding out the few actual pedophiles, smugglers, and casino spammers, and occasionally reinstating not-too-radical political activists.

Should "outside" users be eligible for signup with such isolate system is a separate problem, but that will be foreign crime anyway and should not bother the main branch operators that cater to the rest of the world that CAN unite.

1: https://bsky.app/profile/jaz.bsky.social/post/3klwzzdbvi22t

2: worse version of [1]: https://images.ctfassets.net/vfkpgemp7ek3/5kYcWXcFUYSLBAEkrS...


It seems like Bluesky's architecture is ideal for this case. Label it, have apps not show it by default, and let people opt in to seeing it.
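A tiny sketch of that "hidden by default, opt in to see" model, with invented preference names rather than Bluesky's actual preference schema:

    // Hypothetical default behaviors per label value; user opt-ins override.
    type LabelBehavior = "hide" | "warn" | "show";

    const DEFAULTS: Record<string, LabelBehavior> = {
      "adult-content": "hide",
      "graphic-media": "warn",
    };

    function behaviorFor(
      labelVal: string,
      userPrefs: Partial<Record<string, LabelBehavior>>,
    ): LabelBehavior {
      // The user's explicit choice wins; otherwise the app default applies.
      return userPrefs[labelVal] ?? DEFAULTS[labelVal] ?? "show";
    }

    behaviorFor("adult-content", { "adult-content": "show" }); // -> "show"
    behaviorFor("adult-content", {});                          // -> "hide"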


AIUI the Bluesky team includes a lot of ex-Twitter people who fought this problem for years, so it's reasonable to think this architecture is about as good as it gets without departing from their mission (of making a locked-open global microblogging social network).


Wouldn't Bluesky be able to have an admin rule that hides all content tagged with labels that are illegal in Bluesky's own jurisdiction?


The problem is that hiding content isn't enough. It's illegal to even have the content.



