
Concern about fake news isn't the right place to focus. News-type content is low-volume, has a structured sharing model (i.e. people usually share the articles), is long-form, and requires fairly high precision. That's where humans excel and where automated language models are weakest.

On the other hand, comments on sites like Facebook/Reddit/4chan are perfect for language model bots. The content is high-volume, semi-anonymous, usually shared without an explicit network, and can be extremely low precision.

So if you had a bot that could get onto any discussion network used for planning protests, for example, and spam it with destructive and divisive fake comments, it could actually make organizing pretty hard. And while this places some demand on the language model, many comments are just a few sentences long. The content doesn't need to be very precise; it just needs to be convincing enough to be distracting.

I also think the worst abusers are likely to be current power players like Google/Facebook/the Chinese government rather than small actors.



