Sure, but we can probably work on that a little more rather than throwing in the towel and saying 'toxicity is when text matches a regexp'.



Well, we probably could throw in the towel. The definition of the word is ever-changing and context-dependent, AND subjective to the receiver. That doesn't sound like something you can train a model for.


If you had access to the reactions of someone reading the content, you could _possibly_ train an agent to spot textual patterns likely to cause the reader to have negative reactions.
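
A minimal sketch of that idea, assuming you've already logged (text, reaction) pairs from readers; the binary labels and the toy strings below are invented, and the rest is off-the-shelf scikit-learn:

  # Hypothetical sketch: learn which textual patterns tend to
  # precede negative reader reactions, from logged (text, reaction) pairs.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  texts = ["you are brilliant", "you are an idiot"]  # logged content (toy data)
  reactions = [0, 1]  # 1 = reader reacted negatively (assumed label source)

  model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression())
  model.fit(texts, reactions)

  # Estimated probability that a new message provokes a negative reaction:
  print(model.predict_proba(["what an idiot"])[:, 1])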

You could do a similar thing with a robot DJ, feeding it a video stream of the dancefloor and training it to keep that dancefloor grooving.
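
Framed as a bandit problem, that might look something like the sketch below; crowd_energy() is a stand-in for whatever reward signal you'd actually extract from the dancefloor feed, and the track list is made up:

  # Hypothetical sketch: epsilon-greedy DJ that learns which tracks
  # keep the floor moving, given some crowd_energy() reward signal.
  import random

  tracks = ["techno", "disco", "ambient"]
  value = {t: 0.0 for t in tracks}  # running mean reward per track
  plays = {t: 0 for t in tracks}

  def crowd_energy(track):
      # Stand-in for a measurement taken from the dancefloor feed.
      return random.random()

  for _ in range(1000):
      if random.random() < 0.1:                # explore: try a random track
          track = random.choice(tracks)
      else:                                    # exploit: play the best so far
          track = max(tracks, key=value.get)
      reward = crowd_energy(track)
      plays[track] += 1
      value[track] += (reward - value[track]) / plays[track]  # incremental mean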


But think about how this would be trained. As with all authoritarian anti-offence rhetoric (i.e. not person-to-person politeness, but politeness enforcement), the response should be: who gets to decide?

Some concepts become less offensive over time; some more offensive. Twenty years ago, gay marriage was considered offensive in many parts of the world. Should that have been codified into our communication tools? Offence is in no way objective, and this will never change.

There is genuine, vast utility in advocating for this sort of thing, but only if you want to be the person with the power to decide what everyone else is allowed to talk about.



