Hey, I'm a creator of the tool written in the article!
In our model, we look at hundreds of features per account. Polarizing tweet content is not the only thing the model looks at. If an account is tweeting original content (OC), that's a clear indication the account is human.
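If it helps to picture the approach, here's a minimal sketch of feature-based classification in the scikit-learn style. The feature names, values, and model choice are all hypothetical stand-ins for illustration, not our actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features:
# [tweets_per_day, follower_ratio, account_age_days, fraction_original_content]
X_train = np.array([
    [480.0, 0.01,   12, 0.02],  # high volume, no original content
    [  6.0, 1.20, 2400, 0.85],  # normal volume, mostly original content
    [220.0, 0.05,   30, 0.00],
    [  3.0, 0.90, 1800, 0.60],
])
y_train = np.array([1, 0, 1, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score a new account: predict_proba returns [[P(human), P(bot)]]
account = np.array([[15.0, 0.8, 900, 0.7]])
print(clf.predict_proba(account))
```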
Twitter's policy allows bots on their platform, and most are harmless! There are bots that tweet out the upcoming songs on a radio station. Unfortunately, a CAPTCHA would eliminate all bots, not just the ones that spread fake news or take over compromised accounts.
It seems like there's an obvious solution to the problem of how to crack down on the types of bots your AI tries to detect without removing legitimate bots:
Give users the ability to designate their account as a self-reported bot. Accounts designated this way would be identified as a bot in the Twitter UI, and would be exempt from CAPTCHAs.
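For what it's worth, the exemption logic could be as simple as this sketch. The Account type, the bot_probability scorer, and the threshold are all made up for illustration:

```python
from dataclasses import dataclass

BOT_THRESHOLD = 0.9  # hypothetical cutoff for challenging an account

@dataclass
class Account:
    handle: str
    self_reported_bot: bool

def bot_probability(account: Account) -> float:
    """Stand-in for a real model's per-account bot score."""
    return 0.95  # placeholder value for illustration

def should_challenge(account: Account) -> bool:
    # Self-designated bots get a visible label in the UI instead of CAPTCHAs.
    if account.self_reported_bot:
        return False
    return bot_probability(account) > BOT_THRESHOLD

print(should_challenge(Account("radio_playlist_bot", True)))   # False
print(should_challenge(Account("suspicious_account", False)))  # True
```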