
Taking a step back, we rely on countless systems every day where some reasonable proof-of-personhood is critical for the application, yet our countermeasures are best-effort in-house fraud prevention relying on security-by-obscurity. I suspect virtually all systems of today are ripe for exploitation in a rapidly evolving cat-and-mouse game. We’re in the early stages of a dirt cheap, sophisticated AI content generation revolution. Yet, our rhetoric suggests that these are minor, isolated issues with specific companies.

I’m wondering if it’s time to properly categorize these attacks as a separate and omnipresent danger to computer systems that process and distribute human input, and to take them seriously. (Is there already a term for this? Sybil attacks?) Are security researchers and academia largely overlooking an elephant in the room? Maybe I live under a rock, but I haven’t seen any interesting movement in analyzing, much less mitigating, these threats systematically.

Specifically, I fear the huge boost in human-imitation AI will shift the scales in favor of gray/black actors, due to further reduced general trust levels (as if they weren’t already low). If fake reviews cost $5 today but $0.01 tomorrow, there’s simply no way our dams will hold against that pressure. We really need some answers on how to navigate the upcoming sea of bullshit, or we’ll regress quickly into a world that’s disorienting at best for good actors and offers plenty of openings for exploitation by bad actors.




You're right. It is an issue, and a lot of players and researchers are already working on it in an area called "Trust & Safety". However, these are problems that pertain to platforms rather than to other tech enterprises, and as such they are mostly shaped by those companies' interests and strategies. E.g. it can be argued that misleading advertising is still good business for Google Ads or Meta. See the discussion here on HN along the lines of "the optimal amount of fraud is non-zero" to get a better idea of the sort of thing I'm talking about.

Moreover, aside from being problems of the closed platforms we chose as the "winners" of the Internet most people use, it looks like the companies don't want any help in solving them, or won't make it any easier. A lot of people have done research on fake Instagram accounts, for instance. It has now become almost impossible to get API access to follower/following ratios, or even to scrape them, which is a key metric in that research. Anyway, I digress, but yes, it's an issue, it's properly an issue of platforms that take human input, and it's one they don't want our help with.




