How is your concern different than what they are calling political security (~1/3 of the paper)?

>Political security. The use of AI to automate tasks involved in surveillance (e.g. analysing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation. We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data. These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates.



I guess it's the difference between what they're calling "authoritarian states" — the mustache-twirling dictators of the 20th century targeting their political opponents (a fixation in papers like this) — and the _new_ form of corporate-driven mass surveillance, and the resulting fascism, that is emerging.

There is no need for an evil central power targeting threats to itself. Instead, we have seen/are seeing the rise of pervasive blanket surveillance and classification of _all_ people in a society, even non-political actors, by many different for-profit companies. This is combined with a new type of society that is entirely dependent on corporate services for almost all functions of life (food, travel, healthcare, communications, work, housing, etc.). It's a recipe for disaster.

This kind of state is a _new thing_^tm and we have to be aware of it. We're the ones who are going to be oppressed by it, not just the people living under the Saddam Husseins of the world.


You are right on. Confining this worry to "authoritarian states" is beyond naive.

Could an A.I. program like you are discussing have prevented the school shooting in Florida by alerting the police of the shooter's state of mind and intentions?

If an A.I. could save kids, is there any way we would not be demanding that this A.I. protector be installed on every computer and mobile device today? It's easy to see how we would voluntarily give up power to it.


An AI might have alerted the police to the shooter's intentions, if it had been monitoring the right things. However, a human did alert the FBI, and they failed to act.

With correct follow-through, AI could be a useful tool for narrowing down who is a credible threat, but I agree there's a huge risk in relying on it too heavily and punishing people for pre-crime.


I haven't read the paper, but I think:

>undermine the ability of democracies to sustain truthful public debates.

is coded language for exactly what you're talking about, in the same way that "dictators" is coded language for geopolitical enemies.



