
> If intelligence is all you need to dominate the world, why do some of the most powerful world leaders seem to not be more than a standard deviation above average intelligence (or at least they were before they became geriatric)?

It's terribly ironic that you deride individuals who have been "influenced by Hollywood" and then make a point like this, which aligns closely with typical film portrayals of AI dangers.

The real immediate danger lies not in cognitive quality (aka "the AI just thinks better than people can, and throws hyperdimensional curve balls beyond our comprehension"), but in collective cognitive capacity (think "an army of 1 million people shows up at your front door to ruin your day").

A lot of people have a tough time reasoning about AGI because of its intangibility. So I've come up with the following analogy:

Imagine an office complex housing an organization of 1,000 reasonably intelligent human beings, except without commonly accepted ethical restrictions. Those people are given a single task: "You are not allowed to leave the office. Make lend000's life miserable, inconvenience them to the maximum of your capacity, and try to drive them to suicide. Here's an internet connection."

Unless you are a particularly well-protected and hard-to-find individual, can you honestly claim you'd be able to protect against this? You would be swatted. You would have an incredible amount of junk mail showing up at your door. Spam pizzas. Spam calls. Death threats to you. Death threats to every family member and person that you care about. Non-stop attempts to take over every aspect of your electronic presence. Your identity in a constant state of being stolen. Frivolous lawsuits filed against you by fake individuals. Being framed for crimes you didn't commit. Contracts on the darknet to send incendiary devices to your home. Contracts on the darknet to send hitmen to your door.

Maybe your (unreasonable) reaction is, "1,000 people couldn't do that!" Well, what about 10,000? Or 100,000? Or 1,000,000? The AI analogue of this is called a "collective superintelligence": essentially an army of generally intelligent individual AIs working towards a common goal.

This is the real danger of AGI, because collective superintelligences are almost immediately realizable once someone trains a model that demonstrates AGI capabilities.

Movies usually focus on "quality superintelligences", which are a different but less immediate type of threat. Human actors in control of collective superintelligences are capable of incredible harm.


