
Naw, it's simple. We're talking about an AI achieving human abilities? Well, we can protect against dangerous AIs just as well as we protect against dangerous humans...

Oh. Oh dear.




Remember that AI works with electrons, while we are made of atoms. We should focus on the points where electrons control atoms, and reduce those points. Of particular concern: with the right prompts, an AI may be a very strong investor. AI could also figure out how to use any other software, which can be used to gain access to any marketplace, including the dark ones. Which means AI can use money (electrons) to pay others to modify the world (atoms).

Of course, there is already a problem, as you point out. Humans shouldn't have access to these markets either!

But yeah, to specifically prevent electron-on-atom violence, we need to limit AI's physical degrees of freedom by limiting marketplaces. National/global security, not personal morality, should guide these new regulations.

We need to end all drive-by-wire automobiles and electronic locks. Too many services are habituated to acting on electronic signals without human confirmation - particularly the police. There needs to be an unbroken verbal chain between the person who saw an event and the person doing the law enforcement. Breaks in the human chain should be treated very seriously - as firing offenses, at least. There are many other similar changes we're going to need to make.

Some folks aren't gonna like this. Regulations are inherently evil, they say. Maybe the mental model should be more like: we're the big bad gorilla in the cage. But now there's a tiger in the cage. Regulation restrains the tiger. Also, some folks aren't gonna like it no matter what change you propose. The fact of not liking it doesn't mean we don't need it, and it doesn't mean it won't get done. We have to trust that our leaders don't want to die.

And besides, the world will adapt. It always does. AI isn't optional, there's no putting the genie back in the bottle - and personally I don't want to. But I also don't want to be stupid about the stakes. Getting our whole species killed for lack of foresight would be deeply, deeply embarrassing.


I really like your take, but I do not believe it is realistic to expect the response to advanced technology to be: use even less technology. In the past, new tech has led to integration of new tech. I believe that is the inevitable outcome of AI, and especially AGI once that's a thing.

The tool is too attractive not to use. The tool is too fun not to use. The tool is too dangerous to let out of the box, but that is exactly why we'll do it.

We're curious little monkeys, after all. "What do you think will happen?" absolutely is a survival strategy for our species. The problem is when we encounter something so much more advanced than us, even if that advantage is just access to multiple systems of our own creation.

To summarize: I think you make a good point, but I think we're fucked eventually anyways.

I can't wait for the inevitable "does my AI have the right to freedom" case in the Supreme Court when I'm in my 90s.


No need to be pessimistic. Humans are quite powerful; we have billions of years of brutal iteration in us. I think we can handle AI, even AGI, if we exercise even a modicum of care. It will probably take some major calamity to convince people to take precautions; I just hope it's not that bad. It probably won't be world-ending, so cheer up!


> I think we can handle AI, even AGI, if we exercise even a modicum of care.

HN itself has been spammed relentlessly by people hooking AI up to everything they can think of in an attempt to get a worthless reward (karma)

now imagine there's money, power or territory up for grabs instead

we are completely fucked


> There needs to be an unbroken verbal chain between the person who saw an event and the person doing the law enforcement

Leaving everything else aside, how would this look in practice? I think these conversations would need to be in person, since voice can already be faked. Would I need to run to the police station when I need help?


How would it look? If I am a state security person with a gun, and I'm asked to invade someone's home, I would expect to get a face-to-face meeting with the person who really believes this is necessary, with the evidence laid out.

If that is too much trouble to ask, then is justice even possible?


Someone is breaking into my house. I'm hiding in my closet from the intruders. How do I get the police to come to my house and help me?

Another scenario: I'm a police officer and I'm on patrol. My dispatcher had someone come to the police station to tell them that they think their neighbor is experiencing a home invasion. Does the dispatcher just page me and I now drive back to the police station to verify and then drive back out to the home invasion?


>Someone is breaking into my house. I'm hiding in my closet from the intruders. How do I get the police to come to my house and help me?

Lord, give me patience.

Call 911. The dispatcher broadcasts the problem over the radio, and a LEO responds. The dispatcher is a relay that verifies probable cause. The chain of human contact runs unbroken from you, to the 911 dispatcher, to the LEO taking the call.

Compare this to a machine that spits out warrants, which are distributed to officers, who never even speak to anyone about the case, do not know the subject of the warrant, and simply execute the warrants.


From my above comment: > I think these conversations would need to be in person, since voice can already be faked.

We are also probably days away from video being trivial to fake.


How do you know it's a person answering the 911 call and not an AI?



