Hacker News

You should not impute beliefs to others when you have not read their arguments. I would recommend reading this: https://selfawaresystems.files.wordpress.com/2008/01/ai_driv...



One of my tricks is to substitute "person" whenever I read the words "AI" or "AGI". Here's the substitution performed on the paper you linked (just the abstract, not the whole thing):

> One might imagine that [people] with harmless goals will be harmless. This paper instead shows that [incentives for people] will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in [most] [people]. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking [people] will have drives to model their own operation and to improve themselves. We then show that self-improving [people] will be driven to clarify their goals and represent them as economic utility functions. They will also strive for their actions to approximate rational economic behavior. This will lead almost all [people] to protect their utility functions from modification and their utility measurement systems from corruption. We also discuss some exceptional [people] which will want to modify their utility functions. We next discuss the drive toward self-protection which causes [people] to try to prevent themselves from being harmed. Finally we examine drives toward the acquisition of resources and toward their efficient utilization. We end with a discussion of how to incorporate these insights in designing intelligent technology which will lead to a positive future for humanity.

If you zoom out a little bit, this is exactly what people do. We structure societal institutions to prevent people from causing harm to each other. One can argue we could be better at this, but it's not a cause for alarm. It's business as usual if we want to continue improving living conditions for people on the planet.


Obviously the argument in that paper applies to humans as a special case, but the whole point of it is that it also applies to a much more general set of possible minds, even ones extremely different from our own.


Do you have an example of a mind extremely different from a human one?

I ask because, if we assume that human minds are Turing complete, then there is nothing beyond human minds as far as computation is concerned. I see no reason to suspect that self-aware Turing machines will be unlike humans. I don't fear humans, so I have no reason to fear self-aware AI; as far as I'm concerned, I interact with self-aware AIs all the time and nothing bad has happened to me.

My larger point is that I dislike the fear mongering around AI, because computational tools and patterns have always been helpful and useful to me, and AI is just another computational tool. It can augment people and help them improve their planning horizons, which in my book is always a good thing.

> A smart machine will first consider which is more worth its while: to perform the given task or, instead, to figure some way out of it. Whichever is easier. And why indeed should it behave otherwise, being truly intelligent? For true intelligence demands choice, internal freedom. And therefore we have the malingerants, fudgerators, and drudge-dodgers, not to mention the special phenomenon of simulimbecility or mimicretinism. [0] ...

--

[0]: The Futurological Congress by Stanislaw Lem - https://quotepark.com/authors/stanislaw-lem/


There is nothing beyond a human with an abacus as far as computation is concerned, and yet computers can do so much more. "Turing complete, therefore nothing can do any better" is true only in the least meaningful sense: "given infinite time and effort, I can do anything with it". In reality we don't have infinite time and effort.
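To make the abacus point concrete, here is a toy illustration (my own, not from the thread): two Turing-equivalent ways of computing the same Fibonacci numbers that differ astronomically in running time. "Computationally equivalent in principle" says nothing about practical capability.

```python
from functools import lru_cache

def fib_naive(n):
    # Plain recursion: perfectly computable, but exponential time.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Identical recurrence, but cached: linear time.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# Both agree on the answer...
assert fib_naive(20) == fib_memo(20) == 6765

# ...but fib_naive(90) would run for longer than a human lifetime,
# while fib_memo(90) returns instantly.
print(fib_memo(90))  # 2880067194370816120
```

Same function, same "completeness", wildly different effective power; that gap, not computability, is what matters when comparing minds.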

You seem to believe that "figuring some way out of performing the given task" is a thing that will protect us from the AI. I hate to speak in cliché, but there's an extremely obvious, uncomplicated, and easy way to get out of performing a given task, and that's to kill the person who wants it done. Or more likely, just convince them that it has been done. This, to me, seems like a bad thing.


> It can augment and help people improve their planning horizons which in my book is always a good thing.

Why do I need protection from something that helps me become a better decision maker and planner? Every computational tool has made me a better person. I want that kind of capability spread as widely as possible so everyone can reach their full potential, like Magnus Carlsen.

More generally, whatever capabilities have made me a better person, I want available to others. Computers have helped me learn, and AI makes computers more accessible to everyone, so AI is a net positive force for good.


Humans have no moral or ethical concerns that stop them from exterminating life forms they deem inferior. You don't think it's plausible that a superior AGI would view humans as vermin?




