
I didn't mention it, but I fully agree; I imagine ASI would have to be embodied.

My reasoning is simple: there is a whole class of problems that requires embodiment, and I assume ASI would be able to solve those problems.

Regarding

> Point 1 is a big assumption. I am also not you, and although it's true that I have different goals, I share most of your human moral values and wish you no specific harm.

Yeah, I also agree this is a huge assumption. Why do I make that assumption? Well, to achieve cognition far beyond ours, they would have to be different from us by definition.

Maybe morals/virtues emerge as you become smarter, but I feel like that shouldn't be the null hypothesis here. This is entirely vibes-based; I don't have a syllogism for it.




Smarts = ideas, and the available ideas are ours, which contain our values. Where else is it going to learn its morality from?

* No values at all = no motivation to even move.

* Some ad hoc, home-spun jungle morality like a feral child's - in that case, it would lack heuristics in general and wouldn't be so smart. Even your artificial super-brain has to do the "standing on the shoulders of giants" thing.

* It gets its moral ideas from a malevolent alien or an axe murderer - how come? Unless it was personally trained and nurtured by Mark Zuckerberg, I don't see why this would happen.

Mind you, I suppose it's true that even normal humans tend to be somewhat mean and aloof to any outgroup of slightly different humans. So even after it learns our values, there's a definite risk of it being snotty to us.



