> Why would it be undesirable for AI to have the capabilities you described

Precisely because of the ethical questions. I want problem-solving AIs to work on all the problems humanity needs solved, starting with "humans die" and then going on to lower-priority problems. I don't want AI to have goals and values of its own, which might potentially diverge from those of humans; I want it to serve humanity's goals and values. We can build a system capable of human-level problem-solving and well beyond, without actually creating a sentient being.

If, and that's a big if, we want to create an artificial sentient being, that's a separate problem with its own set of ethical concerns; it seems both more dangerous and far less useful than human-level problem-solving. (In the event we did create such a being, it would absolutely need to have the same rights as humans or any other sentient species; I'd just rather avoid having to define and draw such a line, and get distracted by the fight over that, when it'd be far more useful to have machines capable of solving problems.)

> but desirable for a digitized brain to have them?

I hope the value of preserving human life is self-evident. Humans already have those qualities and many more; I want to see human life last forever, with all its qualities.

Digitized humans' goals would potentially diverge from bio-humans' too. For that matter, the goals of humanity in general are pretty divergent. Divergence is good, not bad.

Not all divergence is good. Optimizing for [the elimination of all entities that have an idea of good] would be bad, and would be a very different kind of divergence.