
Without realizing it you're voicing the concerns of the AI doomers.

We may build something that far exceeds us in capability, but without us understanding it, or it understanding us. That is the alignment problem.




I do wonder whether such a thing can actually exist in the realm of intelligence the way it does in the realm of, say, flight. It's not clear that's the case.



