
Brain emulations don't have to be smarter than humans to be superhuman; they can simply be more numerous and/or faster.

The time frame for proper AI built without emulation is actually shorter, cf. Shane Legg's peak probability estimate of 18 years. This is because the brain's mechanisms of learning and processing are well on the way to being understood, and they will lead to copycat software using similar conventional "narrow AI" algorithms. These algorithms also have greater potential than neural snapshots to be scaled up quickly.

Unfortunately none of the above helps towards making "friendly" AI (that is, avoiding creating a superintelligence whose value system is inimical to ours). This ought to be a serious worry.




In order for worrying about Friendliness to matter, you have to be first (otherwise the work is pointless, since what matters is whether the first mover's AI is Friendly). So you have to be relatively confident that you're going to be first before spending much time researching Friendliness. I only know of one group that appears to be confident in that way (SIAI), and I have no reason to think their confidence is less misplaced than that of all the other people who've been convinced they'd figured it out.


From the outside, SIAI seems torn along a Goertzel-Yudkowsky axis, with BG saying "whee, this is fun, let's build it", and EY too busy saving the world to panic. But the man himself reads Hacker News, so I'll shut up.



