This may well be true, but what analysis has he done to support it? The only detailed analysis I have seen from anyone is Moravec's. I have been through his analysis and I can't find any flaw in it.



Where can I find this analysis? It seems implausible that there is no reasonable criticism to be made of it.



He is in the "all we need is fast enough computers" camp. That claim is far from obvious: we can't even simulate the simplest organisms today, despite having extraordinary computing power relative to the capacity of those organisms.


This is true, but the evidence so far with AI is that as soon as sufficient computational power became available, tasks that were thought to require human intelligence were solved by computers.

I don't think anyone serious thinks that sufficient computational power alone will result in AGI, but we know that we aren't going to have it until we reach human-level computational power.

The interesting question is, once we have human-level computational power, how long it will take to develop AGI. The evidence from problems that require more limited intelligence (chess, say) is that they are solved soon after the required computational resources become available.
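To make "human-level computational power" concrete, here is a rough back-of-the-envelope sketch (in Python) of the kind of scaling estimate Moravec makes: scale up from a brain region we can compare directly to machine vision, then ask when exponentially improving hardware crosses the result. Every number below is an illustrative assumption of mine, not a figure from his analysis.

    import math

    # Rough sketch in the spirit of a Moravec-style scaling argument (not his
    # actual numbers): estimate the whole brain's processing rate by scaling up
    # from the retina, then ask when improving hardware crosses that estimate.

    RETINA_EQUIVALENT_MIPS = 1_000    # assumed: machine-vision workload matching the retina
    BRAIN_TO_RETINA_RATIO = 100_000   # assumed: rough brain-to-retina processing ratio
    brain_mips = RETINA_EQUIVALENT_MIPS * BRAIN_TO_RETINA_RATIO  # ~1e8 MIPS

    CURRENT_MIPS = 10_000             # assumed: a machine available today
    DOUBLING_TIME_YEARS = 1.5         # assumed: cost/performance doubling time

    years_to_parity = DOUBLING_TIME_YEARS * math.log2(brain_mips / CURRENT_MIPS)
    print(f"Estimated brain capacity: ~{brain_mips:.0e} MIPS")
    print(f"Years to a comparable machine at constant cost: ~{years_to_parity:.0f}")

The exact figures matter less than the structure of the argument: a linear scale-up for the estimate, an exponential curve for the hardware, so even large errors in the estimate shift the crossover date by only a decade or two.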



