
From my perspective, there are essentially two directions from which we can reach AGI.

One is that we may gain sufficient understanding of how minds work and become able to implement one. IMHO it's a reasonable assumption that we already have enough hardware to supply the computing power for a mind with human-equivalent intelligence, if only we knew how that mind software should be structured - but we simply don't know how minds work, how they should work, or how to build them. There's a lot of work being done there and some knowledge is being gained, but it seems that incremental advances won't be sufficient; progress will come only from a major breakthrough, which we can't predict or expect any time soon - or possibly ever; there certainly are people arguing that.

The second direction, on the other hand, does not presume a theoretical understanding of how intelligent minds truly work at a high level, but relies on brute force and/or simulation of low-level constructs in human brains, which we can understand and implement without a theoretical breakthrough. This approach requires immense computational power far beyond our current capability, so it's completely unrealistic to attempt in the near future - but twenty years of Moore's law have brought us twenty years closer to a brute-force solution to AGI. If we look at the estimates made in 2000 or earlier (I seem to recall reading brain-brute-force estimates from the 1980s but I can't find them now. Perhaps Kurzweil was writing something like this already back then?) about the expected requirements for computing power and the expected rate of progress, then we're pretty much on track: the computing-power lines cross at something like 2050-2060. Any hopes of "AGI in 2020" were based on the first approach, which requires a breakthrough in understanding.
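To make the "lines cross around 2050-2060" claim concrete, here is a rough back-of-the-envelope sketch. The specific numbers (roughly 1e22 FLOP/s for a brute-force, low-level brain simulation, roughly 1e13 FLOP/s for a top supercomputer around 2000, and a two-year doubling time) are illustrative assumptions of mine, not figures from any particular published estimate:

    import math

    # Illustrative assumptions, not published estimates:
    brain_flops    = 1e22   # brute-force, low-level brain simulation (assumed)
    flops_in_2000  = 1e13   # roughly a top supercomputer around 2000 (assumed)
    doubling_years = 2.0    # assumed Moore's-law doubling time

    # How many doublings separate year-2000 compute from the target,
    # and in what year do the two lines cross at this doubling rate?
    doublings_needed = math.log2(brain_flops / flops_in_2000)
    crossover_year = 2000 + doublings_needed * doubling_years
    print(round(crossover_year))   # ~2060 with these inputs

Shift any of those inputs by an order of magnitude or two and the crossover moves by several years in either direction, which is why estimates of this kind land anywhere from the 2030s to the 2060s.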

So from this perspective AGI is inevitable even without any progress whatsoever in "true intelligence" research, as long as our physics research and engineering keep delivering improvements to raw compute power. We'll reach that point someday, likely within my lifetime, even without a breakthrough. But if we do start to understand how minds should properly be constructed, that could easily shave many orders of magnitude off the computing-power requirements and accelerate the arrival of AGI by decades.
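The same assumed doubling time also makes the "decades" claim easy to quantify: each order of magnitude shaved off the requirement is worth about log2(10) ≈ 3.3 doublings, i.e. roughly 6-7 years, so a better theory of minds that cuts the requirement by three or four orders of magnitude pulls the date in by two to three decades:

    import math

    doubling_years = 2.0                             # assumed Moore's-law doubling time
    years_per_order_of_magnitude = math.log2(10) * doubling_years
    print(years_per_order_of_magnitude)              # ~6.6 years
    print(4 * years_per_order_of_magnitude)          # 4 orders of magnitude -> ~27 years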



