My big problem with Kurzweil's singularity is the massive handwaving he does between 'computers are getting exponentially faster' and 'AI will arise'.
This depends on the assumption that 'intelligence' (and nobody can really agree on what that means, which is a bad start) is representable in algorithmic form. Maybe it is, maybe it isn't, but the lack of progress in hard AI in the last 30 years isn't a good sign.
There's never any progress in AI, because once we figure out how to do something, we stop calling it "AI".
In the last 30 years, computers have won at chess, won at Jeopardy, learned to recognize spam with better than 99.5% accuracy, learned to recognize faces with better than 95% accuracy, achieved semi-readable automatic translation, figured out what movies I should add to my Netflix queue, and started to recognize speech. We've seen huge advances in computer vision and statistical natural language processing, and we're seeing a renaissance in machine learning. Most of this stuff was considered "hard AI" as recently as 1992, but the goalposts have moved.
And if intelligence can't be represented in algorithmic form, then what's the brain doing? Even if we have immaterial souls that don't obey the laws of physics, why do some brain lesions cause weirdly specific impairments to our thought process? A huge chunk of our intelligence is clearly subject to the laws of physics, and therefore can be wedged somewhere into the computational complexity hierarchy.
Well, maybe you went to different classes than I did, but none of those things you mentioned would ever have been considered hard AI as I was taught it.
Hard AI == generalised problem solving (or as Wikipedia puts it, 'the intelligence of a machine that can successfully perform any intellectual task that a human being can'), soft/weak AI == problem solving within a specific defined area (such as all of your examples).
Not really. The idea behind hard AI is that you can have one algorithm that can solve general problems, not lots of different algorithms to solve specific problems.
Take IBM's research as an example. Deep Blue is very good at solving chess problems, Watson is very good at solving textual analysis problems, but taken together they don't solve any problems that aren't chess or textual analysis.
The hard AI problem is a direct parallel of the psychological debate over whether a general intelligence capability (the 'G factor') exists in humans, and whether this can be measured (IQ).
Honest question here - does anyone actually still believe a single algorithm can solve general problems? Certainly the human mind doesn't work that way - we are fairly compartmentalized with a lot of communication between compartments.
AIXI is a single algorithm that can solve every problem given enough time, which is enough to prove that such algorithms exist, however uselessly long it would take AIXI to solve any actual problem.
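For reference, the whole thing fits in one (hopelessly impractical) expectimax expression; as I read Hutter's "gentle introduction" paper (linked elsewhere in this thread), the agent picks each action by

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over every environment program consistent with the interaction history, and \ell(q) is program length. That inner sum over all programs is exactly the part you could never run directly.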
The book "On Intelligence" does a decent job explaining for non-neuoroscientists a theory where all parts of the brain (imaging, hearing, speach, cognition, etc) use the same pattern matching algorithm.
Half of your examples are, in the grand scheme of things, trivial applications of decades-old techniques. Recommendation, for example, is based on techniques which are almost 100 years old (SVD). Winning at chess became possible through increased machine power; the algorithms behind it are hardly groundbreaking. If this is AI, then many things are AI: almost any quantitative use of statistics is AI, for example.
There is no agreement on this, but some leading AI researchers consider that most AI problems should be solvable with decades-old computers (i.e. we need some completely new paradigms that nobody has yet thought of). See McCarthy, for example: http://www-formal.stanford.edu/jmc/whatisai/node1.html
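(To be concrete about how old and how simple the recommendation math is, here's a minimal sketch of SVD-based recommendation. The ratings matrix is made up, and real recommenders handle missing entries with regularized/iterative factorization rather than a plain numpy SVD, but the core idea is just a low-rank approximation:)

    # Toy SVD-based recommender: factor a user x item rating matrix,
    # keep the top k latent factors, and rank items by the reconstruction.
    import numpy as np

    ratings = np.array([
        [5.0, 4.0, 1.0, 1.0],
        [4.0, 5.0, 1.0, 2.0],
        [1.0, 1.0, 5.0, 4.0],
        [2.0, 1.0, 4.0, 5.0],
    ])

    U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
    k = 2  # number of latent "taste" factors to keep
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # approx[u, i] is user u's predicted affinity for item i; recommending
    # means ranking the items a user hasn't rated by this score.
    print(np.round(approx, 2))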
That was exactly his point, wasn't it? His examples used to be considered problems in AI. Now that the problems are understood, they are simply considered to be algorithmic problems. Lots of output of the MIT AI labs in the early days is now simply taught in algorithms courses, yet back then, those researchers considered themselves to be doing AI research.
As I recall, Kurzweil referred to such applications as "narrow A.I." Human-level AI capable of passing the Turing Test is called "strong A.I." in his lexicon.
And because we already have proof-of-concept machines (human brains) that exhibit intelligence, we can be sure that the intelligence spot in the complexity hierarchy isn't somewhere in the intractable regions.
Go and Chess are fundamentally in the same class: perfect information and deterministic. Go simply has a much higher branching factor than Chess, which limits the utility of pure brute-force techniques and increases the importance of good pruning and heuristics. It is no more surprising that Go takes more effort from programmers than Chess than that Chess is trickier than checkers.
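For what it's worth, the search core is small enough to sketch. Below is a minimal negamax with alpha-beta pruning over a hypothetical game interface; `moves`, `apply` and `evaluate` are placeholders for a real engine's move generator and evaluation function (which is where the actual work and the heuristics live), and `evaluate` is assumed to score positions from the side to move's point of view:

    # Minimal negamax with alpha-beta pruning (illustrative sketch only).
    def alphabeta(state, depth, alpha, beta, moves, apply, evaluate):
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)  # score from the side to move's perspective
        best = float("-inf")
        for m in legal:
            # Negamax: our score is the negation of the opponent's best reply.
            score = -alphabeta(apply(state, m), depth - 1, -beta, -alpha,
                               moves, apply, evaluate)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # prune: the opponent already has a better line elsewhere
        return best

Even with pruning the cost grows roughly like branching_factor^depth, which is why Go's wider trees punish brute force so much harder than Chess's.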
There's probably more to it than that. Humans do typically play Go on 19x19 boards, but Go is still relatively hard for computers (compared to Chess) even on smaller boards like 9x9, where the branching factor is reasonably comparable to Chess. And the branching factor in the kind of Go tactics problems collected as exercises in books for human players isn't that much larger than the branching factor in chess, either. Decades ago chess programmers could remark that their programs solved typical chess tactics problems faster than humans could turn the pages in the book. Even today solving typical Go tactics problems is somewhere between a very serious programming challenge and an open research question.
Broadly speaking the difficulties in Go tactics seem to be similar to the difficulties that computers have historically had in Chess endgame tactics, which clearly wasn't a problem of branching factor: branching goes down in the Chess endgame. (However, today there is an important dissimilarity: the space of Chess endgames is small enough that modern computers can tabulate the solution for a significant fraction of the problem space beforehand.)
> This depends on the assumption that 'intelligence' (and nobody can really agree on what that means, which is a bad start) is representable in algorithmic form.
We have a working definition and know that it is algorithmically representable nowadays (http://www.hutter1.net/ai/aixigentle.htm). Now the question is how you can make the algorithm efficiently computable.
The whole framework is extremely abstract. Unless human brains are doing magic that's beyond what a Turing machine can do, whatever humans are doing probably does boil down to something resembling the Solomonoff induction AIXI is based on. It's also pretty trivial to observe that humans are running some kind of very clever approximation instead of the full abstract AIXI, since we can actually do useful stuff with the amount of sensory information we get in reasonable time.
What humans do belongs to the part where you need to figure out how to make this stuff efficiently computable.
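(For the curious, the incomputable core here is the Solomonoff prior: the prior probability of a bit string x is the total weight of all programs whose output starts with x,

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

where U is a universal prefix Turing machine and \ell(p) is program length, and prediction is just the ratio M(x followed by the next bit) / M(x). Summing over every possible program is what has to be approximated cleverly, which is presumably the kind of shortcut brains take.)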
Well, that's the point. Take away the weasel word 'magic', and saying that humans are doing what a Turing machine can do is the same thing as saying they're algorithmically representable.
Turing machines are not handed down to us by God; there is no reason to believe they are some kind of Ultimate Representation of Everything.
It isn't just that you can never be sure; you can also never be sure it is false. For anything you find that is potentially uncomputable, you never know if you simply haven't figured out how to compute it yet.
Doesn't contemporary cognitive science pretty much go with the hypothesis that human brains are Turing-equivalent? Unless we go with magic, hypercomputing brains would require hypercomputing physics, and so far we know quite a lot about physics and all of it seems to be Turing-computable (if slowly, in the quantum case), outside pathologies like time travel.
We don't know if Turing-equivalent formalisms are the Ultimate Representation of Everything, but they seem to be by far the best Representation of Everything anyone has found so far.
>We don't know if Turing-equivalent formalisms are the Ultimate Representation of Everything, but they seem to be by far the best Representation of Everything anyone has found so far.
Well, sure... and Wolfram's 2-state, 3-symbol machine means that an incredibly tiny machine (two internal states, three tape symbols) is basically equivalent to any computer we've ever designed. However, that doesn't provide us any insight into how thinking works, or how to write programs, or anything really. It's a mathematical curiosity. In reality, Turing machines themselves are a terrible representation of algorithms: despite being able to represent anything, anyone who tries to think of writing code in terms of simple rules for writing symbols on a tape is going to lose their mind.
Aside from the obvious problem (AIXI is incomputable), there's no real reason to believe that it represents a useful way to analyze the problem of intelligence. For one thing, no progress has really been made on the Hutter Prize since its inception: prediction by partial matching continues to win, and it was developed in the '80s.
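(For anyone who hasn't run into it, PPM-style prediction is conceptually simple. Here's a drastically simplified toy of the same family; this is not real PPM, which adds escape probabilities, blending and arithmetic coding, and the `ToyContextModel` name is made up. It just backs off from the longest previously seen context to shorter ones:)

    # Toy context model in the PPM spirit: predict the next character from
    # counts of what followed the longest previously-seen context.
    from collections import defaultdict

    class ToyContextModel:
        def __init__(self, max_order=3):
            self.max_order = max_order
            self.counts = defaultdict(lambda: defaultdict(int))

        def update(self, history, symbol):
            # Record the symbol under every context suffix up to max_order.
            for k in range(min(self.max_order, len(history)) + 1):
                ctx = history[len(history) - k:]
                self.counts[ctx][symbol] += 1

        def predict(self, history):
            # Back off from the longest seen context to the empty context.
            for k in range(min(self.max_order, len(history)), -1, -1):
                ctx = history[len(history) - k:]
                seen = self.counts.get(ctx)
                if seen:
                    total = sum(seen.values())
                    return {s: c / total for s, c in seen.items()}
            return {}

    model = ToyContextModel()
    text = "abracadabra"
    for i, ch in enumerate(text):
        print(ch, model.predict(text[:i]))
        model.update(text[:i], ch)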
Even at a low level, the brain is compressing and throwing out such massive amounts of data that I don't think it's fair to call it Solomonoff induction anymore.