
To be honest, I don't know. Six months ago I was ready to write off the whole endeavor, then I see things like this https://plus.google.com/100130762972482716067/posts/BN5qjTEN...

and feel like mankind just took a little baby step in the right direction.

I don't think a 6-8 year old human child (6-8YOHC) is the right goal. Nor is a Star Trek-like computer; talking to computers gets boring (talking to 6-8 year olds gets boring too!). It's the AI equivalent of the Gorilla Arm in interface design (perhaps "Gorilla Mind" is a good term?), and like I said, is achieving this really useful? Even if the near-term goal isn't useful in itself, does it lead into a next goal that is?

I think the right goal is to brainstorm ideas -- "what would we like a computer to do for us?" -- and then start inching towards those.

As an off-the-cuff example, build an AI scaffold of some sort, point it at Wikipedia and Project Gutenberg and have it generate a Khan Academy style educational program for K-12. Get it to start teaching, then use the feedback from the teaching to better model how humans think and learn.

There, that's my contribution to the world of AI. A quick brainstorm with something that could be useful for people. I would find it hard to believe that after half a century, researchers in the field haven't come up with similar kinds of exercises. But I keep seeing more myopic answers.

Getting an AI to tell me a famous movie star's shoe size is not interesting because it only saves me time; it doesn't do anything new for me. Likewise expert systems for diagnostics: any experienced human with a reference can do that job! AI seems too focused on replacing the "human who can do this job with a reference" (RAHWCDTJWAR) and not enough on augmenting what humans can already do to make them better, or on doing complex tasks like instructing a class. The problem is not that we're emotionally driven to keep the "human with a reference" in the equation, but that over time, that human has proven to provide better results!

Any brainstorming ideas that fit this mold should be rejected as vectors for the field. If it even smells like a project is turning into a RAHWCDTJWAR, run!

Similarly, the 6-8YOHC is the wrong direction. Let's be honest, 6-8YOHCs aren't very useful or knowledgeable. Let's stop trying to make AIs that are do-nothing ignoramuses. I don't need an AI that knows that when it's hot out I shouldn't wear a winter coat. I already know this.

Like I said at the beginning, I think Cyc and similar approaches have been valuable as a line of research and inquiry, but they have ultimately provided so much failure and so little progress that it's obvious this is not the right way to go. Knowing this is very important. But I keep feeling like the message isn't getting across, and this basic approach to AI has long, long overstayed its welcome.




I understand where you're coming from. In my college days I bought all those thick academic books on "machine learning" when the term meant Lenat more than it meant statistics.

That route does seem to have failed us. Or at least not have gone much of anywhere in the years since. At the same time, I'm not sure the Google-y approach to machine learning has made real progress in the last 10 years.

Sure, we've got cars that may or may not be able to handle actual road conditions, but search -- and more importantly, any sign whatsoever of computers knowing what we want to do -- has stalled out for quite some time now.

I dunno. There's got to be another approach that yields more progress than either the semantic or the statistical path.


That route does seem to have failed us. Or at least not have gone much of anywhere in the years since.

I'd like to reference the recent Higgs result as a compare-and-contrast example from a different field.

The search for the Higgs is slightly younger than the search for AI, but of about the same age, so it's worth comparing. It took a very long time to yield basic results -- namely, "does it exist?" The search for the Higgs was pure Research. The day after the Higgs discovery nothing changed in the world except that we now know it exists. Given 20-30 more years of R&D we might get a hoverboard, or faster blenders or something, and the total time investment will have been about 70 years from "notion on a chalkboard" to "hoverboard".

AI researchers might use the Higgs as an example of why not to pooh-pooh their field, since they are still in that long, long stretch between theoretical proposal and working discovery. Detractors might say, "but all you propose is just shoving more factoids into your AI model hoping it springs to life!" The analogy with the Higgs is that researchers were for a long time simply proposing to build bigger and bigger accelerators until the Higgs fell out.

I'm not a physicist, but I'm hoping that there was a stronger theoretical framework surrounding the Higgs than "let's keep crashing stuff into each other harder and harder till what we want comes out of it". Likewise, I'm not a Cyc-style semantic AI researcher (or an AI researcher of any particular type), but I'm hoping the field has more going for it than "let's keep tossing factoids into our Semantic Graph until it springs to life".

I'm willing to think that a 70-year R&D timeline is worth it if we end up with commercially ready, 1.0-equivalent Minds at the end: http://en.wikipedia.org/wiki/Mind_(The_Culture)

But I don't think we're any closer to this than we were 20 or 30 years ago. It's a perpetual Research horizon at this point. To put it back into perspective with the Higgs: 15 years ago they were starting construction on the LHC.

AI researchers will lament the lack of funding in their field, etc. But I have yet to hear a compelling research direction the field would take if it were suddenly gifted the cost of an LHC or two.



