Hacker News

The speech recognition and synthesis are primitive compared to any smartphone assistant these days. The natural language processing is about equivalent to SHRDLU from the late 1960s [1]. It turns out that this approach, based on manually constructing syntax trees and applying simple logic, can make some fun demos but is ultimately a dead end in terms of building systems that are actually useful, as was discovered in the "AI winter".

The part that controls Mario looks similar (if not identical) to this: http://aigamedev.com/open/interviews/mario-ai/

If you want to see the state of the art in using AI to play video games, look no further than "Playing Atari with Deep Reinforcement Learning" [2], where a single general AI system learns to play many different games. The generality is what makes it impressive. Its only inputs are pixels and score, and its only outputs are joystick and button state, just like a human player. This sets it apart from these Mario systems, which are hand-programmed specifically for Mario and read the game state through special instrumentation rather than pixels.
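For a rough sense of what the Atari agent is doing under the hood, here is a minimal tabular Q-learning sketch. All the names (`q_update`, `epsilon_greedy`, the toy states) are my own illustration, not from the paper; the real system replaces the lookup table with a convolutional network over raw pixels and adds experience replay, but the temporal-difference update is the same idea:

```python
import random

def q_update(q, state, action, reward, next_state, lr=0.1, gamma=0.99):
    """One Q-learning step: nudge Q(s,a) toward r + gamma * max_a' Q(s',a').

    `q` is a dict mapping state -> {action: value}. DQN approximates this
    table with a neural network trained on the same target.
    """
    best_next = max(q[next_state].values()) if next_state in q else 0.0
    td_target = reward + gamma * best_next
    q[state][action] += lr * (td_target - q[state][action])
    return q[state][action]

def epsilon_greedy(q, state, epsilon=0.1, rng=random):
    """Pick a random action with probability epsilon, else the greedy one.

    This is the exploration/exploitation trade-off the Atari agent uses
    when choosing joystick/button actions.
    """
    if rng.random() < epsilon:
        return rng.choice(list(q[state]))
    return max(q[state], key=q[state].get)

# Toy example: from state "s0", taking "right" earns reward 1 and lands in "s1".
q = {"s0": {"left": 0.0, "right": 0.0},
     "s1": {"left": 1.0, "right": 2.0}}
q_update(q, "s0", "right", reward=1.0, next_state="s1")
```

After the update, Q("s0", "right") moves from 0.0 toward 1.0 + 0.99 * 2.0 = 2.98 at learning rate 0.1, i.e. to 0.298.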

[1] http://en.wikipedia.org/wiki/SHRDLU

[2] http://arxiv.org/abs/1312.5602




Note that the Atari-playing system is what got Google interested in buying DeepMind (the company behind it). It was a significant advance on the state of the art at the time.

DeepMind sold to Google for around $500M.





