
I suspect most optical illusions exploit perceptual cues and heuristics that human brains learn in early childhood: object size shrinking with distance, occlusion indicating which object is nearer, perspective, or a vanishing point giving structure to a scene composed of straight lines. So I doubt a deep learning net will pick up any of these cues -- they're not essential to learning the target objective efficiently.

So no, I suspect AI is unlikely to be fooled by anything other than tricks based on the most obvious visual cues (like perceiving that two humans of greatly different size must be different distances away).

[OK, now I've read the article.]

The article doesn't say what the training objective was for the net. If it were the ability to predict a propeller's perceived direction of rotation, then it should be trivial to train the net to predict rotation in a specific direction. (Getting just one of the two binary outcomes right is enough to declare victory.)
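To see how trivial that objective would be, here is a hypothetical sketch: if the target is a single binary label (say, clockwise vs. counter-clockwise), even a one-parameter logistic model fits it. The article gives no details on the actual setup, so all data, features, and labels below are invented for illustration.

```python
import math
import random

# Hypothetical toy setup: one scalar "motion feature" per image, where a
# positive value stands in for perceived clockwise rotation (label 1) and a
# negative value for counter-clockwise (label 0). Entirely synthetic.
random.seed(0)
X = [random.gauss(0, 1) for _ in range(200)]
y = [1.0 if x > 0 else 0.0 for x in X]

# Train a one-feature logistic regression with gradient descent on
# binary cross-entropy -- the simplest possible "predict the direction" net.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(300):
    gw = gb = 0.0
    for x, t in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid
        gw += (p - t) * x / len(X)                # d(BCE)/dw
        gb += (p - t) / len(X)                    # d(BCE)/db
    w -= lr * gw
    b -= lr * gb

# With two outcomes, even this trivial model nails the objective.
accuracy = sum(
    1 for x, t in zip(X, y)
    if (1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5) == (t == 1.0)
) / len(X)
```

The point is that "predicted the illusion's direction" is a weak success criterion: a coin flip gets it half the time, and any separable proxy feature gets it nearly always.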

More specifics are needed on the training process than the OP article provides, especially the objective(s) and the control images.



