
Sure, Boston Dynamics has bleeding-edge controls.

However, I would be really surprised if they didn’t use machine learning! Their robots have strong perception systems. How do they accomplish that without machine learning?




See this answer by a research engineer at Google Brain to a Quora question:

https://www.quora.com/What-kind-of-learning-algorithms-are-u...

It doesn't look like they are using a classifier to do object recognition, although I confess I had never heard of the "sequential composition of cost funnels" the post describes, nor do I claim to have understood it in any depth just from that post.

In any case, it does look like most of their AI (if you would even call it that) is hand-crafted. I understand that this is the done thing in robotics, in general.

Note also the various announcements by prominent deep learning groups, like DeepMind and OpenAI, about teaching robots, or robot hands, to manipulate objects of a limited range of shapes and forms. If deep learning and deep reinforcement learning were particularly successful at training robots to interact with real-world environments, you can bet you'd see many more announcements advertising this, with titles like "We taught a robot to peel potatoes using deep learning", etc.

It would be interesting to know whether other machine learning techniques are commonly used in robotics. I am aware of one paper [1] that uses Inductive Logic Programming for robot vision, but robotics is really not my field, so I'm probably missing lots of other work.
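For readers unfamiliar with ILP, here is a toy sketch of the general idea only (it is not the paper's Meta-Interpretive Learning system, which learns Prolog programs): search for the simplest logical rule, built from hand-supplied background predicates, that covers all the positive examples and none of the negative ones. The predicates and "scenes" below are invented purely for illustration.

    # Toy ILP-style rule learner (illustrative only; the cited paper's
    # method is Meta-Interpretive Learning over logic programs).
    from itertools import combinations

    # Background predicates: simple tests on a "scene" (here just a dict).
    background = {
        "has_straight_edges": lambda s: s["edges"] == "straight",
        "is_closed":          lambda s: s["closed"],
        "has_three_vertices": lambda s: s["vertices"] == 3,
        "is_dark":            lambda s: s["brightness"] < 0.5,
    }

    # Labelled scenes: positives are triangles, negatives are not.
    positives = [
        {"edges": "straight", "closed": True,  "vertices": 3, "brightness": 0.7},
        {"edges": "straight", "closed": True,  "vertices": 3, "brightness": 0.2},
    ]
    negatives = [
        {"edges": "curved",   "closed": True,  "vertices": 0, "brightness": 0.7},
        {"edges": "straight", "closed": False, "vertices": 3, "brightness": 0.4},
        {"edges": "straight", "closed": True,  "vertices": 4, "brightness": 0.3},
    ]

    def covers(rule, scene):
        # A rule is a conjunction of background predicates.
        return all(background[p](scene) for p in rule)

    def learn():
        # Try rules in order of increasing length (a crude Occam bias).
        names = sorted(background)
        for size in range(1, len(names) + 1):
            for rule in combinations(names, size):
                if (all(covers(rule, s) for s in positives)
                        and not any(covers(rule, s) for s in negatives)):
                    return rule
        return None

    print(learn())
    # -> ('has_three_vertices', 'is_closed')

The appeal of this family of methods is that the learned hypothesis is a human-readable rule rather than a weight matrix, and it can be learned from a handful of examples given good background knowledge.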

__________________

[1] Meta-Interpretive Learning from noisy images

https://www.doc.ic.ac.uk/~shm/Papers/logvismlj.pdf

Full disclosure: one of the authors is my PhD advisor



