Interpretable Machine Learning Through Teaching (blog.openai.com)
134 points by BooneJS on Feb 15, 2018 | 6 comments



When I teach others, I usually rely on a lot of metaphors/analogies.

For example, if I had to describe gradient descent to someone, I wouldn't start with a definition from calculus. I would instead describe it as climbing down a hill.

But that assumes prior knowledge of what a hill is (it has gradients), what climbing down is (you want to get to the lowest point), and what the physics is like (go in the steepest direction, with some momentum). This isn't something that I expect a blank-slate AI to be able to produce.
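
(If you want to see what the hill picture cashes out to, here's a minimal sketch of gradient descent with momentum on a toy 1-D "hill", f(x) = x^2; the step size and momentum values are made up for illustration:)

    # Toy "hill": f(x) = x^2, whose slope (gradient) at x is 2x.
    def grad_f(x):
        return 2 * x

    x = 5.0          # start somewhere up the hill
    velocity = 0.0   # momentum: how fast we're already moving
    lr = 0.1         # step size (made up)
    beta = 0.9       # how much of the previous step we keep (made up)

    for step in range(500):
        # look at the slope under your feet, step downhill, keep some momentum
        velocity = beta * velocity - lr * grad_f(x)
        x = x + velocity

    print(x)  # ends up very close to 0, the bottom of the hill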

In short, explaining things intuitively to another human requires knowing about common human experiences and modes of thought. It's "transfer learning" for humans. This seems like the realm of AGI and not something I expect to see solved well anytime soon. I hope I am wrong!

(Note: even in this example I assumed you understand the experience of trying to explain gradient descent in order to explain my point)


There's an idea that passes in and out of vogue[0] that metaphor is the foundation of semantics in general. It's not been popular lately, but I get the feeling it will come back the more we bump up against the hard limits of tabula rasa-style learning.

I happen to be rather fond of it myself, and agree completely with you here. You can have all sorts of fun building models based around 'metaphoric delta': the bridge that must be crossed between the set of metaphors I understand and the set you understand. And of course, metaphors get misinterpreted, or understood differently, or leak, as all abstractions do, and you get people talking with the same words but not understanding each other.

---

[0] Most recently, Lakoff (with Johnson). See Metaphors We Live By.


>It's not been popular lately, but I get the feeling it will come back the more we bump up against the hard limits of tabula rasa-style learning.

Most neural network ideas have been around since the 60s/70s; right now we're just converging to a point where tabula-rasa-style learning of smaller neural networks has become feasible with our computing power. It's not that we've made such huge strides in theory, it's more that we can now engineer systems that actually do something useful with NNs.

For generalized AI and metaphor-based learning, we'll need much, much more powerful computers that can stitch together multiple pre-learned NNs, tweak them just slightly on new data, and construct connected multi-domain NNs. That will have a complexity resembling some higher animal brains. A metaphor is kind of another name for using a similar pre-built NN as a starting point for another neural network and connecting these in a supernetwork.
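
(The "use a similar pre-built NN as a start" part already has a mundane name: transfer learning / fine-tuning. A rough sketch with PyTorch/torchvision; the 10-class head and the hyperparameters are placeholders, not anything from the article:)

    import torch
    import torch.nn as nn
    from torchvision import models

    # Take a network pre-trained on one domain (ImageNet)...
    net = models.resnet18(pretrained=True)

    # ...freeze what it already "knows"...
    for p in net.parameters():
        p.requires_grad = False

    # ...and swap in a small new head to tweak slightly for a related task.
    net.fc = nn.Linear(net.fc.in_features, 10)  # e.g. 10 new classes (placeholder)

    # Only the new head gets trained on the new data.
    optimizer = torch.optim.SGD(net.fc.parameters(), lr=0.01, momentum=0.9)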


Excellent point. AGI will only arise once we can figure out how to create metaphor machines.


Moving knowledge from one machine to another via learning, rather than trying to copy it in some form or another, seems like a good approach. You don't limit the architecture of the learning or teaching implementations. It even works if the learner is a human, which is a bit of a different architecture :)

I also like the idea of having teaching output parsable by humans. Then you could ask your teacher AI, teach me (show me) what you know.
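
(The most bare-bones version of "move knowledge via learning, not copying" is probably distillation: the student only ever sees the teacher's outputs, so the two architectures can be completely different. A rough PyTorch sketch, not the method from the post; the toy networks, random inputs, and temperature are placeholders:)

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # A big "teacher" and a small "student" with a different architecture.
    # (In practice the teacher would be trained; it's random here only to
    # keep the sketch self-contained.)
    teacher = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 5))
    student = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))

    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    T = 2.0  # temperature to soften the teacher's outputs (placeholder)

    for _ in range(1000):
        x = torch.randn(64, 20)  # stand-in for real inputs
        with torch.no_grad():
            teacher_probs = F.softmax(teacher(x) / T, dim=1)
        student_log_probs = F.log_softmax(student(x) / T, dim=1)
        # The student learns by matching the teacher's behaviour,
        # never by copying its weights.
        loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()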


This is great stuff. I actually think the obvious application here is foreign language instruction, particularly Chinese character recognition for English-speaking elementary students. Most teachers at that level will probably not have mastery of the language themselves. In addition, the "teacher agent" may be able to find patterns that would dramatically boost recall.

Additional meta-learning research from OpenAI posted recently on arXiv:

Evolved Policy Gradients

https://arxiv.org/pdf/1802.04821.pdf

As well as the Ray distributed AI system:

http://bair.berkeley.edu/blog/2018/01/09/ray/



