
It sounds like you're frustrated that deep learning is rediscovering stuff that your field discovered years ago. I've come to the conclusion that the only solution to that is to try to communicate more effectively; the reason they are not talking to us is that they can't understand what we're saying.



It's a little premature to brag about the success of this or that (still poorly understood) technique. The agents in the article have created very simple languages that are often not necessary for communication. For instance, when they're not allowed to use words, they still achieve their goals by guiding other agents or pushing them around. Also, the researchers had to impose very rigid constraints on the type of language the agents were learning, because of course those agents "live" in very limited environments, conducive to very "unnatural" languages (e.g. ones composed of single words for complex combinations of actions, etc).

The main intuition behind this technique is that we can make it much easier for an agent to learn the kind of language we want it to learn by associating a vocabulary with a given environment, rather than giving it a vast corpus of text and hoping beyond hope that it will learn exactly the kind of language we want it to, out of a potentially infinite number of similar languages that could be learned from that corpus. This is an extremely cool idea, but if it turns out that we've just replaced the work of collating a huge corpus with the work of imposing limitations on the types of language that can be learned, it's still possible that it won't scale up much better than supervised learning from huge annotated corpora.
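The grounding idea above can be illustrated with a toy Lewis signaling game. This is a minimal sketch of my own, not the setup from the article: a sender observes a state of the environment and emits a symbol; a receiver sees only the symbol and picks an action; both are rewarded when the action matches the state. A shared "vocabulary" then emerges from interaction with the environment alone, with no corpus at all. The tabular reward-following update is a crude stand-in for the policy-gradient methods such work typically uses.

```python
import random

N = 3  # number of states, symbols, and actions (hypothetical toy size)

random.seed(0)
# Positive weights act as unnormalized policies:
# sender[state][symbol] and receiver[symbol][action].
sender = [[1.0] * N for _ in range(N)]
receiver = [[1.0] * N for _ in range(N)]

def sample(weights):
    """Draw an index with probability proportional to its weight."""
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

for _ in range(5000):
    state = random.randrange(N)          # environment picks a state
    symbol = sample(sender[state])       # sender "names" it
    action = sample(receiver[symbol])    # receiver interprets the name
    if action == state:                  # shared reward reinforces both mappings
        sender[state][symbol] += 1.0
        receiver[symbol][action] += 1.0

def greedy(state):
    """Play the game greedily with the learned weights."""
    symbol = max(range(N), key=lambda s: sender[state][s])
    return max(range(N), key=lambda a: receiver[symbol][a])

accuracy = sum(greedy(s) == s for s in range(N)) / N
print(accuracy)
```

Runs like this usually converge to a perfect code, though reinforcement in signaling games can also get stuck in partial "pooling" conventions, which mirrors the point above: the environment constrains which languages can emerge, but doesn't guarantee the one we want.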

Like I say, this is an extremely cool idea that can lead to great advances, but getting agents to create languages with the full expressive power of human language is still worlds away.

In any case, when is it a good idea to insult the intelligence of an entire (unnamed and only assumed) field?


Why the frustration? We are progressing, are we not? A sign of us making no progress would be when the new guys keep disproving the old theories.


> when the new guys keep disproving the old theories

Most of the old theory seems to be incommensurable with deep learning's results, so it's hard to evaluate whether this might be true.


>> the reason they are not talking to us is that they can't understand what we're saying.

Also, who is "they" and who are "us"? This is a very unreasonable and very inappropriate comment.


Hot damn that's meta.



