
As a neuroscientist, my biggest disagreement with the piece is the author’s argument for compositionality over emergence. The former makes me think of Prolog and Lisp, while the latter is a much better description of a brain. I think emergence is a much more promising direction for AGI than compositionality.



Author here. So what! I am not talking about promising directions for AGI, I am talking about having computer systems that we can have confidence in. Sure, AGI, if it ever happens, will look more like emergence than compositionality, and I'm sure it won't feel a need to explain to us fallible humans why its decisions are correct. In the meantime, I'd like computer systems to be manageable, reliable, transparent, and accountable.


100% agree. When we explicitly segment and compose AI components, we remove their ability to learn their own pathways between the components. The bitter lesson[1] has been proven time and time again: throwing a ton of data and compute at a model yields better results than anything we could hand-design.

That said, we can still isolate and modify parts of a network, and combine models trained for different tasks. But to get the benefits of learning at scale of data + compute, you need to break things down into components after the fact rather than beforehand (a rough sketch of what that can look like is below).

[1]: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
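As a minimal sketch of that "decompose after the fact" idea, assuming PyTorch and a torchvision ResNet-18 (the model choice and the 10-class head are just illustrative), you can treat a network trained end to end as a frozen backbone and bolt a new component onto it afterwards:

    import torch
    import torch.nn as nn
    from torchvision import models

    # A backbone trained at scale, end to end, with no components imposed up front.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Post-hoc isolation: freeze everything the model learned on its own.
    for p in backbone.parameters():
        p.requires_grad = False

    # Swap in a new head for a different task (hypothetical 10-class problem).
    backbone.fc = nn.Linear(backbone.fc.in_features, 10)

    # Only the new head is trained; the learned pathways stay intact.
    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

The point of the sketch is the ordering: the component boundary (backbone vs. head) is drawn after training at scale, not designed into the system beforehand.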





