
So sayeth the Chomskyites. But the idea has been less and less broadly accepted over time. Some of my professors were still avowed fans, but most were willing to endorse at best extremely weakened versions.

Now UG, after vast cross-comparison of every language, is a tiny framework of seemingly arbitrary grammatical facts. It's not nothing! But we'd expect some commonalities to hold by chance alone. Given how many times we've had to trim the model, how confident can we be that there's actually a baseline now? We're out of languages to test it on!

And there is another reason to doubt. We have only some 20,000 genes. Most of them code for structural stuff in the body, essential proteins and the like. Explanations that require things to be hardwired into the brain ought to be penalized accordingly. Even stripped down to its bare bones, UG is quite a bit of information!

And why exactly would evolution hardcode this "choose your own grammar" into the brain anyway? Until recently we could suppose that language would be difficult or impossible to figure out without it. But big dumb machine learning models trained on nothing but prediction worked out syntax just fine! Why would our brains, superb prediction engines of even larger size and with cross-modality cognition, be less capable?

Of course, humans learn language with far less input than GPT. But we learn everything with far less input than current models. And regardless of whether there is actually a universal grammar, we'd expect linguistic drift to tend toward structures that are readily understood by our brains, an advantage not shared by LLMs.



>> And why exactly would evolution hardcode this choose your own grammar into the brain anyway?

Shouldn't the question rather be phrased as "why exactly would evolution not get rid of grammar in the brain"? To my understanding, language ability developed randomly and was retained because of its benefits. I'm not an expert, but that's my sense of how evolution works: instead of developing the traits an organism needs, traits develop at random and the ones that turn out to be useful are kept.

>> But big dumb machine learning models trained on nothing but prediction worked out syntax just fine!

Well, yes and no. Unlike human children, language models are trained specifically on text. Not just text, but text carefully tokenised to maximise a neural net's ability to learn to model it. What's more, Transformer architectures trained to model text are specifically built to handle text, and they're given an objective function designed for the text-modelling task, to boot.
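To make that concrete, here's a minimal sketch (toy code, not any real LM's pipeline) of what "carefully tokenised text plus a purpose-built objective" means: the tokeniser and the next-token prediction targets are both handed to the model up front, before any learning happens. The whitespace tokeniser and the uniform baseline are illustrative stand-ins for the real subword schemes and trained models.

```python
import math

def tokenise(text):
    """Toy whitespace tokeniser; real LMs use subword schemes like BPE."""
    words = text.split()
    vocab = {tok: i for i, tok in enumerate(sorted(set(words)))}
    return [vocab[tok] for tok in words], vocab

def next_token_pairs(token_ids):
    """The objective is baked into the data: predict token i from tokens < i."""
    return [(token_ids[:i], token_ids[i]) for i in range(1, len(token_ids))]

def cross_entropy_uniform(pairs, vocab_size):
    """Loss of a model that has learned nothing: uniform over the vocabulary."""
    return sum(-math.log(1.0 / vocab_size) for _ in pairs) / len(pairs)

ids, vocab = tokenise("the cat sat on the mat")
pairs = next_token_pairs(ids)         # 5 (context, target) training examples
loss = cross_entropy_uniform(pairs, len(vocab))  # log(5), the know-nothing baseline
```

The point of the sketch is how much scaffolding exists before training starts: segmentation into tokens, a fixed vocabulary, and a loss function, none of which a child is handed.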

None of that is true for humans. As children, we have to distinguish language (not text!) from everything else in our environment, and we certainly don't have any magickal tokenisers, or oracles selecting labelled examples and objective functions for us to optimise. We have to figure out all of language, even the fact that such a thing exists at all, entirely on our own. So far we haven't managed anything like that with any sort of machine (in the abstract sense), and certainly not with language models.



