This article doesn't make much sense to me, though I'm admittedly not familiar with linguistic theory.
> ' One of the most foundational claims of Chomskyan linguistics has been that sentences are represented as tree structures, and that children were born knowing (unconsciously) that sentences should be represented by means of such trees.'
I don't understand how GPT-2 tests or attempts to refute this claim. Can't we view children as being born with a pre-trained network similar to a rudimentary GPT?
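To make that analogy concrete, here's a rough sketch using the Hugging Face `transformers` API (my own illustration, not anything from the article): the architecture is identical either way, and "innate knowledge" would just amount to the weights a learner starts out with.

```python
# Rough sketch of the "innate priors as pre-trained weights" analogy.
# Assumes the Hugging Face `transformers` library; the analogy is mine,
# not the article's.
from transformers import GPT2Config, GPT2LMHeadModel

# The pure blank-slate learner: same architecture, randomly initialized weights.
blank_slate = GPT2LMHeadModel(GPT2Config())

# The "nativist" learner on this analogy: identical architecture, but born
# with weights already shaped by prior training.
pretrained = GPT2LMHeadModel.from_pretrained("gpt2")

# Structurally the two models are the same; they differ only in where the
# parameters start out, which is all "innate knowledge" would have to be here.
print(sum(p.numel() for p in blank_slate.parameters()))
print(sum(p.numel() for p in pretrained.parameters()))
```

On that picture, the disagreement would just be about how much useful structure is in the starting weights, which doesn't seem like something GPT-2's existence settles either way.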
> 'Likewise, nativists like the philosopher Immanuel Kant and the developmental psychologist Elizabeth Spelke argue for the value of innate frameworks for representing concepts such as space, time, and causality (Kant) and objects and their properties (e.g spatiotemporal continuity) (Spelke). Again, keeping to the spirit of Locke's proposal, GPT-2 has no specific a priori knowledge about space, time, or objects other than what is represented in the training corpus.'
I'm just very confused. Are nativists arguing that these principles regarding language and "innate frameworks" aren't emergent from fundamental interactions between neurons in the brain?
It seems like either:
1. They are arguing these principles aren't emergent, which seems obviously wrong if we're trying to describe how language actually works in the brain, where all thoughts emerge from the interactions of neurons.
2. They are arguing that they are emergent, but pre-encoded in every human at birth. That doesn't seem inconsistent with GPT-2 at all.
This article seems like a fine critique of our performance so far in language modeling, but in no way does it seem to vindicate nativist views of language, nor do I quite understand how such views apply to GPT-2.
Obviously, the idea that you can encode every thought in a fixed-length vector is BS (thought space doesn't have a fixed dimensionality it can be reduced to), but that seems rather irrelevant to the main point of the article.
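For concreteness, here's roughly what I mean by a fixed-length vector (a sketch using the Hugging Face `transformers` API; the example sentence and code are mine, not the article's): whatever you feed GPT-2, every token comes out as a vector of the same fixed size.

```python
# Sketch of the fixed-dimensionality point: GPT-2 represents every token of
# any input as a vector of the same fixed size (768 for the small model).
# Assumes the Hugging Face `transformers` library.
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

inputs = tokenizer("Colorless green ideas sleep furiously", return_tensors="pt")
outputs = model(**inputs)

# Shape is (batch, num_tokens, 768): one 768-dimensional vector per token,
# regardless of what the sentence is about.
print(outputs.last_hidden_state.shape)
```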