Hacker News

So if we take the Universal Grammar argument - not merely some specialization but specialization towards underlying structure - I don't see how your experiment would disprove it. Chomsky could always argue the model has 1000x the processing capability of a human (how could we compare CPU power when the underlying substrate is so different?). He'd say it learns language, but not like a human. Moreover, the ability to learn some non-human language would be used _against_ the model.

He won't accept machine learning results or counter-examples from existing languages, so I wonder whether anything could disprove Chomsky short of building an artificial human brain or a closed-form version of human learning - things we hopefully won't do, for ethical reasons.

Is it true? IMHO, there are definitely human specializations, but there doesn't seem to be a single universal grammar. The underlying model is much lower-level, and it doesn't necessarily show in the high-level details of language.



I can't of course speak for how Chomsky might react or rationalize his own biases, so I won't try. Given that he's an almost 100-year-old man with a lifetime of biases and doubts, you may well be right that he would not accept such an argument. I will say that he often complains that AI research is not used more to study exactly these kinds of hypotheses about humans, where direct human experimentation would be deeply unethical, so I'm not as convinced as you are that he would be unhappy with such an experiment.

However, speaking to the argument itself, I don't think processing power would matter. The basis of the poverty-of-the-stimulus argument is that the examples given to a child are mathematically insufficient to uniquely determine a particular language within the space of all mathematically possible languages. Even with infinite processing power, if I give you only two sentences you can't uniquely deduce the rest of the language I intended unless you make a good many other assumptions about it: many different rule sets will accept those two sentences as correct. Chomsky argues that it is precisely those assumptions which are the built-in component.
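A toy illustration of that underdetermination, using regexes as stand-in grammars (everything here is a hypothetical example): both acceptors agree on the two sample sentences yet generalize differently, so the samples alone can't decide between them.

```python
import re

# Two toy "grammars", here just regex acceptors (hypothetical examples).
grammar_a = re.compile(r"(the )?(dog|cat)( runs| sleeps)")  # article optional
grammar_b = re.compile(r"the (dog|cat)( runs| sleeps)?")    # verb optional

examples = ["the dog runs", "the cat sleeps"]

# Both grammars accept every example sentence...
assert all(grammar_a.fullmatch(s) and grammar_b.fullmatch(s) for s in examples)

# ...yet they disagree beyond the examples.
print(bool(grammar_a.fullmatch("cat runs")))  # True: A allows a bare noun
print(bool(grammar_b.fullmatch("cat runs")))  # False: B requires "the"
print(bool(grammar_a.fullmatch("the dog")))   # False: A requires a verb
print(bool(grammar_b.fullmatch("the dog")))   # True: B allows dropping it
```

With two samples both rule sets are equally "correct"; whatever breaks the tie is the built-in assumption the argument points at.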

If you showed that an infinitely powerful supercomputer can pick the same language a baby does from the space of all languages, then one of two things must be true: either the assumptions snuck into the computer's construction accidentally, or the argument is wrong. If you then additionally show that, given the same number of sentences drawn from a non-human language, the computer can pick out the intended non-human-like language, that proves the computer is not relying on the same assumptions humans use in the first case. So the only remaining possibility is that the argument itself is wrong.

Of course, it would be interesting to imagine what could replace this argument. Perhaps the amount of input given to a toddler is actually larger than assumed. Perhaps the assumptions do exist, but are of a more fundamental nature and not specific to humans (e.g. perhaps human languages are the simplest possible rule sets, in some quantifiable mathematical sense, that match those sentences). Either way, the answer could be quite interesting.
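One way to make that "simplest rule set" idea concrete is a minimum-description-length (MDL) style score: charge each candidate grammar for its own length plus the bits needed to single out the observed sentences among everything it allows. The sketch below is a hypothetical toy (regex grammars, made-up word lists), not a model of real acquisition; the point is that the allow-everything grammar is cheap to write down but makes the data expensive, so the tight grammar wins.

```python
import math
import re

# Crude MDL-style scorer over toy regex "grammars" (all hypothetical).
# Cost = length of the grammar itself + bits to pick out each observed
# sentence from the set of strings the grammar allows.

nouns = ["dog", "cat", "bird", "fish", "frog", "bear", "wolf", "crab", "hen", "fox"]
verbs = ["runs", "sleeps", "eats", "swims", "jumps", "digs", "hides", "sings", "naps", "hops"]

# A small universe of candidate strings: sentence-like items plus junk.
universe = [f"{d} {n} {v}" for d in ("the", "a") for n in nouns for v in verbs]
universe += [f"junk{i}" for i in range(800)]

grammars = [
    r"the (dog|cat) (runs|sleeps)",  # tight: allows only 4 strings
    r"the \w+ \w+",                  # looser: any two words after "the"
    r".*",                           # allows everything
]

examples = ["the dog runs", "the cat sleeps", "the dog sleeps", "the cat runs"]

def mdl_cost(pattern):
    allowed = sum(1 for s in universe if re.fullmatch(pattern, s))
    # grammar bits (crudely, character count) + data bits
    return len(pattern) + len(examples) * math.log2(allowed)

best = min(grammars, key=mdl_cost)
print(best)  # the tight grammar has the lowest total cost
```

A preference like this is "more fundamental and not specific to humans" in exactly the sense above: it's a generic simplicity bias, not a catalogue of human syntax.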


You've given me some things to think about. I see how your experiment could disprove the Poverty of the Stimulus (if we could roughly estimate how much input a baby gets), but I don't see how it would disprove Universal Grammar. If the computer could learn a non-human language, wouldn't that invite the charge that while our computer is a competent language learner, its methods are fundamentally unlike those of a human?


As far as I understand, the argument is: it's impossible to learn human language from such poor stimulus without some kind of genetically determined universal grammar. If we show that it is actually possible (even with non-human methods) to learn human language from such poor stimulus without a built-in universal grammar, that makes universal grammar unnecessary, or merely contingent.

So, IF such a machine could be built, it would open the possibility that universal grammar, even if it exists, is only an accident of history (much like the Indo-European influence on most European languages), not a fact of human genetics.



