> A computer that is actually fluent in English — as in, understands the language and can use it context-appropriately — should blow your entire mind.
Did you never do grammar diagrams in grade school? :-)
The "context" and structure of language are formulaic. When you feed billions of inputs into that formula, it's not surprising that you can get a fit, or run that fit backwards to generate text.
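To make the point concrete, here's a toy sketch of the "fit, then run it backwards" idea: a bigram frequency model fit to a corpus, then used to generate text. The corpus and function names are made up for illustration; real models fit vastly more parameters, but the shape of the trick is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word most often follows each word -- a crude 'fit' to the data."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, n=5):
    """Push the fit 'backwards': repeatedly emit the most likely next word."""
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the", 4))  # fluent-looking output, zero understanding
```

The output is locally plausible English, purely because the statistics of the corpus were memorized; nothing in the model "knows" what a cat or a mat is.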
This algorithm does not "understand" the things it's saying. If it did, the chain wouldn't end there: it could, without further training, act on the investment advice it had just produced, because it would grasp the context of its own output. Other examples abound.
Humans and animals don't get their firmware "upgraded" or their software "retrained" every time a new hype paper comes out. They have to work with a very limited, basically fixed set of inputs plus their own personal history for the rest of their lives. And the outputs they create become internalized and used as inputs to other tasks.
We could make 1M models that each do a little task very well, but unless they can be combined so that the models cooperate and have agency over time, this is just a math problem. And I do say "just" in a derogatory way here. Most of this stuff could have been done by the scientific community decades ago if they'd had the hardware and the quantity of ad clicks/emails/events/gifs to do what are basically sophisticated linear algebra tasks.
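For what I mean by "basically linear algebra": the core of model fitting is solving for coefficients that minimize error, which for a straight line has a closed form you can compute with nothing but sums. A minimal sketch (data and names invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, using only closed-form sums --
    the kind of fitting machinery that's been textbook material for a century."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Data generated by y = 2x + 1; the fit recovers those coefficients exactly.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
print(fit_line(xs, ys))
```

Scale that idea up to billions of parameters and specialized hardware and you get today's models; the math didn't change, the budget did.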