How many actual AI researchers from one of the big AI companies do we have here? I ask because they always seem to be extremely quiet, but I believe there is no way you could be involved in the development of LLMs at a deep level and not understand at this point that the entire thing is a scam. LLMs are very much like a magic trick - they seem truly miraculous to those who don’t understand how the trick is done. But those who designed the trick certainly know it’s a deception. They’ve done enough research by this point to see that it’s not intelligent at all, but generates a very good illusion of intelligence by returning text that looks very similar to human output (because that’s what it was trained on).
Useful? Yep - it’s like the best autocomplete you could ever imagine. Paradigm-changing even, as we now have a big chunk of human knowledge in a much more easily searchable format. It’s just not intelligent.
I have to imagine that just like a magic trick, eventually someone will come up with a way to clearly communicate to the layperson how the trick is done. At that point, the illusion collapses.
Both Yann LeCun and Richard Sutton (author of "The Bitter Lesson", the essay arguing that scaling general-purpose methods brings the biggest returns) have already pointed out that LLMs are a dead end.
All the AI industry is doing is scaling computation and data in the hope that the result covers more of the existing real-world data and thus gives a better illusion of thinking. You don't know whether a correct answer comes from reasoning or from parroting previously seen answers. I always tell laypeople to think of LLMs as very, very large dictionaries: with the words from a pocket Oxford dictionary you can construct only so many sentences, whereas from a multi-volume set of large Oxford dictionaries you can construct orders of magnitude more, so the probability of finding your specific answer sentence is much, much higher. Framed that way, they can understand the scaling issue, see its limits, and see why this approach can never lead to AGI.
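To make the analogy concrete, here's a back-of-the-envelope sketch (the vocabulary sizes and sentence lengths are made-up round numbers, purely for illustration): the number of possible word sequences grows exponentially with vocabulary size, which is why a bigger "dictionary" is so much more likely to contain your specific answer.

```python
# Toy illustration of the dictionary analogy: the count of possible
# word sequences grows exponentially with vocabulary size, so a bigger
# "dictionary" covers vastly more candidate sentences.

def sequence_count(vocab_size: int, sentence_length: int) -> int:
    """Upper bound on distinct word sequences of a given length."""
    return vocab_size ** sentence_length

pocket = 30_000       # hypothetical pocket-dictionary vocabulary
unabridged = 600_000  # hypothetical multi-volume vocabulary

for length in (5, 10):
    ratio = sequence_count(unabridged, length) / sequence_count(pocket, length)
    print(f"{length}-word sequences: unabridged covers ~{ratio:.0e}x more")
```

With these made-up numbers, a 20x larger vocabulary yields roughly 3 million times more 5-word sequences and 10 trillion times more 10-word sequences - which is the scaling story in a nutshell.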
I'm not at all pro-AI, but the argument was that LLMs _alone_ are a dead end for reaching AGI. I'm pretty sure we're generations away from AGI, but if we manage to build it, an LLM would probably be the tool it uses to communicate.
> If we manage to build it, an LLM would probably be the tool it uses to communicate
But that's the thing - LLMs are just a probabilistic playback of what people have written in the past. They're not actually useful for communicating new information or thought, should we ever find a way to synthesize those with real AI. They're literally just a search engine over existing human knowledge.
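To make "probabilistic playback" concrete, here's a minimal sketch of the crudest possible ancestor of an LLM, a bigram model (the corpus and names here are mine, not from any real system): it can only ever replay word transitions it has already seen in its training text.

```python
import random
from collections import defaultdict

# Minimal bigram language model: it can only emit word transitions
# that appeared somewhere in its training text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length - 1):
        candidates = transitions.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample the next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

A real LLM replaces the lookup table with a neural network conditioned on far more context, but the generation loop - sample the next token, append, repeat - is the same shape.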
It's a translation tool, and it's great at that. Basically, it vectorizes words in a multidimensional space (where similar words share dimensions), if I understand correctly, so LLMs can 'distinguish' homonyms, find synonyms and antonyms easily, and translate extremely well.
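A toy sketch of that idea, with hand-picked 3-dimensional vectors (real embeddings are learned, not hand-written, and have hundreds or thousands of dimensions): words with similar meanings end up pointing in similar directions, which you can measure with cosine similarity.

```python
import numpy as np

# Hand-picked toy word vectors. Words with similar meanings point in
# similar directions; real embeddings are learned and much larger.
vectors = {
    "happy": np.array([0.9, 0.1, 0.2]),
    "glad":  np.array([0.85, 0.15, 0.25]),
    "sad":   np.array([-0.8, 0.1, 0.3]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1 = same direction, -1 = opposite."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["happy"], vectors["glad"]))  # ~1.0: near-synonyms
print(cosine(vectors["happy"], vectors["sad"]))   # negative: opposed meanings
```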
I'm admittedly bearish on LLMs, but I believe there are benefits to be exploited in the underlying technology. I have personally found a way to utilise it as an input to produce a superior user experience within a product, compared to what exists for the market I am targeting.
I agree, it's not intelligent. You have to accept that as a fundamental premise and build up from there to figure out where it makes sense to utilise the technology. Oh, and if you have to refer to the technology and make sure the user knows about it, you've already failed. The user frankly does not care, nor do they need to know of its existence.