Saw this the other day and it made me think that too much effort and credence is being given to this idea of crafting the perfect environment for LLMs to thrive in. Which, to me, is contrary to how powerful AI systems should function. We shouldn’t need to hold their hand so much.
Obviously we’ve got to tame the version of LLMs we’ve got now, and this kind of thinking is a step in the right direction. What I take issue with is the way this thinking is couched as a revolutionary silver bullet.
We shouldn't, but it's analogous to how CPU usage used to work. In the 8-bit days you could do some magical stuff that was completely impossible before microcomputers existed. But you had to use all kinds of tricks and heuristics to work around the limited abilities. We're in the same place with LLMs now. Some day we will have the equivalent of what gigabytes of RAM are to a modern CPU, but we're still stuck in the 80s for now (which was revolutionary at the time).
It also reminds me of when you could structure an internet search query and find exactly what you wanted. You just had to ask it in the machine's language.
I hope the generalized future of this doesn't look like the generalized future of that, though. Now it's darn near impossible to find very specific things on the internet because the search engines will ignore any "operators" you try to use if they generate "too few" results (by which they seem to mean "few enough that no one will pay for us to show you an ad for this search"). I'm moderately afraid the ability to get useful results out of AIs will be abstracted away to some lowest common denominator of spammy garbage people want to "consume" instead of use for something.
An empty set of results is a good signal, just as "I don't know" or "You're wrong because <reason>" are good replies to a question/query. It's how a program crashing, while painful, is better than it corrupting data.
Reminds me of first-gen chatbots, where the user had to put in the effort of crafting a phrase in a way that would elicit the expected result. It's a form of user-hostility.
It may not be a silver bullet, in that it needs lots of low-level human guidance to do some complex tasks.
But looking at the trend of these tools, the help they require is becoming higher and higher level, and they are becoming more capable of doing longer, more complex tasks, as well as finding the information they need from other systems/tools (search, internet, docs, code, etc.).
I think it's that trend that is really the exciting part, not just the current capabilities.
why is it that so many of you think there's anything meaningfully predictable based on these past trends? what on earth makes you believe the line keeps going as it has, when there's literally nothing to base that belief on. it's all just wishful thinking.