> (just a hobby, won't be big and
> professional like gnu)
Llamas are creating the Linux of AI and the ecosystem around it. Even though OpenAI has a head start, this whole thing is just starting. Llamas are showing the world that it doesn't take monopoly-level hardware to run these things. And because it's fun (like, video-game fun) there is going to be a lot of attention on them. Running a fully-owned, uncensored chat is the kind of thing that gets people creative.
This is my hope as well. It would be disastrous if the future of AI is one where only megacorps can run it and where they control all access to it. In that sense, LLaMA is really encouraging and I'm seriously rooting for it to improve.
It's just not there yet. I tend to be kind of bearish on LLMs in general; I think there's a lot more hype than is warranted, and people are overlooking some pretty significant downsides, like prompt injection, that are going to end up making them a lot harder to use in ubiquitous contexts in practice. But the big LLMs (even GPT-3.5) are definitely still in a class above LLaMA. I understand why they're hyped.
I look at GPT and think, "I'm not sure this is worth the trouble of using." But I look at LLaMA and I'm not sure how/where to use it at all. It's a whole different level of output.
But that doesn't mean I'm not rooting for the "hobbyists" to succeed. And it doesn't mean LLaMA can't succeed: it doesn't necessarily need to be better than GPT-4, it just needs to be good enough at a lot of the stuff GPT-4 does to be usable, and to have the accessibility and access outweigh everything else. It's just not there yet.
I think there's a case to be made for the bottom of the market being the important part.
The aspects of LLMs that resemble AGI are pretty exciting, but there's a huge playspace for using the model just as an interface: a slightly smarter one that understands the specific computing tasks you're looking for and connects them up with the appropriate syntax without requiring direct encoding.
A lot of what software projects come down to is syntax, and a conversational interface that can go a bit beyond imperative commands and a basic search box creates possibilities for new types of development environments.
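As a toy illustration of that interface idea: the model's job is just to map a plain-English request onto concrete syntax. Everything below is hypothetical (the template names, and the `toy_model` stand-in that takes the place of a real LLM call):

```python
# A toy sketch: treat the model as an interface layer that turns a
# plain-English request into concrete command syntax. `toy_model` is a
# stand-in for a real LLM; a real system would prompt the model to pick
# the task and extract the arguments.

# Hypothetical task templates the interface knows how to fill in.
TEMPLATES = {
    "count lines": "wc -l {path}",
    "find text": "grep -rn {pattern} {path}",
}

def toy_model(request: str) -> tuple[str, dict]:
    """Stand-in for the model: picks a template and extracts arguments."""
    if "how many lines" in request:
        return "count lines", {"path": request.split()[-1]}
    return "find text", {"pattern": "TODO", "path": "."}

def to_command(request: str) -> str:
    task, args = toy_model(request)
    return TEMPLATES[task].format(**args)

print(to_command("how many lines are in notes.txt"))  # wc -l notes.txt
```

The interesting part isn't the (trivial) routing here; it's that the model replaces the hand-written `toy_model` logic, so the set of tasks the interface understands isn't fixed in advance.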
LoRA has been pretty popular, and until the LLaMA leak I wasn't aware of it. Maybe we'll see something cool out of the Open Assistant project; we have a lot of English and Spanish prompts, and it was crazy to see people doing a massive open source project for ML.
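For anyone else who hadn't run into it: the core idea of LoRA (low-rank adaptation) is to freeze the pretrained weight matrix and train only a small low-rank update on top of it. A minimal numpy sketch of that idea, with made-up dimensions (this is an illustration of the math, not any particular library's implementation):

```python
import numpy as np

# LoRA idea: instead of updating a full weight matrix W (d_out x d_in),
# train two small low-rank factors B (d_out x r) and A (r x d_in), so
# the effective weight is W + B @ A. Only A and B get gradients.
d_out, d_in, r = 1024, 1024, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # small random init
B = np.zeros((d_out, r))                    # zero init: W is unchanged at start

def forward(x):
    # Base path plus the low-rank update path.
    return W @ x + B @ (A @ x)

full_params = d_out * d_in          # what full fine-tuning would train
lora_params = r * (d_out + d_in)    # what LoRA trains instead
print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"ratio: {full_params / lora_params:.0f}x")
```

The zero-initialized `B` means the adapted model starts out identical to the base model, and the trainable parameter count scales with `r * (d_out + d_in)` instead of `d_out * d_in`, which is why these adapters are cheap enough for hobbyist hardware.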
They can be modified to produce qualities of output that are unique. This puts them back in the realm of individual control. It will put the human in the artificial in a way that is not true of the industrial models.
Llamas are a ticking licensing bomb, but they showed that reasonably sized models can get things done, and there are clean architectures being trained right now that will unlock the field shortly, likely within the year.
But what about the training data? You can't rely on weights continuing to be leaked (assuming that even raises no legal issues) in order for open source AI to advance.