I'm really excited about this project and I think it could be really disruptive. It is organized by LAION, the same folks who curated the dataset used to train Stable Diffusion.
My understanding of the plan is to fine-tune an existing large language model, one pretrained with self-supervised learning on a very large corpus, using reinforcement learning from human feedback (RLHF), which is the same method used in ChatGPT. Once the dataset they are creating is available, though, better methods may be developed rapidly, since it will democratize the ability to do basic research in this space. I'm curious how much more limited the systems they are planning to build will be compared to ChatGPT, since they plan to make models with far fewer parameters in order to deploy them on much more modest hardware than ChatGPT.
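Roughly, that RLHF fine-tuning recipe can be sketched as a toy loop (plain Python, purely illustrative: the `reward` function and the tabular "policy" below are stand-ins for a learned reward model and PPO updates on a transformer, which is what the real pipeline would use):

```python
import math
import random

# Toy RLHF sketch: a "policy" prefers candidate responses, a reward model
# (here a hand-written stand-in) scores them, and we nudge the policy
# toward responses with above-average reward.

responses = ["short answer", "a longer, more helpful answer", "rude answer"]
policy = {r: 0.0 for r in responses}  # per-response preference scores

def reward(response):
    # Stand-in for a reward model trained on human preference data:
    # favor longer answers, penalize rudeness.
    return len(response.split()) - (3 if "rude" in response else 0)

def sample(policy):
    # Sample a response with probability proportional to exp(score).
    weights = [math.exp(s) for s in policy.values()]
    return random.choices(list(policy), weights=weights)[0]

random.seed(0)
lr = 0.1
baseline = sum(reward(x) for x in responses) / len(responses)
for step in range(200):
    r = sample(policy)
    policy[r] += lr * (reward(r) - baseline)  # reinforce above-average reward

best = max(policy, key=policy.get)
print(best)
```

The real systems replace the table with model weights and the naive update with PPO, but the feedback loop (sample, score against human preferences, reinforce) is the same shape.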
As an AI researcher in academia, it is frustrating to be blocked from doing a lot of research in this space due to computational constraints and a lack of the required data. I'm teaching a class this semester on self-supervised and generative AI methods, and it will be fun to let students play around with this in the future.
Long story short, training requires intensive device-to-device communication. Distributed training is possible in theory but so inefficient that it's not worth it. Here is a new paper that looks to be the most promising approach yet: https://arxiv.org/abs/2301.11913
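To see why the communication is the bottleneck, a back-of-envelope calculation helps (the parameter count and bandwidth figures below are illustrative assumptions, not numbers from the paper):

```python
# Per-step gradient all-reduce volume for naive data-parallel training.
params = 6_500_000_000   # assume a ~6.5B-parameter model
bytes_per_grad = 2       # fp16 gradients
grad_bytes = params * bytes_per_grad
print(f"gradient traffic per step: {grad_bytes / 1e9:.1f} GB")

# Moving that over NVLink (~600 GB/s) vs. home broadband (~12.5 MB/s):
nvlink_s = grad_bytes / 600e9
home_s = grad_bytes / 12.5e6
print(f"NVLink: {nvlink_s * 1000:.0f} ms per step")
print(f"home broadband: {home_s / 3600:.1f} hours per step")
```

Milliseconds in a datacenter versus hours over the open internet, every optimizer step; that gap is what approaches like the linked paper try to close with compression and looser synchronization.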
That's brilliant, I would love to spare compute cycles and network on my devices for this if there's an open source LLM on the other side that I can use in my own projects, or commercially.
Doesn't feel like there's much competition for ChatGPT at this point otherwise, which can't be good.
On the generative image side of the equation, you can do the same thing with Stable Diffusion[1], thanks to a handy open source distributed computing project called Stable Horde[2].
LAION has started using Stable Horde for aesthetics training, feeding the results back to improve their datasets for future models[3].
I think one can foresee the same thing eventually happening with LLMs.
Full disclosure: I made ArtBot, which is referenced in both the PC World article and the LAION blog post.
> Doesn't feel like there's much competition for ChatGPT at this point otherwise, which can't be good.
Facebook open sourced their LLM, called OPT [1]. There's not much else, and OPT isn't exactly easy to run (requires like 8 GPUs).
I'm not an expert, so I don't know why some models, like the image generators we've seen, are able to fit on phones, while LLMs require $500k worth of GPUs to run. Hopefully this is the first step toward changing that.
I've seen Petals mentioned several times before and I don't think it's the same thing. Correct me if I'm wrong, but it seems Petals is for running distributed inference and fine-tuning of an existing model. What the above poster and I really want to see is distributed training of a new model across a network.
Much like I was able to choose to donate CPU cycles to a wide variety of BOINC-based projects, I want to be able to donate GPU cycles to anyone with a crazy idea for a new ML model - text, image, finance, audio, etc.
The labelled data seems like more of a blocker than anything else. As far as I'm aware, the actual neural networks behind these models are relatively simple; it's the human labor involved in gathering, cleaning, and labeling data for training that is the most resource-intensive part.
The data is valuable yes, but training a model still requires millions of dollars worth of compute. That's a perfect cost to distribute among volunteers if it could be done.
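The appeal of splitting that cost is clear from rough numbers (both figures below are illustrative assumptions, not actual estimates for any specific model):

```python
# Hypothetical: spread a large training run across BOINC-style volunteers.
total_gpu_hours = 1_000_000  # assumed order of magnitude for a big run
volunteers = 50_000          # assumed participation, BOINC-project scale
hours_each = total_gpu_hours / volunteers
print(f"{hours_each:.0f} GPU-hours per volunteer")
```

A day or so of idle GPU time per person, if the communication problem discussed above could actually be solved.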
Another idea is to dedicate CPU cycles to something else that is easier to distribute, and then use the proceeds to buy massive amounts of GPU time for academic use.
Yannic and the community he has built is such an educational force of good. His youtube videos explaining papers have helped me and so many others as well. Thank you Yannic for all that you do!
I don't think those people legitimately cared about the welfare of 4chan users who were experimented on. They just perceived the project to be bad optics that might threaten the AI gravy train.
> It is organized by LAION, the same folks who curated the dataset used to train Stable Diffusion.
I'm guessing, like Stable Diffusion, it won't be under an open source licence then? (The Stable Diffusion licence discriminates against fields of endeavour.)
You are confusing LAION with Stability.ai. They share some researchers but the former is a completely transparent and open effort which you are free to join and criticize this very moment. The latter is a VC backed effort which does indeed have some of the issues you mention.
Yes. The intent is definitely to have the data be as open as possible. And Apache v2.0 is currently where it will stay. This project prefers the simplicity of Apache v2.0 and does not care for the RAIL licenses.
>> As an AI researcher in academia, it is frustrating to be blocked from doing a lot of research in this space due to computational constraints and a lack of the required data.
Computational constraints aside, the data used to train GPT-3 was mainly Common Crawl, which is made freely available by a non-profit org:
>> Common Crawl is a 501(c)(3) non-profit organization dedicated to providing a copy of the internet to internet researchers, companies and individuals at no cost for the purpose of research and analysis.
So you just need to find the compute. If you have a class of ~30, it should only take about 150 to 450 million.
Or, you could switch your research and teaching to less compute- and data-intensive approaches? Just because OpenAI and DeepMind et al are championing extremely expensive approaches that only they can realistically use, that's no reason for everyone else to run behind them willy-nilly.
It's sad that, upon observing the success of downstream products such as SD, the creators have chosen to hoard the dataset and become the sole producers of the downstream products as well.
I don't see the relevance of 50k prompt-response pairs. With exponential combinations of words, this is on the level of what AIML did thirty years ago. Isn't ChatGPT trained on millions (or billions) of Stack Overflow and forum responses?
Here is a video about the Open Assistant effort: https://www.youtube.com/watch?v=64Izfm24FKA