I would note the actual leading models right now (IMO) are:
- Miqu 70B (General Chat)
- DeepSeek Coder 33B (Coding)
- Yi 34B (for chat over 32K context)
And of course, there are finetunes of all these.
And there are some others in the 34B-70B range I have not tried (and some I have tried, like Qwen, which I was not impressed with).
Point being that Llama 70B, Mixtral, and Grok, as seen in the charts, are not what I would call SOTA (though Mixtral is excellent for batch-size-1 speed).
Miqu is a leaked model -- no license is provided to use it. Yi 34B doesn't allow commercial use. DeepSeek Coder 33B isn't much good at stuff outside of coding.
So it's fair to say that DBRX is the leading general purpose model that can be used commercially.
Model weights are just constants in a mathematical equation, they aren’t copyrightable. It’s questionable whether licenses to use them only for certain purposes are even enforceable. No human wrote the weights so they aren’t a work of art/authorship by a human. Just don’t use their services, use the weights at home on your machines so you don’t bypass some TOS.
Photographs aren't human-made either, yet they are copyrightable. I agree that both the letter and the spirit of copyright law are in favor of models not being copyrightable, but it will take years until there's a court ruling either way. Until then proceed with caution and expect rights holders to pretend like copyright does apply. Not that they will come after your home setup either way.
Photos involve creativity. Photos that don't involve creativity aren't usually considered copyrightable by the courts (hence why every copyright case I've followed includes arguments establishing why creativity was material to creating the work).
Weights, on the other hand, are the product of a purely mechanical process. Sure, the process itself required creativity, as did the composition of the data, but the creation of the weights themselves did not.
People say LLMs are fundamentally just statistics, so training one on copyrighted material is okay. Well, perhaps, but pure statistical data isn't copyrightable. Feel free to use leaked models.
Yeah I know, hence it's odd that I found it kind of dumb for personal use. More so with the smaller models, which lost to some Mistral finetunes on an objective benchmark I have.
And I don't think I was using it wrong. I know, for instance, that the Chinese-language models are finicky about sampling, since I run Yi all the time.
For all the model cards and license notices, I find it interesting that there is not much information on the contents of the dataset used for training. Specifically, whether it contains data subject to copyright restrictions. Or did I miss that?