
Given that free alternatives like Vicuna (from the University of California and CMU) are better than LLaMA, freely and legally available for download, and compatible with code like llama.cpp, even if every copy of LLaMA were taken down it would have no effect on the development of chatbots. It might even improve things, as people who would otherwise reach for the better-known LLaMA move toward these newer, better models.


They are all built on top of Llama…


Yes, but that doesn't matter now. The University of California has released Vicuna as open source. It doesn't need the Llama model to be installed at this point. Nor do you need any of Meta's code to run it, as you can use llama.cpp (not created by Meta). That's the whole point of the article. It's open source now. There's nothing Meta can do.


This is incorrect. According to the official instructions (https://github.com/lm-sys/FastChat#vicuna-weights), you need the original Llama weights before applying the Vicuna diff.
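For context on what "applying the diff" actually involves: the published Vicuna files are per-tensor deltas (Vicuna weights minus LLaMA weights), so reconstruction is element-wise addition against the original LLaMA checkpoint. A minimal sketch of the idea, not the actual FastChat script, with placeholder file paths:

    import torch

    # The delta checkpoint stores (vicuna_weight - llama_weight) per
    # tensor; adding it back onto the LLaMA weights yields Vicuna.
    base = torch.load("llama-13b.pt")          # original LLaMA state dict
    delta = torch.load("vicuna-13b-delta.pt")  # published Vicuna diff

    vicuna = {name: base[name] + delta[name] for name in delta}
    torch.save(vicuna, "vicuna-13b.pt")

Which is exactly why the delta alone gets you nowhere without the LLaMA weights it was diffed against.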


Seriously, you can download the Vicuna model and run it locally with llama.cpp. I've done it!
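For the curious, this is roughly what that looks like through the llama-cpp-python bindings (a third-party wrapper around llama.cpp; the model path assumes you've already merged the weights and converted them to GGML format):

    from llama_cpp import Llama

    # Load a GGML-converted, 4-bit quantized Vicuna checkpoint.
    llm = Llama(model_path="./models/vicuna-13b/ggml-model-q4_0.bin")

    out = llm("Q: Name the planets in the solar system. A:",
              max_tokens=48, stop=["Q:"])
    print(out["choices"][0]["text"])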


It's built off of llama though - you can't get to Vicuna without having the llama model weights.


It doesn't matter if you merge the LoRA, the resulting weights are still a derived work - assuming, that is, that weights are copyrightable in the first place (which is still a big if).
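To make that concrete: merging a LoRA just folds the low-rank update into the base matrix, W' = W + (alpha/r) * (B @ A), so the original weights remain additively present in the result. A toy sketch with made-up dimensions, not tied to any particular library:

    import torch

    d, r, alpha = 4096, 8, 16
    W = torch.randn(d, d)         # frozen base weight, e.g. from LLaMA
    A = torch.randn(r, d) * 0.01  # low-rank adapter factors
    B = torch.randn(d, r) * 0.01

    # "Merging" bakes the adapter into the base matrix; W is still
    # fully contained in W_merged, just with a low-rank update added.
    W_merged = W + (alpha / r) * (B @ A)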


If the resulting weights are a derived work of LLaMA, then LLaMA is a derived work of the illegally pirated Books3 dataset (a dataset sourced from a private torrent tracker) that was used to train it.

There's no way ML models can be protected under copyright.


The problem is that you'd need to risk getting sued first to prove that point, and hope that you have deep enough pockets to outlast Meta's lawyers.



