
This is incorrect. According to the official FastChat README (https://github.com/lm-sys/FastChat#vicuna-weights), you need the original LLaMA weights before applying the Vicuna delta.
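For what it's worth, "applying the delta" just means adding the released difference to the base weights, key by key. A minimal sketch of the idea (toy scalar "tensors" standing in for real checkpoint arrays; this is not the actual FastChat script):

```python
# Conceptual sketch: reconstruct target weights as base + delta.
# Real checkpoints are dicts mapping parameter names to large tensors;
# plain floats keep this example self-contained.

def apply_delta(base: dict, delta: dict) -> dict:
    """Add the delta to the base weights, key by key."""
    assert base.keys() == delta.keys(), "checkpoints must share parameter names"
    return {k: base[k] + delta[k] for k in base}

base = {"layer0.weight": 0.25, "layer0.bias": -0.10}   # hypothetical LLaMA params
delta = {"layer0.weight": 0.05, "layer0.bias": 0.02}   # hypothetical Vicuna delta
target = apply_delta(base, delta)
```

The point of distributing only the delta is exactly what's being argued here: without the base weights, the delta alone reconstructs nothing.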


Seriously, you can download the Vicuna model and run it locally with llama.cpp. I've done it!


It's built on LLaMA, though - you can't get to Vicuna without having the LLaMA model weights.


It doesn't matter if you merge the LoRA, the resulting weights are still a derived work - assuming, that is, that weights are copyrightable in the first place (which is still a big if).
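To make the "merging doesn't change anything" point concrete: merging a LoRA just folds the low-rank update back into the base matrix, so every entry of the merged matrix is still computed from the base weights. A hedged sketch with tiny pure-Python matrices (hypothetical shapes; real adapters use large tensors and a framework):

```python
# Sketch of merging a LoRA adapter: W' = W + (alpha / r) * (B @ A).
# W is the base weight matrix; A (r x n) and B (m x r) are the
# low-rank adapter factors; alpha/r is the usual LoRA scaling.

def matmul(B, A):
    rows, inner, cols = len(B), len(A), len(A[0])
    return [[sum(B[i][k] * A[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def merge_lora(W, A, B, alpha, r):
    """Fold the rank-r update into W; the result is still derived from W."""
    BA = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 base weight (stand-in for LLaMA)
A = [[0.5, 0.5]]               # rank r=1 factor, 1x2
B = [[1.0], [2.0]]             # 2x1 factor
W_merged = merge_lora(W, A, B, alpha=1.0, r=1)
```

Whether you ship (A, B) separately or ship W_merged, the base weights are an input either way - which is the derived-work argument in a nutshell.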


If the resulting weights are a derived work of LLaMA, then LLaMA is a derived work of the illegally pirated Books3 dataset (a dataset sourced from a private torrent tracker) used to train it.

There's no way ML models can be protected under copyright.


The problem is that you'd need to risk getting sued first to prove that point, and hope you have deep enough pockets to outlast Meta's lawyers.



