Given that free alternatives like Vicuna (from UC Berkeley and CMU) outperform LLaMA, are freely and legally available for download, and are compatible with tooling like llama.cpp, taking down every copy of LLaMA would have no effect on the development of chatbots. It might even improve things, as people who would otherwise reach for the better-known LLaMA move towards these newer, better models.
Yes, but that doesn't matter now. The University of California has released Vicuna as open source, so you don't need the LLaMA model installed at this point. Nor do you need any of Meta's code to run it, since you can use llama.cpp (which wasn't created by Meta). That's the whole point of the article: it's open source now, and there's nothing Meta can do.
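To make the "no Meta code involved" point concrete, here's a minimal sketch assuming the llama-cpp-python bindings to llama.cpp; the model filename is hypothetical and stands in for whatever converted/quantized Vicuna file you have locally:

```python
# Sketch: running a local Vicuna checkpoint through llama.cpp's Python
# bindings (llama-cpp-python). Nothing here depends on Meta's codebase.
from llama_cpp import Llama

# Hypothetical path to a locally converted/quantized Vicuna model file.
llm = Llama(model_path="./vicuna-13b.Q4_K_M.gguf")

output = llm(
    "Q: Why can a chatbot run without Meta's code? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```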
It doesn't matter if you merge the LoRA; merging just bakes the low-rank update into the base weights, so the resulting weights are still a derived work - assuming, that is, that weights are copyrightable in the first place (which is still a big if).
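For anyone unfamiliar with what "merging" means here, this is a sketch of the standard LoRA merge math; the shapes and values are made up for illustration:

```python
# Standard LoRA merge: the adapted weight is the base weight plus a
# low-rank update, W' = W + (alpha / r) * B @ A.
import numpy as np

d_out, d_in, r, alpha = 8, 8, 2, 16  # illustrative dimensions

W = np.random.randn(d_out, d_in)     # base (e.g. LLaMA) weight matrix
A = np.random.randn(r, d_in) * 0.01  # LoRA down-projection
B = np.random.randn(d_out, r) * 0.01 # LoRA up-projection

# "Merging" produces a single matrix, but every entry of W is still
# additively present in the result -- hence the derived-work argument.
W_merged = W + (alpha / r) * (B @ A)
```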
If the resulting weights are a derived work of LLaMA, then LLaMA is a derived work of the illegally pirated Books3 dataset (sourced from a private torrent tracker) that was used to train it.
There's no way ML model weights can be protected under copyright.