It doesn't matter whether you merge the LoRA; the resulting weights are still a derived work, assuming, that is, that weights are copyrightable in the first place (which is still a big if).
And if the resulting weights are a derived work of LLaMA, then LLaMA itself is a derived work of the illegally pirated Books3 dataset (a dataset sourced from a private torrent tracker) used to train it.
There's no way ML models can be protected under copyright.