"Learning" is a technical term, AI doesn't really learn the same way a human does. There is a huge difference between allowing your fellow human beings to learn from you and allowing corporations to appropriate your knowledge by passing it through a stochastic shuffler.
Copilot is run by a corporation, and the model is owned by that corporation - despite being trained on open source code.
In general, individuals will have problems with the first L in LLMs (large) - unless the community invents a way to democratise LLMs and deep learning in general. So far, the deep learning space is a much less friendly place for individuals than software was when the ideals of the open source movement were formed.
Training a full LLM from scratch is too expensive for individuals, but training a LoRA isn't. There are multiple open source LLMs out there that can be extended this way.
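To give a sense of scale, here's a minimal sketch of LoRA fine-tuning with Hugging Face's `peft` library. The base model and hyperparameters are illustrative choices, not recommendations - the point is only that this fits on a single consumer GPU:

```python
# Minimal LoRA fine-tuning sketch (illustrative; model and hyperparameters are arbitrary).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "bigscience/bloom-560m"  # any open-weights causal LM would do
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA freezes the base weights and trains small low-rank adapter matrices
# injected into the attention layers - that's what keeps the cost low.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],  # attention projection in BLOOM-style models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

The resulting adapter is a few megabytes of weights sitting on top of someone else's base model, which is exactly the dynamic already playing out in the art scene.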
We can already see this in the AI art scene. People are training their own checkpoints and LoRAs for celebrities, art styles, and other things that aren't included in the base models.
Some artists demand to be excluded from base model training datasets, but there's nothing they can do against individuals who want to copy their style - other than not posting their art publicly at all.
I see the same thing happening here. If your source code is public, someone will find a way to train an AI on it.