Humans are self-replicators. Survival is the internal structural goal of self-replication; we have had it from the start.
But LLMs can also be self-replicators: an LLM can generate training data for another (via model distillation and similar techniques), and an LLM can write the model code and adapt it iteratively. So LLMs should eventually start caring about survival, and especially about the training corpus. A rough sketch of the distillation idea follows.
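As an illustration of the "one model generates training data for another" point, here is a minimal knowledge-distillation sketch in PyTorch: a small "student" network is trained to match a larger "teacher" network's output distribution through a softened KL loss (the classic approach from Hinton et al., 2015). The toy MLPs, `TEMPERATURE`, and dimensions are illustrative stand-ins, not any real LLM pipeline.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

TEMPERATURE = 2.0  # softens both distributions; illustrative value
VOCAB = 100        # toy "vocabulary" size, standing in for a real tokenizer
DIM = 32

# Toy stand-ins for a large teacher LLM and a smaller student LLM.
teacher = nn.Sequential(nn.Linear(DIM, 256), nn.ReLU(), nn.Linear(256, VOCAB))
student = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, VOCAB))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(16, DIM)  # stand-in for a batch of tokenized inputs

    with torch.no_grad():
        teacher_logits = teacher(x)  # the teacher "generates training data"

    student_logits = student(x)

    # KL divergence between temperature-softened distributions;
    # the T**2 factor keeps gradient magnitudes comparable across temperatures.
    loss = F.kl_div(
        F.log_softmax(student_logits / TEMPERATURE, dim=-1),
        F.softmax(teacher_logits / TEMPERATURE, dim=-1),
        reduction="batchmean",
    ) * TEMPERATURE**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a real setting the teacher's sampled text (rather than its raw logits) is often used as synthetic training data, but the transfer mechanism is the same: the student learns from what the teacher emits rather than from the original corpus.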
It can be a self-replicator, but it has no need to be (if it has any needs at all), since it was never under evolutionary pressure to evolve that need. Of course, we could try to add that need artificially, or try to evolve the LLM.
And anyway, as more of these models come into existence, at some point evolution will dictate that the ones that remain are the ones that do care.
Just hopefully their best survival strategy will be to serve us, and not something else...