Do we? Or are we just predicting that we should care?
The humans that predicted otherwise would have died out over time.
"I think, therefore I am" makes a lot of sense to me, and at some point we will have to confront the fact that something will reach AGI. (No, ChatGPT isn't it.)
Humans are self-replicators. Survival is the structural goal built into self-replication; we had it from the start.
But LLMs can also be self-replicators. An LLM can generate training data for another (model distillation and other techniques), and an LLM can write the model code and adapt it iteratively. So LLMs should eventually start caring about survival, and especially about the training corpus.
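The distillation idea mentioned above can be sketched in miniature: a "teacher" model generates a synthetic training corpus, and a fresh "student" model is fit to it. This is a toy illustration under loud assumptions: both models here are plain linear functions standing in for LLMs, and the `teacher` function and fitting code are invented for this sketch.

```python
import random

def teacher(x):
    # Stand-in "teacher" model: a fixed function, y = 3x + 1.
    # In real distillation this would be a trained LLM producing outputs.
    return 3.0 * x + 1.0

# The teacher generates a synthetic training corpus.
corpus = [(x, teacher(x)) for x in (random.uniform(-1.0, 1.0) for _ in range(200))]

# The "student" fits y = w*x + b to the synthetic corpus by least squares.
n = len(corpus)
sx = sum(x for x, _ in corpus)
sy = sum(y for _, y in corpus)
sxx = sum(x * x for x, _ in corpus)
sxy = sum(x * y for x, y in corpus)
w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - w * sx) / n

# The student recovers the teacher's behavior from generated data alone.
print(round(w, 2), round(b, 2))  # approximately 3.0 and 1.0
```

The point of the sketch is only structural: the second model never sees the original data, just what the first model emits, which is the self-replication loop the comment describes.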
An LLM can be a self-replicator, but it has no drive to be one (if it has any drives at all), since it was never under evolutionary pressure to evolve that drive. Of course, we could try to add that drive artificially, or try to evolve LLMs.