The problem I'm describing is formally known as "catastrophic forgetting".
Quoting from Wikipedia (https://en.wikipedia.org/wiki/Catastrophic_interference):
Catastrophic interference, also known as catastrophic forgetting, is the
tendency of an artificial neural network to completely and abruptly forget
previously learned information upon learning new information.
Of course neural nets can update their weights as they are trained, but the
problem is that weight updates are destructive: the new weights replace the old
weights and the old state of the network cannot be recalled.
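To make the destructive-update point concrete, here is a minimal sketch,
assuming PyTorch and two invented toy tasks (the tasks, network size and
hyperparameters are all illustrative assumptions): a small network is trained
on task A, then on task B with the same weights, and its task A accuracy
typically collapses toward chance.

```python
# Minimal sketch of catastrophic forgetting (assumes PyTorch; the tasks,
# architecture and hyperparameters below are illustrative assumptions).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two synthetic binary tasks with different decision rules.
def make_task(rule, n=512):
    x = torch.randn(n, 2)
    return x, rule(x).long()

xa, ya = make_task(lambda x: x[:, 0] > 0)  # task A: sign of feature 0
xb, yb = make_task(lambda x: x[:, 1] > 0)  # task B: sign of feature 1

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()  # overwrites the weights in place

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

train(xa, ya)
print(f"task A accuracy after training on A: {accuracy(xa, ya):.2f}")  # near 1.00

train(xb, yb)  # the same parameters are updated again, destructively
print(f"task A accuracy after training on B: {accuracy(xa, ya):.2f}")  # near chance
```

Nothing in the second training run preserves the weights that solved task A;
the optimiser simply moves the parameters to wherever task B's loss sends
them, which is the destructive overwrite described above.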
Transfer learning, online learning and (deep) reinforcement learning are as
susceptible to this problem as any other neural network technique.
This is a widely recognised limitation of neural network systems, old and new,
and overcoming it is an active area of research. Many approaches have been
proposed over the years, but it remains an open problem.