Things that require understanding of causation will be safe longer. Today's progress is driven by massive observational datasets. Meanwhile, real-world action-taking applications require different paradigms to take causation into account[0][1], and especially to learn safely (e.g. learning to drive without crashing during the beginner stages).
There's certainly research happening around this, and RL in games is a great test bed, but people who choose actions will be safe from automation longer than people who don't, if that makes sense. It's the person who decides "hire this person" vs the person who decides "I'll use this particular shade of gray."
[0] The best example is when X causes Y and X also causes Z, but your data only includes Y and Z (X is a hidden confounder). Without actually manipulating Y, you can't tell that Y doesn't cause Z, even though it's a strong predictor of it.
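A minimal sketch of that footnote in Python (purely illustrative; assumes numpy, and the variable names and coefficients are made up):

    # X causes both Y and Z, but only Y and Z ever get recorded.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100_000)           # hidden common cause (never logged)
    y = 2 * x + rng.normal(size=100_000)   # X -> Y
    z = 3 * x + rng.normal(size=100_000)   # X -> Z; Y plays no causal role

    # In the observational data, Y looks like a strong predictor of Z...
    print(np.corrcoef(y, z)[0, 1])         # ~0.85

    # ...but if you intervene and set Y yourself, Z doesn't budge.
    # Only a manipulation (or an experiment) reveals this.
    y_do = rng.normal(size=100_000)        # do(Y): set Y independently of X
    print(np.corrcoef(y_do, z)[0, 1])      # ~0.0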
[1] Another example is the data itself. You need one label for what happens if you take action A and another for action B, and you can never observe both at once outside of a simulation.
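Again a toy sketch (hypothetical, standard library only): each row of a real-world log only contains the outcome of the action actually taken, so the counterfactual label is simply missing.

    import random

    random.seed(0)
    log = []
    for user in range(5):
        outcome_if_a = random.random()        # knowable only in a simulator
        outcome_if_b = random.random()        # knowable only in a simulator
        action = random.choice(["A", "B"])    # whatever policy actually ran
        observed = outcome_if_a if action == "A" else outcome_if_b
        log.append((user, action, observed))  # the other outcome is lost

    print(log)  # no row says what the action *not* taken would have done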
Most creative output is duplicated effort: consider how much code each person on HN has written that has been written before. Consider how, a decade ago, we were all writing HTML and styling it, element by element, and then Twitter Bootstrap came along and revolutionised front-end development in what is, ultimately, a very small and low-technology way. All it really did was reduce duplicate effort.
Nowadays there are lots of great low/no-code platforms, like Retool, that are a far greater threat to the amount of code that needs to be produced than AI will ever be.
To use a cliche: code is a bug, not a feature. Abstracting away the need for code is the future, not having a machine churn out the same code we need today.