I always say that AI is a forever-moving goalpost: "AI" is whatever task a human can do that you wouldn't expect a machine to be able to do. So as soon as a machine can do it, people no longer consider it intelligent (it's just A*, it's just a chess engine, it's just a network picking up on patches of texture, ..., it isn't really "intelligent").
This is because we originally thought "only a human would be able to play chess" or "only a human would be able to drive a car". The thinking was that to solve these problems, we'd have to get closer to a true artificial intelligence (the kind we'd now call "AGI", since "AI" doesn't mean anything anymore).
That line of thinking has turned out to be pretty faulty. We've built engines and algorithms that can play chess and Go, but they haven't brought us any closer to anything resembling a general intelligence.
Well, GPT-3 is definitely not a general intelligence, but I'd say it's much closer than Deep Blue was. Progress is happening! It's just a question of how far and how fast we run with the goalposts.