This brings up an important question: Is a topic useful to learn if you will never use it in your life?
To attempt an answer, we can look at LLMs as an analogy. Including code in an LLM's training set also makes the model better at non-coding tasks, suggesting that sometimes learning one thing makes you better at others too. I'm not saying the same necessarily applies to learning these "old school" AI techniques, but it's a decent analogy at least.