It's interesting how one algorithm, a 'master algorithm', could subsume all the others in the book: presumably a neural/evolutionary algorithm that simply learns or evolves when each of the other algorithms is useful for decision making and reward maximization.
The more assumptions you relax, the more general an algorithm becomes; for example, relaxing immediate reward to delayed reward takes you from supervised learning to reinforcement learning.
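To make that relaxation concrete, here's a toy sketch (my own illustration, not from the book): the same sign-prediction task trained once with an immediate label per example, and once with only a single scalar reward per episode, which forces crude credit assignment across all decisions.

```python
import random

random.seed(0)

# Toy task: predict the sign of a scalar input.
def make_example():
    x = random.uniform(-1, 1)
    return x, 1 if x > 0 else -1

# Supervised: an immediate label for every example gives a direct error signal.
def train_supervised(steps=200, lr=0.5):
    w = 0.0
    for _ in range(steps):
        x, y = make_example()
        pred = 1 if w * x > 0 else -1
        if pred != y:  # perceptron-style update on mistakes
            w += lr * y * x
    return w

# Delayed reward: the learner sees only one scalar reward per episode
# (fraction of correct predictions), so credit must be smeared across
# every decision in the episode -- a far weaker training signal.
def train_delayed(episodes=200, episode_len=10, lr=0.5):
    w = 0.0
    for _ in range(episodes):
        batch = [make_example() for _ in range(episode_len)]
        preds = [1 if w * x > 0 else -1 for x, _ in batch]
        reward = sum(p == y for p, (_, y) in zip(preds, batch)) / episode_len
        # Naive credit assignment: nudge w toward each taken action,
        # scaled by the single, baseline-subtracted episode reward.
        for (x, _), p in zip(batch, preds):
            w += lr * (reward - 0.5) * p * x
    return w

def accuracy(w, n=1000):
    correct = 0
    for _ in range(n):
        x, y = make_example()
        correct += (1 if w * x > 0 else -1) == y
    return correct / n
```

The supervised learner converges almost immediately; the delayed-reward learner gets per-episode feedback that is both sparse and noisy, which is exactly the sample-efficiency gap described above.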
The trade-off is that the more general algorithms often need exponentially more data and compute to reach a similarly good solution.
That's why reinforcement learning has seen so few practical applications relative to supervised learning. There's no free lunch.
That said, as an ML practitioner I would love to be able to apply a single master algorithm to every problem, but that is likely many years away.
At the same time, fine-tuning sample efficiency increases with scale, so at some point you might one-shot learn the relevant state and replace exponential search with learned heuristics, solving NP-hard problems approximately. Sounds like a free lunch to me, at least if you can afford a net large enough.
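For a sense of what "heuristics instead of exponential search" means, here's a classic hand-written example (my illustration, using the TSP): exhaustive search is optimal but factorial time, while the greedy nearest-neighbour heuristic runs in O(n²) with no optimality guarantee; a learned heuristic would play the same role.

```python
import itertools
import math
import random

random.seed(1)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(points, order):
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exhaustive search: optimal, but factorial time -- infeasible beyond ~12 cities.
def brute_force(points):
    n = len(points)
    best = min(itertools.permutations(range(1, n)),
               key=lambda p: tour_length(points, (0,) + p))
    return (0,) + best

# Greedy nearest-neighbour heuristic: O(n^2), no optimality guarantee.
def nearest_neighbour(points):
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[tour[-1]], points[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

cities = [(random.random(), random.random()) for _ in range(8)]
opt = tour_length(cities, brute_force(cities))
heur = tour_length(cities, nearest_neighbour(cities))
```

The heuristic tour is never shorter than the optimum, but it's usually close and is found in polynomial time; the bet in the paragraph above is that a large enough net can learn such heuristics rather than having them hand-designed.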
It sounds like the more general the algorithm, the more state it needs to accumulate before it becomes useful. Specialized algorithms, on the other hand, need little to no state but have limited applications.