Which chapter(s) of "Artificial Intelligence: A Modern Approach" are relevant for today's AI practitioners? Or, which chapters do you recommend to someone who just wants to apply A.I. in practice?
Most of what people call "artificial intelligence" today is machine learning, neural networks, things like that.
Norvig's book is more about the "old A.I." or symbolic AI, which many people felt crowded out neural networks in the old days, but which itself later faded relative to other approaches.
In the early 1980s, for instance, there were games like Zork with a natural language interface. Graphical user interfaces and menus largely displaced that, then 3-D came along, so graphical video games are much better than they were in 1980; text games, not so much.
The "expert systems" technology has been rebranded as "business rules"; your bank almost certainly has at least one license for IBM ILOG, and there are many products such as Drools, Microsoft BizTalk, etc. That technology is also good for "complex event processing," and it may someday be seen as a Rosetta Stone for solving the problems of orchestrating asynchronous communication. (It is very good for combining automated and manual processes, such as "send this to the loan office for approval.")
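To make the idea concrete, here is a minimal sketch of a forward-chaining rules engine in plain Python. It is not tied to any real product (Drools and ILOG have their own rule languages); the rule names, thresholds, and the "route to loan office" step are all made up for illustration:

```python
# Toy forward-chaining rules engine: rules fire when their condition
# matches the current facts, and firing may add new facts, which can
# trigger further rules -- including a hand-off to a manual step.
def run_rules(facts, rules):
    fired = True
    while fired:                      # keep passing over the rules
        fired = False                 # until nothing new fires
        for name, cond, action in rules:
            if name not in facts.setdefault("fired", set()) and cond(facts):
                facts["fired"].add(name)
                action(facts)
                fired = True
    return facts

# Hypothetical loan-approval rules (names and threshold are invented).
rules = [
    ("flag_large_loan",
     lambda f: f["amount"] > 50_000,
     lambda f: f.update(needs_review=True)),
    ("route_to_officer",                      # combine with a manual process:
     lambda f: f.get("needs_review"),         # "send this to the loan office"
     lambda f: f.update(queue="loan_office")),
    ("auto_approve",
     lambda f: f["amount"] <= 50_000,
     lambda f: f.update(status="approved")),
]

case = run_rules({"amount": 75_000}, rules)
# the large loan gets flagged and routed to the manual queue
```

Real rule engines add efficient pattern matching (e.g. the Rete algorithm) on top of this fire-until-fixpoint loop, but the declarative flavor is the same.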
Other areas of symbolic A.I. research are still active, such as constraint programming, SAT solvers, the semantic web, etc. They don't get as much press as deep networks. Problems such as route optimization and production scheduling all involve search processes similar to what is done in computer chess, and NP-complete problems frequently turn up in "artificial intelligence."
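The shared core of those problems is backtracking search over constrained choices. Here is a toy constraint-satisfaction example (graph coloring, which is NP-complete in general); the graph and colors are invented for the demo:

```python
# Backtracking search for graph coloring: assign a color to each node
# so that no two adjacent nodes match. The try/fail/backtrack loop is
# the same skeleton used by chess engines, schedulers, and SAT solvers.
def color(graph, colors, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(graph):
        return assignment                       # every node colored
    node = next(n for n in graph if n not in assignment)
    for c in colors:
        # constraint check: neighbors must differ
        if all(assignment.get(nb) != c for nb in graph[node]):
            result = color(graph, colors, {**assignment, node: c})
            if result:
                return result
    return None                                 # dead end: backtrack

graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"], "D": ["C"]}
solution = color(graph, ["red", "green", "blue"])
```

Production constraint solvers add propagation and clever variable ordering, but the search tree they prune is the one this sketch explores naively.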
Also, many things that were once considered "A.I." are now mundane. For instance, the technology now used to create lexers and parsers for artificial languages was originally designed to model natural languages. And most of the features of LISP have made it into other programming languages such as Python, BASIC, Tcl, Java, Haskell, Forth, etc.
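That lexer technology is so mundane now that you can sketch one in a few lines of stdlib Python. The token names and the little arithmetic language below are made up; the regular-expression machinery underneath traces back to early formal models of language:

```python
# A tiny lexer for a made-up arithmetic language, using Python's
# stdlib `re` module and named groups -- one pattern per token type.
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),           # integer literals
    ("IDENT",  r"[A-Za-z_]\w*"),  # identifiers
    ("OP",     r"[+\-*/=]"),      # single-character operators
    ("SKIP",   r"\s+"),           # whitespace, discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    for m in MASTER.finditer(text):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

tokens = list(tokenize("x = 12 + y"))
# [("IDENT", "x"), ("OP", "="), ("NUMBER", "12"), ("OP", "+"), ("IDENT", "y")]
```

Tools like lex/flex generate essentially this from a token specification, which is the sense in which yesterday's A.I. research became today's compiler plumbing.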
Symbolic A.I. did not die out, but it did fracture into many different directions, so that it doesn't really seem like "one field" anymore.