The best thing about Prolog is how naturally search happens: you state facts and rules once, then query them in many different ways. The worst part, as the paper describes, is constantly having to curb that search so that Prolog runs efficiently for ordinary deterministic code.
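To make that concrete, here's a toy sketch (the relation and names are mine, not from the paper): one recursive definition, stated once, answering several different questions depending on which arguments you leave unbound.

    % parent/2 facts and one ancestor/2 rule, stated once.
    parent(tom, bob).
    parent(bob, ann).
    parent(bob, pat).

    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

    % The same predicate runs "in multiple directions":
    % ?- ancestor(tom, Who).   % enumerate tom's descendants: bob, ann, pat
    % ?- ancestor(Who, ann).   % enumerate ann's ancestors: bob, tom
    % ?- ancestor(tom, pat).   % a plain yes/no check: true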
Another missing piece that this paper picks up: in the real world we want to generate options in order of likelihood. The Alchemy project for AI incorporates probability this way. My understanding is that, unfortunately, it doesn't scale well.

http://alchemy.cs.washington.edu/
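To be clear, Alchemy itself is built on Markov logic networks, not anything this naive, but the "enumerate options best-first" idea looks roughly like this in plain Prolog (made-up predicate names and weights, purely illustrative):

    % Tag each alternative with a weight, then yield solutions
    % in decreasing order of weight.
    diagnosis(flu, 0.6).
    diagnosis(cold, 0.3).
    diagnosis(measles, 0.1).

    likely_diagnosis(D, P) :-
        findall(P0-D0, diagnosis(D0, P0), Pairs),
        keysort(Pairs, Ascending),       % ISO keysort/2: ascending by weight
        reverse(Ascending, Descending),  % highest weight first
        member(P-D, Descending).

    % ?- likely_diagnosis(D, P).
    % D = flu, P = 0.6 ;
    % D = cold, P = 0.3 ;
    % D = measles, P = 0.1.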
Shameless plug: I revisited Prolog recently and wrote Marelle, a tool for test-driven sysadmin. The language is clunky by today's standards, but I still haven't seen nicer syntax for writing non-determinism.

https://github.com/larsyencken/marelle
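Not Marelle's actual API, but the flavour of what I mean (hypothetical predicates): each clause is one alternative way to satisfy a goal, and backtracking tries them in order with no explicit control flow.

    % Each clause is one candidate; Prolog falls through them on failure.
    installed_via(Pkg, apt)  :- apt_has(Pkg).
    installed_via(Pkg, brew) :- brew_has(Pkg).
    installed_via(Pkg, pip)  :- pip_has(Pkg).

    % Stub facts so the sketch runs.
    apt_has(git).
    brew_has(wget).
    pip_has(requests).

    % ?- installed_via(wget, How).   % How = brew
    % ?- installed_via(Pkg, pip).    % Pkg = requests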
My feeling is that Prolog as a language hasn't aged as well as it could have, so learning it now is mainly worthwhile as a mental exercise. Something like Mercury shows more of the potential of logic programming, seating it properly alongside functional programming where it belongs.