Sterling & Shapiro's _The Art of Prolog_ is fantastic, if _Learn Prolog Now_ piques your interest. Clocksin's _Clause and Effect_ is sort of like "The Little Schemer" for Prolog (though not cutesy).
Also, if you learn Prolog, don't forget to learn the constraint programming extensions! SWI and GNU Prolog are free Prologs with constraint programming included. It makes Prolog much more practical.
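For a tiny taste of what that looks like (my example, using SWI-Prolog's library(clpfd); GNU Prolog's finite-domain solver has slightly different syntax):

:- use_module(library(clpfd)).

% #=/2 states an arithmetic constraint instead of evaluating immediately,
% so you can pile up equations and let the solver work them out:
% ?- X + Y #= 10, X - Y #= 4, [X,Y] ins 0..10, label([X,Y]).
% X = 7,
% Y = 3.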
You probably got the first (1986) edition. The second (1994) edition is much better, but more expensive. (But, there's a $49 copy on amazon. That's a steal! I paid $80ish, argh...)
They added quite a bit of material to the later chapters, and updated it to the newer, then de facto standard Prolog (which became an ISO standard in 1995). Useful Prolog implementations usually have non-ISO extensions for module systems, constraint programming, etc.
I have both. I keep the first edition at the office, and it's my lending copy.
No worries. I got the 1986 edition for $5 and liked it so much I got the newer for $80. Learning Prolog changed the way I think as much as learning Lisp or C.
Has anyone created a probabilistic language akin to Prolog? I feel like confidence interval propagation, with appropriate notions of covariance and error distributions, could be really useful for hypothesis testing.
There's also TreeAge which is popular in health informatics (the name is a pun off of "triage") and, perhaps more generally, BUGS/JAGS which can be used to build and test arbitrary Bayesian network models.
Oh my god. I have dreamed of, and half-implemented badly, exactly this. Not quite what I was thinking of, but still quite useful for that class of problem.
There have been several systems that probabilistically extend Prolog or do something similar. I'm not aware of any that do those things specifically, though. Here's a list of some that I made a while ago: http://anyall.org/blog/2009/12/list-of-probabilistic-model-m...
Specifically, I want to express a model in analytic terms:
x = 2y + z^2
provide a dataset of x, y, and z tuples with associated error distributions, and ask questions like "what is the three-sigma confidence interval for the model given this dataset?" and "what would tunable parameters a and b have to be to give the most consistent account?".
Ideally, it'd be able to take into account convolutions, as well.
_The Art of Prolog_'s chapter on meta-interpreters* includes an interpreter with uncertainty thresholds. It's about a quarter page of code, on page 318 of the first edition (what I have on hand). Extending it to support statistical significance rather than just a 0-1.0 certainty value shouldn't be too hard, and combining that with constraint programming would probably suffice. (There's a rough sketch of the idea after the footnote below.)
* Rather like chapter 4 of SICP, on writing metacircular evaluators, but for Prolog.
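Here's a minimal sketch in that spirit (my code, not the book's; the clause_cf/3 annotation and the min/multiply combination rule are assumptions I made up for illustration):

solve(true, 1) :- !.
solve((A, B), C) :-
    !,
    solve(A, C1),
    solve(B, C2),
    C is min(C1, C2).            % combine conjuncts pessimistically
solve(Goal, C) :-
    clause_cf(Goal, Body, CF),   % facts/rules annotated with a certainty factor
    solve(Body, C0),
    C is C0 * CF.

% a toy annotated knowledge base
clause_cf(likely_rain, (humid, overcast), 0.8).
clause_cf(humid, true, 0.9).
clause_cf(overcast, true, 0.5).

% ?- solve(likely_rain, C).
% C = 0.4.

Thresholding (pruning a branch once its certainty drops below some cutoff) is an easy extra argument on solve/2.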
I've always felt Prolog was a massively underrated language. I've been going on a bit of a retro fest looking through languages I dabbled with as a child. I think the only thing I liked more as a child than MicroProlog was possibly Forth.
Agreed -- back in the day, Prolog and Forth both exposed me to very different ways of thinking.
I actually wrote my Master's project on transistor sizing in Prolog. A fascinating experience, although at the time it was horrible for numeric work. I wound up hacking the interpreter to include a couple of optimizations, otherwise I'd probably still be waiting for it to finish :-)
Oh wow. I wish I could've done my projects in Prolog. I failed a unit at college on sorting algorithms because I submitted my work in Prolog instead of QBasic. My lecturer didn't know Prolog, and I was too young and stupid not to get into an argument over it. Just as well, as I was used to 8-bit Micro-Prolog (this one, in fact: http://www.worldofspectrum.org/infoseekid.cgi?id=0008429 - you can use it in a browser if you're feeling masochistic!) and had no idea about the differences between LPA and other implementations!
My master's thesis involved using Prolog as a query language to analyze source code. The implementation of Prolog I used was written in Smalltalk and allowed putting Smalltalk code right in a Prolog query to, for example, perform heavy computation faster.
We had to write a Pascal interpreter in Prolog - that was an experience. I strongly suggest that everybody try Prolog - then you will understand why it is a good idea to use declarative languages as much as possible when building a product (including SQL).
Perhaps it is just me, but does anyone else think Prolog is really just a nicely done language that revolves around an if-then-else condition? It's basically pattern matching... But consider me a troll and argue your stand.
Unification is far more powerful than pattern matching, which (among other things) supports working with partial information - you can pass around a list of cons cells where the cdr is an unbound variable, and then bind it to another cons cell with an unbound cdr to get O(1) appending, for example. (These are called difference lists.) Same can be done with trees and other, more complex data structures. In effect, rather than fully immutable or fully mutable variables, you get variables that can only be set once, and then semantically "always were" that value. (but possibly undone on backtracking)
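A minimal sketch of that trick (append_dl/3 is my name for it, not a built-in):

% a difference list is a pair List-Hole, where Hole is the still-unbound tail
append_dl(A-B, B-C, A-C).    % O(1) append: unify the first hole with the second list

% ?- X = [1,2,3|T]-T, Y = [4,5|U]-U, append_dl(X, Y, Z).
% Z = [1, 2, 3, 4, 5|U]-U.   (other bindings for X, Y, T elided)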
Backtracking means that pattern-matching a variable isn't just pass/fail, but potentially a generator (AKA "iterator", etc.) for all matching values. Sometimes (ok, often) the ensuing combinatorial explosion keeps it from scaling efficiently to real problems, but constraint programming compensates, pruning off a LOT of the space of potential solutions before searching. And, saying, "here's my problem, throw everything at it and figure it out" in very few lines of code is still great for prototyping.
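As a sketch of that "here's my problem, figure it out" style (my example, using SWI-Prolog's library(clpfd)): the classic SEND + MORE = MONEY cryptarithm fits in a handful of lines, with the constraints pruning the digit domains before label/1 backtracks through what's left.

:- use_module(library(clpfd)).

send_more_money([S,E,N,D,M,O,R,Y]) :-
    Vars = [S,E,N,D,M,O,R,Y],
    Vars ins 0..9,
    all_different(Vars),
    S #\= 0, M #\= 0,
    1000*S + 100*E + 10*N + D + 1000*M + 100*O + 10*R + E
        #= 10000*M + 1000*O + 100*N + 10*E + Y,
    label(Vars).    % label/1 is the generator; propagation has already pruned the domains

% ?- send_more_money(Ds).
% Ds = [9, 5, 6, 7, 1, 0, 8, 2].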
Comparing Prolog to Erlang makes the difference clearest, to me. Erlang doesn't do backtracking (because it would throw soft-real-time guarantees out the window!) or full unification (because passing around and subsequently binding unbound variables would be a form of non-local state). Erlang is still a great language IMHO, but it's a very different kind of language, because those two features are what make Prolog Prolog.
Prolog's pattern matching actually isn't much like an if-then-else construct at all, and thinking about it like one will get you into a lot of trouble once you move beyond very very trivial Prolog programs. While it resembles the pattern matching done in Haskell and ML-like languages at first glance, what's actually happening in Prolog is very different.
A look at a simple (although very inefficient) Fibonacci program will illustrate the difference. In Haskell, it'd look something like this: (and work like you might expect)
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
The obvious direct translation to Prolog (I'm using SWI Prolog[0]) is something like this:
/* fib(N,X): X is the Nth fibonacci number */
fib(0,0).
fib(1,1).
fib(N,X) :-
    N1 is N-1, N2 is N-2,
    fib(N1,A), fib(N2,B),
    X is A+B.
This, however, is going to cause problems if you actually try to run it, and the reason is that Prolog doesn't just do pattern matching: it attempts to unify the query term (the thing you give it) with the program terms (the things on the left in the program above), and instead of just using the first result that matches, it will (if you ask it to do so) return every possible result by doing a left-to-right depth-first search on the solution tree (using a process called SLD resolution[1]). The expected behavior of the above program is something like this:
?- fib(0,X).
X = 0 ;
false. /* there are no more possible matches */
What's actually going to happen though is this:
?- fib(0,X).
X = 0 ;
/* runs forever */
When you give it the query fib(0,X), it first unifies that with fib(0,0), and the first answer is what you expect: X = 0. The difference occurs when you ask it for another answer (which you do in the interpreter by typing a semicolon). What you want Prolog to say is that there are no other answers, because there's only one 0th Fibonacci number. What Prolog actually does though is it backtracks[1] and goes back up the resolution tree to see if there are any more program terms the query can be unified with. In this case there are: fib(0,X) can unify with fib(N,X) also. Prolog soon gets into negative numbers (and past the base cases), and so ends up running forever.
One possible corrected version is below:
fib(0,0).
fib(1,1).
fib(N,X) :-
    N > 1,
    N1 is N-1, N2 is N-2,
    fib(N1,A), fib(N2,B),
    X is A+B.
Here, we ensure that we only use the third clause when N is large enough that it doesn't match the first two, and so we get the output we'd expect.
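For example, in SWI-Prolog (my transcript):

?- fib(0,X).
X = 0 ;
false.

?- fib(10,X).
X = 55 ;
false.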
However, the key benefit of learning Prolog is that it teaches you how to express the logic of a program/system without describing its control flow. Basically, you describe what the program or system should accomplish (which rules govern it) without thinking about how these things will be implemented and executed (that just happens). This kind of thinking can be a very powerful tool when designing complex systems and products.