Learn Prolog now (learnprolognow.org)
86 points by _b8r0 on Dec 6, 2010 | 31 comments



Sterling & Shapiro's _The Art of Prolog_ is fantastic, if _Learn Prolog Now_ piques your interest. Clocksin's _Clause and Effect_ is sort of like "The Little Schemer" for Prolog (though not cutesy).

Also, if you learn Prolog, don't forget to learn the constraint programming extensions! SWI and GNU Prolog are free Prologs with constraint programming included. It makes Prolog much more practical.
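
For a taste, here's a minimal CLP(FD) sketch using SWI-Prolog's library(clpfd); the puzzle/2 predicate and its constraints are made up purely for illustration:

    :- use_module(library(clpfd)).

    % Find digits S and M with S + M = 10 and S > M,
    % stated as constraints rather than as an explicit search loop.
    puzzle(S, M) :-
        [S, M] ins 0..9,
        S + M #= 10,
        S #> M,
        label([S, M]).

    ?- puzzle(S, M).
    S = 6, M = 4 ;
    S = 7, M = 3 ;
    ...
The solver narrows the domains before enumerating rather than generating and testing every pair, and that kind of pruning is what makes Prolog practical on bigger problems.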


Thanks; I just bought a used copy of The Art of Prolog on Amazon ($13 with shipping).


You probably got the first (1986) edition. The second (1994) edition is much better, but more expensive. (But there's a $49 copy on Amazon. That's a steal! I paid $80ish, argh...)

They added quite a bit of material to the later chapters, and it was updated to the newer, then de facto standard Prolog (which became an ISO standard in 1995). Useful Prolog implementations usually have non-ISO extensions for module systems, constraint programming, etc.

I have both. I keep the first edition at the office, and it's my lending copy.


Uh...yes, you're right, this is the 1986 edition.

Oh, well. Saving $30 on a language I'm not likely to use is probably a Good Thing.


No worries. I got the 1986 edition for $5 and liked it so much I got the newer for $80. Learning Prolog changed the way I think as much as learning Lisp or C.


Has anyone created a probabilistic language akin to Prolog? I feel like confidence interval propagation, with appropriate notions of covariance and error distributions, could be really useful for hypothesis testing.


I believe what you are looking for is Fril:

http://en.wikipedia.org/wiki/Fril

It is a Lisp-y Prolog that supports fuzzy relations (confidence intervals for predicates).

It is not bad if fuzzy logic is your cup of tea.

Here is the reference:

http://www.enm.bris.ac.uk/ai/martin/FrilManual/index.html

Code example (from the reference doc of the goal-set (qs) predicate):

   ((test a)) : (0.5 0.8) 
   ((test b)) : (0.7 0.9) 
   qs ((test X) (test Y)) 
   ((test a) (test a)) : (0.25 0.64) 
   ((test a) (test b)) : (0.35 0.72) 
   ((test b) (test a)) : (0.35 0.72) 
   ((test b) (test b)) : (0.49 0.81) 
   no (more) solutions 
   yes
EDIT: more examples and explanation by its author, Trevor Martin from Bristol Univ.

ftp://ftp.cs.cmu.edu/user/ai/areas/fuzzy/com/fril/fril.txt


I believe the term you are looking for is Statistical Relational Learning.

One formalism for this is Markov Logic.

http://www.cs.washington.edu/homes/pedrod/papers/pilp.pdf

Alchemy is an implementation of Markov Logic.

http://alchemy.cs.washington.edu/

Both of these are from Pedro Domingos, who does a lot of research in this area.


There's PRISM, which is used for analyzing probabilistic systems: http://www.prismmodelchecker.org/


There's also TreeAge which is popular in health informatics (the name is a pun off of "triage") and, perhaps more generally, BUGS/JAGS which can be used to build and test arbitrary Bayesian network models.


Oh my god. I have dreamed of, and half-implemented badly, exactly this. Not quite what I was thinking of, but still quite useful for that class of problem.

You should submit this as an HN post too. Thanks!


There have been several systems that probabilistically extend Prolog or do something similar. I'm not aware of any that do those things specifically, though. Here's a list of some that I made a while ago: http://anyall.org/blog/2009/12/list-of-probabilistic-model-m...


Can you elaborate? Do you mean something like the probability monad?


Specifically, I want to express a model in analytic terms:

x = 2y + z^2

provide a dataset of x, y, and z tuples with associated error distributions, and ask questions like "what is the three-sigma confidence interval for the model given this dataset" and "what would tunable parameters a and b have to be for the most consistent account".

Ideally, it'd be able to take into account convolutions, as well.


_The Art of Prolog_'s chapter on meta-interpreters* includes an interpreter with uncertainty thresholds. It's about a quarter-page of code, on page 318 in the first edition (the one I have on hand). Extending it to support statistical significance rather than just a 0-1.0 confidence value shouldn't be too hard. Combining that with constraint programming would probably suffice.

* Rather like chapter 4 of SICP, on building metacircular evaluators - except here you're writing Prolog interpreters in Prolog.
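
If you want the flavour without the book at hand, here's a rough sketch in that spirit (not the book's code; clause_cf/3 and the min-combination rule are just illustrative choices):

    % solve(Goal, Certainty): prove Goal while carrying a certainty factor along.
    solve(true, 1) :- !.
    solve((A, B), C) :- !,
        solve(A, CA),
        solve(B, CB),
        C is min(CA, CB).
    solve(Goal, C) :-
        clause_cf(Goal, Body, CF),   % clauses annotated with a certainty factor
        solve(Body, CB),
        C is min(CF, CB).

    % Hypothetical annotated clauses:
    clause_cf(bird(tweety), true, 0.9).
    clause_cf(flies(X), bird(X), 0.8).

    ?- solve(flies(tweety), C).
    C = 0.8.
The version in the book also uses a threshold, as described above, so it can abandon branches whose certainty drops too low.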


I've always felt Prolog was a massively underrated language. I've been going on a bit of a retro fest looking through languages I dabbled with as a child. I think the only thing I liked more as a child than MicroProlog was possibly Forth.


Agreed -- back in the day Prolog and Forth both exposed me to very different ways of thinking.

I actually wrote my Master's project on transistor sizing in Prolog. A fascinating experience, although at the time it was horrible for numeric work. I wound up hacking the interpreter to include a couple of optimizations; otherwise I'd probably still be waiting for it to finish :-)


Oh wow. I wish I could've done my projects in Prolog. I failed a unit at college on sorting algorithms because I submitted my work in Prolog instead of QBasic. My lecturer didn't know Prolog and I was too young and stupid not to get into an argument over it. Just as well, as I was used to 8-bit Micro-Prolog (this in fact: http://www.worldofspectrum.org/infoseekid.cgi?id=0008429 - you can use it in a browser if you're feeling masochistic!) and had no idea about the differences between LPA and other implementations!


My master's thesis involved using Prolog as a query language to analyze source code. The implementation of Prolog I used was written in Smalltalk and allowed putting Smalltalk code right in a Prolog query to, for example, perform heavy computation faster.


Actually, one of the best books on Prolog is the one written by my professor Ivan Bratko: http://www.amazon.com/Prolog-Programming-Artificial-Intellig...

We had to write a Pascal interpreter in Prolog - that was an experience. I strongly suggest that everybody try Prolog - then you will understand why it is a good idea to use declarative languages (SQL included) as much as possible when building a product.


Incidentally, for those who are interested, you can try the examples in a browser via the article linked from this HN post: http://news.ycombinator.com/item?id=1976397


If you've got time to spare, check out this video series on YouTube: http://www.youtube.com/watch?v=33S6qap_SIU It's about learning Kolmogorov complexity with Prolog.


The DCG functionality of Prolog is very nice. I used it to write a crude XML parser just days into learning about the language.

http://pastebin.com/sUpPd5jb
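
To give a flavour of what DCGs look like, here's a toy rule set (my own illustration, nothing to do with the pastebin above) that recognises a single <name>text</name> element in SWI-Prolog:

    element(element(Name, Text)) -->
        "<", tagname(NameCs), ">",
        text(TextCs),
        "</", tagname(NameCs), ">",
        { atom_codes(Name, NameCs), atom_codes(Text, TextCs) }.

    tagname([C|Cs]) --> [C], { code_type(C, alpha) }, tagname(Cs).
    tagname([C])    --> [C], { code_type(C, alpha) }.

    text([C|Cs]) --> [C], { C \== 0'< }, text(Cs).
    text([])     --> [].

    ?- atom_codes('<p>hi</p>', Cs), phrase(element(E), Cs).
    E = element(p, hi).   % (Cs binding elided)
The grammar reads almost like the format it parses, which is why DCGs are so pleasant for this kind of thing.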


Is there a framework where you can use Prolog as a web development language? Server and/or client-side.


Perhaps it is just me, but does anyone else think Prolog is really just a nicely done language that revolves around an if-then-else construct? It's basically pattern matching... But consider me a troll and argue your stand.


No, it's unification and backtracking.

Unification is far more powerful than pattern matching; among other things, it supports working with partial information - you can pass around a list of cons cells where the cdr is an unbound variable, and then bind it to another cons cell with an unbound cdr to get O(1) appending, for example. (These are called difference lists.) The same can be done with trees and other, more complex data structures. In effect, rather than fully immutable or fully mutable variables, you get variables that can only be set once, and then semantically "always were" that value (though possibly undone on backtracking).
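
A minimal sketch of the difference-list trick (my own toy example):

    % A list is passed around as Front-Hole, where Hole is its unbound tail.
    % "Appending" is then a single unification - no traversal, no copying.
    dl_append(A-B, B-C, A-C).

    ?- X = [1,2,3|T1]-T1, Y = [4,5|T2]-T2, dl_append(X, Y, Z).
    T1 = [4,5|T2],
    Z = [1,2,3,4,5|T2]-T2.   % (other bindings elided)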

Backtracking means that pattern-matching a variable isn't just pass/fail, but potentially a generator (AKA "iterator", etc.) for all matching values. Sometimes (ok, often) the ensuing combinatorial explosion keeps it from scaling efficiently to real problems, but constraint programming compensates, pruning off a LOT of the space of potential solutions before searching. And being able to say "here's my problem, throw everything at it and figure it out" in very few lines of code is still great for prototyping.

Comparing Prolog to Erlang makes the difference clearest, to me. Erlang doesn't do backtracking (because it throws soft-real-time guarantees out the window!) or full unification (because passing around and subsequently binding unbound variables would be a form of non-local state). Erlang is still a great language IMHO, but it's a very different kind of language, because those two features are what make Prolog Prolog.


Prolog's pattern matching actually isn't much like an if-then-else construct at all, and thinking about it like one will get you into a lot of trouble once you move beyond very very trivial Prolog programs. While it resembles the pattern matching done in Haskell and ML-like languages at first glance, what's actually happening in Prolog is very different.

A look at a simple (although very inefficient) Fibonacci program will illustrate the difference. In Haskell, it'd look something like this: (and work like you might expect)

    fib 0 = 0
    fib 1 = 1
    fib n = fib (n-1) + fib (n-2)
The obvious direct translation to Prolog (I'm using SWI Prolog[0]) is something like this:

    /* fib(N,X): X is the Nth fibonacci number */
    fib(0,0).
    fib(1,1).
    fib(N,X) :-
        N1 is N-1,N2 is N-2,
        fib(N1,A),fib(N2,B),
        X is A+B.
This, however, is going to cause problems if you actually try to run it. The reason is that Prolog doesn't just do pattern matching: it attempts to unify the query term (the thing you give it) with the program terms (the clause heads on the left in the program above), and instead of just using the first result that matches, it will (if you ask it to) return every possible result by doing a left-to-right depth-first search of the solution tree (a process called SLD resolution[1]). The expected behavior of the above program is something like this:

    ?- fib(0,X).
    X = 0 ;
    false. /* there are no more possible matches */
What's actually going to happen though is this:

    ?- fib(0,X).
    X = 0 ;
    /* runs forever */
When you give it the query fib(0,X), it first unifies that with fib(0,0), and the first answer is what you expect: X = 0. The difference occurs when you ask it for another answer (which you do in the interpreter by typing a semicolon). What you want Prolog to say is that there are no other answers, because there's only one 0th Fibonacci number. What Prolog actually does, though, is backtrack[2] and go back up the resolution tree to see if there are any more program terms the query can be unified with. In this case there are: fib(0,X) can also unify with fib(N,X). Prolog soon gets into negative numbers (and past the base cases), and so ends up running forever.

One possible corrected version is below:

    fib(0,0).
    fib(1,1).
    fib(N,X) :-
        N > 1,
        N1 is N-1,N2 is N-2,
        fib(N1,A),fib(N2,B),
        X is A+B.
Here, the N > 1 guard ensures that the third clause only contributes solutions when N is too large to match the first two base cases, and so we get the output we'd expect.
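
With the guard in place, queries terminate as you'd expect (a quick check in SWI-Prolog):

    ?- fib(10, X).
    X = 55 ;
    false.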

[0] http://www.swi-prolog.org/ [1] http://en.wikipedia.org/wiki/SLD_resolution [2] http://en.wikipedia.org/wiki/Backtracking


For small programs you are probably right.

However, the key to learning Prolog is that it teaches you how to express the logic of a program/system without describing its control flow - basically, to describe what the program or system should accomplish (which rules govern it) without thinking about how these things will be implemented and executed (that just happens). This kind of thinking can be a very powerful tool when designing complex systems and products.


it might be just you and a bunch of other people who don't understand prolog.

if statements are not horn clauses. if statements are procedural and specify execution order and don't pattern match.

prolog's execution is resolution, done using unification over logic variables.

as a simple example: member/2

    member(X,[X|_]).
    member(X,[_|T]) :- member(X,T).

if I query: member(1,[1,2,3]) this is true

if I query:

member(X,[1,2,3]), this is true, for X=1, X=2 and X=3

similarly you can ask append([1,2,3],[4,5,6],X) to get X=[1,2,3,4,5,6], and ask append([1,2,3],X,[1,2,3,4]) to get X=[4]
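
for reference, append/3 itself is just the usual two clauses, which is why it can be run "backwards" like that:

    append([], L, L).
    append([H|T], L, [H|R]) :- append(T, L, R).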

for example:

to produce all possible combinations of items from a series of lists:

    comb([],[]).
    comb([H|T],[X|Y]) :- member(X,H), comb(T,Y).

so if I do

comb([[a,b,c],[d,e,f],[g,h,i]],X) I get X = [a,d,g], then X = [a,d,h], and so on and so on

as you might see, it's a little more obvious that this isn't just a series of 'if statements' but a set of relations defined by clauses


I've been meaning to learn Prolog since I took an AI course that focused on theorem proving.

I can't help but notice that the webpage looks like one of the awful default Beamer presentation styles.


Yes.



