We had to use Prolog in my comparative languages class last semester and it was interesting, to say the least. As with any class that only brushes over a topic, I didn't get a full appreciation for what it can do, but it was an interesting paradigm to be exposed to if nothing else.
I didn't feel like I was doing "real programming", which likely says more about what we're taught than about the merits of the language. I'd be interested to know how it's being leveraged presently. I know a lot of the standard use cases, but is anyone solving really unique problems with it?
One of my college professors has a language called Dyna that's heavily influenced by Prolog and designed to solve NLP problems (I think because many NLP parsing problems can be solved by dynamic programming approaches that logic programs can easily express and optimize for).
I actually think the real reason we don't see more Prolog or Prolog-like languages is not that they are bad approaches, but that it's relatively easy to make your own half-baked constraint-satisfaction backtracking solver in a language like Python and then write rules for it to evaluate, so smart people just write it that way rather than busting out Prolog.
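For the flavor of it, here's what such a half-baked solver might look like. This is a hedged sketch: the graph-coloring problem, the rule format, and all names are invented for illustration, not taken from any real library.

```python
def solve(variables, domains, constraints, assignment=None):
    """Naive backtracking search over finite domains.

    `constraints` is a list of (vars, predicate) pairs; a predicate is
    checked as soon as all of its variables have been assigned."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        consistent = all(pred(*(assignment[v] for v in vs))
                         for vs, pred in constraints
                         if all(v in assignment for v in vs))
        if consistent:
            result = solve(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]  # backtrack
    return None

# Toy "rules": color three mutually adjacent regions with three colors.
variables = ["a", "b", "c"]
domains = {v: ["red", "green", "blue"] for v in variables}
different = lambda x, y: x != y
constraints = [(("a", "b"), different),
               (("b", "c"), different),
               (("a", "c"), different)]
print(solve(variables, domains, constraints))
# {'a': 'red', 'b': 'green', 'c': 'blue'}
```

Twenty-odd lines gets you the backtracking core; what it lacks compared to a real Prolog (unification, clause indexing, cut) is exactly the part people rarely bother to rebuild.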
And backtracking is so useful when doing parsing in general. For an NLP class I did, we made a shift-reduce parser with dotted items for an assignment, and my liberally commented Prolog source weighs in at 33 lines of code. My friend's Lisp code, which only found one parse (but did make parse trees, to be fair), was several hundred lines if memory serves.
The corollary to Greenspun's tenth rule in action, I guess. =)
I'll share an idea for a logic programming project that I've been brewing for quite some time (but never started working on)...
As you may know, the engine behind Prolog is based on the unification algorithm (finding substitution values for variables that make two patterns equal). The same algorithm is behind Hindley-Milner-style type inference algorithms.
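For readers who haven't seen it, the core of syntactic unification fits on a screenful. A minimal Python sketch follows; the "?"-prefix convention for variables and tuples for compound terms are just representation choices for this illustration, and it omits the occurs check, as most Prologs do by default.

```python
def is_var(t):
    """Variables are strings starting with '?', e.g. '?X'."""
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow variable bindings to their current value."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(x, y, subst=None):
    """Find a substitution making x and y equal, or None on failure.
    Compound terms are tuples like ('add', '?X', 2)."""
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return {**subst, x: y}
    if is_var(y):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None

# Unify add(?X, 2) with add(1, ?Y):
print(unify(("add", "?X", 2), ("add", 1, "?Y")))  # {'?X': 1, '?Y': 2}
```

The same handful of cases (variable on either side, matching structure, failure) is what sits under both a Prolog engine's clause resolution and an H-M type checker's constraint solving.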
I have been considering writing a compiler for a programming language that would do type inference indirectly, in two steps. In the first step (a typical syntax tree walk), rather than doing the actual unification directly, a Prolog-style logic program would be emitted. In the second step, the logic program is evaluated, which yields the types for the program.
The advantage of this approach would be allowing function overloading (similarly to C++ or Java) informally, without interfaces or type classes. This means that expressions can have ambiguous types during inference, something which is quite hairy to support in a classic H-M-based type inference algorithm. If the final output still has ambiguous types, that is a compile error in the source program.
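As a toy illustration of that final check (everything here, the overload table and function names included, is hypothetical, and real resolution would run over the evaluated logic program rather than a flat table), resolving a call against candidate signatures and rejecting residual ambiguity might look like:

```python
# Hypothetical overload table: function name -> [(param types, result type)].
OVERLOADS = {
    "add": [(("int", "int"), "int"),
            (("float", "float"), "float")],
}

def result_type(fn, arg_types):
    """Return the unique result type for a call, treating '?' as a
    still-unknown argument type. Zero or multiple surviving candidates
    mirrors 'ambiguous types in the final output are a compile error'."""
    matches = [ret for params, ret in OVERLOADS[fn]
               if len(arg_types) == len(params)
               and all(a == p or a == "?" for a, p in zip(arg_types, params))]
    if len(matches) != 1:
        raise TypeError(f"{fn}({', '.join(arg_types)}) is ambiguous or unresolvable")
    return matches[0]

print(result_type("add", ["int", "int"]))    # int
print(result_type("add", ["float", "?"]))    # float: only one overload fits
# result_type("add", ["?", "?"]) would raise: both overloads still match.
```

The interesting part of the proposal is that the logic engine can keep both candidates alive through intermediate expressions and only fail if ambiguity survives to the end, which is exactly what a one-pass H-M unifier struggles with.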
This is just a crazy idea I've been brewing, but it would definitely be a nice use case for a Prolog-style logic programming language. In practice it probably wouldn't be Prolog itself, because implementing a similar language is so much fun, and writing the code to call out to an external Prolog interpreter is about as much trouble but a lot less fun.
Prolog certainly isn't your everyday programming tool, but in some cases it's a really good tool to have. And learning logic programming is one of those ventures that will expand your horizons.
> The advantage of this approach would be allowing function overloading (similarly to C++ or Java) informally without interfaces or type classes.
I might be misunderstanding something, but how does this allow function overloading more than ordinary let-polymorphism? Functions can already have a type like, say, ∀a.a→Int. It's also quite common to do type inference in two steps where the first pass creates type variables and a list of constraints. The set of constraints are then passed to a unification algorithm, see e.g. http://www.seas.upenn.edu/~cis552/lectures/stub/FunTypes.htm....
In a classic Hindley-Milner type inference algorithm, you can't have a function "add(int, int) -> int" and "add(float, float) -> float". E.g. in Haskell, you'd need a type class for this.
Modern functional programming languages use type inference algorithms more sophisticated and flexible than the original H-M algorithm. The link you gave as an example is Haskell with a handful of type-checker-related language extensions, something entirely different from the plain H-M algorithm.
There can be other ways of implementing a similar type checker; this was just a fun idea I came up with.
> Prolog does not have functions. It has functors/predicates.
I know.
I was talking about functions in the source language of the hypothetical compiler. The Prolog-style logic programs would be an internal data structure of that compiler.
It's easy to get confused when talking about compilers and interpreters because the source, target and host language concepts can get mixed up. Adding one or more intermediate languages to the mix doesn't help either. Either I was unclear or you didn't read my comment carefully enough :)
> I wonder if you could do this for something like Python
For something like Python, yes. For Python the language as it currently is, no, not to the degree I was talking about (i.e. fully statically typed to enable compiling to native code).
But clever type inference algorithms for dynamic languages are a field of active research. Because static type information is the key to compiling fast code, a lot of research goes into using type inference in fast JavaScript and other dynamic-language runtimes.
At best, programming in a language with a clever type inference algorithm is almost as flexible as programming in a dynamic language. I wish there were more work going into nicer type-inference-based static programming languages for the mainstream.
I was actually thinking more about edit time. IDEs already do a limited amount of inference on dynamic languages (e.g. PyCharm, and VS with JavaScript). I was wondering about the benefits of having an interactive Prolog model able to make inferences about your application as you write it.
I had a very similar experience. We went over Prolog for five weeks or so, and while I did thoroughly enjoy learning a whole new paradigm, it still felt like it was just a toy. All programs amounted to `YES` or `NO`, without any building of an interface or other components I'd spent so long learning about. I can definitely identify with the feeling that it wasn't "real" programming.
That being said, one use case which I did find interesting was that of graph theory and graph-based problems. I don't know much about graph theory beyond the basics, but simple things like tree traversal, the Shortest Path Problem, Dijkstra's algorithm, and other things revolving around nodes and edges seemed like a perfect fit for Prolog. You don't have to build a node object and write a bunch of code to create a tree and then a bunch more to traverse it. You just tell it some simple rules, give it a graph, and away you go. It can have incredibly low overhead, at least in terms of line count, for problems like these.
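As a flavor of how small that can be, here's a hedged Python sketch that mimics the two classic Prolog path clauses with generator-based backtracking; the graph and names are invented for illustration, and the `visited` set guards against cycles, which the pure clauses would loop on.

```python
def path(edges, start, goal, visited=None):
    """Mimics the Prolog clauses
        path(A, B, [A, B]) :- edge(A, B).
        path(A, C, [A | P]) :- edge(A, B), path(B, C, P).
    using Python generators to supply the backtracking."""
    if visited is None:
        visited = {start}
    for a, b in edges:
        if a == start and b not in visited:
            if b == goal:
                yield [start, goal]
            else:
                for rest in path(edges, b, goal, visited | {b}):
                    yield [start] + rest

# The "facts": a small directed graph.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
for p in path(edges, "a", "d"):
    print(p)  # ['a', 'b', 'c', 'd'] then ['a', 'c', 'd']
```

In Prolog the two clauses above plus a list of `edge/2` facts are the whole program; asking for the next solution is what a Python caller does by pulling the next value from the generator.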
I'm not sure whether or not these properties would make Prolog helpful in solving harder problems than these, but I suspect it would. It's just too simple not to be useful in some cases. Here are the rules, find something that satisfies them.
As an Automation Developer, I use Prolog (swi-prolog) to generate test cases for some of the systems I test. In particular, I use Prolog to model the high-level business rules. This would be an example of an "expert system".
So I've dabbled in Prolog and used it a few times, like many here. But yes, there are people doing significant and interesting work in Prolog. A friend of a friend who I followed when I was on Facebook uses Prolog quite a bit[1]. He's not an "all Prolog all the time" person. Rather, he's well acquainted with and likes Prolog and knows when it's a good fit for the problem at hand, or for pieces of the problem at hand. Search his site for prolog and you'll get a lot of interesting info.
Yep, the extent of any programming we did with it was writing a bunch of facts about family relationships and then asking questions about relationships. Basic stuff. It was interesting, but hardly ground breaking.
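That exercise maps directly onto relational queries. A hedged Python rendering of the classic toy (the facts and names are invented for illustration):

```python
# parent(Parent, Child) facts, as in the classic Prolog family exercise.
parents = [("tom", "bob"), ("tom", "liz"), ("bob", "ann"), ("bob", "pat")]

def grandparents(parents):
    """grandparent(G, C) :- parent(G, P), parent(P, C)."""
    return {(g, c) for g, p in parents for p2, c in parents if p == p2}

print(sorted(grandparents(parents)))  # [('tom', 'ann'), ('tom', 'pat')]
```

The Prolog version is one rule and four facts, and unlike the comprehension it also answers the inverse questions ("whose grandparent is tom?") for free.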
I started learning how to program in Prolog a few years ago. I found Prolog to be very difficult to understand, mostly because I did not understand what symbolic logic was. I then purchased a significant number of books on symbolic logic and started studying them. I don’t think it is possible to understand Prolog without first understanding how symbolic logic works.
My recommendation for someone who is interested in learning how to program in Prolog is to start by reading “Logic for Problem Solving” by Robert Kowalski:
I am currently about halfway through an earlier edition of this book, and it is the first book I have found that clearly explains the ideas upon which Prolog is based.
No and yes. I don't know of a "modern" logic programming language that is a standalone language (they might exist but are not too popular), but there are several embeddable logic programming languages. Clojure's core.logic is perhaps the most popular one, but there are others too. And Prolog itself can be embedded in several different languages.
And this makes all the sense in the world, Prolog isn't a very practical programming language for tasks that are not logic programming (or some variant thereof - like fuzzy logic). So it's nicer to have a logic programming language either as an embedded domain specific language (EDSL) or available as a library.
Implementing a Prolog-like language isn't too difficult (and it's very fun), so sometimes it might be more practical (or fun) to whip up a custom language for the purpose, perhaps with domain-specific variations applied.
And besides, Prolog is "modern" in the sense that there are actively maintained and used implementations available. Like Lisp, it has stood up well to the test of time (perhaps because Lambda calculus and Predicate logic are solid theoretical foundations to build on), and there are modern implementations available even if the language itself is decades old.
Parts of the "logic" component have been relegated to libraries: rule-based systems, constraint programming, databases (see Datomic and its Datalog-based query language).
The first time I read about Prolog, I thought to myself, "Man, this is what computing was supposed to do for us! Look at it answering questions, doing planning...holy cow."
And in the last 40 years? Maybe Erlang (a Prolog descendant), Ruby (maybe the best successor to Lisp/Smalltalk), or Elixir (an Erlang descendant that looks like Ruby) is as impressive. The rest is either hacky (C++, Javascript, Perl), attempts to clean up hackiness (Java, D, Go, Rust, Clojure), unoriginal (Python, C#), or just plain wanky (OCaml, Haskell)--and that's not counting ones that are beneath mention (PHP).
Don't get me wrong...there are some languages that are actually quite beautiful in their implementation or use (Lua, Clojure, a few others), but the "holy shit this is the future!" feeling I get from looking at Prolog, Erlang, or Ruby is something I haven't had in a while.
Instead, the future of practical programming languages is probably going to be yet another tepid iteration on Algol, with closures and memory safety and incorrectly-handled numerical safety and some lame concurrency story. Pattern matching if history is slightly kind to us, probably still no mandatory tail-call optimization. Sigh.
The problem of Prolog is basically the history of AI: curated logic has mostly failed to live up to expectations and given way to various machine learning techniques. The Prolog approach was blamed for the second AI winter; Japan bet big on it in its Fifth Generation computing project (and lost big on it too) [1].
However, the paradigm lives on and is quite successful in various rule-based business logic systems; it just isn't very appropriate for general computation.
Plenty of languages try to invent the future, but they are mostly research projects in the spirit of the original Prolog (which wasn't even implemented until it went into someone's dissertation). The languages that actually hit production are quite conservative in comparison, since they kind of have to be. Anything more radical, you might just write off as even wankier than OCaml and Haskell.
Go is the present, perhaps, not really the future.
Prolog never delivered on the promise of declarative programming, because its rigid, unintelligent depth-first search strategy makes it necessary to understand the execution model and keep it in mind when writing programs. So it's only mostly declarative. Prolog achieved some popularity because there are times when brute-force depth-first search is all you need, particularly if it can be done very fast; but once you bump against the limitations of that paradigm, Prolog is of little help.
True declarative programming is still an open research problem.
"My goal is to unify programmer experience (PX) with the user experience (UX).
This is an elusive goal. It has been pursued for many years, with many different hypotheses for what such a unification might entail and how it might be achieved. Related projects include Squeak Smalltalk, ToonTalk, LambdaMOO, Morphic, Croquet, Emacs, and HyperCard. A relatively successful effort to unify PX and UX was the Unix command line, where users would build short programs of process pipelines. But that has been marginalized by the development of GUI.
To me, the unification of PX and UX means that programming becomes a casual effort, such that users make it part of their normal workflow and don't even think about it as programming."
Weird, I read some of dmbarbour's articles from an RSS feed, but I had never read the basis for his Awelon project, a basis that has been an obsession of mine for quite a while. Thanks.
Idris and its siblings are quite futuristic. Prolog (and other logic programming, like kanren) was even weirder than that; it felt timeless, off the map. Seeing demos of embedded interpreters for inverted evaluation was unbelievable.
I think a charitable reading is 'excessively academic and not concerned with the real day-to-day aspects of programming', which is a common characterization of languages like Haskell and OCaml, albeit one I'd contest. (A less charitable reading, on the other hand, would likely invoke the commonly-cited Blub Paradox from http://www.paulgraham.com/avg.html )
So, that's not entirely it, though thank you for the opportunity of introspection.
The reason I say "wanky" is that there doesn't seem to be, in the popular rhetoric, a clear practical reason for their existence beyond "we need this language".
C? "Writing assembly sucks". C++? "Writing C for large projects sucks". Java/D/C#? "Oh god, C++ is worse than we thought". PHP? "We need to write web servers". Perl? "We need to munge strings". Ruby? "We need to be happy, and long-term exposure to Perl prevents that". COBOL? "We need to write a lot of business software". VB? "We need to write a lot of business software, and are on the MS stack". Fortran? "We need to write a lot of numerical code, and it has to be fast". Erlang? "We need to write distributed, fault-tolerant systems and all we have is C and Prolog". Elixir? "Erlang is awesome but hurts our eyes". APL? "We don't have computers yet, but we still need to write about them". Lisp? "We also don't have computers yet, but we could maybe implement our way there". Ada? "Wow, we can't trust C programs for anything". R? "We need to do statistics". Lua? "We need to be easily embeddable, and Brazilian". The same can be said about Smalltalk, SQL, Javascript, Eiffel, and many other languages. Even academic languages and systems (Scheme, ML, Coq, Simula, Io) were exploring something in particular.
Hell, at least Elm had the clear objective "We want to be Haskell, but in the browser".
Haskell? Haskell seems to be mainly "man, we should define a standard compilable functional language". Its syntax is nothing special if you've been exposed to other ML-descended languages (OCaml, F#), or a language (say, Erlang) which allows pattern matching. It's a compilable, statically-typed, lazily-evaluated functional language with immutable data...which is not super interesting if you've seen or dealt with other languages.
The bigger problem I have, honestly, is one of branding. The Scheme folks never pretend that theirs is a language that will save programming, merely an exercise in purity and peacefulness like a sort of digital taoism. C programmers, on the whole, never pretend that their language is anything more than the moral equivalent of moving a mountain with only a spoon and stern frown. Java and C# programmers don't seem to claim to be anything more than a sort of Stalinist human wave tackling the problems of enterprise. Even the Rubyists and Javascript programmers have a sort of cheerful guilty pleasure that they get paid as much as they do to have fun with computers and the Internet--and they never claim to be some sort of evolutionary step in computing.
Haskell programmers, though? Haskell folks? Every one of them that I've chatted with seems to be cut from basically the same sort of cloth: quite smart, rather academic, and pretty much useless for any sort of real production programming. Theirs does not appear to be a culture well-suited to product development, to software engineering, to simple explanation or acceptance that most products and business needs are best met with tools of expediency instead of beauty or purity.
Indeed, as outlined above, I find many Haskell people seem to be using a tool few care about to solve problems nobody has in ways that have been solved before, all the while viewing themselves as intellectually superior to the lesser "blub" programmers who are, you know, getting paid, and working on projects way more interesting than whatever the Haskell folks are posting on /g/ about.
Haskell predates all of Erlang, OCaml, and F# by a pretty large margin (by about a decade! [EDIT: for OCaml and F#; see child comment]) and uniquely among most of the languages mentioned is a lazy statically-typed functional language. There is a large qualitative difference between optional laziness and laziness-by-default. Haskell was originally intended as a teaching language, and in particular as a less-license-encumbered version of the language Miranda, which was a popular teaching language at the time. So the original reason the language needed to exist was, "We want a lazy, statically-typed language for research and teaching uses."
Is there a different rationale now? Well, there still aren't many lazy-by-default languages, and Haskell is a good way of showing that there are some big advantages as well as some big disadvantages to that approach. One consequence of laziness is a really hardcore commitment to functional programming—where OCaml can cheat and have implicitly stateful functions, Haskell has to use alternatives like monads or effect systems—which means that Haskell is a fascinating language for exploring and developing new programming concepts in the functional space. So maybe Haskell's one-line selling point is, "We need to deeply understand functional programming and the idioms it affords us." I personally think this is a worthwhile endeavour regardless of how much it pays.
Thank you for the (somewhat incorrect) history lesson, especially the bit about Miranda as a forerunner to Haskell.
The rest of my comments still stand--especially the bit about "functional programming for its own sake", which is exactly what I consider "wanky" in a language whose propagandists are unwilling to promote as merely being of academic interest.
So I've interacted with many of the leading Haskell people (SPJ, Wadler, among others) and I never really got that feeling from them. They were just super smart people with different fascinating ways of thinking (from my perspective). They never came off as very smug, in fact, they weren't really that interested in what you thought about Haskell; they were too busy using it to solve problems. I think in that way, Haskell continues to be the language that defies success, at least in the way that many would define it (mass adoption - Haskell seems to be super successful to those people, and I don't think I would disagree with them).
Haskell programmers do seem to work on more elegant solutions to problems that are easily solved less purely. This is not a useless activity, it can help in understanding those problems more deeply, but it is hard for an outsider to appreciate. I know I will never be a Haskeller, simply because I am interested in different problems.