Handbook of Constraint Programming (2006) [pdf] (essex.ac.uk)
68 points by espeed on Dec 26, 2015 | 11 comments



For anyone interested in playing around with constraint logic programming I can highly recommend ECLiPSe (not to be confused with the Eclipse IDE): http://eclipseclp.org/

If you're looking for ideas, classical examples are work scheduling (nurses), layout planning for production facilities, or things like sudoku (I also did a poker hand ranker once); a toy sketch of the idea is at the end of this comment.

Edit: they also have a good self-learning course on their site: http://4c.ucc.ie/~hsimonis/ELearning/index.htm
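
Here's a rough Python sketch of the kind of toy scheduling problem I mean (this uses the python-constraint package rather than ECLiPSe, and the nurses and shift rules are made up, so treat it as an illustration only):

    # Toy shift scheduling: three nurses, three shifts, everyone gets a
    # different shift, and nurse "a" can't take nights.
    # Requires: pip install python-constraint
    from constraint import Problem, AllDifferentConstraint

    problem = Problem()
    problem.addVariables(["a", "b", "c"], [0, 1, 2])  # 0=day, 1=evening, 2=night
    problem.addConstraint(AllDifferentConstraint())   # no two nurses share a shift
    problem.addConstraint(lambda a: a != 2, ["a"])    # nurse "a" never works nights

    print(problem.getSolutions())  # e.g. [{'a': 1, 'b': 2, 'c': 0}, ...]

ECLiPSe lets you state the same kind of thing in Prolog syntax with finite-domain variables and then hand it to its own search strategies, but the shape of the problem is the same.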


Constraint-based things to check out:

1. Sussman's talk "We Really Don't Know How to Compute" (http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...) and his work on the Propagator (https://github.com/ProjectMAC/propagators).

2. Gremlin Graph Traversal Machine (http://arxiv.org/pdf/1508.03843v1.pdf, http://www.datastax.com/dev/blog/the-benefits-of-the-gremlin...)

3. Clojure core.logic (https://github.com/clojure/core.logic) and core.match (https://github.com/clojure/core.match)

4. Yedalog: Exploring Knowledge at Scale (http://research.google.com/pubs/pub43462.html, https://www.youtube.com/watch?v=SP9zS43FRzQ)

5. Datomic Datalog (http://docs.datomic.com/query.html)


Constraint programming is extremely important, now more than ever. I don't want to fall back on the tired old explanation that says, "you program by declaratively describing the answer instead of procedurally describing how to compute it," both because it's not quite true and because there's a whole declarative-procedural continuum: FORTRAN is more declarative than assembly language, Lisp or Python is more declarative than FORTRAN, arguably ML is more declarative than Lisp or Python, Prolog is more declarative than ML, and SQL is more declarative than Prolog. And it varies by application area: probably, if you're describing a circuit layout, Verilog is more declarative than SQL, and MyHDL is more declarative than Verilog.

Unfortunately I don't have a good, compelling way to explain why I think constraint programming is so important. Here are a few attempts.

Constraint programming transforms a program that determines whether a possible solution is acceptable into a program to compute an acceptable solution.
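
To make that concrete, here's a tiny Python illustration (the names and numbers are invented): `acceptable` only checks a candidate, and a "solver" is anything that turns that same description into a search. The naive generate-and-test below is what real constraint solvers replace with propagation and clever search.

    def acceptable(x, y):
        # The "specification": just a yes/no check on a candidate solution.
        return x + y == 10 and x * y == 21 and x <= y

    # The crudest possible "solver": enumerate candidates and reuse the checker.
    solution = next((x, y) for x in range(11) for y in range(11) if acceptable(x, y))
    print(solution)  # (3, 7)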

Constraint programming allows you to write the specification of your program and then separately search for ways to fulfill that specification, either manually (by specifying search strategies/proof tactics) or automatically.

Constraint programming allows you to abstract away the question of which values in a subroutine are returned and which are provided as parameters. If you have only two such candidates, like the tired old °F↔°C example, this is a minor saving. If you have many parameters, it can be a major saving.
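
For example, here's the °F↔°C case written as a single relation using the Z3 solver's Python bindings (pip install z3-solver); neither variable is designated as the input, and the same relation runs in both directions:

    from z3 import Reals, Solver, sat

    f, c = Reals("f c")
    relation = f == c * 9 / 5 + 32  # one relation, no fixed direction

    s = Solver()
    s.add(relation, c == 100)       # given Celsius, solve for Fahrenheit
    if s.check() == sat:
        print(s.model()[f])         # 212

    s = Solver()
    s.add(relation, f == 32)        # given Fahrenheit, solve for Celsius
    if s.check() == sat:
        print(s.model()[c])         # 0

With two variables this saves you one extra function; with a dozen interrelated parameters it saves you from writing and maintaining every direction you might ever need.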

---

So much for why constraint programming would be important if we could do it. Now what's new about being able to do it, particularly since 2006?

I'm not a specialist in the area, so this may not be the best possible answer to the question.

However, it looks to me like miniKanren is one important development: it's a logic programming language far more powerful than Prolog, and cKanren extends it with constraint logic programming (CLP).

Also, SMT solvers ("satisfiability modulo theories"), which are capable of fairly general-purpose constraint solving, have gotten enormously more efficient since 2006. One of the fun things about exponential-time algorithms is that reducing the base of the exponentiation even by a little bit can produce enormous speedups in practice.
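
To give a feel for what "fairly general-purpose" means, here's a made-up three-constraint puzzle handed to Z3 through its Python bindings; the solver figures out how to handle the nonlinear integer arithmetic on its own:

    from z3 import Ints, Solver, sat

    x, y = Ints("x y")
    s = Solver()
    s.add(x > 1, y > 1, x * y == 391, x <= y)  # factor 391 over the integers
    if s.check() == sat:
        print(s.model())  # [x = 17, y = 23]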

Additionally, constraint solving shades over into optimization. In constraint solving, you have a bunch of absolute constraints, and you try to find a configuration that satisfies all of them. A configuration that barely fails even one of the constraints is no good. (Although this book talks a bit about "soft constraints", too.) In optimization, you have a "loss function" that you try to minimize (or, equivalently, a value function you try to maximize), so you're looking for a configuration with a good value of this function.

So if you turn each constraint into a Boolean variable that takes on 0 or 1 and subtract their product from 1, you've turned a constraint problem into an optimization problem. (Maybe you want to sigmoid out the 0/1 transition a bit in order to keep everything differentiable.) All this stuff we've been seeing about "deep learning" and "machine learning"? Those are optimization problems! You can use those techniques to solve constraint programming problems, and you can use CLP techniques to dramatically speed up optimization problems. This is in its infancy. A really cool example of solving inverse kinematics constraints with optimization (just using gradient descent and reverse-mode automatic differentiation) is http://www.mattkeeter.com/projects/constraints.
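
Here's a toy version of that constraints-to-optimization move in plain Python (everything in it is invented for illustration): two inequality constraints on one variable are softened with sigmoids, the loss is 1 minus their product, and plain gradient descent (using a finite-difference gradient here rather than the reverse-mode AD mentioned above) pushes the loss toward 0, i.e. toward a feasible point.

    import math

    def sigmoid(t):
        return 1.0 / (1.0 + math.exp(-t))

    def loss(x):
        c1 = sigmoid(10 * (x - 3))  # soft version of the constraint x >= 3
        c2 = sigmoid(10 * (7 - x))  # soft version of the constraint x <= 7
        return 1.0 - c1 * c2        # ~0 when both constraints are satisfied

    x, lr, eps = 2.0, 0.5, 1e-6
    for _ in range(1000):
        grad = (loss(x + eps) - loss(x - eps)) / (2 * eps)  # finite-difference gradient
        x -= lr * grad

    print(x, loss(x))  # x ends up inside [3, 7] and the loss is close to 0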

Finally, lots of the things we run on our computers — word processing, spreadsheets, HTML rendering, TCP/IP — really don't benefit from the immense increases in computation and memory that have happened since 2006. But constraint solving is NP-hard, and consequently it can eat up all the computrons and DIMMs you can throw at it.


> Constraint programming allows you to write the specification of your program

This captures the core idea from my brushes with various constraint based programming languages. There's nearly always a spec floating around somewhere, it's really a question of how expressive your constraint language is. Even programs in "higher level" languages can be seen as a (rather complex) set of constraints.


This is really cool, but it's also almost 10 years old... Is that a problem, or is it still up-to-date?


Does it matter? Even if it isn't the latest and greatest, it's still valuable for its original reason: To teach you about the basics of constraint programming. And now it's valuable for a new reason: To teach you about the context and evolutionary path which led to modern constraint programming.

Often, the latest and greatest is defined in terms of incremental changes from the body of generally accepted knowledge. If you just read the most recent stuff, you'll probably lack the requisite knowledge to understand it. This is true up until the point that the new stuff becomes generally accepted and widely known. At that point, somebody writes it up nicely, requiring minimal context, but by that time, it's no longer the latest and greatest!


> Does it matter? [...] Often, the latest and greatest is defined in terms of incremental changes from the body of generally accepted knowledge.

I don't know. In some fields that's true, and in some it's not (a book on the latest and greatest of web development from 2006 would be highly out of date). Constraint programming isn't a field I'm versed in, so I'm asking to find out whether or not it matters.


That's not an apt comparison. This book is about a general technique and problem domain, not a particular constraint programming language/framework/toolchain.


Constraint programming was a big topic in the 90s. It has kind of tapered off as one since then. There are legitimate questions about whether or not it can scale to general-purpose problems.


The link does not work for me.


Current link should work. Does on Android.



