I wanted to mention Z3, which is one of the most user-friendly yet advanced solvers I know. It's actually an SMT solver, so it can solve a few more things than only SAT problems.
Yeah, I've been using Z3 for a couple of projects and it is an absolute pleasure.
One of my projects involves implementing part of the JVM's execution in Z3 constraints, and we have been able to prove things about program equivalence this way, which is exciting.
Once the problems get big enough we will need to find clever ways of shaping Z3's search path, maybe by writing a theory, but for now it works for anything we have thrown at it.
Hard constraints are constraints that cannot be violated for the allocation to be valid (i.e.: to be a solution).
Soft constraints are constraints that incur some cost/penalty when violated.
Then the problem objective changes from "find an allocation that is a solution" (SAT) to "find the minimal cost solution (if one exists)" (Weighted SAT / Constraint Optimization).
For instance:
- SAT problem (a,b are binary):
Hard constraint: a = b
Solutions: a=0,b=0 and a=1,b=1
- Constraint optimization:
Hard constraint: a = b
Soft constraint: cost(a,b) = a+b
Optimal solution: a=0,b=0 with cost 0. The allocation a=1,b=1 is a solution because it fulfills the hard constraints, but it is not optimal because its cost (2) is larger than the cost of another solution (0).
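The example above can be checked mechanically. Here is a brute-force sketch in Python (a real MaxSAT/optimization solver avoids the enumeration, but the semantics are the same):

```python
from itertools import product

def solve_weighted(hard, soft, n_vars):
    """Brute-force weighted SAT: return the minimum-cost assignment
    satisfying all hard constraints, or None if none exists."""
    best = None
    for bits in product([0, 1], repeat=n_vars):
        if all(c(*bits) for c in hard):
            cost = sum(s(*bits) for s in soft)
            if best is None or cost < best[1]:
                best = (bits, cost)
    return best

# Hard constraint: a == b.  Soft constraint: cost(a, b) = a + b.
hard = [lambda a, b: a == b]
soft = [lambda a, b: a + b]
print(solve_weighted(hard, soft, 2))  # → ((0, 0), 0)
```

Both (0,0) and (1,1) satisfy the hard constraint, but only (0,0) minimizes the soft cost.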
A great related article is the introduction to PubGrub, the new solver algorithm for Dart's package manager. I especially liked the focus on better error messages, which can get pretty cryptic in other package managers.
Yup, I've had some great success getting solutions to real instances of NP-complete problems. The nice thing is that SAT solvers all have basically the same interface, and you can serialize the problems easily, so it is super easy to try lots of solvers out. It's fun to think about problems in terms of SAT: you can build huge logic networks with hundreds of thousands of clauses and solve them in seconds, solve integer optimizations with binary encodings... cool stuff. Unfortunately, it is super sensitive and hard to control, so if you want particular properties of your answer that are "nice to haves" but not required, they are hard to encode. Some problems will be solved instantly, some will never be solved with any amount of CPU time, so it is hard to have a solver in the loop of something that needs to always work unless your problems are really well constrained.
It sounds like you are trying to solve a partial MaxSAT problem. Partial MaxSAT uses two sets of clauses, hard and soft. The hard clauses must be satisfied, and the soft clauses are "nice to haves". The goal for the solver is to satisfy all the hard clauses and as many of the soft clauses as possible.
It's not perfect, but optimization problems are usually solved with SAT as follows: suppose you have some X that you want to maximize. You encode a constraint X >= 5 and see if there is a solution to that. If there is, you try X >= 6.
You continue until the solver returns unsat; the last satisfiable bound gives you the maximum.
In your case, X would be the number of "nice to have" constraints that are satisfied. It would be simpler to formulate these conditions in some SMT language, rather than in SAT directly.
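The loop described above, sketched with a brute-force stand-in for the SAT oracle (a real setup would call a solver incrementally instead of enumerating):

```python
from itertools import product

def sat(constraints, n):
    """Stand-in SAT oracle: return a satisfying assignment or None."""
    for bits in product([0, 1], repeat=n):
        if all(c(bits) for c in constraints):
            return bits
    return None

def maximize_true_vars(constraints, n):
    """Repeatedly ask 'is there a model with at least k true vars?',
    raising k until the query becomes unsatisfiable."""
    best, k = None, 0
    while True:
        model = sat(constraints + [lambda b, k=k: sum(b) >= k], n)
        if model is None:
            return best  # the previous bound was the maximum
        best, k = model, k + 1

# Toy instance: x0 XOR x1 must hold, x2 is free.
constraints = [lambda b: b[0] != b[1]]
print(maximize_true_vars(constraints, 3))  # → (0, 1, 1)
```

At most two of the three variables can be true here (x0 and x1 can never both be 1), which the loop discovers when the k = 3 query fails.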
>if you want particular properties of your answer that are "nice to haves" but not required, it is hard to encode
What I like about Reduced Ordered Binary Decision Diagrams (ROBDDs) is that they can also tell you how many solutions there are. So you first encode the mandatory requirements, and if there are plenty of solutions, you can try adding requirements for the next-most-desired properties, or give up on a certain property, keeping an eye on the number of solutions as you go.
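That workflow can be illustrated with brute-force model counting; an ROBDD library computes the same count symbolically, without enumerating assignments:

```python
from itertools import product

def count_models(constraints, n):
    """Number of assignments satisfying all constraints.
    (A ROBDD library computes this symbolically; this brute-force
    version is just to show the workflow.)"""
    return sum(all(c(bits) for c in constraints)
               for bits in product([0, 1], repeat=n))

mandatory = [lambda b: b[0] or b[1]]          # at least one of x0, x1
print(count_models(mandatory, 3))             # → 6

nice_to_have = mandatory + [lambda b: not b[2]]
print(count_models(nice_to_have, 3))          # → 3
# Still solutions left, so we can afford to keep this extra requirement.
```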
Definitely agree that they are criminally underused. I used to work for a large virtualization company, and used a SAT solver to walk an extremely complex dependency graph. We had originally been using a simple hill climbing algorithm and the SAT solver absolutely trounced it in terms of performance.
The biggest problem though, was when I would describe how we were using SAT to solve the graph, I generally would get extremely puzzled looks. Most other engineers have no exposure to them whatsoever.
Sometimes discrete graph theory problems can be relaxed into a continuous space where gradient descent can be used. If you want a concrete example of how this happens, take a look at the "Continuous Optimization" section in the "Thirty Years of Graph Matching in Pattern Recognition" paper (graph/subgraph isomorphism).
Maybe it's just me, but I feel the same way about MIP solvers as well. The amount of optimization that has gone into these solvers over decades, and the kind of problems they're solving, is staggering. Yet you never hear about them, since operations research is not as sexy as deep learning.
I experimented with MIP before, mainly for side projects, and I found it amazing for easily solving optimization problems. Reading this article, I felt the logic used for constructing the Sudoku solver is quite close to what I would have done with MIP: one constraint for one number per row, one constraint for the columns, one constraint for the 3x3 boxes, etc…
I am wondering if MIP and SAT relate to each other somehow?
You can solve SAT problems using linear programming, I believe. I don't know if the free SAT solvers would compare to a commercial LP solver like gurobi or cplex.
Integer programming, not linear programming. Linear programming (without the integrality constraint) is in P, so you cannot use it to solve general SAT problems, which are NP-complete (unless a major and highly surprising theoretical breakthrough is found).
The free SAT solvers are very good, and much better than commercial IP solvers at solving problems that are a natural fit for SAT. (Obviously, you can encode any IP as a SAT formula and vice versa, and the IP solvers are better at solving the problems where you actually have meaningful arithmetic.)
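The clause-to-inequality direction of that encoding is mechanical: each literal contributes x_i (positive) or (1 - x_i) (negated), and the sum must be at least 1. A sketch in Python (the dict-of-coefficients representation is just for illustration):

```python
from itertools import product

def clause_to_ineq(clause):
    """Encode a CNF clause as a 0-1 linear inequality coeffs·x >= rhs.
    `clause` is a list of ints: +i means x_i, -i means NOT x_i.
    (x1 OR NOT x2) becomes x1 + (1 - x2) >= 1, i.e. x1 - x2 >= 0."""
    coeffs, rhs = {}, 1
    for lit in clause:
        v = abs(lit)
        if lit > 0:
            coeffs[v] = coeffs.get(v, 0) + 1
        else:
            coeffs[v] = coeffs.get(v, 0) - 1
            rhs -= 1  # each negated literal contributes a constant 1
    return coeffs, rhs

coeffs, rhs = clause_to_ineq([1, -2])
print(coeffs, rhs)  # → {1: 1, 2: -1} 0

# Sanity check: the inequality holds exactly when the clause is true.
for x1, x2 in product([0, 1], repeat=2):
    clause_true = (x1 == 1) or (x2 == 0)
    ineq_true = coeffs.get(1, 0) * x1 + coeffs.get(2, 0) * x2 >= rhs
    assert clause_true == ineq_true
```

This is why the integrality constraint matters: relaxing the x_i to real values in [0, 1] makes the inequalities satisfiable in ways that correspond to no Boolean assignment.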
I hadn't even heard of them before now. Mixed-Integer Programming Solvers is what I found looking up the term. What are the best free ones to check out? And anyone got good tutorials w/ examples? I'll start keeping an eye out for CompSci work on them if they're really useful.
We use SCIP in our system and it's brilliant. They have proper Python bindings, so the call overhead is low. We reduced what was previously a complex hand-optimised algorithm to a handful of equations and a weight function. Highly recommended.
I recommend learning linear programming first. It's simple and useful, and makes it easier to understand quadratic programming, convex optimization, etc.
I think music composers could benefit from composing software: i.e. they might specify different "facets" of a score, such as Parsons code (next note frequency higher than, lower than, or equal to the previous note) for melodic relationships, harmonic relationships between simultaneous notes, melodic relationships between a note in a variation of an earlier bar and the corresponding note in the same position of the earlier bar (i.e. the melodic relationship between the expected note and the actual, surprising one), etc...
I am writing a Z3 music composer right now, actually. Just as a toy project, but the idea is solid: you have a passage that you need to harmonize? Here are three different versions that conform to the rules of Fuxian counterpoint, etc.
Oh cool. Yeah, I looked around for "Music Composition Constraint Solving/SMT/Z3" stuff and didn't find anything, which was surprising. I'll take a look at Rosette too; I've heard of it a couple of times but never dug in.
I have thought about that. The outputs from Markov chains and adversarial networks all sound like crap.
Despite 15 generations of attempts to codify tonal harmony, there are relatively few hard rules, followed by an unending literature of exceptions.
If you're interested in cutting-edge research SAT solvers, have a look at the annual SAT competition: http://satcompetition.org/ - source code is available for all of them.
For those hearing about SAT solvers for the first time, it's worth mentioning that they are the basic component of hardware (and software) formal verification systems.
This is probably the largest practical application domain, which also drives most of the academic development.
>they are the basic component of hardware (and software) formal verification systems.
They are _a_ component. It's quite possible to perform formal verification without them, for instance the CompCert verified C compiler, or the DeepSpec end-to-end verification project: https://deepspec.org/main. SAT solvers generally have poor support for constructive logic, the kind of logic used in theorem provers based on dependent types like Coq. Systems based on classical logic like HOL integrate better with SAT solvers, but the lack of dependent types makes a lot of things much more cumbersome to prove, especially if one wants proofs that can be extracted to programs.
> SAT solvers generally have poor support for constructive logic, the kind of logic used in theorem provers based on dependent types like Coq.
Arguably, this is no longer the case. FStar [1] has dependent types, monadic effects, refinement types and a weakest precondition calculus (i.e. a richer type system than Coq) and uses the Z3 SMT solver to discharge many proof obligations. I've been using FStar recently and it works surprisingly well.
There are other methods, no doubt. But last time I was involved with the field, software verification was still an experimental thing, that kind of worked in some niche cases, while SAT based verification was applied to every new arithmetic hardware design. Not to disregard other methods, but I'd say SAT still qualifies as being _the_ component.
Agreed! I started verifying stuff with different tools as part of my PhD and it is really not that hard to grasp the basic concepts. It becomes more complex the deeper you delve in the internals, but I think it would be great if more people verified part of their code in addition to testing it.
Personally, I started with CBMC [1]: it is quite a staple in the field and their documentation is very clear. Basically, it translates a C program (decorated with assumptions and properties to verify) into a SAT formula: if the formula is satisfiable, a property can be violated (and you even get a program trace that shows exactly how that happens).
I work a lot with SAT solvers, but I prefer using them from a proper language: either CBMC for C (not KLEE, not Z3, as these require extensive translations into obscure languages), or Picat for a proper functional language with support for normal, non-functional coding styles.
I used that to build my initial fantasy soccer team, it wasn't too hard to pick up and the results were quite good. The team ended up performing very badly in real life, but that's not Clingo's fault.
You should look into integer linear programs. They are a much more useful "DSL" for discrete optimization. You get a lot of insight into flow problems from studying their LPs, for example, and it's very easy and efficient to solve flow problems with additional constraints using ILPs. Also, the state of the art for solving TSP and related hard path-finding problems uses techniques from integer linear programming.
OP complains that they are underused in industry, and goes on to show how to solve sudoku. That is not going to help me in my day to day job, can someone come up with a more pragmatic example where we could apply solvers?
As an example, the often-talked-about dependency management problem is also NP-complete and thus translates into SAT, and SAT can be translated back into a dependency management problem. The problem our group worked on, generating key and lock cuttings based on a user-provided lock-chart and manufacturer-specified geometry, is also NP-complete.
An example is something like scheduling school classes where you’re trying to minimise clashes in the timetable.
I haven’t thought too much about how you’d solve it but you’ve got a few things to work with. There are the different classes, students, teachers and spots in the timetable. That’s a real pig to brute force but the sort of work a solver is used for.
The Anaconda distribution's package/environment manager for Python (named conda) at one time used the picosat SAT solver for dependency resolution as well. I'm not certain they still do, they've removed blog entries etc. referring to it at some point.
Yes, it’s still picosat (wrapped in a python library called pycosat). At some point it may be switched out with an alternate SAT implementation, but I suspect SAT in general is very much destined to stay an integral part of conda.
Just because a problem is NP-hard does not mean that each instance is hard to solve. In my experience, many instances appearing in practice are easy to solve even if they are very large. The problem with these instances is that it takes too much space to write down the SAT formula, so you do not even get to the point where you would run the solver, while a simple problem-specific search-tree algorithm has no trouble solving them. Bottom line: if you really need to solve an NP-hard problem, you might want to look for fixed-parameter tractable algorithms.
Given 2 NP-hard problems and a transformation that translates (in polynomial time) any instance of NP-hard problem type 1 into an instance of NP-hard problem type 2, and given that type 1 has a known fixed-parameter tractable algorithm, can we "translate" the fixed parameter from type 1 into a corresponding parameter for type 2?
That's not the point (as I understand it). The point is that many instances of NP-Hard problems are, in truth, easily solved.
To give an accessible example, consider the problem of finding a factor of an integer. I know this isn't NP-Hard, but it's something most people can easily think about.
If you choose a random large integer, odds are it has a small factor. 50% of the time it's divisible by 2, and most numbers (in a very real sense) are divisible by 2, 3, 5, or 7.
So most of the time a randomly chosen number is easy to find a factor of. In a similar way, most real-world instances of NP-Hard problems are actually easy to find a solution for. The proportion of difficult instances shrinks as the instance size increases[0]. Specifically, if you have an instance of SAT generated from a real-world situation, the likelihood is that it's not actually that hard, so even if it's large, an off-the-shelf SAT solver might be able to dash off a solution.
So when confronted by an NP-Hard problem don't just instantly give up. Choose an algorithm that has a go, and there's a good chance you'll get a solution.
[0] There are formal, unproven conjectures about this.
Yes I understood this, but thanks for making sure anyway...
That still does not answer my question. I see and understand the utility of fixed parameter tractability.
This naturally made me wonder about the proofs that a certain problem type P' is NP-hard (by transforming problems from P' to a known NP-hard problem P and back): can these translations be used to deduce the relevant fixed parameter when one of the two problem types (P' or P) has a known fixed-parameter tractable algorithm...
I.e. you did not answer my question at all, you just reiterated basic complexity theory
OK, then I don't understand your question. I'm not an expert, so perhaps I'm not in a position to understand, let alone answer, but it's worth clarifying in case someone else can step in.
Suppose we have two problems, A and B. We have polynomial time transforms between them, so they are considered "equally difficult".
In reading and re-reading the thread it appears that you (pl) are talking about having a "fixed parameter tractable algorithm" for one of the problems, say problem A, and you (sng) are asking if the transform then provides an equivalent fixed parameter tractable algorithm for B.
If that's the case then I don't know enough to answer, but my feeling is "probably not in general".
If that's not what you're asking then you might like to be more specific.
In either case I suspect I can't add anything useful.
Your last comment indicates you do understand my question! so I upvote it :)
Not sure what you mean with pl and sng in "you (pl)" and "you (sng)"
Most fantastic would of course be a general method so that given as inputs the transforms between problem types, and the known fixed parameter, and fixed parameter tractable algorithm, gives as output the new fixed parameter, and a new fixed parameter tractable algorithm for the new fixed parameter.
I assume we do not have such a general method, so similarly interesting would be data points for constructing such a method: examples of deducing the fixed parameter for a problem domain, given the fixed parameter (and tractable algorithm) of a different problem type.
With enough examples perhaps a general method can be guessed.
> Your last comment indicates you do understand my question! so I upvote it :)
OK, cool, thanks!
> Not sure what you mean with pl and sng in "you (pl)" and "you (sng)"
"Plural" and "Singular" - in English I can't make it clear that in some cases I'm talking about you and previous speakers, and in other cases I'm talking only about you. Other languages make it easier to make the distinction, in English it must be inferred, or made explicit.
> Most fantastic would of course be a general method so that given as inputs the transforms between problem types, and the known fixed parameter, and fixed parameter tractable algorithm, gives as output the new fixed parameter, and a new fixed parameter tractable algorithm for the new fixed parameter.
I feel like that's a lot to ask, but we are beyond my expertise.
> I assume we do not have such a general method ... With enough examples perhaps a general method can be guessed.
I have no intuition about this at all, so I'll stand aside and let others with greater knowledge comment, if they're around.
Okay, let (P,k) be a parameterized problem which admits a fixed-parameter algorithm with respect to k.
That means you have an algorithm which runs in f(k)*n time, where n is the input size and f is a function depending only on k (for example (2^k)*n). Hence, if your real-world instances always have a small k, then you can solve arbitrarily large instances. Note that one problem can have a lot of different parameters, and not all of them admit such an algorithm (unless the world is very different from what we believe). Now let (P',h) be another parameterized problem for which you have a polynomial-time reduction from P' to P. Then this reduction gives you a fixed-parameter algorithm for P' with respect to h if the value of k in the reduction depends only on h and nothing else.
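A classic concrete example of an f(k)*n algorithm is Vertex Cover parameterized by the cover size k, solved with a bounded search tree; a sketch (the edge-list representation is just for illustration):

```python
def vertex_cover(edges, k):
    """Bounded search tree for k-Vertex-Cover: O(2^k * |edges|) time,
    i.e. fixed-parameter tractable in the cover size k.
    Returns a cover of size <= k as a set, or None if none exists."""
    if not edges:
        return set()
    if k == 0:
        return None  # edges remain but the budget is exhausted
    u, v = edges[0]
    # Any cover must contain u or v: branch on both (at most 2^k leaves).
    for pick in (u, v):
        rest = [e for e in edges if pick not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

# Triangle plus a pendant edge: needs a cover of size 2.
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(vertex_cover(edges, 1))  # → None
print(vertex_cover(edges, 2))  # → {1, 3}
```

The exponential part depends only on k, so instances with millions of edges are easy as long as the cover you're looking for stays small.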
There's no general answer to your question, it depends on the problems. You can find parameter-preserving reductions between some problems, but this isn't always the case.
Also, instead of looking at fixed parameter tractability, it often makes more sense to look at approximation algorithms (if your goal is to optimize something, rather than getting a strict Yes/No answer).
Allegedly Windows Update runs on one as well, which is said to be the reason why "update rollups" are needed to keep the system fast (and also the reason why Windows 7 updating has been extremely slow for years).
One industry where SAT solvers get used frequently is in Electronic Design Automation. One comment already mentions its use in hardware/software verification. It turns out we can also use them to find efficient representations of Boolean functions (and therefore electronic circuits). This also generalizes to the synthesis of quantum logic circuits. They truly are amazingly versatile tools.
Does anyone with good knowledge of the space want to chime in on the state of the open source ecosystem? In particular, what are the canonical libraries for Java, C/C++, python, go etc. ecosystems? Does everyone just use Minisat or Z3 bindings in their language?
What does this mean for NP-complete problems? Is the term “fast” only applicable to this small Sudoku board, or are solvers optimized to the point where something like O(2^256) doesn’t take that much time to run?
SAT solvers aren't a magic "asymptotic complexity shrinker" like that. They are more like really clever backtracking algorithms, cleverer than what you would usually write by hand.
The actual difficulty of an NP-complete problem depends on what the problem is, and what particular instance you are trying to solve. In a "constraint solving" setting, underconstrained instances are usually trivially solvable with a YES solution, overconstrained instances have an easy-to-find NO solution, and the instances that have just the right amount of constraints are the ones that are truly hard.
What a good SAT solver does is that it can solve some instances that a naive backtracking algorithm probably would not be able to do. Here are some examples of tricks that SAT solvers perform:
* non-chronological backtracking: when a search branch is deemed non-viable, instead of undoing just the last choice, the algorithm determines which choice was to "blame" for the lack of solutions and undoes back to that point instead.
* learning new constraints. When a search branch is deemed invalid, a new constraint is calculated based on the choices made on that search branch. This helps avoid the same sequence of choices from being repeated in a future search.
* aggressive randomization and restarts. The time distribution taken to solve a random instance of an NP-complete problem is fat-tailed. In these cases, an aggressive restart policy might speed things up even without other algorithmic improvements.
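For contrast, here is roughly the naive core those tricks improve on: DPLL with unit propagation, sketched in Python. CDCL solvers add clause learning, backjumping, and restarts on top of something like this:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL SAT solver. `clauses` is a list of clauses, each a
    list of ints: +v means variable v, -v means NOT v. Returns a
    satisfying assignment dict {var: bool} or None if unsatisfiable."""
    assignment = dict(assignment or {})
    # Unit propagation: repeatedly assign literals forced by unit clauses.
    while True:
        simplified, unit = [], None
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue  # clause already satisfied
            live = [l for l in clause if abs(l) not in assignment]
            if not live:
                return None  # conflict: clause falsified
            if len(live) == 1:
                unit = live[0]
            simplified.append(live)
        clauses = simplified
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0
    if not clauses:
        return assignment  # every clause satisfied
    # Branch on the first unassigned variable (chronological backtracking).
    v = abs(clauses[0][0])
    for val in (True, False):
        result = dpll(clauses, {**assignment, v: val})
        if result is not None:
            return result
    return None

# (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x3)
print(dpll([[1, 2], [-1, 3], [-3]]))  # prints a satisfying assignment
```

Even this toy version solves the example purely by propagation, without branching; the solver tricks above are what keep the branching tractable on hard industrial instances.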
It's impossible to create a solver that is efficient on any and all NP-complete problems. That applies to any solver.
Still, it is possible to create efficient solvers for some subset of the NP-complete problem space. They will work fine on some problem exhibiting the proper "shape" / internal structure. What that is is often hard to tell...
So in practice using SAT solvers (or SMT, or constraint, or MIP ones...) has some empirical side to it. It can work extremely well in some cases: it's possible to solve problems with 10s of millions of Boolean variables for example. It can also fail (time out). That depends a lot on the problem type and the solver heuristics. It's common in this area to try different solvers, and be happy if one gets a result in reasonable time. So there can be magic, but also some frustration.
As a practical example, SMT solvers are used for software formal proof. It's typical for those tools (like Frama C) to try several solvers. If any find a proof, you're good and the proof is for free (no human work needed). If none work, it's up to the engineer to reshape the problem to "help" the solver, or bite the bullet and deal with the proof (semi) manually with coq or a similar tool.
All this being said, modern solvers are really powerful. If it's a human scale problem solvable "by hand" with lots of work, a solver should deal with it very quickly.
Fast here means fast compared to a naive implementation. They use a lot of optimizations and heuristics.
What would be a O(2^256) problem? A boolean formula with 256 binary variables? In many cases those can be broken down to independent smaller formulas which are easier to solve, in which case a solver might be quite fast on it, yes. But something that provably does not reduce to a smaller complexity would still take forever, I guess. Corrections welcome.
Z3 has a Python interface. That said, SMT is an extension of SAT, and you're likely to use an SMT solver even for "just" SAT problems. Most SMT solvers support the SMT-LIB [1] file format, which is easy to generate from Python. It also lets you switch engines easily.
Also, there are other kinds of interesting solvers. For constraint problems, one can check minizinc [2]. A bit like SMT-LIB, it's a language to describe constraint problems that is supported by many solvers. It's commonly used with gecode [3]. For many problems using minizinc will be much easier than trying to convert the problem in SAT (or even SMT) format.
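Generating SMT-LIB from Python really is just string formatting; a toy sketch (the helper name and the QF_LIA logic choice are only for illustration):

```python
def smtlib_problem(constraints, variables):
    """Emit an SMT-LIB 2 script over integer variables.
    `constraints` are already-formatted s-expressions (strings)."""
    lines = ["(set-logic QF_LIA)"]
    lines += [f"(declare-const {v} Int)" for v in variables]
    lines += [f"(assert {c})" for c in constraints]
    lines += ["(check-sat)", "(get-model)"]
    return "\n".join(lines)

script = smtlib_problem(["(> x 0)", "(< (+ x y) 10)"], ["x", "y"])
print(script)
# Write `script` to problem.smt2 and feed it to any SMT-LIB-speaking
# solver, e.g.:  z3 problem.smt2
```

Because the format is solver-neutral, the same file can be handed to Z3, CVC, or others without changing the generator.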
In pySMT, we have been working on making the simplicity of the Z3 Python interface available for other solvers, both through SMT-LIB interfaces and through native API integration for SMT/SAT solvers and OBDDs.
In looking over whether the license would be amenable to distribution as part of a game, I noticed that your github page links to pysmt.org at the top, but that's only a redirect back to the github page :p
Eheh, we tried for some time to come up with a good webpage. At the end of the day, this is a library for developers, and as such we decided that the GitHub page is the most meaningful place to start. We put a lot of work into the main README, with revisions after each release, and we have a readthedocs site for detailed documentation. Nevertheless, the GitHub page also gives you other insights into the development status of the library (currently active, but not daily) and areas of targeted improvement.
Re the licensing: pySMT is open source and distributed under the Apache 2.0 license.
Actually, these SAT solvers and model checkers seem like a valid niche for blockchain and a TrueBit-style protocol: as an average user of these (i.e. you run them multiple times a day, but not so often that you have a full pipeline of scheduled problems to check), you could have them run on the problems of others (generating coin), and then, when you are done interpreting the results of a previous run and reformulating your own problem, you can use your coin to solve your problem quicker...