Symbolic execution is also known as abstract interpretation. The program is being interpreted, with concrete values abstracted away, generalized to symbolic elements that often denote several or even infinitely many concrete values.
Logic programming languages like Prolog are especially amenable to abstract interpretation, since we can take advantage of the Prolog system's built-in binding environment for variables, and simply plug in abstract values for the existing variables. We only have to reify unification, i.e., implement it within the language with suitable semantics.
An impressive application of this idea is contained in the paper Meta-circular Abstract Interpretation in Prolog by Michael Codish and Harald Søndergaard:
As one of the examples, the authors show how abstract interpretation over the abstract parity domain {zero, one, even, odd} can be used to derive non-trivial facts about the Ackermann function.
In particular, they deduce that Ackermann(i,j) is odd and greater than 1 for all i greater than 1, by taking a Prolog implementation of the Ackermann function, and interpreting it with these abstract domain elements instead of all concrete values (which would not terminate, since there are infinitely many concrete values). This is computed by fixpoint computation, determining the deductive closure of the relation.
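For a rough sense of what such a parity domain looks like in code, here is a tiny Haskell sketch (my own encoding, not the paper's; it assumes PEven and POdd denote the even and odd numbers other than 0 and 1):

```haskell
-- Hypothetical encoding of the parity domain {zero, one, even, odd}.
data Parity = PZero | POne | PEven | POdd
  deriving (Eq, Show)

-- Abstract successor: the only arithmetic Ackermann's definition uses.
-- Sound under the assumption that PEven/POdd exclude 0 and 1.
sucP :: Parity -> Parity
sucP PZero = POne   -- 0 + 1 = 1
sucP POne  = PEven  -- 1 + 1 = 2
sucP PEven = POdd   -- {2,4,..} + 1 = {3,5,..}
sucP POdd  = PEven  -- {3,5,..} + 1 = {4,6,..}

main :: IO ()
main = print (map sucP [PZero, POne, PEven, POdd])
```

Abstractly interpreting the Ackermann clauses over a domain like this, and closing under the abstract operations, is what the fixpoint computation mentioned above automates.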
> Symbolic execution is also known as abstract interpretation
These aren't the same thing! They do both use abstraction of concrete values, but abstract interpretation solves for fixpoints in a lattice of abstract values, which is a fairly different process from how symbolic execution works. The answers here go into more detail: https://cstheory.stackexchange.com/questions/19708/symbolic-...
I believe GP wanted to say that Symbolic Execution is a subset of Abstract Interpretation, because here the Prolog expert is wrong: SE is not strictly equal to AI (though I'd gladly believe he didn't actually mean to say that).
AI itself boils down to "build some lattices, construct a Galois connection between them, and add some abstract transfer functions on top" (and make sure that all the necessary correctness conditions hold; this is the difficult part). For SE you then choose the appropriate lattices and construct suitable transfer functions.
BTW: SE usually also needs a fixed-point iteration to produce meaningful results. At least when considering cyclic control flow graphs (loops & recursion).
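For concreteness, here is a minimal Haskell sketch of such a fixed-point iteration (my own, with made-up names): a Kleene-style loop that abstractly interprets the loop body x := x + 2 over a small parity lattice until the invariant stabilizes. Here PEven denotes all even numbers, including 0:

```haskell
-- A five-point lattice: Bot < PZero < PEven < Top, Bot < POdd < Top.
data P = Bot | PZero | PEven | POdd | Top
  deriving (Eq, Show)

-- Least upper bound; anything not explicitly related goes to Top.
join :: P -> P -> P
join Bot y = y
join x Bot = x
join x y | x == y = x
join PZero PEven = PEven   -- 0 is even, so PZero is below PEven
join PEven PZero = PEven
join _ _ = Top

-- Abstract transfer function for the loop body  x := x + 2.
step :: P -> P
step Bot   = Bot
step PZero = PEven
step PEven = PEven
step POdd  = POdd
step Top   = Top

-- Iterate  x := x `join` step x  until nothing changes.
lfp :: P -> P
lfp x = let x' = join x (step x)
        in if x' == x then x else lfp x'

main :: IO ()
main = print (lfp PZero)  -- invariant for a loop starting from x = 0
```

Starting from PZero, one iteration joins in PEven and a second confirms it is stable, so the loop invariant is "x is even".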
Co-author here. We're planning to turn this into a series, with additional posts on how symbolic execution works at a lower level, how to present counterexamples to users, and ideas for future programming tools built on symbolic execution.
One piece of feedback: I've often wished to work on static analysis tools, but too much of the work in this area is focused on Haskell or Lisp. (All of the code in this article is in Haskell or Lisp.)
Does any of this stuff work on Java, Python, or JavaScript? I want to improve the quality of real-world programs written in industry-standard languages.
This "gentle introduction" makes it seem like the first step of the process is "port your app to a homoiconic language," -- or, in other words, "you can prove the correctness of any language you want, as long as it's Haskell or Lisp."
That's static analysis, but not symbolic execution. Looking at the JavaScript section, you can't do any of the stuff described in the article with JSLint.
Abstract interpretation is a wider concept than symbolic execution; see user mjn's answer in the top thread for more info. The Facebook and Mozilla ones seem to focus on more mainstream languages.
Then we need the type of an interpreter which consumes program inputs and produces an output. I'll leave this abstract for now.
data Interpreter = ...
By map, we just mean that we can write a Haskell function with this type:
interpret :: Expr -> Interpreter
interpret (Addition a b) = (+) <$> interpret a <*> interpret b
interpret (Multiplication a b) = (*) <$> interpret a <*> interpret b
Well, I'm glad I asked. I get this now. It really is a mapping. But this was a bit of shorthand that I didn't get in the post. You might want to add that.
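For other readers who tripped over the same shorthand: the snippet type-checks as soon as Interpreter is some Applicative. Here is a runnable guess at the elided pieces (my own fillings-in, not the article's actual definitions), using Maybe Int as a stand-in Interpreter:

```haskell
data Expr = Lit Int
          | Addition Expr Expr
          | Multiplication Expr Expr

-- The article leaves Interpreter abstract; Maybe Int is just a
-- stand-in that happens to be an Applicative, so <$> / <*> work.
type Interpreter = Maybe Int

interpret :: Expr -> Interpreter
interpret (Lit n)              = pure n
interpret (Addition a b)       = (+) <$> interpret a <*> interpret b
interpret (Multiplication a b) = (*) <$> interpret a <*> interpret b

main :: IO ()
main = print (interpret (Addition (Lit 2) (Multiplication (Lit 3) (Lit 4))))
-- prints Just 14
```

Any other Applicative (a reader over an environment, a symbolic-value builder, etc.) slots into the same two clauses unchanged, which is presumably why the post leaves Interpreter abstract.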
Typically one would convert a binary executable into some other form, then analyze it to find all possible bugs. Of course one quickly slams into troubles like the halting problem, but it is still usually possible to gain useful understanding of the binary executable.
When a program has a free variable, we can consider an entire space of possible execution paths, one for each possible value the variable could take.
Or we could put on a greasy old pragmatic ball cap and issue an "unbound variable in line 42" error.
A free variable has no "possible value". If a variable has a value, it has a binding; if it has a binding, it is not a free variable.
If we suspect that the variable is not actually free, but rather its definition has not yet appeared, we can defer the processing. Perhaps capture a continuation of the symbolic execution machine, and dispatch it later.
If the expression which refers to the free variable is not actually evaluated (as our symbolic execution shows), we can squelch the warning, or change its wording.
The idea that a free variable is implicitly universally quantified works in logic, but it doesn't transfer to an ordinary programming language (i.e. not Prolog, and its ilk) very well.
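Whatever one thinks of the diagnostics question, the "one path per possible value" idea in the quoted line is mechanical enough to sketch. A toy Haskell version (all names mine) that forks on a free boolean variable and records the assumption taken along each path:

```haskell
data Expr = Lit Int
          | If String Expr Expr  -- branch on a free boolean variable

-- A path condition: the assumptions made about free variables so far.
type Path = [(String, Bool)]

-- Returns one (path condition, result) pair per feasible path.
run :: Expr -> Path -> [(Path, Int)]
run (Lit n)    path = [(path, n)]
run (If v t e) path =
  case lookup v path of
    Just True  -> run t path                 -- already assumed true
    Just False -> run e path                 -- already assumed false
    Nothing    -> run t ((v, True)  : path)  -- fork: assume true...
               ++ run e ((v, False) : path)  -- ...and assume false

main :: IO ()
main = print (run (If "x" (Lit 1) (Lit 2)) [])
-- [([("x",True)],1),([("x",False)],2)]
```

Note that the free variable never receives a binding here; the engine enumerates assumptions about it, which is a different thing from the interpreter-level "unbound variable" error.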
Thank you for this. However the labels 'syntax Haskell' and 'syntax LLVM' do not seem to display properly on Firefox. Wish you could find a way to fix them.