> Crudely their proof works as follows. First, the two scientists note that the up-or-down Ising spin resembles the true-or-false character of a logical statement such as "the car is white." They then prove that any particular 2D Ising model—i.e., with a particular set of coupling and external fields -- is equivalent to an instance of a logical problem called the satisfiability, or SAT, problem, in which the goal is to come up with a set of logical statements, A,B,C, … that satisfy a long logical formula such as "A and not (B or C) …" The theorists present a way to map the SAT problem onto the 2D Ising model.
> Next, they show how any other spin model can also be translated into a SAT problem. That SAT problem can then be translated onto the 2D Ising model, thus making the two spin models equivalent. There is a price to pay, however. The 2D Ising model must have more spins than the original spin model. But De las Cuevas says that the computational demands of the Ising model are only modestly bigger than those of the original model.
Note that SAT is NP-complete. The fact that they reduced from SAT to the Ising model suggests that the Ising model itself might be hard.
And indeed https://www.siam.org/pdf/news/654.pdf shows that the Ising ground-state problem is NP-complete. So this result looks like a case of "all NP-complete problems in this class can be reduced to another NP-complete problem", which sounds pretty unsurprising to me.
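To make the reduction direction concrete, here is a toy sketch (my own illustration, not the paper's construction) of how a single SAT clause becomes an energy penalty on spins: encode true as +1 and false as -1, and build a term that costs energy exactly when the clause is violated, so ground states of the penalty coincide with satisfying assignments.

```python
from itertools import product

def clause_energy(sA, sB):
    # Penalty for the clause (A OR B): costs 1 only when both spins are
    # "down" (false). Spin +1 encodes true, -1 encodes false.
    return (1 - sA) / 2 * (1 - sB) / 2

# Expanding the product gives an Ising form: 1/4 - sA/4 - sB/4 + sA*sB/4,
# i.e. local fields on A and B plus a two-spin coupling.
for sA, sB in product([+1, -1], repeat=2):
    satisfied = (sA == +1) or (sB == +1)
    assert (clause_energy(sA, sB) == 0) == satisfied
print("ground states of the penalty term = satisfying assignments")
```

Summing one such penalty per clause gives a spin Hamiltonian whose minimum energy counts the smallest number of violated clauses.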
I'd therefore need more context to figure out why the result is so shocking.
Alas, the article is behind a paywall, so it's hard to find out what the full context is, but there is at least one thing I can say.
Usually when a statistical physicist says that they have "solved" a model, it means that they have obtained an (asymptotically exact) formula for the full partition function or free energy as a function of the parameters, including temperature. At any finite temperature the full partition function typically includes contributions from all possible assignments of the spins (corresponding to all possible true/false assignments in the SAT problem), so that actually computing the partition function looks more like solving a weighted #SAT problem. It is this full problem that was solved by Ising in the general 1D case and by Onsager in a special class of 2D cases.

However, if one considers the limit in which the temperature goes to zero, the contributions of the terms in the partition function with energy larger than the ground state become negligible. In this limiting case it may turn out that the full partition function becomes a number that is a function of the smallest possible number of clauses violated in the formula. When no clauses are violated one is in the ground state and the free energy is zero, which gives the mapping to SAT.
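The zero-temperature point above can be checked by brute force on a tiny example (my own toy chain, not from the paper): as the inverse temperature β grows, the ground states' Boltzmann weight takes over the whole partition function.

```python
import numpy as np
from itertools import product

def energy(spins, J=1.0):
    # Nearest-neighbour 1D Ising chain, open boundary, no external field:
    # E = -J * sum_i s_i * s_{i+1}
    return -J * sum(a * b for a, b in zip(spins, spins[1:]))

N = 6
states = list(product([-1, +1], repeat=N))
energies = np.array([energy(s) for s in states])
E0 = energies.min()
g0 = int(np.sum(energies == E0))   # ground-state degeneracy (2: all up / all down)

for beta in [0.5, 2.0, 8.0]:
    Z = np.sum(np.exp(-beta * energies))
    # Fraction of Z carried by the ground states; tends to 1 as beta grows.
    print(beta, g0 * np.exp(-beta * E0) / Z)
```

At β = 8 the ground states already carry essentially all of Z, which is exactly the regime where the SAT-style ground-state picture applies.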
If this were the end of the story then it would seem, as you have pointed out, that there is nothing new about the result. However, it needs to be emphasized that this is an approximation, and one that you would only expect to be reasonable at low temperatures. It is typical that, in addition to the ground state, one needs to account for states near the ground state that give some contribution. There are different techniques (in the vein of Taylor expansions) to handle this in different situations, but AFAIK none of them works in every situation, and they certainly aren't universal in the sense suggested in the article. So if there is a proof that all Ising models in the low-energy regime are basically the same and can be solved with these general SAT-like techniques, that is a new result.
It seems that I wasn't wrong about the context (indeed they focus on solving the full partition function, as opposed to just the ground-state energy), but there is some additional information in the paper that is worth sharing.
The first is that their construction relies on a method where (1) the spins used to simulate a target Ising model are a subset of those taken in their universal model based on SAT, and (2) to make a correspondence between the energy levels in the universal model and the target model one must throw away terms in the universal model's partition function with energy larger than some Δ, so that the spectrum of the target model is a subset of the spectrum of the universal model. I think (1) is a neat trick, although maybe not too surprising if you are familiar with, e.g., hidden spin layers in neural network models. With respect to (2), they claim that this allows one to approximate the target model's partition function with an error term of size O(exp(-Δ)). I would assume this means that there can't be too many states in the full model with energy greater than Δ, since if they grew exponentially in number this could counteract the exponential damping of their contribution to the partition function. Maybe someone who actually read the paper from start to finish can comment?
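Point (2) is easy to illustrate numerically on a toy model (again my own chain, not the paper's universal model): truncate the spectrum at a cutoff Δ above the ground state and watch the relative error in Z shrink roughly like exp(-βΔ), as long as the density of states doesn't blow up.

```python
import numpy as np
from itertools import product

def energy(spins, J=1.0):
    # Open 1D Ising chain: E = -J * sum_i s_i * s_{i+1}
    return -J * sum(a * b for a, b in zip(spins, spins[1:]))

N = 10
energies = np.array([energy(s) for s in product([-1, +1], repeat=N)])
E0 = energies.min()
beta = 1.0
Z_full = np.sum(np.exp(-beta * energies))

for Delta in [2, 6, 10, 14]:
    kept = energies <= E0 + Delta   # keep only levels within Delta of the ground state
    Z_trunc = np.sum(np.exp(-beta * energies[kept]))
    # The neglected weight is bounded by (# dropped states) * exp(-beta*(E0+Delta)),
    # so the error falls off exponentially in Delta unless the number of
    # high-energy states grows fast enough to compensate.
    rel_err = (Z_full - Z_trunc) / Z_full
    print(Delta, rel_err)
```

For this chain the error drops by several orders of magnitude over that range of Δ, consistent with the O(exp(-Δ)) claim; a model with an exponentially growing density of states above the cutoff would behave worse.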
There is a strong connection between SAT, spin models, and inference in probabilistic graphical models: all can be seen as instances of Markov random fields.
They can all be attacked with the same algorithms (message passing / belief propagation and their relatives), which are very fast for most practical problems.
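As a small sketch of that correspondence (a toy example of mine, not from the article): a 3-spin Ising chain read as a pairwise Markov random field, where the sum-product (belief propagation) marginal is exact on a tree and matches brute-force enumeration.

```python
import numpy as np
from itertools import product

# A tiny pairwise Markov random field: a 3-spin Ising chain with a field on spin 1.
beta, h = 0.7, 0.3
psi = np.array([[np.exp(beta), np.exp(-beta)],
                [np.exp(-beta), np.exp(beta)]])   # pairwise potential exp(beta*s_i*s_j)
phi = np.array([np.exp(h), np.exp(-h)])           # local field on spin 1 only

# Brute-force marginal of the middle spin.
p = np.zeros(2)
for a, b, c in product([0, 1], repeat=3):
    p[b] += phi[a] * psi[a, b] * psi[b, c]
p /= p.sum()

# Sum-product messages; exact on a chain because it is a tree.
m12 = psi.T @ phi        # message from spin 1 into spin 2
m32 = psi @ np.ones(2)   # message from spin 3 into spin 2
belief = m12 * m32
belief /= belief.sum()

assert np.allclose(p, belief)
print(p)
```

On loopy graphs the same message-passing updates are only approximate, which is where the "fast for most practical problems" caveat comes in.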
In general, minimising a problem with the universal electron-correlation functional is known to be NP-hard due to this [1]. Which is kind of sad for density functional theory (DFT). I've probably misphrased that.
Also, side remark: Coding up a 2D Ising model solver with Metropolis-Hastings is surprisingly fun. Doubly so if you GPU accelerate it.
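For anyone who wants to try it, a minimal single-site Metropolis-Hastings sketch in NumPy (periodic boundaries, zero field, J = 1; lattice size, temperature and step count are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta, steps = 16, 0.6, 200_000     # beta above the critical value ~0.4407
spins = np.ones((L, L), dtype=int)    # start from the ordered (all-up) state

for _ in range(steps):
    i, j = rng.integers(0, L, size=2)
    # Energy change from flipping spin (i, j): dE = 2 * s_ij * (sum of 4 neighbours)
    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2 * spins[i, j] * nb
    # Metropolis rule: always accept downhill moves, uphill with prob exp(-beta*dE).
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        spins[i, j] *= -1

print("|magnetisation| per spin:", abs(spins.mean()))
```

Below the critical temperature the magnetisation stays near 1. The GPU-friendly version updates the two sublattices of a checkerboard in parallel, since sites on one sublattice are not neighbours of each other.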
I think I see where you're coming from, but in every case where this has been hypothesized so far (e.g. soap bubbles a few years ago) it has been shown to be false. Mostly, physical phenomena settle into purely local minima/maxima, which is generally useless for solving NP-complete problems. At least that's my intuition.
Hopfield was the first to poke at the Ising spin model to create memory-storing systems, creating his Hopfield model, where memories are stored as attractors in the energy landscape of the Hamiltonian. Add hidden units to that, and you get the general Boltzmann machine. Restrict the connections, and you get a restricted Boltzmann machine. Layer them up, and you get the 2006 Hinton and Salakhutdinov advance in deep learning.
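A minimal sketch of that first step (Hebbian storage plus energy-descending updates; the sizes, seed, and corruption level here are arbitrary):

```python
import numpy as np

# Toy Hopfield network: store two random patterns with the Hebbian rule, then
# recover one from a corrupted cue by descending the Ising-like energy
# E(s) = -1/2 * s^T W s.
rng = np.random.default_rng(1)
N = 64
patterns = rng.choice([-1, 1], size=(2, N))

W = (patterns.T @ patterns) / N   # Hebbian weights: sum_p x_p x_p^T / N
np.fill_diagonal(W, 0)            # no self-coupling

state = patterns[0].copy()
flipped = rng.choice(N, size=10, replace=False)
state[flipped] *= -1              # corrupt 10 of the 64 bits

# Asynchronous sign updates: each one can only lower the energy, so the
# dynamics falls into an attractor, ideally the stored pattern.
for _ in range(5):
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("overlap with stored pattern:", state @ patterns[0] / N)
```

The corrupted cue sits inside the basin of attraction of the stored pattern, so the overlap climbs back toward 1; pack in too many patterns (roughly beyond ~0.14N) and the attractors degrade.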