The problem with using a classical computer to solve an NP-hard problem is that as the problem size increases, in the worst case, as far as we know, the running time increases exponentially.
The problem with using a photonic computer to solve an NP-hard problem is that as the problem size increases, the amount of light at the output node drops exponentially. Even under the assumption that you can get rid of all background noise, that means you need to run your detector for longer so as to be able to detect the output photons, and…the running time increases exponentially.
Perhaps there are interesting quantum effects that could be exploited, but the authors specifically note that they aren’t doing that.
So this is a neat engineering project but an almost certainly irrelevant computational project.
Agreed. Moreover, the physical size of the device will grow exponentially with problem size, so manufacturing will be impractical. The device is small only for simple instances of the SSP. But for those instances you could simulate the device in a few lines of code on a classical computer :)
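For concreteness, "a few lines of code" really is all it takes; here's a minimal brute-force sketch in Python (the instance is a made-up toy):

```python
# Brute-force subset sum: try every subset, smallest first.
# This is the classical computation the device physically encodes.
from itertools import combinations

def subset_sum(numbers, target):
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([2, 5, 7, 11], 14))  # -> (2, 5, 7)
```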
In fact, this device is based on earlier work on a biological computer [1] which suffers from the same drawbacks [2].
Figure 3 does indeed show an exponential time complexity, but the base of the exponential is much smaller than for a classical computer. That is at least an asymptotic improvement.
That figure is extrapolated from the longest photon path length through the device. It isn't based on real experimental results (the device was only demonstrated at n = 4), and it doesn't account for the exponentially growing number of photons you'd need to pump through the device before getting a measurable result.
Fundamentally, there's nothing this device is doing that a classical computer couldn't, so there's no reason the exponential base should be any different under physically realizable conditions.
I don't know whether breakthroughs often start out as things that are almost certainly irrelevant, but the converse - that things which are almost certainly irrelevant tend to lead to breakthroughs - is certainly false. Things which are almost certainly irrelevant tend to end up being irrelevant.
You're probably right, although for a single word it's a powerful one: 'Sir, you're certainly going to die tomorrow' vs. 'Sir, you're almost certainly going to die tomorrow'...
Subset sum is only weakly NP-hard. It has a solution that is polynomial not in the size of the input but in the magnitude of the numbers (the classic dynamic-programming algorithm). Since numbers are encoded in logarithmically many bits, that algorithm is exponential in the size of the input.
So, solving subset sum is only impressive if you can solve it for very large numbers.
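For reference, a minimal sketch of that dynamic-programming algorithm (toy instance; it runs in O(n * target) time, which is exponential in the number of bits needed to write the target down):

```python
# Pseudo-polynomial DP for subset sum: track every sum reachable so far.
def subset_sum_dp(numbers, target):
    reachable = {0}  # sums achievable using some subset of the items seen so far
    for x in numbers:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(subset_sum_dp([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum_dp([3, 34, 4, 12, 5, 2], 30))  # False
```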
I actually have this problem in a work project: matching 1-n point-of-sale transactions, credit card payments, etc. to 1-n bank transactions for large companies. I.e., “these three credit card payments totaling 300 match this bank transfer for 290, two days later.” Can confirm it’s difficult and would love any creative solutions.
You're looking for a mixed integer programming solver. Although the problem is NP-hard, in practice, state of the art MIP solvers run much faster than brute force search in most non-pathological cases.
Mixed integer programming is a more specialized task than SMT solving, so an MIP implementation is likely to be more optimized for it, even if you could use an SMT solver on the same problem. Moreover, SMT solvers are focused on finding some solution to some constraints, whereas MIP solvers are focused on finding the best solution to the constraints according to some linear objective. Finally, an SMT solver is basically a SAT solver combined with a theory solver for reasoning about integers, etc. If the problem can be expressed just as a conjunction of constraints (i.e., it has no interesting propositional structure), then you don’t really need the SAT solver part and can just use a specialized theory solver (an optimizer, in this case) directly.
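As a rough illustration of what such an MIP model could look like for one slice of the matching problem: the sketch below assumes the PuLP Python library and made-up amounts, and picks the subset of card payments that best explains a single bank transfer while tolerating fee deductions.

```python
import pulp

# Toy data: which card payments add up to this bank transfer, minus fees?
payments = {"p1": 100.0, "p2": 120.0, "p3": 80.0, "p4": 55.0}
bank_amount = 297.0  # e.g. 100 + 120 + 80 with ~3 deducted in fees

prob = pulp.LpProblem("reconcile_one_transfer", pulp.LpMinimize)
pick = pulp.LpVariable.dicts("pick", list(payments), cat="Binary")
over = pulp.LpVariable("over", lowBound=0)    # picked total above the bank amount
under = pulp.LpVariable("under", lowBound=0)  # picked total below the bank amount

# Picked total minus bank amount splits into the over/under slack variables.
prob += pulp.lpSum(payments[p] * pick[p] for p in payments) - bank_amount == over - under
# Objective: minimize the absolute mismatch.
prob += over + under

prob.solve()
print([p for p in payments if pick[p].value() > 0.5], pulp.value(over + under))
# -> ['p1', 'p2', 'p3'] 3.0
```

A real model would have variables and constraints covering every candidate transfer at once, plus a penalty term for date distance, but the structure stays the same.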
Like the comment above yours says, there are algorithms for SSP that are polynomial in the target value of the sum. For values on the order of what someone would buy with a credit card, I think they should be practical.
It's for reconciling customers' items from point-of-sale, payroll, and other systems with bank transactions. Each client item has a date, description, and amount. Each bank transaction similarly has a date, description, and amount. The goal is to match all items with corresponding bank transactions. However, it is more complicated because:
* bank dates and item dates may not be the same. A sale may take a few days to clear the bank.
* amounts may not be the same. A credit card sale may be for 100, but when it clears the bank, the CC provider has deducted fees, so it's 98.
* quantities may not be the same. Ten sales may be deposited as one deposit. In some cases, multiple items may match to multiple transactions.
* there are rules about which matches are best. A sale may match ten transactions, but it can be important to match the right one, typically the one with the closest date.
As others have suggested, I can take shortcuts by ignoring some sets. But it is tricky and slow, especially for large sets.
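For what it's worth, the "ignoring some sets" shortcut can be made concrete as a pre-filter over item/transaction pairs before any exact matching (single item to single transaction here; the grouped-deposit case needs a looser amount check). The field names, tolerances, and data below are all made up for illustration:

```python
# Prune the pair space: only keep item/transaction pairs whose dates are
# within a window and whose amounts differ by no more than a fee tolerance.
from datetime import date

DATE_WINDOW_DAYS = 5
FEE_TOLERANCE = 0.05  # allow up to 5% deducted in fees

items = [
    {"id": "sale1", "date": date(2023, 3, 1), "amount": 100.0},
    {"id": "sale2", "date": date(2023, 3, 2), "amount": 200.0},
]
bank_txns = [
    {"id": "txn1", "date": date(2023, 3, 3), "amount": 98.0},
    {"id": "txn2", "date": date(2023, 3, 20), "amount": 200.0},
]

def plausible(item, txn):
    days_apart = abs((txn["date"] - item["date"]).days)
    lowest = item["amount"] * (1 - FEE_TOLERANCE)
    return days_apart <= DATE_WINDOW_DAYS and lowest <= txn["amount"] <= item["amount"]

candidates = {i["id"]: [t["id"] for t in bank_txns if plausible(i, t)] for i in items}
print(candidates)  # {'sale1': ['txn1'], 'sale2': []}
```

Only the pairs that survive this filter would then go into the exact (subset-sum or MIP) matching step, which keeps those sets small.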
"Waveguide networks in three-dimensional architecture were written by the femtosecond laser with a repetition rate of 1 MHz, a central wavelength of 513 nm, a pulse duration of 290 fs, and a pulse energy of 190 nJ. Before radiating into the borosilicate substrate at a depth of 170 μm, the laser beam was shaped by a cylindrical lens and then focused by a 100× objective with a numerical aperture of 0.7. During the fabrication, the translational stage moved in x, y, and z directions according to the user-defined program at a constant speed of 15 mm/s. The careful measurements and characterization on the geometric parameter dependence of the three types of junction, such as coupling length, coupling distance, decoupling distance, and curvature, were taken to optimize the performance to form the standard elements."
I have to say, this stuff is really cool. Maybe one day we’ll get good enough at shifting atoms around so that custom computing substrates can be built much faster and cheaper.