This sounds wonderful and frightening. On the one hand, now we can engineer drugs at light speed. On the other, wasn't protein folding supposed to be NP-hard?

Can deep learning find the cracks in P vs NP?

Perhaps it could make clever guesses at prime factors because it learned some weird structural fact that has eluded mathematicians.

If we break crypto, there goes the modern world. Banks, bitcoin, privacy, Internet, the whole shebang.

(I obviously am not an expert in computational complexity and hope that some domain experts can chime in and assuage my fears.)




Far from an expert on complexity theory, but NP-hard problems can be approximated in polynomial time. With deep learning you are doing approximation, so this is nothing groundbreaking in that respect.


That actually isn't totally true. Approximation algorithms, in the formal sense, come with a guarantee that they perform within some factor of the optimal solution. Not all NP-hard problems have polynomial-time approximations, and the methods shown here are likely not approximation algorithms in that sense, because they provide no guarantees on performance at all.
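
For contrast, a classic example of an approximation algorithm with a provable guarantee is the greedy matching-based method for minimum vertex cover (a toy Python sketch, unrelated to the paper):

    # Take both endpoints of any uncovered edge. The chosen edges form a
    # matching, and any vertex cover must contain at least one endpoint of
    # each of them, so the result is at most 2x the optimum.
    def vertex_cover_2approx(edges):
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    # Path graph 1-2-3-4: optimum cover is {2, 3}, this returns {1, 2, 3, 4}.
    print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))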


Yes, thank you for elaborating. I agree with you on both counts.


There are also a variety of problems that are provably hard to approximate.


I think I'm almost as uninformed as you, but I believe it comes down to the difference between perfect solutions and close-enough solutions. Consider the classic NP-hard travelling salesman problem.

"[Modern heuristic and approximation algorithms] can find solutions for extremely large problems (millions of cities) within a reasonable time which are with a high probability just 2–3% away from the optimal solution." [0]

When close enough is good enough, NP-hard problems can often be handled in polynomial time, and I suspect this is one of those cases. For crypto, however, close enough is not enough.

[0] https://en.wikipedia.org/wiki/Travelling_salesman_problem#He...
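
As a toy illustration of a polynomial-time TSP heuristic (a sketch in Python; the 2-3% figure above comes from far more sophisticated heuristics such as Lin-Kernighan):

    # Nearest-neighbour construction: O(n^2), no optimality guarantee,
    # but it produces a reasonable tour quickly.
    import math, random

    def nearest_neighbour_tour(points):
        unvisited = set(range(1, len(points)))
        tour = [0]
        while unvisited:
            last = points[tour[-1]]
            nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    cities = [(random.random(), random.random()) for _ in range(1000)]
    print(nearest_neighbour_tour(cities)[:10])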


> Can deep learning find the cracks in P vs NP?

No. It really is just heuristic building. A core problem with using ML in this sort of use case is that it is often brittle. Once it gets outside the context it was trained in, it may or may not be able to generalize its training to new contexts, and we may have difficulty knowing when it is very wrong.

I think ML in research science could be viewed as a very good intuitive oracle. Even if it is right 95% of the time, you still have to do the proof the long way every time, because that 5% matters. The real utility is in "scanning the field" to better focus research on things likely to bear fruit.


I think that this is a heuristic "near optimal" method rather than an exact analytic method (I have little to no idea what that would even mean in protein folding). A domain I do understand a bit that is NP-hard is the travelling salesman problem. Computing an exact solution is unrealistic, but heuristic searches that get you to 99% of the optimum 99% of the time are relatively doable.

But you don't know that you are within 1% of the optimum, even if you are pretty confident that you are. It's quite possible (though unlikely) that you are way off, but if you have a decent solution, that's OK.
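
The kind of heuristic search meant here can be as simple as 2-opt local improvement (a sketch; it only guarantees that no single segment reversal shortens the tour, not any distance from the optimum):

    # 2-opt local search: reverse a segment of the tour whenever doing so
    # shortens it; stop at a local optimum. No global guarantee.
    import math

    def tour_length(points, tour):
        return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def two_opt(points, tour):
        improved = True
        while improved:
            improved = False
            for i in range(1, len(tour) - 1):
                for j in range(i + 1, len(tour)):
                    candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                    if tour_length(points, candidate) < tour_length(points, tour):
                        tour, improved = candidate, True
        return tour

    cities = [(math.cos(k), math.sin(3 * k)) for k in range(30)]
    print(tour_length(cities, two_opt(cities, list(range(30)))))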


NP-hard doesn’t say how hard it is to solve concrete, finite instances. Even an exponential-time algorithm isn’t necessarily problematic in practice, if the base of the exponent and the constant factors are small enough, or if you throw enough hardware at it.
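
To put rough numbers on that (back-of-the-envelope only; the bases are arbitrary):

    # How the base of the exponential dominates everything else.
    import math
    for base in (1.001, 1.1, 2.0):
        for n in (100, 1000, 10000):
            print(f"base {base}, n = {n}: ~10^{n * math.log10(base):.0f} steps")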

This “uses approximately 128 TPUv3 cores (roughly equivalent to ~100-200 GPUs) run over a few weeks”. That is a moderate amount of hardware for this kind of work, so it seems they have a more efficient algorithm.

Also, this algorithm doesn’t solve protein folding in the mathematical sense; it ‘just’ produces good approximations.


> But wasn't protein folding supposed to be NP-hard?

Yeah, at least some variations of it are NP-hard. SAT is THE NP-complete problem, but there are some really good SAT solvers around. This basically means they have solvers that do very well on most practical instances. But because (probably) P != NP, you will never have a polynomial-time algorithm that handles every instance.
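
For a sense of what the worst case looks like, here is a brute-force SAT check (a sketch only; real solvers use CDCL and are vastly better on typical instances, but still exponential in the worst case):

    # CNF formula given as a list of clauses, each clause a list of non-zero
    # ints (negative = negated variable). Tries all 2^n assignments.
    from itertools import product

    def brute_force_sat(clauses, num_vars):
        for bits in product([False, True], repeat=num_vars):
            assignment = {i + 1: b for i, b in enumerate(bits)}
            if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return assignment
        return None

    # (x1 or not x2) and (x2 or x3)
    print(brute_force_sat([[1, -2], [2, 3]], 3))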


There is probably a team at DeepMind working on cracking simple crypto. The problem is that it can be difficult to cast the problem properly/"correctly". How does a one-way function get represented?
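
One naive framing would be to treat inversion as supervised learning over (digest, preimage) pairs (a purely hypothetical sketch; nothing suggests this would actually learn anything about SHA-256):

    # Generate training pairs that cast "invert the hash" as regression.
    import hashlib, os

    def make_example(nbytes=4):
        preimage = os.urandom(nbytes)
        digest = hashlib.sha256(preimage).digest()
        x = [b / 255.0 for b in digest]    # model input: the hash
        y = [b / 255.0 for b in preimage]  # target: the preimage
        return x, y

    dataset = [make_example() for _ in range(10000)]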



