
You don't have to solve the halting problem every time it presents itself. Otherwise, things like Valgrind and fuzzing wouldn't be valid at all. You just have to improve your odds.

EDIT: An important note to newbs: The Halting Problem is correct. However, a problem which maps to the halting problem can still be solved often enough in practice to make it worthwhile. In fact, entire industries have been born of heuristic solutions to such problems.
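To make the "improve your odds" point concrete, here is a minimal sketch (names and the step-budget approach are my own illustration, not anything from a specific tool): run a small-step computation under a step budget and report either "halted" or "unknown". This never decides non-termination, which is exactly the trade-off tools like fuzzers accept.

```python
def run_with_budget(step_fn, state, max_steps=10_000):
    """Heuristic halting check: drive a small-step interpreter for at most
    max_steps steps. Returns ('halted', final_state) or ('unknown', state).
    It never proves non-termination -- it only improves the odds."""
    for _ in range(max_steps):
        nxt = step_fn(state)
        if nxt is None:           # terminal state reached
            return ('halted', state)
        state = nxt
    return ('unknown', state)     # may loop forever, or may just need more steps

# Example: Collatz iteration, terminal when we reach 1.
step = lambda n: None if n == 1 else (n // 2 if n % 2 == 0 else 3 * n + 1)
print(run_with_budget(step, 27))  # ('halted', 1)
```

A "hard" instance just comes back as `('unknown', ...)` instead of hanging, which is a perfectly useful answer in practice.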



Note that I was responding to a post mentioning 'provably correct code', which is very different from opportunistic efforts to improve code.

Valgrind and fuzzing are useful tools, but there is no general method. The halting problem is semi-decidable, with enumerable inputs, which is why fuzzing can be useful with both random walks and constrained random walks. But it doesn't come close to generating 'provably correct code'.

The 'Post correspondence problem' is maybe an easier way to see how this applies at the compiler level.

Tools that help are opportunistic, but that doesn't change the undecidability of the general problem.

This crosses domains, but since it comes up often: considering the VC dimension of the problem also helps, as do pure math problems like the Collatz conjecture that reduce to the same halting problem.

Generating 'provably correct code' in the general case is simply not possible without major advances in math. There is room to improve, with many paths to do so, just not this particular one.


Note that I was responding to a post mentioning 'provably correct code', which is very different from opportunistic efforts to improve code.

Note that parsing a context free grammar maps to a stack machine. The Halting Problem uses a Turing Machine. I don't think it applies! (But if you do still want to show off your knowledge, please elucidate. I've forgotten about half of the automata theory I covered as an undergrad and in grad school.) For parsing tasks corresponding to computational models which are less complex, like regular expressions, the problem is well under the threshold of "semi-decidable, with enumerable inputs" in your words.
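To illustrate why the Halting Problem objection loses its bite for pushdown-strength parsing, here is a small sketch (my own example, not from the thread): a recognizer for the context-free language of balanced brackets, implemented as an explicit stack machine. It makes exactly one left-to-right pass over the input, so termination is guaranteed by construction.

```python
def balanced(s: str) -> bool:
    """Decide membership in the context-free language of balanced brackets
    using an explicit stack (a pushdown automaton). One O(n) pass over the
    input: termination is trivially guaranteed, unlike a general TM."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                 # accept iff the stack is empty

print(balanced('([]{})'))  # True
print(balanced('([)]'))    # False
```

Regular expressions sit even lower on the hierarchy: a DFA also decides membership in one pass, with no stack at all.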

Generating 'provably correct code' in the general case is simply not possible without major advances in math.

Note that I was responding specifically to the problem domain of parsing and compiler compilers, and that most parsing problems involve models of computation which are akin to a stack machine or less powerful.

People throwing out The Halting Problem as a blanket objection, without carefully considering the particulars/context, is one of my chief pet peeves.


The Halting Problem is typically invoked because it was one of the first problems proven undecidable, for the same reason P vs. NP discussions often use SAT.

C++ templates, Template Haskell, Lisp macros, etc... are all examples of metaprogramming facilities that would be considered TC, and thus subject to Rice's Theorem (which avoids going through the Halting Problem directly).

But last I saw, the code that was generated was the problem, not the primitives. As the languages are TC, a parser not being so doesn't remove the issue with this CVE.
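The generator-vs-generated distinction can be shown with a toy sketch (my own illustration; `exec` on a template string stands in for macros/templates). The "macro" below obviously terminates, but what it stamps out is an arbitrary Turing-complete program, so by Rice's Theorem no analyzer can decide nontrivial semantic properties of its output in general.

```python
# A trivial "macro": stamps out a function from a template. The generator
# itself plainly terminates -- but the *generated* code is a full program,
# so nontrivial semantic properties of it (e.g. "never loops forever")
# are undecidable in general, per Rice's Theorem.
TEMPLATE = """
def {name}(n):
    while {cond}:
        n = {step}
    return n
"""

def make_fn(name, cond, step):
    ns = {}
    exec(TEMPLATE.format(name=name, cond=cond, step=step), ns)
    return ns[name]

halts = make_fn('halts', 'n > 0', 'n - 1')   # terminates for any n >= 0
print(halts(5))  # 0
# make_fn('loops', 'True', 'n') expands just as happily -- the primitives
# are fine; the generated code is where the problem lives.
```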


But last I saw, the code that was generated was the problem, not the primitives.

Good! So, if what we want the compiler compiler to do is, say, just to produce an output isomorphic to the input, then this also reduces the complexity of what we're asking to do. I think this falls well inside what we could automate with some kind of guarantee of correctness.

A good mental habit for programmers is to constantly ask, "Can the stated problem be reduced in scope, such that we satisfy the goal?"



