Show HN: A reference implementation of Turing's paper “On Computable Numbers” (github.com/planetlambert)
63 points by jekude on Sept 20, 2023 | 14 comments
Hey HN!

I've been curious about the history of computer science and decided to try to read Turing's 1936 paper, in which he introduces the Turing machine, computable numbers, and the universal machine.

I had trouble understanding the paper, read The Annotated Turing by Charles Petzold (which is wonderful), but felt that reading a reference implementation would help formalize my understanding. When I couldn't find an open source implementation, I decided to write my own.

The implementation includes:

  - Abbreviated tables (m-functions)
  - Conversions to Standard Descriptions and Description Numbers
  - A working universal machine
  - A walkthrough of the paper section-by-section in the context of the codebase

I'm still working on sections 8-11 (the math/logic is a bit difficult for me) - if you understand these sections well enough to explain, I'd love to talk to you!
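
To give a flavor of the pieces listed above, here is a minimal, hypothetical Go sketch of the kind of transition-table loop at the core of a machine like this (the names and layout are illustrative only, not the repository's actual API):

  package main

  import "fmt"

  // One line of a machine table: for a (state, scanned symbol) pair,
  // the symbol to print, the direction to move, and the next state.
  type key struct {
      state string
      scan  byte
  }

  type rule struct {
      print byte
      move  int // -1 = left, +1 = right
      next  string
  }

  func main() {
      // In the spirit of Turing's first example: print 0s and 1s on alternate squares.
      table := map[key]rule{
          {"b", ' '}: {'0', +1, "c"},
          {"c", ' '}: {' ', +1, "e"},
          {"e", ' '}: {'1', +1, "f"},
          {"f", ' '}: {' ', +1, "b"},
      }
      tape := []byte("                ") // 16 blank squares
      state, head := "b", 0
      for head >= 0 && head < len(tape) {
          r, ok := table[key{state, tape[head]}]
          if !ok {
              break // no matching rule: halt
          }
          tape[head] = r.print
          head += r.move
          state = r.next
      }
      fmt.Printf("%q\n", tape) // 0s and 1s on alternate squares
  }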

In general I am interested in new mediums for learning source material (web annotations, walkthroughs, reference implementations, etc.), and was inspired by Karpathy's Zero to Hero course in particular for this project.




> The appendix is a great segue to Church's Lambda Calculus, which will probably be my next project.

Note that a straightforward universal machine for the lambda calculus can be orders of magnitude smaller than for Turing machines [1].

[1] https://gist.github.com/tromp/86b3184f852f65bfb814e3ab0987d8...


Hey John, huge fan, and thank you for the link! I've been struggling with which paper to use to implement the Lambda Calculus (I prefer original source material, because I feel that I learn a little more that way). I started with "An Unsolvable Problem of Elementary Number Theory" [1], but have now temporarily settled on Church's book "The Calculi of Lambda-Conversion" [2] which is a bit more explanatory and is less focused on the decision problem. Curious if you have a recommendation?

[1] https://www.ics.uci.edu/~lopes/teaching/inf212W12/readings/c...

[2] https://compcalc.github.io/public/church/church_calculi_1941...


In what language do you want to implement the lambda calculus? I think that while Church's writings are great background material, they do not make the best guides for implementation.


I have a secret goal to simulate a Turing Machine in the Lambda Calculus, and vice versa, so I was hoping to implement both in the same language so that interoperability would be easier.

I chose Go for the Turing Machines because I enjoy writing it, and planned to blindly use Go again for the Lambda Calculus for the reason above, but if you have a recommendation I'd love to hear it!


Go is similar to C in that neither supports closures in the form of lambda expressions with untyped arguments. For my own implementation of the lambda calculus in C, I chose to implement a so-called Krivine machine [1], which is one of the simplest abstract machines for the call-by-name lambda calculus.
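
For what it's worth, here is a rough, hypothetical Go sketch of that idea (a Krivine machine over de Bruijn-indexed terms; the type and function names are made up for illustration):

  package main

  import "fmt"

  type Term interface{}

  type Var struct{ Index int }     // de Bruijn index, 0-based
  type Lam struct{ Body Term }     // abstraction
  type App struct{ Fun, Arg Term } // application

  // A closure pairs a term with the environment it was built in.
  type Closure struct {
      T   Term
      Env []Closure
  }

  // Eval reduces a term to weak head normal form, call-by-name:
  // arguments are pushed unevaluated onto the stack as closures.
  func Eval(t Term, env, stack []Closure) Closure {
      for {
          switch x := t.(type) {
          case App: // push the argument, keep reducing the function
              stack = append(stack, Closure{x.Arg, env})
              t = x.Fun
          case Lam: // pop an argument into the environment, or stop
              if len(stack) == 0 {
                  return Closure{x, env}
              }
              arg := stack[len(stack)-1]
              stack = stack[:len(stack)-1]
              env = append([]Closure{arg}, env...)
              t = x.Body
          case Var: // jump to the closure bound at this index
              c := env[x.Index]
              t, env = c.T, c.Env
          default:
              panic("unknown term")
          }
      }
  }

  func main() {
      // (λx. x) (λy. y) reduces to λy. y
      id := Lam{Var{0}}
      result := Eval(App{id, id}, nil, nil)
      fmt.Printf("%#v\n", result.T) // main.Lam{Body:main.Var{Index:0}}
  }

The whole machine is just those three transitions, which is what makes it so pleasant to implement.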

Although I never wrote a Turing Machine interpreter in the lambda calculus, I did write one for its close cousin, the Brainfuck language [2].

[1] https://www.irif.fr/~krivine/articles/lazymach.pdf

[2] https://gist.github.com/tromp/86b3184f852f65bfb814e3ab0987d8...


Is the unary encoding of the indices in some way optimal, or could one compress the representations further using some variable-length binary encoding, maybe at the cost of some other desirable properties?


A unary encoding is certainly optimal in simplicity.

For encoding size, it would be optimal if index i occurs roughly with frequency 2^-i. In many lambda terms of practical interest, one does see higher indices occurring much less frequently, so it's not terribly far from optimal. Some compression is certainly possible; within n binding lambdas, index n could be encoded as 1^n instead of 1^n 0, but again that severely complicates the interpreter itself.
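
For concreteness, the index encoding being discussed (as in the binary lambda calculus: 00 for abstraction, 01 for application, and i ones followed by a zero for de Bruijn index i) is simple enough to sketch in a few lines of Go; this is just an illustration, not code from the gist:

  package main

  import (
      "fmt"
      "strings"
  )

  type Term interface{}

  type Var struct{ Index int } // de Bruijn index, 1-based
  type Lam struct{ Body Term }
  type App struct{ Fun, Arg Term }

  // encode emits the bit string for a term: 00 = abstraction,
  // 01 = application, 1^i 0 = variable with index i.
  func encode(t Term) string {
      switch x := t.(type) {
      case Lam:
          return "00" + encode(x.Body)
      case App:
          return "01" + encode(x.Fun) + encode(x.Arg)
      case Var:
          return strings.Repeat("1", x.Index) + "0"
      }
      return ""
  }

  func main() {
      // K = λx.λy.x, i.e. λ λ 2 with de Bruijn indices
      k := Lam{Lam{Var{2}}}
      fmt.Println(encode(k)) // 0000110
  }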


My first comment was worded stupidly, or at least I forgot to actually ask what I was really wondering.

I noticed that some of the program lengths ended up in expressions of lower and upper bounds. Also, lambda terms represented with De Bruijn indices are essentially lists of numbers, and a binary encoding could give an exponentially shorter representation than a unary encoding, at the price of some overhead when dealing with the binary numbers, which I thought might be a constant.

But admittedly I did not read the page too carefully and would probably need to refresh my knowledge to properly understand the details. The programs there are also mostly short, so a binary encoding would probably make them longer. And if it really mattered, then this is of course such an obvious thing to do that it would certainly not have been overlooked.


> I noticed that some of the program lengths ended up in expressions of lower and upper bounds.

Several of the most celebrated theorems in AIT (algorithmic information theory), such as Symmetry of Information, employ programs that need to interpret other programs, so the constants in these theorems suffer significantly from complicated index encodings.

A binary encoding is not as simple as you might think, as the code needs to be self-delimiting. So you first need to encode the number of bits used in the binary code. And that must also be self-delimiting. As you can see, you need to fall back to a unary encoding at some point, as that is naturally self-delimiting.
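
To illustrate the "length of the length" problem, here is a hypothetical Go sketch of one classic self-delimiting scheme (an Elias gamma style code, not the encoding used in the gist): the length of n's binary representation is written in unary as leading zeros, followed by the binary digits themselves, so a decoder always knows where the code ends.

  package main

  import (
      "fmt"
      "strconv"
      "strings"
  )

  // gamma encodes a positive integer self-delimitingly:
  // (length-1) zeros, then the binary representation of n.
  func gamma(n int) string {
      bin := strconv.FormatInt(int64(n), 2)        // e.g. 5 -> "101"
      return strings.Repeat("0", len(bin)-1) + bin // e.g. "00101"
  }

  func main() {
      for _, n := range []int{1, 2, 5, 100} {
          fmt.Printf("%3d -> %s\n", n, gamma(n))
      }
      // 1 -> 1
      // 2 -> 010
      // 5 -> 00101
      // 100 -> 0000001100100
  }

The run of leading zeros is exactly the unary fallback mentioned above.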

An example of an efficient binary self-delimiting code can be found at [1]. Note that the program for decoding it into a Church numeral (the very best case) is already about 60% the size of the entire self-interpreter.

Many programs used in AIT proofs use only single-digit indices (the self-interpreter, for example, only uses 1 through 5), and these would be negatively impacted by a binary encoding.

[1] https://gist.github.com/tromp/86b3184f852f65bfb814e3ab0987d8...


I had been fascinated with the Busy Beaver function for a few months, and since my wife has been away for a few days and it was raining at the weekend, I spent a couple of hours writing a BB TM simulator and running a few machines - mostly so I can gaze in wonder at what on Earth they are doing...

I'd love to see how far you have got - I had more of an interest in lambda calculus (and particularly combinators), but I have now developed a serious fascination with TMs. And yes, the CS course I did covered them - but that was a long time ago!


Note that the Busy Beaver function is even more simply defined for the lambda calculus, and since it's measured in the more natural unit of bits rather than states, more values can be computed [1].

[1] https://oeis.org/A333479


I embarrassingly don't actually know too much about Busy Beaver functions. If I have a free night this week I will play around and implement them!


Here is the YouTube video that I watched and that grabbed my interest:

https://www.youtube.com/watch?v=kmAc1nDizu0

Also the BB Challenge site - where I've been getting definitions of TMs to run:

https://bbchallenge.org

e.g.

1RB0LD_1RC0RF_1LC1LA_0LE1RZ_1LF0RB_0RC0RE

I've run that one, even though there is no hope of me ever seeing it terminate:

https://bbchallenge.org/1RB0LD_1RC0RF_1LC1LA_0LE1RZ_1LF0RB_0...
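
In case it helps anyone else playing along, here is a rough Go sketch of parsing that format (assuming the usual bbchallenge convention: for each state, a write/move/next triple for reading 0 and then for reading 1, states separated by underscores, with Z or "---" marking a halting or undefined transition):

  package main

  import (
      "fmt"
      "strings"
  )

  type trans struct {
      write byte // '0' or '1'
      move  int  // -1 for L, +1 for R
      next  int  // next state index, or -1 to halt
  }

  // parse turns "1RB0LD_..." into a table indexed by [state][scanned symbol].
  func parse(code string) [][2]trans {
      states := strings.Split(code, "_")
      table := make([][2]trans, len(states))
      for i, s := range states {
          for sym := 0; sym < 2; sym++ {
              t := s[3*sym : 3*sym+3] // e.g. "1RB"
              tr := trans{write: t[0], move: +1}
              if t[1] == 'L' {
                  tr.move = -1
              }
              if t[2] == 'Z' || t[0] == '-' {
                  tr.next = -1 // halting (or undefined) transition
              } else {
                  tr.next = int(t[2] - 'A')
              }
              table[i][sym] = tr
          }
      }
      return table
  }

  func main() {
      table := parse("1RB0LD_1RC0RF_1LC1LA_0LE1RZ_1LF0RB_0RC0RE")
      fmt.Println(len(table), "states") // 6 states
  }

From there the simulation itself is just the usual step loop over a tape that grows at both ends.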


As an aside, there are a number of Turing machines defined in Lean's mathlib. https://github.com/leanprover-community/mathlib4/blob/2c3ee3...



