Rust's type system is Turing-complete (2017) (sdleffler.github.io)
114 points by lukastyrychtr on March 15, 2021 | 87 comments



The article is fascinating and I love fun stuff like this, but the discussion is a bit off the rails. There are a lot of things that are accidentally Turing complete. Turing completeness is not synonymous with being complicated and unwieldy to use. I mean, okay, maybe if you are doing the stuff that the author is doing it is complicated and unwieldy - in fact, to the point that you would never do it in the first place. But Turing completeness is really not a high bar to reach.


To further push the point that many things are Turing complete and that it should not be seen as synonymous with complex, Gwern has a nice list of Surprisingly Turing complete things [0] that includes Peano Arithmetic, Pokemon Yellow and Magic the Gathering.

I personally find lists of programming languages that made the conscious decision not to be Turing complete (such as Coq, the theorem prover) more intriguing.

[0]: https://www.gwern.net/Turing-complete


> To further push the point that many things are Turing complete and that it should not be seen as synonymous with complex, Gwern has a nice list of Surprisingly Turing complete things [0] that includes Peano Arithmetic, Pokemon Yellow and Magic the Gathering.

If you know how to reach Turing completeness in those systems, you wouldn't call any of those "simple".

MtG for instance is considered one of the more complicated card games.

Pokemon Yellow's Turing completeness proof involved arbitrary code execution by swapping memory locations around.

Peano axioms / Arithmetic underlies an entire theory of abstract mathematics.

---------------

EDIT: I think decidability is an important attribute of systems that are weaker than Turing complete. There are still very powerful decidable problems (3SAT, most famously), which always have an answer (either satisfiable or unsatisfiable) but may need O(2^n) time (or longer) to compute it.

Turing-complete systems are only semi-decidable: if something halts, it will halt in finite time. If something won't halt, you'll never know.
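
To make "decidable but possibly exponential" concrete, here's a toy brute-force 3SAT decider in Rust (my own illustrative encoding, not from any library). It always terminates with an answer, but only after trying up to 2^n assignments in the worst case:

  // Positive i means variable i; negative means its negation (1-indexed).
  fn sat(n_vars: u32, clauses: &[[i32; 3]]) -> Option<u64> {
      'outer: for assignment in 0..(1u64 << n_vars) {
          for clause in clauses {
              let satisfied = clause.iter().any(|&lit| {
                  let var = lit.unsigned_abs() - 1;
                  let value = (assignment >> var) & 1 == 1;
                  if lit > 0 { value } else { !value }
              });
              if !satisfied {
                  continue 'outer; // this assignment fails; try the next one
              }
          }
          return Some(assignment); // found a satisfying assignment
      }
      None // all 2^n assignments exhausted: definitely unsatisfiable
  }

  fn main() {
      // (x1 OR x2 OR NOT x3) AND (NOT x1 OR x2 OR x3)
      println!("{:?}", sat(3, &[[1, 2, -3], [-1, 2, 3]]));
  }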


While I do agree for MtG and Pokemon Yellow (their main value as examples is that they are fun and unexpected), Peano arithmetic is simple.

It is fundamental, but still probably the simplest way to implement the natural numbers: a natural number is either 0 or one plus a natural number.
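
That one-line definition is also exactly what you can encode in Rust's type system, in the spirit of the article. A minimal sketch (the names and encoding here are mine, not the article's):

  use std::marker::PhantomData;

  struct Zero;
  struct Succ<N>(PhantomData<N>);

  // Type-level addition: Zero + M = M, and Succ<N> + M = Succ<N + M>.
  trait Add<M> { type Sum; }
  impl<M> Add<M> for Zero { type Sum = M; }
  impl<N: Add<M>, M> Add<M> for Succ<N> {
      type Sum = Succ<<N as Add<M>>::Sum>;
  }

  type One = Succ<Zero>;
  type Two = Succ<One>;
  type Three = Succ<Two>;

  // Compiles only because <One as Add<Two>>::Sum and Three are the same type.
  fn same<T>(_: PhantomData<T>, _: PhantomData<T>) {}
  fn main() {
      same(PhantomData::<<One as Add<Two>>::Sum>, PhantomData::<Three>);
  }

The "computation" of One + Two happens entirely during type checking.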


The general thing about complexity (be it 3SAT or Turing Completeness) is that the underlying rules are extremely simple.

The issue is understanding the implications of those rules. For something like 3SAT, all possible 3SAT puzzles can be solved (i.e. either satisfied with an example assignment, or refuted with a "certificate of unsatisfiability").

For Turing-complete systems, the "implications" of the rules you list are only semi-decidable.

Ex: Is there an odd perfect number (https://en.wikipedia.org/wiki/Perfect_number)? Well, all we gotta do is try all numbers. Once we find an odd perfect number, we'll know it's possible. But if we never find one, we remain in a state of uncertainty.

Though I guess a future mathematical proof may settle whether an odd perfect number exists or not. So I guess I'm back to the halting theorem as an example. For the halting theorem: you can "solve" any program by simply executing it and waiting for it to end. If it ends, you know it's a program that ends.

If it doesn't end... well... semi-decidable. You don't know that it never ends.
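
As a Rust sketch, the search described above is just an unbounded loop (deliberately naive; it halts only if an odd perfect number exists, which is exactly the semi-decidable situation):

  // n is perfect if it equals the sum of its proper divisors.
  fn is_perfect(n: u64) -> bool {
      (1..n).filter(|d| n % d == 0).sum::<u64>() == n
  }

  fn main() {
      let mut n: u64 = 1;
      loop {
          if is_perfect(n) {
              println!("odd perfect number: {}", n);
              return; // the "yes" answer arrives in finite time
          }
          n += 2; // check odd numbers only; may loop forever
      }
  }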

-------------

Anyway, if a hypothetical type system were "only as powerful as 3SAT", then all type checking under that system would be guaranteed to come to a solution eventually.

EDIT: And even weaker systems, such as Horn-clause logic (which is the basis for the Prolog programming language), can have further proofs of efficiency (not only will all queries come to a solution eventually, but we have strong proofs about the number of steps a computer will take to reach that solution).

These attributes are likely useful to elements of programming language design. Sure, we are making a Turing complete language (aka: machine code). But that doesn't mean that all sub-languages in the language need to also be Turing complete. Imagine if "printf" or "scanf" were Turing complete: it'd be a complexity nightmare!!


Yes, there is a big difference between how complex something can theoretically be, and how complex the typical use case is.

There are two main drawbacks to accidental Turing-completeness:

- You can no longer statically analyze the system for properties like termination, because you can't solve the halting problem (at least not without extra bounds).

- Some people will try to abuse Turing-completeness to solve their problem, leading to a jumbled mess. If this becomes an accepted way of using your system, it has basically failed.


You are correct; furthermore, having a Turing complete system does not forbid one from choosing an expressive subset of the system that isn't Turing complete.


That's true, but if you really don't need the computational power, you might want to have those well-defined, limited semantics, either to not accidentally shoot yourself in the foot (the least important), or to be able to statically verify its properties in finite time.


To me the main takeaway of results like this is: you're already paying the full price of using whatever complicated type system feature you care to name. So I have limited sympathy for people who are scared of the complexity of, say, higher-kinded types, or dependent types, or kind polymorphism. Your type system is already as complicated as possible, adding more features won't make it more Turing-complete - rather, make sure that you're getting a return on that complexity by ensuring that your type system has the features that you need to do useful things.


I strongly disagree. Just because a feature doesn’t increase the underlying theoretical complexity doesn’t mean it’s a good idea.

As a trivial example, C plus "come from" is fundamentally the same language as regular C, and one can trivially translate a program to remove come from. That doesn't mean that adding come from to C would be a good idea.


For anyone confused like me. [1]

Incidentally, Googling "C plus come from" only returns two relevant results, the parent comment and a manual for an esoteric programming language called C-INTERCAL that uses the COME FROM statement and compiles to C. [2]

1. https://en.wikipedia.org/wiki/COMEFROM

2. http://catb.org/~esr/intercal/ick.htm


COME FROM is also the foundation of concurrency in some versions of INTERCAL - if there are two COME FROM statements for the same origin, then when execution reaches that point, it forks.

Threaded INTERCAL actually has a very disciplined approach to safety. Each thread gets its own copy of all variables, so threads do not share any mutable data at all. However, they still share the same code, which is also mutable, so they can communicate through that.


> a very disciplined approach to safety [...] they still share the same code, which is also mutable, so they can communicate through that

Disciplined, maybe; usable/sane, certainly not. Then again, it's INTERCAL, so that was most certainly the intention.


The COMEFROM wiki article led me to an April Fools' joke adding goto/comefrom to Python[0], and... wow. It's really gross.

[0] http://entrian.com/goto/


Amazing


COMEFROM is also known as aspect oriented programming.


On this note, there's a very good talk, "On the expressive power of programming languages"[1][2], which attempts to provide a formal basis for distinguishing between the expressive power of Turing-complete languages.

[1] https://youtu.be/43XaZEn2aLc

[2] The paper it's based on https://homepage.cs.uiowa.edu/~jgmorrs/eecs762f19/papers/fel...


I think an even better example would be the "entry" keyword which would add multiple entry points to C functions[1]. We dodged a bullet here.

[1] https://stackoverflow.com/questions/254395/whatever-happened...


Eh, I don't agree, despite being very sympathetic to type system sophistication.

The fact that you can get Turing complete behavior in some part of a system with some encoding doesn't mean it's accessible enough that it's relevant when deciding how I am going to accomplish something or (typically) when deciphering what someone else has written.


But as soon as it's Turing-complete, someone will try to write whatever algorithm with it, and you'll have to maintain it.


The type system is Turing complete, and its implementation already supports this, since it's required for correctness.

People using this feature does not incur any "extra" effort.


And you can refuse to merge those changes and promptly kick the idiot who did that from your team. Or maybe, just maybe, it’s that one case where it actually makes sense.

Not all problems are technical. This is the sort of thing that can be solved with people skills.


I disagree. A terminating type checker is much more valuable than a possibly non-terminating one. Also I think you're possibly conflating multiple issues here. Higher-kinded types, dependent types and kind polymorphism don't actually mean your type system becomes undecidable. They do vastly complicate the implementation in other ways, though.


> A terminating type checker is much more valuable than a possibly non-terminating one.

My point is that your type checker is already possibly-non-terminating. That ship has sailed. So you might as well make it the best possibly-non-terminating typechecker that you can.


> I disagree. A terminating type checker is much more valuable than a possibly non-terminating one.

I don't think that's true at all, and I think there are two extreme examples that drive the point home:

Consider on the one hand a language where typechecking is entirely terminating so long as you don't use one particular feature; however, that feature is also not very useful in practice (even notwithstanding the potential nontermination) and so it doesn't actually get used by real people writing real code. Adding that feature doesn't improve the language, but I contend that it doesn't make the type checker "much [less] valuable."

On the other hand, consider a language where type checking provably terminates in all cases, but based on subtleties sometimes completes in seconds and sometimes takes decades. I don't think this provides "much more [value]" than a similar system that sometimes fails to terminate only in some of the cases where the other would take (say) more than a week.


It's easy enough to turn a non-terminating type checker into a terminating one: just add a recursion limit.


That makes it nearly impossible for the programmer to predict what will type check and what will not.


So what? We can't predict how much RAM the user's machine has either, but that doesn't mean we can't write useful programs.


This isn't a problem at all for modern type systems. OCaml, Haskell, and even C++ have Turing complete type systems and this is one of the last concerns for developers.


I believe Haskell's type system is only Turing complete with UndecidableInstances or other extensions. But these extensions are not uncommon.


Not really: I can predict that anything I will actually do that has valid types will type check.

Some pathological valid cases invented by someone trying to break the type checker might not type check.


For C++, I know both Clang and GCC make the template instantiation depth accessible to the developer via a compiler flag, allowing those who outgrow the reasonable defaults to extend that depth.


I've written pages and pages of C++ template metaprograms, and not once has this ever been an issue.


I agree that piling on complexity forever is never a good idea. But to imply this is what happens in Rust's case is a claim that IMO requires citation.

Because Turing completeness alone was reached a while ago. Why even bother developing programming languages after that? Probably because being Turing complete alone doesn't necessarily enable you to comfortably and quickly create abstractions and structures that are easy to write, modify and compose into a bigger whole.

The claim that what Rust is doing with its type system isn't useful is one you need to show us examples for. Simply stating something unfavourable about some project out of gut feeling is bad form.


"Turing completeness" is extremely orthogonal to complexity.

Being "Turing complete" only means that your thing supports unlimited recursion; or, in other words, that whatever objects it encodes can be arbitrarily large.

See: https://en.wikipedia.org/wiki/Rule_110
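
To see how little machinery that takes, here's a sketch of one Rule 110 update step in Rust (my own encoding; cells beyond the edges are treated as dead):

  // The whole rule fits in one byte: 110 = 0b01101110, indexed by the
  // three-cell neighborhood (left, center, right).
  fn step(cells: &[u8]) -> Vec<u8> {
      (0..cells.len())
          .map(|i| {
              let l = if i == 0 { 0 } else { cells[i - 1] };
              let c = cells[i];
              let r = *cells.get(i + 1).unwrap_or(&0);
              (110u8 >> (4 * l + 2 * c + r)) & 1
          })
          .collect()
  }

  fn main() {
      let mut row = vec![0u8; 32];
      row[31] = 1; // start from a single live cell
      for _ in 0..8 {
          let line: String = row.iter().map(|&b| if b == 1 { '#' } else { '.' }).collect();
          println!("{}", line);
          row = step(&row);
      }
  }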

P.S. This also means that, technically speaking, the computers we use aren't Turing complete because they have RAM and storage limits.


Just because something is possible doesn’t mean it’s easy or intuitive to use.


Discussed at the time:

Rust's Type System Is Turing-Complete: Type-Level Programming in Rust - https://news.ycombinator.com/item?id=13843288 - March 2017 (117 comments)


It’s important to understand that this is a big problem. If a type system is Turing complete, then it either has to be inconsistent or incomplete. In other words, it either must allow type-invalid programs through, or it must prevent some type-valid programs (or both!).


> It’s important to understand that this is a big problem. If a type system is Turing complete, then it either has to be inconsistent or incomplete. In other words, it either must allow type-invalid programs through, or it must prevent some type-valid programs (or both!).

No, you're mixing up concepts (consistency vs. soundness).

If a type system has non-normalizing types (or non-normalizing terms which appear in types), then it is inconsistent as a logic, which means that via Curry-Howard you can prove anything (ex falso quodlibet). Regarding incompleteness, any sufficiently expressive logic is incomplete per Gödel's first incompleteness theorem.

A type system which is inconsistent as a logic can still be sound as a type system. Soundness is the property that you described in your second paragraph (not letting "invalid" programs through). A simple example is System U. It's sound as a type system, but inconsistent as a logic. That means it works for programming, but not for proving.


Lol no.

The only thing this means is that type checking a Rust program might never terminate.

Nothing more, nothing less.

It might take 10 seconds, it might take 10 days, it might take till the universe freezes.

In practice, the compiler - like pretty much every compiler with a Turing complete type system - has a recursion limit that rejects programs which accidentally take too long to type check, but users can increase this limit by adding a source annotation (which is suggested by the compiler).
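
Concretely, in Rust that annotation is the crate-level recursion_limit attribute:

  // rustc's error message suggests adding this when the limit is hit;
  // 256 here is an arbitrary example value.
  #![recursion_limit = "256"]

  fn main() {}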

This is completely orthogonal to whether the programs are valid or not.

If your program is invalid, the compiler will never accept it, no matter how much compute time you throw at it.

Type checking always terminates for valid Rust programs (by definition), so if you throw enough compute at any valid Rust program, the compiler will eventually accept it.


> In other words, it either must allow type-invalid programs through, or it must prevent some type-valid programs (or both!).

I don't see how this is the case. The type system being Turing complete just makes it undecidable, i.e. it's not guaranteed to finish type checking. In a way you could call that "preventing some type-valid program", though I wouldn't consider an infinite loop of types valid.


Which doesn't matter that much, because there is a fairly small limit on how deep recursion can go, which you can easily exceed even without Turing-completeness; and the distinction between a program that will halt eventually but not within the limit you give it, and one that will never halt, doesn't matter for practical purposes.


The point isn't that there is an infinite loop of types that is rejected, the point is that there must be a finite loop of types that is rejected because it is indistinguishable from an infinite loop of types.

(I think the person you replied to is exaggerating the significance of this, though: it's obviously not a big deal in practice.)


It’s a consequence of Gödel's incompleteness theorem. You cannot create a type system that stops if the program is type-valid, and does not stop (or stops indicating invalid) if the program is type-invalid. The practical consequences of this are (probably) minimal.


The proof is trivial. The valid proofs in ZFC are enumerable. Let f be a proposition in ZFC that is undecidable in ZFC. Write a computer program c that checks, for a proposition p, whether proof number n is a proof of p, for all n. Apply that computer program to p. A Rust program that is valid iff c(p) either doesn’t stop type checking or is invalid.


Do you mean apply program c to proposition f? That is compute c(f)? Of course no one disputes that if one encoded this in Rust's type system, Rust will enter an infinite loop. That's the point of this article.

But none of this has to do with whether Rust's type system is sound or complete as a type system, only that as a logic it is undecidable.


Yes (changed the post)... my point is that the program is valid! Rust's type system can fail to stop on valid programs. I’m probably making a mess of all this, though, and should shut up before I confuse things more!


Also, I obviously fucked up the trivial proof. What I want to write is a Rust program that type checks iff an undecidable true proposition is true! But c doesn’t do that.


type systems often feel like a rudimentary typeless haskell or prolog layered as a templating system over the underlying code, particularly if the types are used in generating code for generic types in the underlying typed language.

for any reasonable program, you're not going to run into problems with the turing complete aspects of the type system. it's only when you start cleverly lifting logic into the type system to run in that typeless compile-time space that you'll feel punished by it.

if you want to check if a type adheres to some constraint and then choose between different things to do, and then do that recursively et cetera, sure, you'll blow the template expansion stack.

because you're no longer programming in the typed language, but in the typeless meta-language of its type system.


Out of curiosity — has there ever been a type system that was discovered to be Turing-complete and then actually caused a bug in a real application by allowing an invalid program through?

I'm specifically not talking about something like Typescript which is widely known to be unsound, but a type system that was believed to be "safe" before someone figured it is Turing complete.


I suspect the opposite situation is what you run into: it is possible to write programs that are valid and ought to be accepted by the type system, but get rejected because you blew the stack of the type checker. Had the type checker been allocated more memory/time, it could have accepted the program, but it crashed before that.

This reminds me of that: http://litherum.blogspot.com/2019/03/addition-font.html

You can craft font files that perform arbitrary calculations… if you give the font shaping engine a (practically) unlimited stack, where it usually limits recursion to 6 levels. If you do so, you can make a font that does math (or whatever you want) as part of computing ligatures.

But in practice, you hardly ever hit those kinds of limits unless you're trying to do something silly on purpose. Sure you cannot port Doom to font substitution lookup tables, despite them being Turing complete. But that doesn't make your font shaping engine useless, or even broken.


Can you elaborate on this? I'm not sure what you mean by valid and invalid programs. By "valid", do you mean ones which are well typed under the type system? But then, what does it mean for it to be well typed under the type system, other than that the type system, err, allows it?

edit: by which I mean, the theoretically defined type system, not the actual program implementing it.


You mean in case the type-inference doesn't halt?

Aren't such programs not very interesting by definition?

And why would you call them "valid" programs?


> If a type system is Turing complete, then it either has to be inconsistent or incomplete.

How is Turing-completeness related to soundness? I am behind on CS theory, but what little I know about category theory tells me this isn’t the case.


One of the reasons I like weak type systems or duck typing, as with vanilla JS, is that I don't have cognitive overload... I don't have to think about abstractions as types; I only need to think about how to solve the problem in front of me. My unit tests can check for correctness after I've solved the problem. The fact that you can have Turing completeness in a type system (not the language itself, but within its type system) only furthers my belief that the complexity of some type systems is not worth it.

I feel I'm in a minority here... but I like the nimbleness of dynamic languages, or languages where the type system has a minimal presence.


I have the inverse experience. I find it far more cognitively burdensome to have to keep track of the types without annotations or assistance from a type checker. Getting rid of the type checker doesn’t make the types go away; it just means the full burden of managing them correctly falls on the programmer. And it’s not for lack of effort—I’ve been using Python among other languages for 15 years, but I would still find myself prototyping big changes in Go before porting to Python (and crying for the lost performance, maintainability, and developer time).


I personally found Go far more unproductive than Python - doing if err != nil a zillion times and writing the nth for loop... it made me cry worse than cutting fresh onions.

Python is not perfect (bad general-purpose performance), but IMHO it is honestly a more productive language than Go.

And Python has type hinting nowadays which is sweet when you want it. Type hinting is good for self-documenting stable code.


I never minded the boilerplate. Pushing buttons on a keyboard never slowed me down very much, especially compared to finding and fixing bugs or even writing gratuitous test cases to guard against type errors. If I could only press a few buttons per minute or if I could write many test cases per minute I might feel differently.


Go's err != nil has nothing to do with types though. It's just its own quirky convention.

Python could similarly declare that exceptions are only for dead-end panics, and all error handling should be done by returning 2-element tuples.


What kind of types are you dealing with that makes them so hard to keep track of? Are you talking about strings and integers, or a really really complicated objects and functions thing with functions of functions of functions and that kind of thing?


Honestly I find this question crazy. Have you never made an asynchronous data call returning an array of objects with properties that are complex types? You're already at Promise<Array<ComplexObject>>. What if you have to keep track of multiple of those calls at once in a data structure? Have you never used map or filter operations on streams of data? Do you really think functions returning functions is "really really complicated"? It makes me wonder whether you've ever actually worked on a moderately complex system.


>Have you never used map or filter operations on streams of data?

If that's the standard for complicated programs, then I really don't understand how types can get confusing; filter doesn't even change the types of its arguments. I guess structs of several promises are a good example because you could mix up which fields contained which future objects, and that code would be distant from the API call to produce the object.


> filter doesn't even change the types of its arguments

But map, which 90% of the time is used after filter, does.

  const foo = client.someRequest()
    .filter(item => /* some condition */)
    .map(item => /* some transformation */)

  someFunction(foo)
Without a strong type system, how do you ensure "someFunction" is called with the correct data structure?

The answer is either:

  - you implement your own type system (based on json schema?)
  - you write the type checks manually in your tests
With a strong type system, it is already done by the compiler.

Every REST/GraphQL API relies on a strongly typed schema.

A type system in your programming language is a feature that allows you to "encode" the schema within the language itself.

The more expressive the type system is, the less validation code you need. The less code you have, the easier it is to maintain your code.

You don't like the verbosity? That's what type inference is for; take a look at this Typescript example:

  const handlers = {
    foo: () => { /* ... */ },
    bar: () => { /* ... */ }
  }
  const action: keyof typeof handlers = someOtherFunc()
  handlers[action]()
If your function returns just a string, Typescript will scream at you. If your function's return type is ('foo' | 'bar'), then you ensure that handlers[action] will never be undefined, without writing an if/else/throw block.


Strings and integers are rarely just strings and integers. For example, is this "integer" an order ID or a customer ID? Without a type system to tell them apart, mixing them up is a bug waiting to happen.
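
A sketch of that idea in Rust, using the newtype pattern (OrderId/CustomerId are illustrative names):

  struct OrderId(u64);
  struct CustomerId(u64);

  fn cancel_order(id: OrderId) {
      println!("cancelling order {}", id.0);
  }

  fn main() {
      let order = OrderId(7);
      let _customer = CustomerId(42);
      cancel_order(order);
      // cancel_order(_customer); // compile error: expected OrderId, found CustomerId
  }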


Today I was refactoring a codebase I maintain.

In one domain the program thinks of time in 1/256ths second ticks. And in another it thinks in terms of 1/32768ths of a second. Elsewhere it thinks in terms of ms.

Having them all represented as an int is often confusing, even for me, who wrote the code. God help anyone else.
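
For what it's worth, here's a sketch of how wrapper types could disambiguate those units in Rust (type and method names invented for illustration):

  #[derive(Clone, Copy, Debug)]
  struct Ticks256(u64);   // 1/256 s
  #[derive(Clone, Copy, Debug)]
  struct Ticks32768(u64); // 1/32768 s
  #[derive(Clone, Copy, Debug)]
  struct Millis(u64);

  impl Ticks256 {
      fn to_millis(self) -> Millis { Millis(self.0 * 1000 / 256) }
  }
  impl Ticks32768 {
      fn to_millis(self) -> Millis { Millis(self.0 * 1000 / 32768) }
  }

  fn main() {
      let t = Ticks256(512); // 2 seconds
      println!("{:?}", t.to_millis()); // prints Millis(2000)
  }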


If there is a practical reason you aren't using a uniform format for time, I would consider this a case where compile-time type validation makes sense... but for most applications I work on, I would argue this is not a concern.


Well, sometimes you get really crazy types, like large implicit unions where many of the variants are dictionaries with specific structures or a class that has an attribute created dynamically. Or when some API says it wants a “file-like object”, does it mean “object with read() method”? Or read() + close()? What about seek()? Etc. But the spirit of my comment was about making sure that the types align—that I’m correctly passing the right kind of data into some function for every function. It’s easy to make type errors, and I know it’s not just me because type errors of various kinds were the number one kind of error in our production logs at our Python shop even though we had high unit test coverage.


Usually it's just many small uncomplicated structs.


> I don't have to think about abstractions as types I only need to think how to solve the problem in front of me.

There's a function in a popular Python scientific library (I can't recall the name of the library or the function, unfortunately), which takes a parameter n and returns a float if n=1, and a list of floats if n!=1. This behavior wasn't documented explicitly – I found out during testing – and is just annoying to deal with when the parameter n isn't a constant.

This wouldn't happen in a statically typed language, because 1) the function's signature would hint at this unexpected behavior, and 2) the designer of that function has to think about the assumptions they make and the guarantees they give, and not just "how to solve the problem in front" of them.
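
For illustration, here's how that shape-changing behavior would have to be spelled out in a static signature - a Rust sketch with hypothetical names, since the original library is unidentified:

  // The return type makes the one-vs-many behavior impossible to miss.
  enum Samples {
      One(f64),
      Many(Vec<f64>),
  }

  fn sample(n: usize) -> Samples {
      if n == 1 { Samples::One(0.5) } else { Samples::Many(vec![0.5; n]) }
  }

  fn main() {
      // The caller is forced to handle both cases explicitly.
      match sample(3) {
          Samples::One(x) => println!("one value: {}", x),
          Samples::Many(xs) => println!("{} values", xs.len()),
      }
  }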


You do realize that you're moving part of the cognitive load to later, and increasing it, when you have to implement a type checker in unit tests for every program you write.


Yes and?

I have the full power of the language to force type correctness instead of some janky meta language.


And if you're really good, you'll use that full power to develop a toolset of useful ways to verify that certain properties about your code hold! You'll see common errors and write infrastructure to help your unit tests ensure those errors aren't happening. You might even add annotations into your code as you write it to automatically write those unit tests for you. And you will have invented ... the Inner Type System!


Yes, a great thing to do for a dsl that you write in your real language. Not something that a real language should have.


You also don't have the power to force anything, just the power to check if it's correct in a few of the infinitely many cases.

And yes, as a sibling comment said, dependently typed languages let you use the entire language to specify types.


Most dependently typed languages allow you to use the full power of the language to specify types.


I think we should strive for strong type systems with a lot of type inference. I especially like Typescript for that.

Type systems are very useful to establish the semantics of your operations. For example `int + int` is not the same operation as `string + string`, and definitely not the same as `int + string` (which does not exist in most languages). You could call `+` an addition, but addition only exists for numbers, for strings it is a concatenation.

Is `int + string` the same as `string + int` ? (commutativity)

Is `int + int + string` the same as `(int + int) + string` or `int + (int + string)`? In that case, what does `int + string` return? (associativity)

Now consider this code:

  def foo(a, b):
    return a + b
vs

  function foo(a: int, b: string) {
    return a + b
  }
In the second case, your function obviously seems wrong.

Finally, in math an object does not "have a type" but "belongs to a set/class". For example, "1" is an integer, an odd number, a real, a complex, a scalar, a rank 1 tensor, etc... Saying that "1" is only an integer is incorrect, but that's what most programming languages do.

I've tried to design a language where "typeof variable == type" does not exist but instead you have "variable isof type" running the type checking code, example:

  class user(v: struct {
    name: string,
    logged_in: bool
  })

  class logged_user(u: user) check
    # this code is evaluated when 'isof' is called
    u.logged_in = true
  end

  class allowed_user(u: user) check
    # =>, <=> operators are logical connectors
    # A => B is true if A and B are true or A is false
    # A <=> B is true if A and B are both true
    u isof logged_user <=> u.name in ['admin', 'root']
  end

  let alice = { name: 'alice', logged_in: false }
  assert alice isof user = true
  assert alice isof logged_user = false
  assert alice isof allowed_user = false

  let bob = { name: 'bob', logged_in: true }
  assert bob isof user = true
  assert bob isof logged_user = true
  assert bob isof allowed_user = false
But laziness got the best of me and I only implemented the parser :(


Interesting... What framework did you use to write your parser? I've been curious about tinkering with some exploratory language design myself.


I used the pest[1] library in Rust; it uses something a bit like EBNF syntax to write the grammar and generates a PEG parser.
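
For a taste, here's a minimal sketch of what that looks like (a toy grammar of my own, assuming pest and pest_derive 2.x with the grammar_inline attribute):

  use pest::Parser;
  use pest_derive::Parser;

  #[derive(Parser)]
  #[grammar_inline = r#"
  number = { ASCII_DIGIT+ }
  sum    = { number ~ ("+" ~ number)* }
  "#]
  struct TinyParser;

  fn main() {
      // Parse a small arithmetic expression against the 'sum' rule.
      let pairs = TinyParser::parse(Rule::sum, "1+22+333").expect("parse failed");
      for pair in pairs {
          println!("{:?}: {}", pair.as_rule(), pair.as_str());
      }
  }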

[1] - https://pest.rs/


BTW you should finish your project. You built the parser already!


I wanted to use the LLVM Rust bindings to generate LLVM IR, but I still need to do some more reading on how to represent high-level constructs in such a low-level language. Also, generics and type inference are hard to implement! :p

I don't have the focus required for that at the moment, but it's still in my TODO list :)

I might publish the parser on GitHub and post it on Hacker News to see if anyone would be interested in helping.

NB: the project started with the thought "what my ideal language would look like?"


Thanks


I think most people, myself included, who dislike verbose type systems aren't opposed to type checking itself. We're just not overjoyed at learning and using a meta-programming language to do it. The argument tends to center around the friction it introduces to the development cycle. Those in favor of these languages often argue that you would need to spend the extra time verifying that the code is correct anyway, but I don't think it evens out in the end; it certainly doesn't make things any faster. I'm sure that much more could be done to infer types or automate the process; I'm unsure why this isn't done in the more popular languages. Maybe it's just a hangover from C++ and we're repeating bad patterns when there is a better way? It's not like we haven't done that before.


I agree that where strict typing exists, it should have the ability to get out of the way, as with languages that support type inference.

Personally, I've never found type mismatches to be a particularly large problem, as you can still do input validation where needed, generally at some application message boundary like an HTTP request/response, and the kinds of bugs caused by mismatched types are easy to diagnose and test for.

The classes of bugs that are much more difficult to fix are the ones that, unlike type mismatches, are not immediately visible or do not give descriptive messages: race conditions, memory leaks, logical errors, etc.


Relatedly, I found this out the hard way yesterday evening: https://stackoverflow.com/questions/66631251/overflow-evalua...


> Most if not all Turing-complete systems are also known to be Turing-equivalent. (I regret not being able to find a better citation for this.)

It links to Wikipedia (WP), which defines "system" by:

> a computational system (such as an abstract machine or programming language)

There is no better citation[0], because it's not true under some definitions of "computational system", per WP's definition of "computational system"[3]. For example, under the WP definition of "abstract machine"[1], a non-computable function satisfies the definition. Or we can define one as follows: input: strings in an alphabet A; output: {true, false}; operations allowed: check for membership in a language L in alphabet A.

So we see that theoretical formal languages are a subset of computational systems. But there are uncountably many formal languages and only countably many TMs. Crucially, we can easily turn each language into one that acts as a universal Turing machine while still being distinct. Hence there are uncountably many Turing-complete languages for a given alphabet A[2].

[0]: incidentally, the linked WP page claims slightly differently (unimportantly) that "All known Turing-complete systems are Turing-equivalent, which adds support to the Church–Turing thesis."

[1]: "A typical abstract machine consists of a definition in terms of input, output, and the set of allowable operations used to turn the former into the latter." Absent a more precise definition, I assume the set may include arbitrary mathematical operations (and it is useful to define non-computable abstract machines, and in general usage in the field abstract machines may refer to languages more powerful than Turing machines).

[2]: Sketch: Let U' be a universal TM in alphabet A (hence U' is Turing-complete), let M' denote the language it accepts, and let 0 be a symbol of A. Define M such that if i' in M', then i in M, where i is i' with every symbol duplicated and 0 prefixed (hence if i'="123", i="0112233"). Define N such that if i' in complement(M'), then i in N, where i is i' with every symbol duplicated and 0 prefixed. Observe that every language disjoint from N that contains M is Turing-complete (wrt a slightly modified description for TMs and the simulated input than the one for U'). Now, let L be an arbitrary language in A. Define f(L) = { j | j' in L, j is j' with every symbol duplicated }. Then we see that f(L), M, N are all disjoint. And if T, U are distinct languages, then (f(T) union M) and (f(U) union M) are distinct languages that simulate Turing machines (i.e. they are Turing-complete). Hence there are uncountably many Turing-complete languages. There are only countably many Turing machines, hence only countably many Turing-equivalent languages, so in fact most Turing-complete languages are not Turing-equivalent.

[3]: of course,


Systems which are Turing-complete but not Turing-equivalent are stronger than Turing machines. See: https://en.wikipedia.org/wiki/Hypercomputation

The "All known Turing-complete systems are Turing-equivalent, which adds support to the Church–Turing thesis." thing seems to have an implied "Turing-complete systems that are physically implementable." (I edited the article to add that.)


This is impressive and hilarious at the same time.

Thanks for sharing!



