To me the main takeaway of results like this is: you're already paying the full price of whatever complicated type system feature you care to name. So I have limited sympathy for people who are scared of the complexity of, say, higher-kinded types, dependent types, or kind polymorphism. Your type system is already as complicated as possible, and adding more features won't make it any more Turing-complete; instead, make sure you're getting a return on that complexity by ensuring your type system has the features you need to do useful things.
I strongly disagree. Just because a feature doesn’t increase the underlying theoretical complexity doesn’t mean it’s a good idea.
As a trivial example, C plus COME FROM is fundamentally the same language as regular C, and one can trivially translate it to remove COME FROM. That doesn't mean that adding COME FROM to C would be a good idea.
Incidentally, Googling "C plus come from" only returns two relevant results, the parent comment and a manual for an esoteric programming language called C-INTERCAL that uses the COME FROM statement and compiles to C. [2]
COME FROM is also the foundation of concurrency in some versions of INTERCAL - if there are two COME FROM statements for the same origin, then when execution reaches that point, it forks.
Threaded INTERCAL actually has a very disciplined approach to safety. Each thread gets its own copy of all variables, so threads do not share any mutable data at all. However, they still share the same code, which is also mutable, so they can communicate through that.
On this note, there's a very good talk, "On the expressive power of programming languages"[1][2], which attempts to provide a formal basis for distinguishing between the expressive power of Turing-complete languages.
Eh, I don't agree, despite being very sympathetic to type system sophistication.
The fact that you can get Turing-complete behavior in some part of a system with some encoding doesn't mean that part is accessible enough to be relevant when I'm deciding how to accomplish something, or (more typically) when I'm deciphering what someone else has written.
And you can refuse to merge those changes and promptly kick the idiot who did that from your team. Or maybe, just maybe, it’s that one case where it actually makes sense.
Not all problems are technical. This is the sort of thing that can be solved with people skills.
I disagree. A terminating type checker is much more valuable than a possibly non-terminating one. Also I think you're possibly conflating multiple issues here. Higher-kinded types, dependent types and kind polymorphism don't actually mean your type system becomes undecidable. They do vastly complicate the implementation in other ways, though.
> A terminating type checker is much more valuable than a possibly non-terminating one.
My point is that your type checker is already possibly-non-terminating. That ship has sailed. So you might as well make it the best possibly-non-terminating typechecker that you can.
> I disagree. A terminating type checker is much more valuable than a possibly non-terminating one.
I don't think that's true at all, and I think there are two extreme examples that drive the point home:
Consider on the one hand a language where typechecking is entirely terminating so long as you don't use one particular feature; however, that feature is also not very useful in practice (even notwithstanding the potential nontermination) and so it doesn't actually get used by real people writing real code. Adding that feature doesn't improve the language, but I contend that it doesn't make the type checker "much [less] valuable."
On the other hand, consider a language where type checking provably terminates in all cases but, depending on subtle details, sometimes completes in seconds and sometimes takes decades. I don't think this provides "much more [value]" than a similar system that fails to terminate only in some of the cases where the other would take (say) more than a week.
This isn't a problem at all for modern type systems. OCaml, Haskell, and even C++ have Turing complete type systems and this is one of the last concerns for developers.
For C++, I know both Clang and GCC make the template instantiation depth accessible to the developer via a compiler flag, allowing those who outgrow the reasonable defaults to raise that limit.
I agree that piling on complexity forever is never a good idea. But to imply this is what happens in Rust's case is a claim that IMO requires citation.
Because mere Turing completeness was reached a long while ago. Why even bother developing programming languages after that? Probably because being Turing complete alone doesn't necessarily enable you to comfortably and quickly create abstractions and structures that are easy to write, modify, and compose into a bigger whole.
The claim that what Rust is doing with its type system isn't useful is one you need to show us examples for. Simply stating something unfavourable about some project out of gut feeling is bad form.
"Turing completeness" is extremely orthogonal to complexity.
Being "Turing complete" only means that your thing supports unlimited recursion; or, in other words, that whatever objects it encodes can be arbitrarily large.