The striking thing about these complexity classes (be it 3SAT or Turing completeness) is that the underlying rules are extremely simple.
The issue is understanding the implications of those rules. For something like 3SAT, every possible 3SAT puzzle can be solved (i.e.: either satisfied with an example assignment, or shown unsatisfiable with a "certificate of unsatisfiability").
For problems at the level of Turing completeness, the "implications" of the rules you list are only semi-decidable.
Ex: Is there an odd perfect number (https://en.wikipedia.org/wiki/Perfect_number)? Well, all we gotta do is try all the odd numbers. Once we find an odd perfect number, we'll know one exists. But if we never find one, we remain in a state of uncertainty.
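Something like this toy sketch (the function names are mine, just for illustration):

    def is_perfect(n: int) -> bool:
        # A number is perfect if it equals the sum of its proper divisors.
        return n == sum(d for d in range(1, n) if n % d == 0)

    def find_odd_perfect():
        # Semi-decision procedure: it halts (returning a witness) only if
        # an odd perfect number exists; otherwise it loops forever.
        n = 1
        while True:
            if is_perfect(n):
                return n
            n += 2  # only odd numbers need checking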
Though I guess a future mathematical proof may settle whether an odd perfect number exists or not. So I'll fall back on the halting problem as the example. For the halting problem: you can "solve" any program by simply executing it and waiting for it to end. If it ends, you know it's a program that ends.
If it doesn't end... well... semi-decidable. You don't know that it never ends.
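In code, the whole "semi-decider" for halting is embarrassingly simple (hypothetical sketch; `program` stands in for any runnable function):

    def halts(program, argument):
        # If program(argument) terminates, we learn that and report it.
        # If it never terminates, this call never returns either, so we
        # never learn anything: that's semi-decidability.
        program(argument)
        return True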
-------------
Anyway, if a hypothetical type system were "only as powerful as 3SAT", then every type check under that system would be guaranteed to terminate with an answer eventually.
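A rough way to see why: a 3SAT-strength check only has finitely many cases to examine. Toy brute-force checker (exponential, but it always finishes; `solve_3sat` is just an illustrative name, not how real solvers work):

    from itertools import product

    def solve_3sat(clauses, num_vars):
        # A clause like (1, -2, 3) means (x1 OR NOT x2 OR x3).
        # There are only 2**num_vars assignments, so the loop always
        # terminates: exponential in the worst case, but never forever.
        for bits in product([False, True], repeat=num_vars):
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return bits   # satisfying assignment (the "example")
        return None           # every assignment failed: unsatisfiable

    # (x1 OR x2 OR x3) AND (NOT x1 OR NOT x2 OR x3)
    print(solve_3sat([(1, 2, 3), (-1, -2, 3)], num_vars=3))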
EDIT: And even weaker systems, such as propositional Horn-clause logic (Horn clauses are also the basis of the Prolog programming language), come with further efficiency guarantees: not only does every query terminate, we also have strong bounds on the number of steps a computer takes to reach the answer.
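For the propositional case, the reason the bound exists is easy to see: forward chaining over Horn rules can only add each fact once, so the total work is bounded by the size of the rule set. Toy sketch (not Prolog's actual resolution strategy; function names are mine):

    def forward_chain(facts, rules):
        # rules are (body, head) pairs, e.g. ({"a", "b"}, "c") for "a AND b -> c".
        # Each pass either derives a new fact or stops, and there are at most
        # len(rules) derivable heads, so the loop runs at most len(rules) + 1 times.
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if head not in known and body <= known:
                    known.add(head)
                    changed = True
        return known

    # From fact "a" and rules a -> b, b -> c we derive {"a", "b", "c"}.
    print(forward_chain({"a"}, [({"a"}, "b"), ({"b"}, "c")]))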
These attributes are likely useful for parts of programming language design. Sure, the overall language we're making is Turing complete (it compiles down to machine code, after all). But that doesn't mean every sub-language inside it needs to be Turing complete too. Imagine if the "printf" or "scanf" format-string language were Turing complete, it'd be a complexity nightmare!!