> No, being Turing complete makes it impossible to be 100% secure and trustworthy, by definition.
Being Turing-complete means it's possible to write some contracts/programs for which certain properties are undecidable by static analysis.
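A classic illustration of that undecidability point: in a Turing-complete language you can write a loop whose termination depends on an open mathematical question. The Collatz iteration below is a standard example (the function and its name are mine, just for illustration); no known static analysis can prove it halts for every positive input, even though it halts on every input anyone has tried.

```python
def collatz_steps(n: int) -> int:
    """Count iterations of the Collatz map until n reaches 1.

    Whether this loop terminates for *all* positive n is the open
    Collatz conjecture -- a static analyzer cannot (currently) prove
    termination of this unbounded `while` in general.
    """
    if n < 1:
        raise ValueError("n must be a positive integer")
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```

A non-Turing-complete contract language would simply refuse this program, because it has no construct for an unbounded `while`.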
It doesn't mean that a program which could also have been written in a more limited, non-Turing-complete language loses anything: every property that could be proven about it in the restricted language can still be proven when it's written in the Turing-complete language.
> It is possible to have trusted contracts, iff their logic is (mathematically) proven.
Trustworthiness of programs (including “smart contracts”, which are only loosely analogous to traditional contracts) is about whether the actual people using them get what they expect out of their behavior. Mathematical proofs of formal properties may be useful to that in some cases, but are not generally sufficient.
> It is possible to have trusted contracts, iff their logic is (mathematically) proven.
Suppose a non-Turing-complete language were used instead. Now, imagine a contract that is highly complex. If a layperson decides to trust that contract without reading the code, does it matter whether the language used to implement the contract was Turing-complete or not?
I'd argue that trust of contracts is pragmatically more of a social concept than a computational concept, since at scale the vast majority of contract users would make their trust decision about a contract based on non-technical factors.
> If a layperson decides to trust that contract without reading the code, does it matter whether the language used to implement the contract was Turing-complete or not?
Um, yes? Literally the point of weaker languages in this context is that properties of programs written in them can be statically proven.
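To make "statically provable" concrete, here's a minimal sketch (all names and the mini-language itself are hypothetical) of the kind of analysis a non-Turing-complete contract language enables: if every loop has a compile-time constant bound and there is no general recursion, a worst-case step count for any contract can be computed before it ever runs, and the analysis itself always terminates.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Op:
    """A single primitive instruction; assume unit cost."""
    name: str

@dataclass
class Loop:
    """Repeat `body` exactly `bound` times. The bound is a constant:
    there is deliberately no `while` and no recursion in this language,
    so it is not Turing-complete."""
    bound: int
    body: List[Union["Op", "Loop"]]

def max_steps(block: List[Union[Op, Loop]]) -> int:
    """Statically compute a worst-case execution step bound.

    This always terminates because the program is a finite tree and
    every loop bound is a known constant -- exactly the guarantee a
    Turing-complete language cannot give in general."""
    total = 0
    for item in block:
        if isinstance(item, Op):
            total += 1
        else:
            total += item.bound * max_steps(item.body)
    return total

# Example contract: load, then a 3x loop containing an add and a 2x inner loop.
program = [Op("load"), Loop(3, [Op("add"), Loop(2, [Op("mul")])]), Op("store")]
# Bound: 1 + 3 * (1 + 2 * 1) + 1 = 11 steps, known before execution.
```

This doesn't settle the social question of whether a layperson will trust the contract, but it is a real technical difference: in the restricted language, the bound exists and is computable for every expressible program.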
It is possible to have trusted contracts, iff their logic is (mathematically) proven. But the VM can't make contracts more secure by itself.