Among other things, the TPM enables verification of a particular state of your system, i.e., a particular set of binaries and OS configuration.
To simplify the process a bit: at every bootup, the system checks the checksum of every program loaded at each boot stage (UEFI, kernel, userspace) against one that is known to be approved, a process called "attestation".
So in the worst case, if your attestation server is very strict, any new binary installed on your machine will prevent it from booting or from satisfying the attestation.
This is the main concern the TPM raises.
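To make the "checksum at every boot stage" part concrete: measurements land in TPM registers (PCRs) that can only be extended, never overwritten. Here is a minimal Python sketch of that extend operation, with made-up stage names and a single PCR where real firmware spreads measurements across several:

```python
import hashlib

# Toy model of a TPM PCR (Platform Configuration Register).
# A PCR cannot be written directly, only extended:
#     PCR_new = SHA256(PCR_old || digest_of_component)
# so the final value commits to every boot stage, in order.

def extend(pcr: bytes, component: bytes) -> bytes:
    digest = hashlib.sha256(component).digest()   # what gets measured
    return hashlib.sha256(pcr + digest).digest()  # the extend step

def measure_boot(stages) -> bytes:
    pcr = bytes(32)  # PCRs reset to zero at reboot (SHA-256 bank)
    for stage in stages:
        pcr = extend(pcr, stage)
    return pcr

approved = [b"UEFI firmware", b"bootloader", b"kernel", b"userspace init"]
tampered = [b"UEFI firmware", b"evil bootloader", b"kernel", b"userspace init"]

golden = measure_boot(approved)
print(measure_boot(approved) == golden)  # True: matches the approved state
print(measure_boot(tampered) == golden)  # False: one changed binary is visible
```

An attestation server that only accepts `golden` is exactly the strict case above: swap in one binary and the reported value no longer matches.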
> the TPM enables verification of a particular state of your system, i.e., a particular set of binaries and OS configuration
That is a bit misleading. The TPM is a passive device; it cannot verify any state. It is the OS that measures the system (on Linux, via the IMA subsystem). And it is the Linux kernel that, if you have a TPM, can produce evidence by which a 3rd party can be sure the measurements are "true" and "legit" (via PCR#10 extension).
As you state later, it is this 3rd party that asserts (verifies) whether your state is considered OK or not.
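For the curious, here is roughly what the verifier side of that looks like. This is a sketch, not a drop-in tool: it assumes the default ima-ng setup, where the log exposed at /sys/kernel/security/ima/ascii_runtime_measurements carries the SHA-1 template hash in its second column, and it skips verifying the TPM's signature over the quote:

```python
import hashlib

# Replay an IMA measurement log and check that it reproduces the
# quoted PCR#10 value. If any log entry was altered or removed, the
# replayed aggregate will not match what the TPM reports.

def replay_pcr10(log_lines, bank=hashlib.sha1):
    pcr = bytes(bank().digest_size)  # PCRs start zeroed
    for line in log_lines:
        fields = line.split()
        # assumed layout: PCR index, template hash, template name, data
        if not fields or fields[0] != "10":
            continue
        template_hash = bytes.fromhex(fields[1])
        pcr = bank(pcr + template_hash).digest()  # same op the kernel does
    return pcr

def verify(log_lines, quoted_pcr10_hex):
    return replay_pcr10(log_lines) == bytes.fromhex(quoted_pcr10_hex)
```

The piece that makes this trustworthy to a 3rd party is exactly what is omitted here: the quoted PCR#10 value arrives signed by a key that never leaves the TPM, so the log cannot be forged to match an arbitrary value.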
Maybe I am being too simplistic, but I do not see the evil in the TPM here, only in the 3rd party's policy.
The TPM can be abused, but as a developer I am happy that we can use it for good and fair goals in open source projects.
It is the user who decides whether to use the TPM or not, and it should be noted that the TCG specification states that the TPM can be disabled and cleared by the user at any moment.
> Maybe I am being too simplistic, but I do not see the evil in the TPM here, only in the 3rd party's policy.
The evil is that the "Trusted" in "Trusted Computing" and "Trusted Platform Module (TPM)" means the user is deeply distrusted (they might tamper with the system); the trust lies instead in the computing platform or the TPM. In other words: Trusted Computing and the TPM mean a disempowerment of the user.
I'm not sure if I understand your argument. As long as you can put your own things on your TPM and use it for your own good, it's not too bad, right? And in corporate environments it's reasonable not to own your own device, right?
Sure, Infineon can probably get my data, but that's far beyond the scope of my threat model.
As long as the system is open to putting your own keys on there, I'm fine with it.
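To illustrate what "your own keys for your own good" can mean: sealing, i.e. binding a secret (say, a disk encryption key) to a platform state that you chose. Below is a deliberately toy software model; in a real TPM the policy check and the storage key live inside the chip (TPM2_Create plus a PCR policy), and the XOR masking here is demo-only, not real encryption:

```python
import hashlib, hmac, os

def seal(secret: bytes, expected_pcr: bytes, srk: bytes) -> bytes:
    # Bind the secret to one specific PCR value under a storage key.
    mask = hashlib.sha256(srk + expected_pcr).digest()
    return bytes(a ^ b for a, b in zip(secret, mask))  # demo-only masking

def unseal(blob: bytes, expected_pcr: bytes, current_pcr: bytes, srk: bytes) -> bytes:
    # The owner's policy: release only in the boot state the owner approved.
    if not hmac.compare_digest(expected_pcr, current_pcr):
        raise PermissionError("PCR mismatch: platform not in the sealed state")
    mask = hashlib.sha256(srk + expected_pcr).digest()
    return bytes(a ^ b for a, b in zip(blob, mask))

srk = os.urandom(32)  # stand-in for the TPM's storage root key
good = hashlib.sha256(b"my approved boot chain").digest()
blob = seal(b"my disk encryption key", good, srk)
print(unseal(blob, good, good, srk))  # state matches: secret released
```

The point being: the same mechanism, with the user picking `expected_pcr`, protects the user; with a vendor picking it, it enforces the vendor's policy.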
> I'm not sure if I understand your argument. As long as you can put your own things on your TPM and use it for your own good, it's not too bad, right?
As long as software that uses the TPM cannot detect whether you tampered with the TPM or not, it is in principle all right.
But as I wrote above: this is exactly the opposite of what trusted computing was invented for: making the machine trustable (for the companies that have control over the TPM/trusted computing) because the user is distrusted.
Indeed, so the user should not buy a computer where they're not in control of the TPM; if you can't disable it or add your own keys, then don't buy that computer.
> That rapidly converges on "you can't buy a computer and use it", because economic interests favor trusted computing devices.
I would rather argue that it converges to "you become more and more morally obliged to learn about hacking (and perhaps become a less and less law-abiding citizen) if you buy a computer and use it".