You can assume in good faith that this is a barely used feature generally, and also that most implementations are not really vulnerable, so vendors may have checked and found nothing actually wrong. In other words, there might have been a tremendous amount of effort spent securing most implementations. On the other hand...
TPM 2.0 first appeared on the market in 2014. Rust first appeared in 2015. And Rust took a while to become a mature system. TPM 2.0 was also an improved version of TPM 1.2, which came out in 2009.
For something like TPM, IMO, using a memory safe architecture should be table stakes nowadays. I would go further: the design should be as simple as possible to achieve its requirements, and the implementation should be formally verified.
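To make that concrete: the usual bug class here is a missing bounds check on a size-prefixed parameter buffer. Here's a rough sketch (a hypothetical helper in Python for brevity, not the actual reference code) of the check a TPM2B-style parser has to make explicitly in C, and which a memory-safe language turns into a hard error instead of a silent out-of-bounds access:

    import struct

    def parse_tpm2b(data: bytes) -> bytes:
        # Parse a TPM2B-style buffer: a 2-byte big-endian size field followed
        # by that many payload bytes. Hypothetical helper, not reference code.
        if len(data) < 2:
            raise ValueError("buffer too short for size field")
        (size,) = struct.unpack(">H", data[:2])
        # The check that has to be explicit in C: the declared size must not
        # exceed the bytes actually present, or a naive copy walks off the
        # end of the command buffer.
        if size > len(data) - 2:
            raise ValueError(f"declared size {size} exceeds {len(data) - 2} available bytes")
        return data[2:2 + size]

    assert parse_tpm2b(b"\x00\x03abc") == b"abc"   # well-formed buffer parses
    try:
        parse_tpm2b(b"\x00\x10abc")                # lying size field is rejected
    except ValueError as err:
        print("rejected:", err)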
Interesting article, if light on technical detail. I wonder how long until this gets used for (a) DRM key extraction (good!), (b) horrible, semi-permanent rootkits, (c) an unwitting FDE key extraction scheme, or (d) a further push toward Microsoft's über-DRM-HSM Pluton.
Most TPM chips are unaffected. Those that are will get new firmware. Virtual TPMs are affected because they are implemented with a TPM simulator based on the TCG reference code that has this bug. In all cases any compromise should be limited to the local host, not to other things, unless those things are accessed using keys stored on the affected TPM. Where the TPM can be replaced, the simplest recovery is to replace the TPM, rebuild the host, and then re-run any enrollment protocols that need it; otherwise, flashing a compromised TPM from a system that may already have been rootkitted is a problem.
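If you want to see what you have before deciding, here's a rough sketch (assuming the tpm2-tools package is installed and you can reach the TPM, e.g. /dev/tpmrm0 on Linux) that dumps the manufacturer and firmware version so you can compare them against your vendor's advisory:

    import subprocess

    # Rough sketch: dump the TPM's fixed properties so the manufacturer and
    # firmware version can be compared against the vendor's advisory.
    # Assumes tpm2-tools is installed; output format differs between versions.
    try:
        result = subprocess.run(
            ["tpm2_getcap", "properties-fixed"],
            capture_output=True, text=True, check=True,
        )
    except (FileNotFoundError, subprocess.CalledProcessError) as exc:
        raise SystemExit(f"could not query the TPM: {exc}")

    # Look for TPM2_PT_MANUFACTURER and TPM2_PT_FIRMWARE_VERSION_1/2 in the dump.
    print(result.stdout)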
By virtual TPM, do you mean the implementation in the CPU on modern Intel and AMD Ryzen platforms? If so, does that mean a firmware/BIOS/microcode update will be able to patch the vulnerability on those machines?
TPM (read: Secure Boot) or no, the only proper response to a computer exposed to even the possibility of a rootkit that can't be removed is wholesale replacement.
TPMs are used for Measured Boot, not Secure Boot (although the Secure Boot configuration is one of the things that's measured). Secure Boot exists on platforms that don't have TPMs.
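For anyone unclear on the distinction: Secure Boot verifies signatures before running the next stage, while Measured Boot just hash-chains everything into PCRs and lets you (or a remote verifier) judge afterwards. A minimal, illustrative sketch of the PCR extend operation (not any particular firmware's code):

    import hashlib

    def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
        # One Measured Boot step: new PCR = H(old PCR || H(measured component)).
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    # Most PCRs in a SHA-256 bank start at 32 zero bytes; each boot component
    # is extended in turn, so a tampered component changes every value after it.
    pcr = bytes(32)
    for component in (b"firmware", b"bootloader", b"kernel"):
        pcr = pcr_extend(pcr, component)
    print(pcr.hex())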
In a way, this is a nice balance for everyone (:P / /s / I'm sorry to say). Corporate interests get "good effort" security, almost something you could legally distinguish and prosecute for bypassing. And hobbyist users get repeatable workarounds.
It's impressive in attempted scope. I imagine this doesn't affect Google's Chromebook boot chain. It's really hard to coordinate across vendors.
Apple phone jailbreaks were easy on the iPhone 3, and have slowly gotten harder since then. TPM 3.0 will be stronger than TPM 2.0, but the only way we get there is by making mistakes and learning from them.
For an example of someone doing related work (and, remarkably, doing it in the open): Oxide Computer told some stories about the difficulty of bringing up a new motherboard, and mentioned a lot of gotcha details and hacky solutions for managing their AMD chips.
They talked about their bring-up sequence, boot-chain verification on their motherboard, and designing, building, and verifying their hardware root of trust.
I heard mention of this on a podcast recently and am still trying to find the reference. I'm pretty sure it was [S3].