Fixing the TPM: Hardware Security Modules Done Right (loup-vaillant.fr)
84 points by todsacerdoti on Aug 18, 2023 | 85 comments


Hmmmm...

I could not care less about Microsoft's problem, but could this approach fix the major problem that I have with TPM?

TPM allows software to leverage my machine against me. If DICE allows me to revoke keys that have been stored for other applications (or, even better, to prevent the HSM from being used by anything without my overt permission), that might go a long way toward reducing that issue.


Similar sentiments; I'm already seeing this with Android and custom ROMs.

Right now you can get away with spoofing basic integrity checks, but custom ROMs cannot pass hardware attestation. It's not a hard requirement yet, since apps still maintain the backwards compatibility needed to target 90%+ of Android devices, but give it a few generations and it will be. At that point I expect a majority of apps will simply be unusable on custom ROMs, and we'll have no choice but to embrace planned obsolescence.


Yeah. We used to have complete control over our android devices and the apps that ran on them. This hardware attestation business is gonna destroy all that. Right now I can't even launch my bank's app if I have developer mode enabled. They say it's only gonna be for the apps that "really need it" but that's just Google boiling the frog slowly.

Might as well buy an iPhone if that's how it's going to be.


That’s a sad jump. I’d say support/embrace Linux mobile, but we know the app makers won’t support it (and shitty apps like Signal require a primary device running one of the mobile duopoly OSs) & we know Google with its Web Integrity wants to lock down our web browsing sanctuary that runs on any platform. Yeah, we might be forced into it… or be prepared to go low-tech like going to the bank & calling.

For me this will be really bad if I’m traveling abroad, since the website requires 2FA & SMS is the only option, which means roaming, & even the annual number swap I do when I get a new SIM requires paperwork & waiting a week for processing. But maybe owning a phone is more of a burden due to lack of device control & privacy concerns.


> custom ROMs cannot pass hardware attestation.

So this is going to be the interesting question, because obviously they can, they just have to extract the keys from something first. Devices are made by all kinds of vendors and it only takes one vulnerability before you have a market for keys. There are <$200 phones, those cheap phones are more likely to have a vulnerability, and when one of them gets a cracked screen the market value drops to near-zero which makes it cheap to buy one to extract the key.

Which is also why remote attestation is totally worthless. An attacker can do the same thing, so relying on the attestation is completely insecure because in practice anyone with a couple bucks will be able to spoof it.

But if extracting a key is mildly inconvenient or legally questionable, it will be enough to deter honest casual users but not profit-motivated attackers, and you get the worst of both worlds.

So we need to do everything to make sure that doesn't happen. One way is to make extracting keys as easy as possible, of course.

Another is to make sure that everybody understands the perversity of this nonsense. It's snake oil the likes of export-grade encryption or EV certificates. Anyone caught using it should be mocked mercilessly and added to a list of disfavored vendors with uninformed security staff. Anything in a position to do so should warn if anything attempts to use it, or disable it to make it more difficult for anything else to rely on its availability. Warnings should emphasize that using it is a security risk that can enable hard to detect malware and is an indication of malice or incompetence.

It needs to go away.


$200 phones likely aren't rolling their own HSMs. It'll probably be one of a few made by big companies like Google, Samsung, Qualcomm, etc. I expect it'll be about as hard as getting them off $2000 phones.

Even if it's possible, that too will be a cat and mouse game which won't work for long. Hardware DRM has been an evolving industry, and now with the commonality of TPMs and HSMs it's here to stay.


> $200 phones likely aren't rolling their own HSMs. It'll probably be one of a few made by big companies like Google, Samsung, Qualcomm, etc. I expect it'll be about as hard as getting them off $2000 phones.

HSMs often misbehave or leak data when operated out of spec, as when the OEM's cheap logic board suffers voltage drops. Moreover, using the same piece of hardware across a hundred million devices is its own vulnerability, because then that's the one everyone is focused on breaking, and when anyone succeeds, they can now extract the keys from a hundred million devices.

> Even if it's possible, that too will be a cat and mouse game which won't work for long.

The nature of it is to be physical hardware, so vulnerabilities commonly require the hardware to be replaced. Now you've got an active vulnerability for the lifetime of that hardware, which is typically at least 3 years. Meanwhile they find new ones every year.

Preventing this may not even be physically possible. The key has to be inside the device and the attacker has physical control over the device. Even if the equipment needed to extract it is expensive, the device itself can't be if you expect everyone to have one, so someone with that equipment makes a business out of extracting keys and selling them over the internet.

> now with the commonality of TPMs and HSMs it's here to stay.

Nothing lasts forever. Especially if you make a good show of burning it to the ground.


Aren't HSMs built to fail closed? Surely the designers would have added some protection mechanism that shuts down/resets the module when it encounters anomalous voltage inputs. Or is the attack surface simply too wide for that to be feasible?


> But if extracting a key is mildly inconvenient or legally questionable, it will be enough to deter honest casual users but not profit-motivated attackers, and you get the worst of both worlds.

it will also create a market for people selling and buying keys

it'll be like buying a grey market software key, you hop onto aliexpress, pay $5 and you'll have an iphone 11 attestation key

remote attestation as a process sows the seeds of its own destruction


The keys would just be revoked and the cheap phones you got the keys from would no longer pass attestation either.


You don't tell them which keys you extracted. If they have to revoke every phone of that model then lots of normal users have them and vendors can't rely on people having phones that support it anymore, so good.

And the attacker doesn't care if the key gets revoked after they've stolen your money.


That issue with Android is what made me decide to give up smartphones. I cannot adequately secure a phone without installing a custom ROM (and even then, it's iffier than ever), and the integrity/attestation checks make custom ROMs very problematic.

So, once my smartphone dies, I'll be switching to a dumbphone.


You should take a look at GNU/Linux mobile operating systems every once in a while. They're making great strides and both popular GUIs KDE and GNOME have been working on making mobile a target in their ecosystems.

Fairphone comes with a five-year guarantee for repairs and parts, so none of that early planned obsolescence, and it works with all the Linux mobile distributions I have had my eyes on.


No planned obsolescence, but they removed the headphone jack & started selling earbuds, which are one of the kings of e-waste compared to getting a nice pair of headphones or IEMs that will last a decade or more.


Try GNU/Linux phones, Librem 5 or Pinephone. They're repairable, with the headphone jack and lifetime updates.


Sorry boo, but the telephony is generally pretty garbage on all the Linux options & a phone still needs to be a phone. A lot of those important apps won’t be supported either, as they only cater to the Android/iOS duopoly.


Calls work fine for me on Librem 5. There are many posts on their forum with similar experiences, e.g., https://forums.puri.sm/t/librem-5-phone-review/21085


That’s one phone with overpriced features unfortunately. I understand you’re paying for a small-batch, niche device (that some users have had a lot of issues getting shipped) to get a slow CPU, measly RAM/storage, a meh camera, & an IPS panel. I’d rather go the route of postmarketOS with BYOD with better specs (even a 4–5 year old phone can easily outcompete) & an OLED panel, but much of the effort is not in telephony (I understand it’s all volunteer hackers, but it’s an unfortunate effect). The PinePhones have the same spec issues even with a better sticker price. It’s just too hard to justify. I flashed postmarketOS & Ubuntu Touch on an old OnePlus 1, which has similar specs, & it’s gruelingly slow. Phones in the last 3 years are now all ‘good enough’ (hence sales across the board plummeting as upgrades have finally meant a lot less), so until the specs can get there, it’s gonna be a pass.


But remote attestation doesn't completely prevent you from running custom ROMs. Rather, the problem is if you want to use various proprietary crapps/websites that insist on performing attestation. So there doesn't seem to be a point in choosing a "dumb phone" besides as some sort of protest. Rather, for each of your devices you'll choose between having a libre user-representing (i.e. secure) OS, or a locked-down web-TV equivalent. I'd personally choose my pocket device to be running secure libre software, and then use any proprietary crapps on a tablet that stays at home. If remote attestation gets so bad that I can't even do things like access store apps on a libre/secure device, then I'll carry two, with the libre/secure device supplying a wifi AP to the other.

(this comment is most certainly not meant as any sort of defense of remote attestation)


The problem is, that's where the network effect bites you in the butt. With functioning remote attestation, platforms can lock down their APIs, so there often will not be a libre alternative. I.e. if your friends are on signal or whatsapp or twitter or whatever, you'll have to use a second phone to contact them, which will probably soon become your primary phone.


Sure, if things get really bad and I couldn't convince friends to use libre-supporting services and/or texting no longer works, then I'd have to choose my main pocket device to be locked down. I'd probably still try to carry the libre device as a secondary one to have access to trustable apps while out, but I can see that being too inconvenient and giving up on it.

But my point was that still doesn't indicate moving to a "dumb" phone. A dumb phone is just a locked down device with even fewer capabilities.


Even then you are yellow booting and that is another security issue.

What dumbphones are you looking at? Everything retail I am seeing is a stripped-down version of Android.


I don't know of any dumbphones on the market. I help people out who are on parole and they are not allowed to own smartphones. The best we can do is try to trick the parole agents by buying them flip phones, but even the most basic stupid flip-phones for Grandma still have Alexa built in and have some form of Internet access.



A device that can't watch Netflix in HD isn't "unusable". If your bank complains about custom ROMs for wireless payments, the exact same feature can work perfectly fine with your bank card (better even, in some cases).

I'm quite annoyed that I can't watch HD movies on my phone but it's not as if I should toss it into the bin now.


The issue is that it likely won't stop at Netflix and banking. McDonald's won't let you use their app with an unlocked bootloader [0].

My comment was suggesting that the majority of apps, not just banking/Netflix, will likely adopt and require hardware attestation once 90%+ of Androids in use support it. Things like Uber/Lyft, Metro, food ordering, email, social media, etc. are just as likely to pick it up, and that's my point. No, it won't be a brick, but you'll likely lose quite a few apps.

[0] https://forum.xda-developers.com/t/mcdonalds-app.4067887/


Sorry, a DICE-enhanced TPM would be just as evil as a regular TPM, I'm afraid. You can think of DICE as giving you a gazillion TPMs for the price of one actual TPM, one per program you care to load. What would happen is, your bootloader would just inject the standard Treacherous Computing™ firmware into the TPM, and since its keys are tied to it being what it is, in the end you'd end up with a regular Treacherous Platform Module, same as today.

I would _love_ to stop programs from using my TPM against me, but I'm afraid DICE isn't it.
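To make that concrete: DICE derives each loaded program's identity from the device secret plus a measurement of the program. A minimal sketch, assuming an HMAC-based KDF (actual devices vary in their primitives and inputs):

    # DICE-style identity derivation (illustrative, not any vendor's exact scheme)
    import hashlib
    import hmac

    UDS = bytes.fromhex("00" * 32)  # Unique Device Secret, burned into fuses (placeholder)

    def derive_cdi(uds: bytes, program: bytes) -> bytes:
        """Compound Device Identifier: binds the device secret to the loaded program."""
        measurement = hashlib.sha256(program).digest()
        return hmac.new(uds, measurement, hashlib.sha256).digest()

    # The same program always gets the same identity on the same device, so a
    # bootloader that always injects the standard TC firmware always gets the
    # same stable, attestable keys -- i.e., a regular TPM.
    assert derive_cdi(UDS, b"tc-firmware") == derive_cdi(UDS, b"tc-firmware")
    assert derive_cdi(UDS, b"tc-firmware") != derive_cdi(UDS, b"my-firmware")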


> Their motivation however was different: they wanted to better protect the device secret. Many devices store their secret in a fuse bank, and changing it is often impossible. Leaking it is especially bad, because it can render the entire device forever unusable: no one wants an HSM with a leaked secret, or a remote IoT device that can be impersonated.

A computer on which I cannot choose what software I run and what state I expose to that software, and which allows software to impersonate the "original" software, is a SafetyNet-style locked-down device controlled by the owner of the TPM keys and the programs running on it.


I’m sorry, did you reply to the wrong comment? Or did you quote the wrong quote? I genuinely can’t link your response to either my previous comment or what you just quoted.

Taken in isolation your sentence is correct, I agree 100% with it: of course I want control over what runs on my computer. Of course the TPM is often used to steal that control away from me. Of course that is bad.

What made you think I don’t think that?


I'm not upset at you, I'm agreeing with the comment you wrote. But since I found out you're the author of the article, I find it odd you're now upset about the idea of TPMs and remote attestation, when your article praises an alternative TPM architecture fully capable of remote attestation and disobeying the user.


I see, I may have separated those two subjects a bit too much. The TPM is two things: evil, and bloated. This post focused on the "bloated" part, with only the slightest recognition of the evil at the very end of my post. I understand it therefore looks like I'm approving the evils of the TPM.

The evil part would need its own post, but frankly, other people have addressed this part much better than I ever could. Cory Doctorow's talk on The Coming War on General Computation is as relevant as ever.

Even then, TPM-like hardware can still be useful to me as a user, even if my computer is 100% Free Software. It is a shame though that its main use is stripping me of my freedoms instead.


> or, even better, to prevent the HSM from being used by anything without my overt permission

Can't you already do that with an existing TPM? You just set an owner authentication password and an endorsement authentication password and no application can use it anymore unless you provide the password.

Technically it would still be possible to use it as a very slow cryptographic coprocessor, I guess, but that's benign and useless. It does still provide access to some platform measurements, but they can't be signed by an authenticated (or even safely stored) key, so they are easy to fake.

In addition to that the OS of course can be used to completely block access to it if needed.

The problem is not that people can't stop applications from using it, it is just that in practice people don't care.


> Can't you already do that with an existing TPM? You just set an owner authentication password and an endorsement authentication password and no application can use it anymore unless you provide the password.

How would one go about doing that?


On Linux with tpm2-tools installed you can run the following.

To set the owner password (mainly for storage):

    tpm2_changeauth -c owner file:-

To set the endorsement password (e.g. to verify that the TPM is authentic):

    tpm2_changeauth -c endorsement file:-

To set the lockout password (to recover the system without requiring a full reset):

    tpm2_changeauth -c lockout file:-


If you can do that, doesn't that mean that you won't be able to even boot Windows after that?


Potentially. The last Windows I tried to boot was Windows 10, which could deal with this; it just disabled some functionality relying on the TPM (Windows Hello, I believe). It might be that Windows 11 won't like it as much.

Then again, if you want to control what runs on your system, you probably don't run Windows in the first place.

Also if you want to stop Windows from booting, it's much more reliable to change the Secure Boot keys (and of course not adding the Microsoft keys afterwards). Then your system is guaranteed Windows free.


> to prevent the HSM from being used by anything without my overt permission

That sounds quite doable? Basic sandboxing (flatpak/snap/whatever) and not assigning the tss group to system daemons will do that for you.


How does that solve the issue, exactly? Apps could simply refuse to run, and platforms refuse to provide you with service, if you don't accept their keys in your TPM.


At this point it seems like only regulation could stop this. They should be required to interoperate by law. Refusing service because the user has "unauthorized" software should be literally illegal. Without this, all useful services will exclude us.


I think the better regulation would be to make it illegal to sell hardware that includes embedded keys without making the secret key (or the private key, if the embedded key is a public key) available to the legitimate owner, with no strings attached.

Hardware vendors could choose to, for example, make the keys available in printed form in an envelope that is difficult to open without breaking vendor-branded seals that are expensive to manufacture in small quantities, and warn consumers not to open it if they don't know what they're doing.

This would prevent its use to lock consumers out of installing their own software on their own hardware, while still making it useful for legitimate applications. Consumers could choose to securely destroy the key if they wanted (with the caveat that they could then not on-sell the device).


Installing our own software is not the problem. Their ability to even know what software we're running is the problem. Web services should be required to work with anything that speaks their network protocol, not just a cryptographically blessed official app. That way we can adversarially interoperate with them.


This reads like "the full TPM spec is too complicated for my use case, so I made my own TPM for my personal use case". That's not fixing TPM, that's inventing your own, custom TPM, that only works for a subset of the intended audience of the thing you're replacing.

It's like replacing all private cars with bikes and public transit. This solves the pollution problem and the traffic-casualties problem, and would solve transportation for the vast majority of people traveling on the road. It doesn't solve some niche use cases, like "trucks" or "construction work", but those are just bloat almost nobody needs in the first place, right?

From this description, Tillitis sure seems like a good alternative for TPMs. However, there's no Tillitis chip in my laptop or my desktop, but I do have a TPM. Things like SSH and PGP are already implemented. Tillitis isn't very interesting to me in its current state as advertised in this article.


Author here. Did you miss the part where users can load arbitrary programs into the "new custom TPM"? If we can do that, solving all use cases for all users is very easy: just write the appropriate program whenever a new use case pops up. This is not supporting a subset of current users, this is supporting all users. Every last one of them.

Believe me, given the current complexity of the TPM 2.0 interface, writing a custom program for any single use case is not any harder than wading through the current TPM documentation. Given suitable crypto libraries I'm guessing it's quite a bit easier in most cases.

> However, there's no Tillitis chip in my laptop or my desktop

Yeah, that's the thing with new approaches: they're new. Now if someone made an HSM with the same pinout as a discrete TPM, with a DICE-aware approach under the hood, you could plug it into your motherboard today.


What if the firmware program has a bug that needs to be fixed? Fixing it would change the hash and thus lock it out from key access but leaving it unchanged will mean the keys can be compromised.

How does Tillitis handle this case?


Could you not have a tiny certified kernel program with an embedded public key that reads the main program, hashes it, checks the signature, and executes it (providing the keys to the main program). Obviously, if you change the kernel program, you would change the keys, but you could change the main program. Anyone with the private key then has the power... they could migrate by running a new kernel (while the TKey is under their physical control) and generate a keypair (deterministically from the new secret key) and export the public key. Then the controller of the private key could sign a program to run under the old kernel that will encrypt that old key with the new public key.
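The core of that could look something like this sketch, assuming Python's "cryptography" package for Ed25519; all the names are hypothetical, not the actual TKey firmware interface:

    import hashlib
    import hmac

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Done once, off-device: the key holder signs the main program.
    vendor_key = Ed25519PrivateKey.generate()
    main_program = b"application firmware v2"
    signature = vendor_key.sign(main_program)

    # Inside the tiny kernel program, whose own hash (and thus CDI) never changes:
    KERNEL_CDI = bytes(32)  # what DICE handed the kernel program (placeholder)

    def run_main(program: bytes, sig: bytes) -> bytes:
        # Raises InvalidSignature unless the embedded key signed this exact program.
        # (In reality only the public key would be embedded in the kernel.)
        vendor_key.public_key().verify(sig, program)
        # Keys derive from the *kernel's* CDI, so a signed update to the main
        # program still reaches the same secrets.
        return hmac.new(KERNEL_CDI, b"main-program-key", hashlib.sha256).digest()

    key = run_main(main_program, signature)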


It's tedious, but simple: revoke the old key, rotate to the new one. If you need the old key to revoke it (say you encrypted a disk, or you have wrapped credentials you need to unwrap first), then use the old buggy firmware to do the necessary decryption, then encrypt again with the new one.

Writing the migration program that loads the two pieces of firmware (the old, then the new) onto the dongle, however, is a pain in the butt. Especially if you can't restart the device without physically unplugging it and plugging it back in.


This is soo cool! Very happy to see this.

One thing I mentioned in a talk about the TKey at FOSS-North this spring was that the internal name for the project at Mullvad that ultimately led to the TKey was "TPM-ish". The idea was to develop a device with just the parts of the TPM API needed to perform measured boot, but that we could control and trust.

This idea got simplified into a hardware Root of Trust device that could only do Ed25519 signing: basically an iCE40 UP FPGA mounted on a PCB, talking SPI or LPC. And since it was based on RISC-V, it didn't take long until Mullvad founder Fredrik Strömberg proposed that by combining it with the DICE concept we could generalize it into what has become the TKey.

The TKey Unlocked will be available very soon. These devices are not provisioned and not locked by Tillitis. This allows you to provision the UDS and UDI yourself, and do anything else you want with the TKey. This includes modifying the FW and the FPGA design. There will also be a TKey programmer to allow you to program TKey devices:

https://shop.tillitis.se/


> This is soo cool! Very happy to see this.

Glad you like my article, it means a lot.

> The TKey Unlocked will be available very soon.

That's excellent news, I can't wait to play with those. If I could pre-order one right now I would. :-)


This article simplifies the problem and commits factual errors.

A TPM is not an HSM, not an enclave, and it does not allow running arbitrary computation. TPM is a specification of a secure element that provides some cryptographic primitives, secure storage, a signing mechanism (endorsement key), and a few more things. Since it is available from a very early boot stage, it is used for storing and signing integrity measurements.

An HSM or TPM never releases the signing key, contrary to what the author writes. DICE does not release the initial seed used to derive the initial hash/key.

DICE addresses a different market. It was designed by the same organization that designed the TPM, but it targets IoT devices. Microsoft extended the spec so that one gets a chain of signed measurements instead of an aggregated hash as an attestation proof (at a high level).

DICE is getting more popular in TEE designs because one does not have to rely on an external chip vulnerable to physical attacks. However, the same set of features is needed for both DICE and TPM to enable attestation. The TPM offers additional features, like monotonic counters, secure storage, sealing, etc., that can be used for other use cases.

Finally, the TPM became a standard and has been implemented as part of complex processors' firmware (Intel PTT), as discrete TPMs (what the author of the article is familiar with), and as software TPMs enabling attestation for VMs, recently also for confidential-computing VMs (check Intel TDX). The Linux kernel supports runtime integrity measurements with the IMA security subsystem, which relies on the TPM protocol for attestation.


Your nitpicks and misinterpretations are not my factual errors.

> A TPM is not an HSM

Most TPMs we care about are their own piece of silicon. The software and firmware ones aren't real HSMs, but come on, we all know those are shortcuts to the real thing.

> A TPM is […] not an enclave

Never mentioned "enclave" in my article.

> A TPM […] does not allow running arbitrary computation.

Hey, I said as much, that's the whole point of my article.

> TPM is a specification […]

Oh please. Can't I use the same word to refer to the TCG specs and actual instantiations in hardware or software?

> An HSM or TPM never releases the signing key, contrary to what the author writes.

I never wrote that.

> DICE does not release the initial seed used to derive the initial hash/key.

Which is why I never said it did.

> DICE addresses a different market. […]

Yes, I'm aware. I also noticed how the TCG manages to promote DICE without noticing it makes their baby TPM 2.0 obsolete. I'm guessing this is motivated cognition. In any case, I think the TCG should start working on a DICE-capable TPM 3.0 right away, and spare us the now needless complexity of TPM 2.0.


> I also noticed how the TCG manages to promote DICE without noticing it makes their baby TPM 2.0 obsolete.

Alas, the TCG seems to have recognized the mistake, and they have recently proposed that DICE rely on a separate entity (the DPE) to handle all secrets and cryptographic operations:

https://trustedcomputinggroup.org/wp-content/uploads/TCG-DIC...

"Examples of environments that could be used for a DPE implementation are a secure coprocessor, discrete secure hardware, a Trusted Execution Environment (TEE), a type-1 hypervisor, operating system kernel, or another type of environment isolated with a hardware-backed mode switch."

The DPE re-introduces a whole class of problems (i.e. authentication and secure communications) that the DICE was meant to simplify away in the first place.


I think you misunderstood the OP.

From what I got, the OP was not claiming that a TPM is simply an HSM (despite the first sentence making it seem that way).

What they claimed was:

- You only need to provide an HSM, a general-purpose microcontroller and a specific, very simple trusted bootloader.

- Then clients can supply the rest of the TPM implementation themselves as untrusted code to the bootloader.

- The resulting system has the same security properties as a TPM implemented in firmware.

- It would lead to simpler implementations and a lot less complexity in general, as clients only have to implement the parts of the TPM spec they need and not the entire thing.

I'm not enough of a crypto guy to be able to judge whether OP is right - but I think you'd need some more substantial cryptographic arguments to disprove the claim.

(In particular, I wonder how easy it would be to cause a collision - i.e. pass a program to the bootloader that results in the same hash and CDI as the program that you want to attack and still lets you do something useful, such as leaking information about the CDI to the host)


This article is correct that having a general-purpose owner-controlled programmable secure enclave is highly desirable. The design where each program which runs receives a unique cryptographic identity derived from a fused key is also something I've advocated: https://www.devever.net/~hl/secureboot

The TKey is a good design here.

TPMs are a red herring here though, as the TKey is not a plausible replacement for a TPM, which exists to measure a platform boot process. There's not really any way to use a TKey for this, since a) you'd have to load firmware at every boot before the first measurement is taken (i.e., before the BIOS even starts running), which no PC is set up to do, and b) you would still be vulnerable to classical MitM of the device, as with any discrete TPM and unlike modern fTPMs.

The lack of controlled storage, it should be noted, does create vulnerability to rollback attacks. It's not really possible to delete data this way.

In any case, with regards to the lack of user-programmable secure elements, it's the industry attitudes here that are the problem. This kind of technology absolutely exists, but it's all under NDA and you can't have it. Smartcards are the most obvious example; you can get nice flash-based programmable smartcards now with 32-bit ARM cores, and no you can't have one. It's ridiculous.

So the TKey is built out of a COTS FPGA, one of the few FPGAs with an open source toolchain (painstakingly reverse engineered). This means it doesn't have any of the silicon hardening that smartcards and other secure element chips have - but there's no choice but to build out of something like this, because those chips are all under NDA. The hardware industry doesn't seem to believe in Kerckhoffs's principle.

IMO the TKey is basically the best you can do with the publicly available silicon today. In that regard it's pretty good. But a TPM is literally the one application it's least suitable for as a secure element.


> TPMs are a red herring here though, as the TKey is not a plausible replacement for a TPM, which exists to measure a platform boot process. There's not really any way to use a TKey for this, since a) you'd have to load firmware at every boot before the first measurement is taken (i.e., before the BIOS even starts running), which no PC is set up to do, and b) you would still be vulnerable to classical MitM of the device, as with any discrete TPM and unlike modern fTPMs.

Well, yes, the TKey specifically is missing bits and pieces that make it unsuitable as an actual TPM on current computers. We could however add them in. We wouldn't have to load the firmware at boot time, for instance, if, like you suggest in your article, it were stored in a flash chip and automatically loaded by the discrete HSM itself at boot time. We'd still need the option to load new firmware from another source or change the flashed firmware for maximum flexibility, but at least this problem could be solved.

The other point about discrete TPMs being vulnerable to MitM attacks… yeah, I haven't found a way around that. As far as I know, there's no way to stop the PC from executing one bootloader while having the TPM measure another (say, the one approved by Intel and Microsoft). My web searches around that are eerily silent, and the best technical explanations I could glean tend to show that measured boot and discrete TPMs are fundamentally incompatible.


>My web searches around that are eerily silent, and the best technical explanations I could glean tend to show that measured boot and discrete TPMs are fundamentally incompatible.

Indeed. It is actually remarkable how the TPM implementation on PCs has been blatantly insecure for most of the lifetime of the TPM ecosystem (with fTPMs being relatively new), and how remarkably few people have pointed out this obvious fact.

Honestly there are so many issues with the PC TPM ecosystem I have a draft blogpost going through them all, but it might be a while before I can finish it.


> Honestly there are so many issues with the PC TPM ecosystem I have a draft blogpost going through them all, but it might be a while before I can finish it.

I'm interested, if that encourages you. (Also, I liked your linked article: interesting and well argued.)


How about as a YubiKey replacement? What are the good applications?


Sure, this is fine.


Fixing the TPM is hard because taping out semiconductors at scale is not yet facile [1], and there's also still a proprietary PDK in the way of 'traditional' manufacturing [2]. The article is a submarine for [3], which looks very interesting. I do wonder how the Lattice iCE40 will fare under fault injection, and just how many grams of epoxy you'll need to stick to it.

[1] https://atomicsemi.com/

[2] https://www.bunniestudios.com/blog/?p=6606

[3] https://github.com/tillitis


I'm in the minority on this, but I want BIOS to make a comeback. I don't want TPM, or any of the rest of this broken garbage. Just let me boot my OS, and get the fuck out of my way...


I agree with you actually. I do however like the other uses of hardware tokens: two factor authentication, or even replacing password based login. I would very much like to have a TPM/HSM flexible enough to do the same things a YubiKey does, and more.

But the powers that be decided that the user is the enemy… Maybe they should be more careful that such a decision may make an enemy out of the user. I for one sure don't want to be their ally.


coreboot et al sounds like a better option.


The TPM spec is a monster and I understand the urge to throw it out and create something simpler. It's also true that you're trusting proprietary firmware in the TPM to be correct. Errors and bugs in the spec are also near impossible to fix. So the idea of "just run whatever code you want" has appeal. And the reference tpm2_tools and software stack is a hot mess of C code that should definitely not be written in C at this point.

A device brewed on a RISC-V SOC in an FPGA is probably very hard to secure against hardware attacks. It's fun (in the sense that FPGAs are "fun") and it's definitely a worthy pursuit to have an open hardware + firmware device replace proprietary TPMs. But getting hardware security right is just as important here. A TPM-replacement is not useful if I can solder a JTAG connection to it and read out the memory.

Rewrapping keys when there's a firmware/code update is a real weakness here. There's probably a solid solution, like being able to provide a compound/asymmetric CDI to wrap between versions. Like "generate an ECDH key pair for the next hash". It would be a pain if every client application had to implement this themselves. The other hot use case for TPMs is boot chain attestation, where hashes of UEFI firmware, boot loaders, and kernel images are appended to create a verifiable hash. The device attests to the hash being authentic.
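The asymmetric rewrapping idea could look roughly like this; a sketch assuming Python's "cryptography" package, with an HMAC-derived X25519 seed standing in for whatever the device would actually use:

    import hashlib
    import hmac

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    def keypair_for_firmware(uds: bytes, fw_hash: bytes):
        """Deterministic X25519 keypair bound to a specific firmware hash."""
        seed = hmac.new(uds, b"wrap" + fw_hash, hashlib.sha256).digest()
        priv = X25519PrivateKey.from_private_bytes(seed)
        return priv, priv.public_key()

    # The old firmware asks the device for the *public* half bound to the next
    # firmware's hash, wraps its secrets to it, and hands off. Only a device
    # actually running the next firmware can re-derive the private half.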

One major weakness of TPM 2.0 is that it's monolithic for the whole system. If you're running VMs or even just multiple processes in the OS, it's not really easy to use across domains. So lightweight code swapping would be pretty cool.

Interesting stuff nonetheless.


> A device brewed on a RISC-V SOC in an FPGA is probably very hard to secure against hardware attacks.

Makes sense. Which is why I didn't insist too much on the FPGA nature of the TKey. For maximum security I would want an ASIC system on a chip (ideally some RISC-V profile), with a real fuse bank, neatly lockable ROM for the bootloader firmware, and all the real hardware security I basically know nothing about.

An FPGA is such a sexy prototype, though.


Interesting discussion!

There are some exciting things that could be done with an ASIC, but at the same time an ASIC would require extensive supply chain security to be in place (which is a big task). There are a lot of hands touching the design and silicon from point of design sign-off, to ASICs in your hand.

A supply chain attack is more difficult on an FPGA, partly because of processes implemented by the vendors and partly because of how FPGAs work: since there is no functionality in the FPGA itself, malware injection is more difficult (close to impossible?).

Glad to hear your reasoning around this.

(Full disclosure: I work at Tillitis)


Yes! I've been wanting a TPM 2.0 extension that lets users provide something like a bytecoded program that runs like a secure enclave: if its hash matches a secret's authz, then the program is authorized to use it via well-defined APIs (e.g., sign data), and the bytecode interpreter would keep the program from doing things it shouldn't.


Having a look at their documented threat model: https://github.com/tillitis/tillitis-key1/blob/main/doc/thre...

I love this particular detail, listed under Assumptions:

> The end user is not an attacker. The end user at least doesn't knowingly aid the attacker in attacks on their device.

I love this, it's exactly what I want from an HSM device. However, sadly, most vendors today deploy TPMs in such a way that the end-user is an attacker (see: Google SafetyNet) - and the TKey is kinda incompatible with that, I suppose.


It's an important topic, but the basic tradeoffs with TPMs and HSMs are that you either a) trust that the vendor is generating root secrets with sufficient entropy, whether they are a private key or a symmetric secret, b) trust the personalization process for replacing the OEM secrets with personalized ones, or c) trust the firmware not to yield the unique secrets you generate in it.

There are issues with all of these, but it's a question of in which security context you are generating your root secrets and keys: e.g. at manufacture, at personalization, or whenever the end user wants to. The catastrophic failure mode of TPMs depending on shared root secrets may actually be a privacy feature, imo, because in all the digital identity work I have done, this was where every scheme fell over.


Even if flawed, it can still act as a safety-in-numbers game. An attacker has to be both able to abuse the TPM and in a position to do so; in the context of FDE, even a flawed TPM 1.x approach stops 90% of potential threat actors capable of abusing an unencrypted device.


A solution that was rarely implemented was batch keys, where the secrets installed by the OEM were diversified somehow, so when attackers eventually extracted the TPM secrets from one device, it would not compromise all devices. It still has a terrible and unacceptable failure mode when people's lives depend on it, but for infrastructure, I think the risk is more manageable.


If your threat model involves people sniffing TPM signals in transit, you’re in a different league to me. In most cases I assume physical security (security guards, epoxy the motherboard, etc) become the mitigating factors. I’m just trying to avoid evil maids.


Making a TPM Genie isn’t that hard, and once we have that, the Evil Maid can fairly easily unplug the discrete TPM, plug in the Genie, and plug the TPM into the Genie. And voilà, you can now sniff TPM signals in transit. Only works with discrete TPMs of course, but Man-in-the-Middle is not out of this world.

https://github.com/nccgroup/TPMGenie


I’m more referring to laptops. If you’re using a desktop, you also have other issues (like key injectors, unless you inspect the cable going from your PC to your desk every time, etc.).


I wish the whole "trusted computing" thing just disappeared.


A reprogrammable HSM is a neat idea, but I think the author has not really understood the use cases TPM 2.0 is trying to support. The TPM 2.0 architecture document contemplates three distinct roots of trust, and the TKey can't really serve as any of them, at least not while maintaining the operational flexibility and simplicity the author likes about it.

The first is the root of trust for measurement which consists of the first immutable boot code run on the application processor, which must be trusted to measure the first mutable code correctly, and the trusted hardware that receives these measurements. This trusted hardware needs to be present and running from the very earliest boot stage, and must keep running until the system is reset. If the application processor can reset the TKey after boot, it could reset the measurements and then imitate the legitimate boot chain, defeating the purpose of measured boot. If it can't then the TKey is running fixed firmware that can, at best, be changed by rebooting the system, and needs to be shared by all applications simultaneously. For a general purpose operating system that needs to be able to run arbitrary applications that in turn need to be able to support a wide range of systems, this pushes you inevitably towards something like the TPM 2.0 spec that tries to support all use cases at once.

What about just embedding DICE into the application processor and having the system serve as "its own HSM"? That only works for the very simplest boot policy of "these secrets should only be accessible to a device running a single fixed image". Maybe you're fine with reprovisioning your EV charging stations after every software update, but my devices get updated more often than that.

The second is the root of trust for storage, which is a container for non-volatile memory with read and/or write controls. This is an easy one: to serve in this role you need protected non-volatile storage; the TKey has none, gg.

You need this for audit logs, and also for any kind of policy that might change over time. What if I want to change my password? What if I want to revoke access to a secret from an old OS image? Or record all uses of a signing key? All of these require some kind of storage that can't be rolled back to an earlier state.

I think you could have a scheme where the chip stores the root of a Merkle tree of the NV state for every trusted application and relies on the host to provide, at boot, the actual state for a specific trusted application plus a proof that it's in the Merkle tree. That would allow different trusted applications to be run on the same physical chip without interfering with each other's data, but it is going to drastically complicate the design of this system and require some kind of runtime OS on the chip to control how the root is updated (otherwise a trusted application could roll the state back for other trusted applications).
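The verification half of that is cheap, at least. A toy sketch assuming SHA-256 and host-supplied inclusion proofs (not any existing TPM or TKey interface):

    import hashlib

    def h(*parts: bytes) -> bytes:
        return hashlib.sha256(b"".join(parts)).digest()

    def verify_inclusion(root: bytes, leaf: bytes, proof: list[tuple[bytes, bool]]) -> bool:
        """proof: (sibling_hash, sibling_is_left) pairs from the leaf up to the root."""
        node = h(leaf)
        for sibling, sibling_is_left in proof:
            node = h(sibling, node) if sibling_is_left else h(node, sibling)
        return node == root

    # The chip stores only `root`; the host stores every app's NV blob and
    # presents (blob, proof) at boot. Updating the root safely is the hard
    # part, and is where the trusted runtime logic would have to live.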

Finally you have the root of trust for reporting, which can sign trustworthy assertions about the system state: for example, attesting that a key is actually bound to the secure element, or that the system booted in a particular state, or to the audit log. For this you need a key that relying parties already know is bound to the secure element for this purpose. If you have different trusted applications with different secrets, and you want them all to be able to provide remote attestation, then you need to either go through a manual provisioning process for each application (someone needs to connect the device to a trustworthy system and check the attestation key for the application), or you need the firmware to sign something derived from the CDI using something derived from the UDS (which the TKey's firmware doesn't do). It doesn't require a trusted runtime OS on the secure element though, so at least it has that going for it.

I think the hardest of these is the measured boot use case, because to be useful it needs to be combined with anything that relies on measured boot. There's no point in measuring your boot process if you can't either remotely attest to it, or bind a secret to it, and it needs to be able to support whatever the host OS needs it to do, so I think attempts at TPM 2.0 style fully general trusted applications are close to unavoidable here.

Maybe with some very clever ideas you can make a secure element that can actually replace all these use cases while being reprogrammable at runtime and having only small, purpose-specific trusted applications, but the TKey that exists today isn't it.


> The first is the root of trust for measurement which consists of the first immutable boot code run on the application processor, which must be trusted to measure the first mutable code correctly, and the trusted hardware that receives these measurements. This trusted hardware needs to be present and running from the very earliest boot stage, and must keep running until the system is reset. If the application processor can reset the TKey after boot, it could reset the measurements and then imitate the legitimate boot chain, defeating the purpose of measured boot. If it can't then the TKey is running fixed firmware that can, at best, be changed by rebooting the system, and needs to be shared by all applications simultaneously. For a general purpose operating system that needs to be able to run arbitrary applications that in turn need to be able to support a wide range of systems, this pushes you inevitably towards something like the TPM 2.0 spec that tries to support all use cases at once.

First, as far as I could gather so far, measured boot and discrete TPMs are fundamentally incompatible. Just boot whatever you want on the application core, and when it sends the bootloader to be measured to the TPM, just MitM the thing with a TPM genie, and have the genie give another bootloader to measure, one that the TPM would approve of. This unlocks the TPM and we just broke the chain of trust (power to the people).

So okay, the TPM must be fused next to the application core to prevent any kind of MitM. It still needs a default firmware that does whatever is needed for measured boot. After that though, why would the original firmware be needed? You only need to measure the bootloader once. Once you have, the measured bootloader can measure the kernel etc., all the way to user space. Similarly, once the TPM has given away the hard drive's encryption keys to the application core, those keys aren't needed any more. So why couldn't we reset the TPM after boot?

Even if I missed something there, we could imagine going in stages: have a measured boot core that's always running, but allow running additional code on top of that basic firmware (and give it derived keys in a DICE fashion). That way the only use cases the immutable firmware has to solve are secure/trusted/measured boot, and loading custom firmware on top for arbitrary HSM functionality. Couldn't that work?

> The second is the root of trust for storage, which is a container for non-volatile memory with read and/or write controls. This is an easy one, to serve in this role you need protected non-volatile storage, the TKey has none, gg.

That memory is orthogonal to the DICE approach, we don't necessarily need to forego all persistent state like the TKey does.


I'm not an encryption guy so maybe this is a stupid question, but doesn't this mean you can't update the firmware without losing all your encrypted data?


Not a stupid question. CDIs are groovy for minting secrets that are bound to the exact firmware that's running, but are a bit less ergonomic out of the box when it comes to keeping long-lived secrets around across a firmware update. Firmware changes --> CDI changes --> anything derived from or sealed to the CDI is gone, by design.

A more ergonomic approach for sealing long-lived data is to use something like a hash chain [0], where the chain starts with the equivalent of a DICE UDS, and the chain's length is (MAX_VERSION - fw.version). The end of that chain is given to firmware, and the firmware can lengthen the chain to derive older firmware's secrets, but cannot shorten it to derive newer firmware's secrets.

This presumes that the firmware is signed of course, since otherwise there'd be no way to securely associate the firmware with a version number. If the public key is not baked into the HSM, then the hash of the public key should be used to permute the root of the hash chain.

[0] https://en.wikipedia.org/wiki/Hash_chain
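A sketch of that chain, assuming SHA-256 (MAX_VERSION and the seed handling here are illustrative, not a shipping interface):

    import hashlib

    MAX_VERSION = 100
    ROOT = bytes.fromhex("11" * 32)  # UDS-equivalent, permuted by the signer's pubkey hash

    def secret_for_version(version: int) -> bytes:
        """Firmware version v gets the chain element MAX_VERSION - v steps from the root."""
        s = ROOT
        for _ in range(MAX_VERSION - version):
            s = hashlib.sha256(s).digest()
        return s

    # v7 can derive v6's secret by hashing once more...
    assert hashlib.sha256(secret_for_version(7)).digest() == secret_for_version(6)
    # ...but reaching v8's secret would require inverting SHA-256.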


Yes, and no.

If you lose the firmware entirely you would indeed lose the derived decryption keys. But if you keep the firmware somewhere safe (or even fetch it again from wherever you got it first), then loading it again would derive the same keys, and you can decrypt your stuff again.

This makes reproducible builds very important by the way: if you rely on a source control system to hold on to old versions of the firmware (just in case someone needs it to decrypt old files), you really really want a way to re-generate the same binary blob from the same source code.


most systems for encrypting large amounts of data (e.g. a whole hard drive) don't use the user-derived key directly for encryption; the data is encrypted with a content encryption key (Microsoft) or master key (LUKS), which is then encrypted with the user-derived key and stored in the encryption header. this allows the user passphrase to be changed by reencrypting the CEK/MK rather than the whole drive.
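the shape of that indirection, sketched with python's "cryptography" package (LUKS and BitLocker of course use their own formats and KDFs):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    master_key = AESGCM.generate_key(bit_length=256)  # encrypts the data, never changes
    user_key = AESGCM.generate_key(bit_length=256)    # stand-in for a passphrase/TPM-derived key

    # The header stores only the wrapped master key:
    nonce = os.urandom(12)
    header = (nonce, AESGCM(user_key).encrypt(nonce, master_key, None))

    # "Changing the password" rewraps 32 bytes instead of re-encrypting the disk:
    master = AESGCM(user_key).decrypt(header[0], header[1], None)
    new_user_key = AESGCM.generate_key(bit_length=256)
    new_nonce = os.urandom(12)
    header = (new_nonce, AESGCM(new_user_key).encrypt(new_nonce, master, None))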



There is no need to fix TPM. It can be used with free software. See Librem laptops with Librem Key.


If you have no problem with needlessly complex APIs and accidental complexity, sure. But I do.


Subvert the manufacture of the UDS...


With TKey Unlocked you will be able to generate the UDS as you please.



