Hacker News

Hi Mike! Long time no see.

> simply asserts that a particular key was generated inside a CPU ... There's currently no good way to prove this step

Yes, but there are better and worse ways to do it. Here's how I think about it. I know you know some of this but I'll write it out for other HN readers as well.

Let's start with the supply chain for an SoC's master key. A master key that uses entropy only from an on-die PUF is vulnerable to mistakes and attacks in the chip design as well as in the process technology. An on-die master key memory that is provisioned by the fab, during packaging, or by the eventual buyer of the SoC is vulnerable to mistakes and attacks during that provisioning step.

I think state-of-the-art would be something like:

- an on-die key memory, where the storage is in the vias, using antifuse technology that prevents readout of the bits using x-ray,

- provisioned using multiple entropy sources controlled by different supply chains, such as (1) an on-die PUF, (2) an on-die TRNG, (3) an off-die TRNG controlled by the eventual buyer,

- provisioned by the eventual buyer, and not earlier.
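To make the mixing step concrete, here's a rough Python sketch of deriving a master key from several independent entropy sources (all names here are hypothetical stand-ins; a real SoC would do this in hardware with a vetted KDF):

```python
import hashlib
import os

def mix_entropy(sources: list) -> bytes:
    """Derive a 256-bit master key from independent entropy sources.
    If at least one source is truly random and unknown to the
    attacker, the derived key is unpredictable."""
    h = hashlib.sha256()
    for s in sources:
        # Length-prefix each source so the concatenation is unambiguous.
        h.update(len(s).to_bytes(4, "big"))
        h.update(s)
    return h.digest()

# Stand-ins for the three supply chains above:
puf_response = os.urandom(32)  # (1) on-die PUF readout
on_die_trng = os.urandom(32)   # (2) on-die TRNG output
buyer_trng = os.urandom(32)    # (3) off-die TRNG controlled by the buyer

master_key = mix_entropy([puf_response, on_die_trng, buyer_trng])
```

The point is that compromising any single supply chain (say, a backdoored PUF) isn't enough; the attacker has to control or observe every input to predict the key.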

As for the cryptographic remote attestation claim itself, such as a TPM Quote, it doesn't have to carry only one signature.
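For example, a verifier can require a k-of-n threshold of co-signatures over the same attestation statement. A toy sketch (HMAC stands in for the asymmetric signatures a real TPM Quote would use; the parties and keys are hypothetical):

```python
import hashlib
import hmac

def sign(key: bytes, quote: bytes) -> bytes:
    # HMAC as a stand-in for a real signature scheme.
    return hmac.new(key, quote, hashlib.sha256).digest()

def verify_quote(quote: bytes, sigs: dict, keys: dict, threshold: int) -> bool:
    """Accept the quote only if at least `threshold` known parties
    produced a valid signature over the exact same bytes."""
    valid = sum(
        1 for party, sig in sigs.items()
        if party in keys and hmac.compare_digest(sign(keys[party], quote), sig)
    )
    return valid >= threshold

# Hypothetical co-signers from independent supply chains:
keys = {"device": b"k1" * 16, "fab": b"k2" * 16, "buyer": b"k3" * 16}
quote = b"PCR digest || nonce"
sigs = {p: sign(k, quote) for p, k in keys.items()}
assert verify_quote(quote, sigs, keys, threshold=2)
```

With a threshold like this, no single signer can unilaterally vouch for a bogus claim.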

As for detectability, discoverability and deterrence, transparency logs make targeted attacks discoverable. If all relevant cryptographic claims are tlogged, including claims about inventory and provisioning of master keys, an attacker has to circumvent quite a lot of safeguards to remain undetected.
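The discoverability property comes from Merkle-tree inclusion proofs, roughly as in Certificate Transparency (RFC 6962). A minimal sketch (simplified duplicate-padding, not the exact RFC 6962 tree shape):

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # Domain-separate leaves (0x00) from interior nodes (0x01),
    # as in RFC 6962, to prevent second-preimage tricks.
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(leaves: list) -> bytes:
    level = [leaf_hash(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate-pad; RFC 6962 splits instead
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves: list, index: int) -> list:
    """Sibling hashes from the leaf up to the root."""
    level = [leaf_hash(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf: bytes, index: int, proof: list, root: bytes) -> bool:
    h = leaf_hash(leaf)
    for sibling in proof:
        h = node_hash(sibling, h) if index % 2 else node_hash(h, sibling)
        index //= 2
    return h == root
```

An auditor who holds only the signed root can then check that a specific provisioning claim really is in the log; a claim later found missing or forked out of the log is itself evidence of tampering.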

Finally, if we assume that the attacker is actually at Apple - management, a team, a disgruntled employee, saboteurs employed by competitors - this type of architecture forces the attacker to make explicit claims that are more easily falsifiable than they would be otherwise. And multiple people need to conspire in order for an attack to succeed.

Hello! I'm afraid I don't recognize the username but glad to know we've met :) Feel free to email me if you'd like to greet under another name.

Let's agree that Apple are doing state-of-the-art work in terms of internal manufacturing controls and making those auditable. I think the more interesting and tricky part is actually how to manage software evolution. This is something I've brought up with [potential] customers in the past when working with them on SGX-related projects: for this to make sense socially, there has to be third-party audit not only of the software in the abstract but of each version of the software. And that really needs to be enforced by the client, which means every change to the software needs to be audited. This is usually a non-starter for most companies because they're afraid it'd kill velocity, so for my own experiments I looked at in-process sandboxing and the like to try to restrict the TCB even within the remotely attested address space.

In this case Apple may have an advantage because the software is "just" doing inference, I guess, which isn't likely to be worth keeping secret, and inference logic is fairly stable, small and inherently sandboxable. It should be easy to get it audited. For more general applications of confidential/private computing, though, it's definitely an issue.

The issue of multiple Apple devs conspiring isn't so unlikely in my view. Bear in mind that end-to-end encryption made similar promises - that tech-firm employees can't read your messages - but the moment WhatsApp decided that combating "rumors" was the progressive thing to do, they added a forwarding counter to messages so they could stop forwarding chains. Cryptography 101: your adversary should not be able to detect that you're repeating yourself; failed, just like that. The more plausible failure mode here is therefore not spies or saboteurs but a deliberate weakening of the software boundary to leak data to Apple, because executives decide they have a moral duty to do so. This doesn't even have to be kept secret: WhatsApp's E2E forwarding policy is documented on their website, and they announced it in a blog post. My experience is that 99% of even tech workers believe it gives you normal cryptographic guarantees and is un-censorable as a consequence, which just isn't the case.

Still, all this does lay the foundations for much stronger and more trustworthy systems, even if not every problem is addressed right away.



