
No amount of remote attestation and "transparency logs" and other bombastic statements like this would make up for the fact that they are fully in control of the servers and the software. There is absolutely no way for a customer to verify their claims that the data is not saved or transferred elsewhere.

So unless they offer a way for us to run the "cloud services" on our own hardware where we can strictly monitor and firewall all network activity, they are almost guaranteed to be misusing that data, especially given Apple's proven track record of giving in to governments' demands for data access (see China).






> No amount of remote attestation and "transparency logs" and other bombastic statements like this would make up for the fact that they are fully in control of the servers and the software. There is absolutely no way for a customer to verify their claims that the data is not saved or transferred elsewhere.

You are right. Apple is fully in control of the servers and the software, and there is no way for a customer to verify Apple's claims. Nevertheless, system transparency is a useful concept. It can effectively reduce the number of things you have to blindly trust to a short and explicit list. It also forces the operator, in this case Apple, to explicitly lie if it wants to deviate from its published claims. As others have pointed out, that is quite a business risk.

As for transparency logs, they are an amazing technology that I highly recommend taking a look at if you don't know what they are or how they work. Check out transparency.dev or the project I'm involved in, sigsum.org.
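For a flavor of how these logs work under the hood, here is a minimal sketch of the Merkle tree hashing they are built on, following the RFC 6962 conventions used by Certificate Transparency and friends. This is just illustrative Python, not any project's actual code:

    # Minimal Merkle tree root computation, RFC 6962 style: leaves are
    # prefixed with 0x00, interior nodes with 0x01, and the tree splits at
    # the largest power of two smaller than the number of entries.
    import hashlib

    def leaf_hash(entry: bytes) -> bytes:
        return hashlib.sha256(b"\x00" + entry).digest()

    def node_hash(left: bytes, right: bytes) -> bytes:
        return hashlib.sha256(b"\x01" + left + right).digest()

    def merkle_root(entries: list[bytes]) -> bytes:
        n = len(entries)
        if n == 0:
            return hashlib.sha256(b"").digest()  # empty-tree root per RFC 6962
        if n == 1:
            return leaf_hash(entries[0])
        k = 1
        while k * 2 < n:
            k *= 2
        return node_hash(merkle_root(entries[:k]), merkle_root(entries[k:]))

    # The operator publishes a signed root. Because every entry feeds into
    # it, old entries cannot be rewritten or dropped without the published
    # root changing, which is what makes the log verifiably append-only.
    log = [b"release-1.0 sha256=aaaa...", b"release-1.1 sha256=bbbb..."]
    print(merkle_root(log).hex())

Inclusion and consistency proofs over that same structure are what let clients check individual entries without downloading the whole log.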

> they are almost guaranteed to be misusing that data

That is very unlikely because of the liability, as others have pointed out. They are making claims which the Apple PCC architecture helps make falsifiable.


> There is absolutely no way for a customer to verify their claims that the data is not saved or transferred elsewhere.

Transparency logs are capable of verifying that; it's more or less the whole point of them. (Strictly speaking, you can't make faking it impossible, but you can make it arbitrarily expensive.)

Also, if they were "transferring your data elsewhere" it would be a GDPR violation. Ironically wrt your China claim, it would also be illegal in China, which does in fact have privacy laws.


Are transparency logs akin to Certificate Transparency but for signed code? I’ve read through the section a couple times and still don’t fully understand it.

Yeah, it's a log of all the software that runs on the server. If you trust the secure boot process then you trust the log describes its contents.

If you don't trust the boot process/code signing system then you'd want to do something else, like ask the server to show you parts of its memory on demand in case you catch it lying to you. (Not sure if that's doable here because the server has other people's data on it, which is the whole point.)


One approach would be a chip design where a remote attestation request issues a hardware interrupt, and then the hardware hashes the contents of memory, more specifically the memory containing the code.
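As a sketch of that idea (purely illustrative, not any real chip's interface): the verifier sends a fresh nonce, and the hardware returns a digest binding that nonce to the current contents of the code region. A real design would also sign the result with a device key, which is omitted here.

    import hashlib
    import os

    def attest_code_region(code_memory: bytes, nonce: bytes) -> bytes:
        # Hardware side: hash the code region together with the verifier's
        # nonce so the response can't be replayed from an earlier state.
        return hashlib.sha256(nonce + code_memory).digest()

    def check_attestation(reported: bytes, expected_image: bytes, nonce: bytes) -> bool:
        # Verifier side: recompute the digest over the code image the server
        # is supposed to be running and compare.
        return reported == hashlib.sha256(nonce + expected_image).digest()

    nonce = os.urandom(32)
    image = b"known-good OS/firmware image bytes"
    assert check_attestation(attest_code_region(image, nonce), image, nonce)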

That's not quite enough but yes.

(You need to prove that the system is showing you the server your data is present on, and not just showing you an innocuous one and actually processing your data on a different evil one.)


That makes no sense at all. They control the servers and services entirely; they can choose to emit whatever logs they want into the "transparent logs" and then emit whatever else they don't want into non-transparent logs.

Even if they were running open source software with cryptographically verified / reproducible builds, it's still running on their hardware (any component, the OS / kernel, or even the hardware itself could be hooked to exfiltrate unencrypted data).

Companies like Apple don't give a crap about GDPR violations (you can look at their "DMA compliance" BS games to see to what extent they're willing to go to skirt regulations in the name of profit).


> they can choose to emit whatever logs they want into the "transparent logs" and then emit whatever else they don't want into non-transparent logs.

The log is publicly accessible and append-only, so such an event would not go unnoticed. Not sure what a non-transparent log is.


Ok, but they write and fully control the closed-source software that appends to the log. How can anyone verify that all the code paths append to the log? I'm pretty sure they can just not append to the log from their ExfiltrateDataForAdvertisement() and ExfiltrateDataForGovernments() functions.

Maybe I'm not being clear; transparent logs solve the problem of supply chain attacks (that is, Apple can use the logs to some degree to ensure some 3rd party isn't modifying their code), but I'm trying to say Apple themselves ARE the bad actor, they will exfiltrate customer data for their own profit (to personalize ads, or continue building user profiles, or sell to governments and so on).


> How can anyone verify that all the code paths append to the log?

davidczech has already explained it quite well, but I'll try explaining it a different way.

Consider the verification of a signed software update. The verifier is, e.g., apt-get, rpm, macOS Update, Microsoft Update, or whatever your OS uses. They all have some trust policy that contains a public key, and the verifier only trusts software whose signature verifies against that key.

Now imagine a verifier with a trust policy that mandates that all signed software must also be discoverable in a transparency log. Such a trust policy would need to include:

- a pubkey trusted to make the claim "I am your trusted software publisher and this software is authentic", i.e. it is from Debian / Apple / Microsoft or whomever is the software publisher.

- a pubkey trusted to make the claim "I am your trusted transparency log and this software, or rather the publisher's signature, has been included in my log and is therefore discoverable"

The verifier would therefore require the following in order to trust a software update:

- the software (and its hash)

- a signature over the software's hash, done by the software publisher's key

- an inclusion proof from the transparency log
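As a rough sketch of such a trust policy (assuming the Python `cryptography` package for Ed25519; the structures and names are made up for illustration and don't correspond to any particular verifier):

    import hashlib
    from dataclasses import dataclass
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    @dataclass
    class TrustPolicy:
        publisher_key: Ed25519PublicKey  # "this software is authentic"
        log_key: Ed25519PublicKey        # "this entry is discoverable in my log"

    @dataclass
    class InclusionProof:
        signed_tree_head: bytes          # serialized root hash + tree size
        log_signature: bytes             # log's signature over the tree head
        audit_path: list[bytes]          # Merkle path from the entry to the root

    def accept_update(policy: TrustPolicy, software: bytes,
                      publisher_sig: bytes, proof: InclusionProof) -> bool:
        digest = hashlib.sha256(software).digest()
        try:
            # Claim 1: the publisher vouches for this software's hash.
            policy.publisher_key.verify(publisher_sig, digest)
            # Claim 2: the log vouches that the entry is included and discoverable.
            policy.log_key.verify(proof.log_signature, proof.signed_tree_head)
        except InvalidSignature:
            return False
        # A complete verifier would also recompute the Merkle audit path and
        # check that it links this entry to the signed tree head (elided here).
        return True

The point is that an update is rejected unless both claims check out, so the publisher can no longer ship a signed build that nobody else can ever discover.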

There is another layer that could be added called witness cosigning, which reduces the amount of trust you need to place in the transparency log. For more on that see my other comments in this thread.
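A very rough sketch of what witness cosigning adds (loosely modeled on the Sigsum design; the names are illustrative): the verifier only accepts a tree head if some quorum of independent witnesses has also signed it, so a misbehaving log would have to collude with several other parties.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def enough_witnesses(tree_head: bytes,
                         cosignatures: dict[str, bytes],
                         witness_keys: dict[str, Ed25519PublicKey],
                         quorum: int) -> bool:
        # Count how many known witnesses produced a valid cosignature over
        # this exact tree head; accept only if the quorum is met.
        valid = 0
        for name, key in witness_keys.items():
            sig = cosignatures.get(name)
            if sig is None:
                continue
            try:
                key.verify(sig, tree_head)
                valid += 1
            except InvalidSignature:
                continue
        return valid >= quorum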


Got it, that all makes sense. My concern is not someone maliciously attempting to infect the software / hardware.

My concern is that Apple themselves will include code in their officially signed builds that extracts customer data. All of these security measures cannot protect against that because Apple is a "trusted software publisher" in the chain.

All of this is great stuff: Apple makes sure someone else doesn't get the customer data, and they remain the only ones to monetize it.


> cannot protect against that because Apple is a "trusted software publisher" in the chain.

That's the whole point of the transparency log. Anything published, and thus to be trusted by client devices, is publicly inspectable.


No, gigel82 is right. Transparency logging provides discoverability. That does not mean the transparency-logged software is auditable in practice. As gigel82 correctly points out, the build hash is not sufficient, nor is the source hash. The remote attestation quote contains measurements of the boot chain, i.e. hashes of compiled artifacts. Those hashes need to be linked to source hashes by reproducible builds.

The OS build and cryptex binaries corresponding to the hashes found in the transparency log will be made available for download. These are reconcilable with attestations signed by the SEP.
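To make that reconciliation step concrete, here is a sketch under the assumption that a measurement is a plain SHA-256 over the published binary; the real PCC attestation format and field names differ, so treat this as illustrative only.

    import hashlib

    def measurement_of(binary_path: str) -> str:
        # Hash the downloadable OS/cryptex image the same way the log and the
        # attestation are assumed to measure it (assumption: plain SHA-256).
        with open(binary_path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def reconcile(binary_path: str,
                  attested_measurements: set[str],
                  logged_measurements: set[str]) -> bool:
        # The same measurement must show up both in the SEP-signed attestation
        # and in the public transparency log; otherwise something is off.
        m = measurement_of(binary_path)
        return m in attested_measurements and m in logged_measurements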

The source code provided is for reference to help with disassembly.

Edit link: https://security.apple.com/documentation/private-cloud-compu...


Publicly inspectable how? Are you saying their entire server stack will be open source and have reproducible builds?

My understanding is that Apple PCC will not open source the entire server stack. I might be wrong. So far I haven't seen them mention reproducible builds anywhere, but I haven't read much of what they just published.

One of the projects I'm working on however intends to enable just that. See system-transparency.org for more. There's also glasklarteknik.se.


No, but the binaries executed will be available for download.

Then shouldn't they allow us to self-host the entire stack? That would surely put me at ease; if I can self-host my own "Apple private cloud" on my own hardware and firewall the heck out of it (inspect all its traffic), that's the only way any privacy claims have merit.

> How can anyone verify that all the code paths append to the log? I'm pretty sure they can just not append to the log from their ExfiltrateDataForAdvertisement() and ExfiltrateDataForGovernments() functions.

I think we have different understandings of what the transparency log is utilized for.

The log is used effectively as an append-only hash set of trusted software hashes a PCC node is allowed to run, accomplished using Merkle Trees. The client device (iPhone) uses the log to determine if the software measurements from an attestation should be rejected or not.

https://security.apple.com/documentation/private-cloud-compu...
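For the curious, this is roughly what checking an inclusion proof against such a log looks like (the RFC 6962/9162 algorithm written out as an illustration; Apple's actual on-device verifier and log encoding differ in detail):

    import hashlib

    def leaf_hash(entry: bytes) -> bytes:
        return hashlib.sha256(b"\x00" + entry).digest()

    def node_hash(left: bytes, right: bytes) -> bytes:
        return hashlib.sha256(b"\x01" + left + right).digest()

    def verify_inclusion(measurement: bytes, index: int, tree_size: int,
                         audit_path: list[bytes], root: bytes) -> bool:
        # Recompute the path from the leaf at `index` up to the root of a tree
        # with `tree_size` leaves; accept only if it reproduces the signed root.
        if index >= tree_size:
            return False
        node = leaf_hash(measurement)
        fn, sn = index, tree_size - 1
        for sibling in audit_path:
            if sn == 0:
                return False
            if fn & 1 or fn == sn:
                node = node_hash(sibling, node)
                if not fn & 1:
                    while fn & 1 == 0 and fn != 0:
                        fn >>= 1
                        sn >>= 1
            else:
                node = node_hash(node, sibling)
            fn >>= 1
            sn >>= 1
        return sn == 0 and node == root

The client device only needs to hold the log's signed tree head; any measurement in an attestation that can't be proven into that root gets rejected.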


One of the replies in this thread sent me to transparency.dev which describes transparency logs as something different. But reading Apple's description doesn't change my opinion on this. It is a supply-chain / MITM protection measure and does absolutely nothing to assuage my privacy concerns.

Bottom line, I just hope that there will be a big checkbox in the iPhone's settings that completely turns off all "cloud compute" for AI scenarios (checked by default) and I hope it gets respected everywhere. But they're making such a big deal of how "private" this data exfiltration service is that I fear they plan to just make it default on (or not even provide an opt-out at all).


> It is a supply-chain / MITM protection measure

It is so much more than that, but you are entitled to your own opinion.


> They control the servers and services entirely

There's a key signing ceremony with a third-party auditor watching; it seems to rely on trusting them together with the secure boot process. But there are other things you can add to this, basically along the lines of making the machine continually prove that it behaves like the system described in the log.

They don't control all of the service though; part of the system is that the server can't identify the user because everything goes through third party proxies owned by several different companies.

> Companies like Apple don't give a crap about GDPR violations

GDPR fines can run up to 4% of a company's yearly global revenue. If you're a cold logical profit maximizer, you're going to care about that a lot!
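(For scale: Apple's annual revenue has been roughly $380 to $400 billion in recent years, so a maximum fine would be on the order of $15 billion.)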

Beyond that, they've published a document saying all this stuff, which means you can sue them for securities fraud if it turns out to be a lie. It's illegal for US companies to lie to their shareholders.


"ceremony" is a good choice of word; it's all ceremonial and nonsense; as long as they control the hardware and the software there is absolutely no way for someone to verify this claim.

Apple has lied to shareholders before. Remember those "what happens on your iPhone, stays on your iPhone" billboards they used back in the day to fool everyone into thinking Apple cares about privacy? A couple of years later, they were proudly announcing how everyone's iPhone would scan their files and literally send them to law enforcement if they matched some opaque, government-controlled database of hashes (yes, they eventually backed out of that plan, but not before massive public outcry and a few "you're holding it wrong" explanations).


> Apple has lied to shareholders before

So sue them.

> how everyone's iPhone will scan their files and literally send them to law enforcement

That was a solution that only applied if you opted into a cloud service; it was a strict privacy improvement because it came alongside end-to-end encryption in the cloud, and I think it was mandated by upcoming EU regulations (although I think they changed the regulations, so it was dropped).

Note that in the US, service providers are required to report CSAM to NCMEC if they see it; it's literally the only thing they're required to do. But NCMEC is not "law enforcement" or "government"; it's a private organization specially named in the law. That's a very important distinction, because if anyone did give your private information to law enforcement, you'd lose your 4th Amendment rights over it, since the government can share it with itself.

(I think it may actually be illegal to proactively send PII to law enforcement without them getting a subpoena first, but I don't remember. There's an exception for emergency situations, and those self-service portals that large corporations have are definitely questionable here.)



