
Some useful context: this is almost certainly being driven by Apple’s Private Cloud Compute architecture and not tariffs, as an investment of this magnitude is not planned overnight.

Why is PCC driving Apple to spend billions to build servers in the states? Because it is insane from a security standpoint (insanely awesome).

PCC is an order of magnitude more secure than any server platform ever deployed for consumer use at planet scale. It is secure and private enough that you can literally send your data and have it processed server side instead of on device, without having to trust the host (Apple).[1] Until now the only way to do that was on device. If you sent your data for cloud processing, outside of something exotic like homomorphic encryption[2], you’d still have to trust that the host protected your data well, used it responsibly, and wasn’t compromised. Not the case with PCC.

To accomplish this, Apple uses its own custom chips with Secure Enclaves that provide a trust foundation for the whole system, ultimately cryptographically guaranteeing that the binaries processing your data have been publicly audited by independent security auditors. This is the so-called hardware root of trust.
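
To make the idea concrete, here is a minimal Swift sketch (using CryptoKit; the types, keys, and payloads are hypothetical, not Apple’s actual protocol) of what a hardware root of trust buys you: a key fused into the chip at manufacture signs a measurement of the software stack, and a verifier trusts the measurement only if that signature checks out.

  import CryptoKit
  import Foundation

  // Hypothetical attestation: the secure hardware's device key signs a
  // hash ("measurement") of the booted software stack.
  struct Attestation {
      let softwareMeasurement: Data  // e.g. SHA-256 over the binaries
      let signature: Data            // produced inside the secure hardware
  }

  // Trust flows from the hardware key (known in advance) to the
  // measurement; tampered software yields a measurement that either
  // fails verification or doesn't match any audited release.
  func isTrustworthy(_ attestation: Attestation,
                     hardwareRootKey: Curve25519.Signing.PublicKey) -> Bool {
      hardwareRootKey.isValidSignature(attestation.signature,
                                       for: attestation.softwareMeasurement)
  }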

It is essential then that the hardware deployed in data centers has not been physically tampered with. Without that the whole thing falls apart. So Apple has a whole section in their security white paper detailing an audited process for deploying data center hardware and ensuring supply chain integrity.[3]

You can imagine how that is the weak point in the system, and how managing it in the US makes it more robust: tighter supply chain control.

[1] https://security.apple.com/blog/private-cloud-compute/

[2] Fun fact, Apple also just deployed a homomorphic encryption powered search engine! It’s also insane!

[3] https://security.apple.com/documentation/private-cloud-compu...



Trusting Secure Enclave custom chips over processing locally is going to be a hard-to-impossible sell for those who truly care about privacy.

Thankfully for Apple, that’s a very low number in a world where people demand TikTok remain legal even when shown how their data is being used by foreign actors. People only care about privacy when it’s local (they don’t want mother to find out, the neighbours to talk, a friend to think a certain way about them, or a classmate stalking them), and that’s why AI fakes are much more of a concern than a company knowing everything you do.

But this product is great for Fortune 500 businesses.


I think this is a level of security Apple is providing at additional cost to themselves that only a tiny fraction of consumers would even pay an extra cent for.

From that perspective I really appreciate this effort by Apple.


> at additional cost to themselves

For now.


Yup. Apple knows that they don't have to ship anything more than a whitepaper to justify their stance to current customers. They could announce an internet-connected bidet with a webcam and there would still be people arguing that it's safe until someone exploits it.

The fact that Apple is comfortable shipping a white-label ChatGPT is proof that the whole Private Cloud Compute thing is just for show. They’re perfectly happy partnering with the Worldcoin guy to sell you something popular if there’s money in it for them. Apple knows people expect them to release some haughty whitepaper, so they cook up PCC and claim you can audit it if they think you’re worthy of seeing the insides. Now all the privacy nuts can pipe down while Apple plans a longer-term strategy to make their hardware compete in the datacenter.

There is a world where Apple takes their own privacy commitment to the next level through radical transparency. But that's not what PCC is, it's another puppet for the Punch-and-Judy security theater that sells their iCloud subscriptions.


PCC is completely different from the ChatGPT integration. ChatGPT is indeed not a privacy-hardened system, but Apple devices only use it for so-called “world knowledge” queries, which typically involve limited personal data, and make you confirm each call out to it.

PCC is designed to handle extensive personal data, and the auditing is attested by cryptographic proofs provided to software clients, not just white papers read by humans. It is significantly different from what we’ve seen before in the industry, and highly worth the effort to understand it if you are at all involved in server engineering.


> Trusting Secure Enclave custom chips over processing locally is going to be a hard-to-impossible sell for those who truly care about privacy.

Isn't local processing on Apple devices rooted in the same secure enclave hardware/firmware, attacked and hardened for 10+ years?


The problem with any remote arrangement is that you have to trust Apple that the server side is running all that stuff. Their answer to that is "you can audit us", but I don't see how that would prevent them from switching things in between audits.

As far as local processing goes, though, you're also still fundamentally trusting Apple that the OS binaries you get from them do what they say they do. Since they have all the signing keys, they could easily push an iOS update that extracts all the local data and pushes it to some server somewhere.

Now, I don't think that either of these scenarios is likely to happen if it's down to Apple by itself - they don't really gain anything from doing so. But they could be compelled by a government large and important enough that they can't just pull out. For example, if the US demanded such a thing (like it already did in the past), and the executive made a concerted push to force it.


  you have to trust Apple that the server side is running all that stuff

Remote attestation is supposed to prove to the client that the server is running the expected firmware and PCC software hashes: https://security.apple.com/documentation/private-cloud-compu.... Apple has released (some? all?) source for the PCC server software: https://github.com/apple/security-pcc

> When a user’s device sends an inference request to Private Cloud Compute, the request is sent end-to-end encrypted to the specific PCC nodes needed for the request. The PCC nodes share a public key and an attestation — cryptographic proof of key ownership and measurements of the software running on the PCC node — with the user’s device, and the user’s device compares these measurements against a public, append-only ledger of PCC software releases.
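
As a rough illustration of that client-side check, here is a Swift sketch (simplified; the ledger representation and measurement format are assumptions, not Apple’s wire format): the device refuses to send anything unless the node’s attested software measurement appears in the public release ledger.

  import CryptoKit
  import Foundation

  // The append-only ledger is modeled here as a set of known-good
  // release measurements the client has fetched and verified.
  func shouldSendRequest(nodeMeasurement: SHA256.Digest,
                         publishedReleases: Set<SHA256.Digest>) -> Bool {
      // A measurement absent from the ledger means unaudited software:
      // the client refuses to hand over any data to that node.
      publishedReleases.contains(nodeMeasurement)
  }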

> compelled by a government

Sadly, the bar is much lower than "compel". Devices are routinely compromised by zero-day vulnerabilities sold by exploit brokers to multiple parties on the open market, including governments. Especially any device with cellular, wifi or bluetooth radios. Hopefully the Apple C1 modem starts a new trend in radio baseband hardening, including PAC, ASLR and iBoot, https://www.reuters.com/technology/apple-reveals-first-custo...


> Their answer to that is "you can audit us", but I don't see how that would prevent them from switching things in between audits.

PCC does actually prevent Apple from switching things in between audits, to a high degree. It’s not like a food safety inspection. The auditor signs the hardware in a multi-party key ceremony, and they employ other countermeasures like chassis tamper switches. PCC clients use a protocol that ensures whatever they are connecting to has a valid signature. This is detailed in Apple’s documentation.[1]
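
For intuition, a k-of-n check like the sketch below (the threshold scheme shown is illustrative, not Apple’s actual ceremony) conveys why a multi-party signing ceremony raises the bar: swapping hardware between audits would require corrupting several independent auditors at once.

  import CryptoKit
  import Foundation

  // A node key counts as certified only if at least `threshold` of the
  // independent auditor keys produced a valid signature over it.
  func nodeKeyIsCertified(nodePublicKey: Data,
                          signatures: [Data],
                          auditorKeys: [Curve25519.Signing.PublicKey],
                          threshold: Int) -> Bool {
      // Assumes signatures[i] was produced by auditorKeys[i].
      let validCount = zip(signatures, auditorKeys)
          .filter { sig, key in key.isValidSignature(sig, for: nodePublicKey) }
          .count
      return validCount >= threshold
  }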

See, this is why I think privacy engineering is low-key the most cutting-edge aspect of server development. Previously held axioms are made obsolete by architectural advancements. I think we’re looking at a once-in-15-years leap - the previous ones being microservices and web-based architecture.

[1] https://security.apple.com/documentation/private-cloud-compu...


At some point, having trained and certified Apple engineers overseeing this sort of thing gives far more confidence than random startup #1345134 who promises they hired the best college dropouts they could find.


Everything about democracy is great except its people. You know, the big brained carbon lifeforms that refer to themselves as “citizens”.


> Trusting Secure Enclave custom chips over processing locally

If you're using Apple hardware, it's the same technology in your local device anyway, right?


    > for those who truly care about privacy
Is this the new "No true Scotsman" test on HN?


<<<security is not my domain, asking genuine questions!!>>>

At the end of the day, it ultimately still boils down to trust though, yes? Trust that they are running the data centers the way they say they are, trust that their supply chain is what they say it is, and so on?

At the same time, using some open source piece of software also entails a great amount of trust: I’m not going through the source code of Signal myself, and I’m also not checking that an open source locally served model isn’t sending traffic/telemetry etc back to some remote server via whatever software is running the model… rather, I’m placing my trust in the open source community that others have inspected and tested these things. I’m sure all sorts of shady PRs into important open source code bases are made on the reg after all.

So that’s not to say that trusting Apple is necessarily more or less wise than trusting open source software from a security standpoint… my point is just that it seems like they are aspiring to a zero trust architecture, but at the end of the day, it does still require trust that they are operating in good faith vis-a-vis what they are representing in the white papers, right? To me, it seems like a relatively safe assumption that they are, for a variety of reasons, but nonetheless, it is an assumption, right?


> I’m placing my trust in the open source community

You’re right, security is a matter of degrees not absolutes, but open source software requires considerably less trust than closed source. Right?

PCC applies this principle by making the binaries it runs public and auditable by you or anyone in the security community. (In some cases the source code as well.) The craziness is in the architecture that provides cryptographic proof to clients that the server they’re connecting to is running an audited binary and running on secure hardware. It even does TLS termination at the node level, so you can have high confidence that if the binary isn’t exfiltrating anything, your data will be unreadable by any other server in the org.

So it goes way beyond trusting what the whitepaper says. Data center hardware deployments are audited by a third party that signs the servers in a key ceremony. That ultimately undergirds the cryptographic attestation that servers provide to clients that everything has been audited. And it’s also the element that tighter supply chain control helps shore up.
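
The practical consequence for a client looks roughly like this Swift sketch (the ciphersuite, info string, and key handling are illustrative assumptions, not Apple’s wire protocol): once the node’s key has passed attestation, the request is sealed directly to that key, so no gateway, load balancer, or other server in the fleet can read it.

  import CryptoKit
  import Foundation

  // Seal a request to the attested node's public key using HPKE, so
  // only the audited node can decrypt it.
  func sealRequest(_ plaintext: Data,
                   attestedNodeKey: P256.KeyAgreement.PublicKey)
      throws -> (encapsulatedKey: Data, ciphertext: Data) {
      var sender = try HPKE.Sender(recipientKey: attestedNodeKey,
                                   ciphersuite: .P256_SHA256_AES_GCM_256,
                                   info: Data("pcc-request".utf8))
      let ciphertext = try sender.seal(plaintext)
      return (sender.encapsulatedKey, ciphertext)
  }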

If you’re new to security the architecture documentation I linked to is a very friendly read and a good intro to some of these threats, countermeasures and rationales.


Thank you for the really great response! It answered my main question:

> The craziness is in the architecture that provides cryptographic proof to clients that the server they’re connecting to is running an audited binary and running on secure hardware.

I definitely missed this concept when skimming the links before posting my comment - very very cool!

> open source software requires considerably less trust than closed source. Right?

Of course… but at the same time, I think the difference in the degree of trust I am placing in, say, Signal’s end-to-end encryption and Apple’s (claims of) end-to-end encryption is not as large as it might cursorily seem. Would I be more surprised to read in the news that Apple had secretly embedded some back door than I would be reading that a malicious actor managed to push some hidden exploit through to Signal in an otherwise innocent PR? I’m genuinely not sure which would surprise me more, or which event would be more probable, so can I really make any claim as to which is more secure, given the current knowledge I have? Obviously I could think more deeply about this, but superficially, both are requiring pretty large amounts of trust from me - which I don’t think is misplaced in either… though I do personally trust something like Signal more at the end of the day based on… what, intuition? A gut feeling?


That’s good food for thought! I would just add that the kinds of threats PCC is primarily targeting, I think, are attacks by malicious third parties (including state actors), rogue internal employees, and privacy-leaking software bugs. These are sort of bread-and-butter, real-world threats.

I would go out on a limb and say Apple would love to also prove beyond a reasonable doubt that they too as an organization cannot get away with planting a secret back door — not because they have pure angelic hearts, but because this is good for their privacy-differentiated business model. And PCC certainly makes a huge leap in that direction. But it’s not the problem it’s primarily targeting nor an easy one to solve completely.

As another example, Apple has an implementation of OHTTP onion routing[1] called iCloud Private Relay. It’s really cool and easy to use. The point is to make it so nobody but you can tell what website your IP address is connecting to, not even Apple, the operator of the relay. But bottom line, Apple picks who they collaborate with for the gateways and there’s nothing stopping them from colluding out of band to de-anonymize you if that’s what they wanted to do.
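
Conceptually, the two-hop split looks like this sketch (a simplification of the RFC 9458 flow using CryptoKit’s HPKE; the names and struct are hypothetical): the relay learns who is asking but not what, and the gateway learns what is asked but not who.

  import CryptoKit
  import Foundation

  // What the relay receives: a source address plus an opaque blob.
  struct RelayedRequest {
      let clientIP: String     // visible to the relay only
      let sealedPayload: Data  // decryptable by the gateway only
  }

  func encapsulate(_ innerRequest: Data,
                   gatewayKey: Curve25519.KeyAgreement.PublicKey,
                   clientIP: String) throws -> RelayedRequest {
      // Encrypt the inner request to the gateway's key; the relay that
      // forwards this can see the source IP but never the plaintext.
      var sender = try HPKE.Sender(recipientKey: gatewayKey,
                                   ciphersuite: .Curve25519_SHA256_ChachaPoly,
                                   info: Data("ohttp-request".utf8))
      let sealed = try sender.seal(innerRequest)
      return RelayedRequest(clientIP: clientIP,
                            sealedPayload: sender.encapsulatedKey + sealed)
  }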

Does this defeat the purpose of iCloud Private Relay? No. Its purpose is to better protect you from common privacy attacks, better than a traditional VPN would. It happens to also narrow the trust you need to place in Apple, namely that they would need to collude with another company to defeat the system as opposed to some rogue lone wolf SRE deciding to access your logs. But it wasn’t put in place to make people who fundamentally distrust Apple as a company start trusting them.

[1] https://datatracker.ietf.org/doc/rfc9458/


> At the end of the day, it ultimately still boils down to trust though, yes?

Isn't that pretty much the story for almost everything, though? It comes down to discernment, which is mostly subjective itself.

Same here. Personally, do I trust Apple? I don't have a leaning one way or another about that. What I trust is that Capitalism is gonna capitalize, and Apple doing what it says here is its Brand. If down the road it comes out that it was all a lie, that Brand has no more standing. No more standing, no more sales. And Apple is in the Brand/product-selling business. I trust they won't throw away their trillions because they would rather sell their Brand on white papers than an actual product the papers describe.


Yes, I think along similar lines there… but on the other hand, brands need not reflect underlying truths about reality, and in fact often do not. Suppose two years from now, a whistleblower reveals that they were part of a special skunkworks team responsible for creating various backdoors in PCC in order to enable Apple to access the data, train new models on queries, or respond to government requests, etc., all of which were subtle, complicated exploits. Maybe Apple denies and discredits, or minimizes, or issues some sort of limited mea culpa. To what extent would it affect Apple’s brand? How long would it stay in the public consciousness? Would people (writ large, not those on HN) care? Perhaps it impacts sales and the stock price, but for how long and to what extent? Obviously there would be some sort of cost to such an event occurring, but would it outweigh whatever benefits Apple might gain in the meantime? Maybe those benefits have to do with avoiding the wrath of the federal government… who knows. There’s definitely a world where the amoral calculus suggests lying might be better, right? Maybe not ours, but it is plausible. Like you said, discernment is the only tool we have, and it’s difficult to really know what’s going on at the end of the day.

Moscow rules and George Smiley’s tradecraft are probably the only real security… ha!


> Until now the only way to do that was on device

As usual, Apple's implementation is exceptional, but far from the first. See https://confidentialcomputing.io/ and its long history.


  2019 Linux Foundation Confidential Computing
  2015 Intel SGX (Skylake)
  2014 Apple Secure Enclave (A8, iPhone 6)


> 2015 Intel SGX (Skylake)

Might be worth pointing out that SGX was compromised repeatedly and comprehensively by speculative execution attacks, e.g.

https://www.usenix.org/conference/usenixsecurity18/presentat...


Signal famously bet the (contact discovery) farm on SGX. A controversial design decision at the time, for good reason.

https://news.ycombinator.com/item?id=15340729


ARM TrustZone launched with the Arm1176JZ-S in 2004.


Absolutely right. My comment was strictly about “for consumer use at planet scale.” It’s the aggressive adoption and rollout of confidential computing architecture in an easy-to-use consumer platform that I’m celebrating here. (Including a 12-figure financial commitment!) Prior to PCC, smartphones generally had to process data on device to ensure privacy.


> It is essential then that the hardware deployed in data centers has not been physically tampered with. Without that the whole thing falls apart.

Am I wrong, or does this just change the threat from Chinese government tampering to US government tampering?

And if third-party auditing can detect hardware tampering, then why does it matter where the hardware is manufactured?


PCC is an awesome solution for Apple to ensure that no one other than Apple can execute code in that environment.

That is, however, not most users' concern (in fact, I'd guess less than 0.001% of Apple users are concerned with supply chain attacks on Apple's servers); what we're concerned with is Apple itself misusing our data in some way (for example, to feed into their growing advertising business, or to redirect to authorities). PCC does NOT solve any of this, and it's in fact an unsolvable problem as long as their server side code is closed source (or otherwise unavailable for self-hosting as binaries). For me, Apple Intelligence stays off on my devices (and when that is no longer an option, I'm jumping ship - I just wish there was something at least passable to jump to).


> what we're concerned with is Apple itself misusing our data in some way… and it’s in fact an unsolvable solution as long as their server side code is closed source (or otherwise unavailable for self-hosting as binaries)

It is in fact a solvable problem. The binaries are indeed available for self-hosting in a virtualized PCC node for research purposes.[1] Auditors can confirm that the binaries do not transmit data outside of the environment. There are several other aspects of the architecture designed to prevent user data from leaking outside the node’s trust boundary; for example, TLS terminates at the node level, and nodes use encrypted local storage, so user data is unreadable to any other node or part of the organization.
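
On the encrypted-local-storage point, here is a sketch of the idea (illustrative, not Apple’s actual implementation): the node holds a boot-time key only in memory, so anything it writes to disk is unreadable to every other node and to operators once the node restarts.

  import CryptoKit
  import Foundation

  // Ephemeral key generated at boot; it never leaves this node's memory.
  let bootKey = SymmetricKey(size: .bits256)

  // AES-GCM seal: only this node, during this boot, can reopen it.
  func writeRecord(_ plaintext: Data) throws -> Data {
      try AES.GCM.seal(plaintext, using: bootKey).combined!
  }

  func readRecord(_ stored: Data) throws -> Data {
      try AES.GCM.open(AES.GCM.SealedBox(combined: stored), using: bootKey)
  }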

[1] https://security.apple.com/documentation/private-cloud-compu...


That is a lot of mumbo-jumbo, but what it boils down to is that you cannot run PCC on your own hardware. You can download some "components" whose hashes match the supposed "transparency log" they publish (and some demo models), but since I can't go into my iPhone and say "set PCC server ip: 192.168.1.42" and see it work, I don't trust it (and it cannot be trusted).


Are these the droids you’re looking for? https://github.com/apple/security-pcc


No, that is not the PCC, just some research artifacts.


> PCC is an awesome solution for Apple to ensure that no one other than Apple can execute code in that environment.

Doesn't PCC guarantee even more than that? From my reading, Apple can't exfiltrate any data to other servers (even ones that Apple owns), nor can they inject any processing into that server other than what is outlined. Otherwise, what's the point of such a stringent hardware integrity requirement?


There is no way to verify that. It's just something they "pinky swear they won't do". The stringent hardware integrity requirement is there to protect against supply chain attacks (Apple making sure they fully control the stack down to the hardware and can run any software they want that connects to any external service they desire - such as the CCP, NSA, 3rdPartyAds, etc.)


PCC is a kludge for mitigating battery drain on smartphones doing Personal Assistant work, for knowing what their chances of getting Nvidia chip allocations are, for knowing how unreliable Nvidia hardware is - basically, for having been caught with their pants down when genAI took off. That said, it's a good kludge.

The easy fix is to add more vector cores and RAM to the chips and shrink them to use less power, but that takes time, and initially these upgrades go to wall-powered systems (first to the kludge, then maybe the Mac Pro and some kind of AI hub that sits in your living room and vehicle). Then... well, you wonder why the small-form-factor iPhone was just discontinued?


It's worth noting that AWS has had this sort of infrastructure with Nitro for quite some time now:

https://aws.amazon.com/ec2/nitro/nitro-enclaves/

At some point it was novel to put a separate hardware root of trust on a PCI-e card but I think that was a while ago, even for Apple!


Nitro is good! And showcases a great many of the foundational architectural concepts in PCC.

But there is a major difference that is germane to the topic of Apple’s investment in US server manufacturing: the hardware root of trust. Hardware tampering is the weak point, and afaik AWS doesn’t describe any process to certify their supply chain integrity. I think the most they’ve done is commission a review of their architecture document.[1] PCC actually has an auditor sign each server node in the data center.

Thank you for mentioning them though. It’s an important advancement in generally available confidential computing infrastructure.

[1] https://aws.amazon.com/blogs/compute/aws-nitro-system-gets-i...


> this is almost certainly being driven by Apple’s Private Cloud Compute architecture and not tariffs, as an investment of this magnitude is not planned overnight.

The tariffs haven't happened overnight. They've been discussed for going on two full years now. Anyone who wasn't blinded by their own political preferences saw this coming.


I think this is conjecture; there is no indication anywhere that it’s driven by the PCC data centers. If anything, I would guess they are trying to build hardware in the US. That has to be the only reason to invest that much.



