I feel like this is all smoke and mirrors to redirect from the likelihood of intentional silicon backdoors that are effectively undetectable. Without open silicon, there's no way to detect that -- say -- when registers r0-rN are set to values [A, ..., N] and a jump to address 0xCONSTANT occurs, additional access is granted to a monitor process.
Of course, this limits the potential attackers to 1) exactly one government (or N number of eyes) or 2) one company, but there's really no way that you can trust remote hardware.
This _does_ increase the trust that the VMs are safe from other attackers, but I guess this depends on your threat model.
> I feel like this is all smoke and mirrors to redirect from the likelihood of intentional silicon backdoors that are effectively undetectable.
The technologies Apple PCC is using have real benefits and are most certainly not "all smoke and mirrors". Reproducible builds, remote attestation and transparency logging are individually useful, and the combination of them even more so.
As for the likelihood of Apple launching Apple PCC to redirect attention from backdoors in their silicon, that seems extremely unlikely. We can debate how unlikely, but there are many far more likely explanations. One is that Apple PCC is simply good business. It'll likely reduce security costs for Apple, and strengthen the perception that Apple respects users' privacy.
> when registers r0-rN are set to values [A, ..., N] and a jump to address 0xCONSTANT occurs
I would recommend something more deniable, or at the very least something that can't easily be replayed. Put a challenge-response in there, or attack the TRNG. It is trivial to make a stream of bytes appear random while actually being deterministic. Such an attack would be more deniable, while also allowing a passive network attacker to read all user data. No need to get code execution on the machines.
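A toy sketch of that last point, with entirely hypothetical names: a keystream derived from a secret key is indistinguishable from random to everyone except the key holder, so a rigged TRNG built this way leaks nothing detectable while letting a passive attacker re-derive every "random" value.

```python
import hashlib

def backdoored_trng(secret_key: bytes, n_bytes: int) -> bytes:
    """Toy rigged TRNG: the output stream passes statistical randomness
    tests, yet anyone holding secret_key can regenerate it exactly."""
    out = b""
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(secret_key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n_bytes]

# The victim's device draws a "random" session key...
session_key = backdoored_trng(b"fab-implanted-key", 32)
# ...and a passive attacker holding the implanted key derives the same one,
# reading all traffic without ever needing code execution on the machine.
assert session_key == backdoored_trng(b"fab-implanted-key", 32)
```

A real implant would of course hide this behind an on-die key, but the deniability property is the same: the output alone proves nothing.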
Apple forgot to disable some cache debugging registers a while back which in effect was similar to something GP described, although exploitation required root privileges and would allow circumventing their in-kernel protections; protections most other systems do not have. (And they still didn't manage to achieve persistence, despite having beyond-root privileges).
> Apple forgot to disable some cache debugging registers a while back which in effect was similar to something GP described
Thank you for bringing that up. Yes, it is an excellent example that proves the existence of silicon vulnerabilities that allow privilege escalation. Who knows whether it was left there intentionally or not, and if so by whom.
I was primarily arguing that (1) the technologies of Apple PCC are useful and (2) it is _very_ unlikely that Apple PCC is a ploy by Apple to direct attention away from backdoors in the silicon.
If you take as a fundamental assumption that all your hardware is backdoored by Mossad who has unlimited resources and capacity to intercept and process all your traffic, the game is already lost and there’s no point in doing anything.
If instead you assume your attackers have limited resources, things like this increase the costs attackers have to spend to compromise targets, reducing the number of viable targets and/or the depth to which they can penetrate them.
Some of us just assume Apple itself is a bad actor planning to use and sell customer data for profit, which makes all of this smoke and mirrors, like GP said.
There is absolutely no technical solution where Apple can prove our data isn't exfiltrated as long as this is their software that runs on their hardware.
You have actually set up a completely impossible-to-win scenario.
I can advertise a service running on open hardware with open software. Unless you personally come inspect my datacenters to verify my claims, you’ll never be happy. Even then you need to confirm that I’m not just sending your traffic here only when you’re looking, and sending it to my evil backdoored hardware when you aren’t.
At some point you have to trust that an operator is acting in good faith. You need to trust that your own hardware wasn't backdoored by the manufacturer. You need to trust that the software you're running is faithfully compiled from the source code you haven't personally inspected.
If you don’t trust Apple’s motives, that’s certainly your prerogative. But don’t act like this ridiculous set of objections would suddenly end if they only just used RISC-V and open source. I would bet my life’s savings that you happily use services from other providers you don’t hold to this same standard.
I'm looking for a middle ground. I need to use and trust hardware from vendors like Apple but I use as few of their services as possible (and verify that with firewalls and traffic inspection).
My concern here is with Apple Intelligence and this wishy-washy hybrid approach where some of the time your data is sent to the "private cloud" and some of the time it's processed locally on device. I absolutely hate that and need a big switch in Settings that completely turns off all cloud processing of data; but given how much they're spending on advertising how "private" their cloud is I suspect they plan to not make that optional at all (not just opt-out by default). At that point, all the photos you take might be sent to their cloud for "beautification" or whatever and there's no way to know whether they're also analyzed for other things or sent out to the CCP to make sure you're not participating in a protest against Xi.
Faraday caged, deafened power supply, Heartbeat/pulse sensors, proximity sensors, voltage sensors, EM/radio triggers, dead-man switch, all reporting with constantly rotating codes.
Multiple servers in different locations (depending on threat model, because of jurisdiction), with XOR'd secrets, with random access to memory to obfuscate the real address if it needs to be zeroed/oned/zeroed, Da Vinci codex style.
Make access directly tied to reputation/staked interest/invitation, with subtle canaries and watermarks.
Even if it got super-chilled and no noticeable voltage disruption, and someone didn't set off any other alarm bells, they still have to get the other machine within the timeout.
And you could just do 3/5 multi-sig, and apply CAP theorem.
But if the best people in the world can't do it ("for more than a year"), then there is something less than ideal in the above hypothetical.
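For what it's worth, the "XOR'd secrets" across multiple servers is the one well-defined primitive in the list above. A minimal sketch (plain n-of-n XOR splitting; a 3/5 multi-sig policy as mentioned would need a threshold scheme such as Shamir's instead):

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def xor_split(secret: bytes, n: int) -> list:
    """Split secret into n shares; all n are needed to reconstruct,
    and any n-1 shares together reveal nothing about the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def xor_combine(shares) -> bytes:
    return reduce(xor_bytes, shares)

# One share per server/jurisdiction; a raid on any single site yields noise.
shares = xor_split(b"master key material", 3)
assert xor_combine(shares) == b"master key material"
```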
>Throwing random cryptography buzzwords at a problem does not magically create a secure solution.
no but any more words between those naughty ones and im pushing my quota
Diffie Hellman,
TIFU, handshake, auth between two+ parties
Homomorphic encryption, zero knowledge proofs,
Doesn't publicly exist in a useful manner; any implementations will likely be ITAR'd or NIST-moled. Allows verifiable but anonymous computation, trust-less computing,
they glow in the dark, you can see em in your driving.
run em over, thats what ya do
The court entered this ruling despite testimony from an attorney who stated, “[b]ecause of the [FISA Amendments Act], we now have to assume that every one of our international communications may be monitored by the government.” Id., 133 S.Ct. at 1148.
The gold standard for WAPS is the Gorgon Stare system which is deployed aboard the Reaper UAS. The current version of Gorgon Stare uses five electro-optical and four infrared cameras to generate imagery from 12 different angles. Gorgon Stare can provide a continuous city-sized overall picture, multiple sub-views of the overall field and what are high resolution “chipouts” of individual views, each of which can be streamed in real time to multiple viewers. A single Gorgon Stare pod can generate two terabytes of data a day.
"After scandals with the distribution of secret documents by WikiLeaks, the exposes by Edward Snowden, reports about Dmitry Medvedev being bugged during his visit to the G20 London summit (in 2009), it has been decided to expand the practice of creating paper documents," the source said.
Since 2008, most of Intel’s chipsets have contained a tiny homunculus computer called the “Management Engine” (ME). The ME is a largely undocumented master controller for your CPU: it works with system firmware during boot and has direct access to system memory, the screen, keyboard, and network. All of the code inside the ME is secret, signed, and tightly controlled by Intel. Last week, vulnerabilities in the Active Management Technology (AMT) module in some Management Engines caused lots of machines with Intel CPUs to be disastrously vulnerable to remote and local attackers.
The economics of silicon manufacturing and Apple's own security goals (including the security of their business model) restrict the kinds of backdoors you can embed in their servers at that level.
Let's assume Apple has been compromised in some way and releases new chips with a backdoor. It's expensive to insert extra logic into just one particular spin of a chip; that involves extra tooling cost that would be noticeable line-items and show up in discovery were Apple to be sued about their false claims. So it needs to be on all the chips, not just a specific "defeat PCC" spin of their silicon. So they'd be shipping iPads and iPhones with hardware backdoors.
What happens when those backdoors inevitably leak? Well, now you have a trivial jailbreak vector that Apple can't patch. Apple's security model could roughly be boiled down to "our DRM is your security"; while they also have lots of actual security, they pride themselves on the fact that they have an economic incentive to lock the system down to keep both bad actors and competing app stores out. So if this backdoor was inserted without the knowledge of Apple management, there are going to be heads rolling. And if it was, then they're going to be sued up the ass once people realize the implications of such a thing, because Tim Cook went up on stage and promised everyone they were building servers that would refuse to let them read your Siri queries.
All remote attestation technology is rooted by a PKI (the DCA certificate authority in this case). There's some data somewhere that simply asserts that a particular key was generated inside a CPU, and everything is chained off that. There's currently no good way to prove this step so you just have to take it on faith. Forge such an assertion and you can sign statements that device X is actually a Y and it's game over, it's not detectable remotely.
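To make that faith step concrete, here is a toy model (HMAC standing in for real asymmetric signatures, all names hypothetical): a verifier can mechanically check every link in the chain, but the first link is just an assertion by the CA.

```python
import hashlib, hmac

def sign(key: bytes, msg: bytes) -> bytes:
    # HMAC stands in for a real asymmetric signature in this toy model.
    return hmac.new(key, msg, hashlib.sha256).digest()

# Link 1: the manufacturer CA asserts "device_key was generated inside CPU #42".
ca_key = b"manufacturer root of trust"
device_key = b"key allegedly born inside the die"
endorsement = sign(ca_key, b"cpu-42-key:" + device_key)

# Link 2: the device signs an attestation quote with that key.
quote = sign(device_key, b"measurement of booted software")

# A verifier can mechanically check both links...
assert hmac.compare_digest(endorsement, sign(ca_key, b"cpu-42-key:" + device_key))
assert hmac.compare_digest(quote, sign(device_key, b"measurement of booted software"))
# ...but nothing in the math shows device_key was really generated on-die.
# A CA willing to endorse an off-die key forges the whole chain undetectably.
```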
Therefore, you must take on faith the organization providing the root of trust i.e. the CPU. No way around it. Apple does the best it can within this constraint by trying to have numerous employees be involved, and there's this third party auditor they hired, but that auditor is ultimately engaging in a process controlled by Apple. It's a good start but the whole thing assumes either that Apple employees will become whistleblowers if given a sufficiently powerful order, or that the third party auditor will be willing and able to shut down Apple Intelligence if they aren't satisfied with the audit. Given Apple's legal resources and famously leak-proof operation, is this a convincing proposition?
Conventional confidential computing conceptually works, because the people designing and selling the CPUs are different to the people deploying them to run confidential workloads. The deployers can't forge an attestation (assuming absence of bugs) because they don't have access to the root signing keys. The CPU makers could, theoretically, but they have no reason to because they aren't running any confidential workloads so there's no data to steal. And they are in practice constrained by basic problems like not knowing what CPU the deployers actually have, not being able to force changes to other people's hardware, not being able to intercept the network connections and so on.
So you need a higher authority that can force them to conspire which in practice means only the US government.
In this case, Apple is doing everything right except that the root of trust for everything is Apple itself. They can publish in their log an entry that claims to be an Apple CPU but for which the key was generated outside of the manufacturing process, and that's all it takes to dismantle the entire architecture. Apple know this and are doing the best they can within the "don't team up with competitors" constraint they obviously are placed under. But trust is ultimately a human thing and the purpose of corporations is to let us abstract and to some extent anthropomorphize large groups. So I'm not totally sure this works, socially.
> simply asserts that a particular key was generated inside a CPU ... There's currently no good way to prove this step
Yes, but there are better and worse ways to do it. Here's how I think about it. I know you know some of this but I'll write it out for other HN readers as well.
Let's start with the supply chain for an SoC's master key. A master key that only uses entropy from an on-die PUF is vulnerable to mistakes and attacks on the chip design as well as the process technology. A master key memory on-die which is provisioned by the fab, or during packaging, or by the eventual buyer of the SoC, is vulnerable to mistakes and attack during that provisioning step.
I think state-of-the-art would be something like:
- an on-die key memory, where the storage is in the vias, using antifuse technology that prevents readout of the bits using x-ray,
- provisioned using multiple entropy sources controlled by different supply chains, such as (1) an on-die PUF, (2) an on-die TRNG, (3) an off-die TRNG controlled by the eventual buyer,
- provisioned by the eventual buyer and not earlier
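One common way to combine such sources (a sketch with hypothetical names, not any vendor's actual derivation) is to hash them together, so the key stays unpredictable as long as at least one source is good and the sources can't adapt to each other:

```python
import hashlib

def derive_master_key(puf: bytes, onchip_trng: bytes, buyer_trng: bytes) -> bytes:
    """Hash independent entropy sources together: the result is
    unpredictable as long as at least one input is, even if the other
    sources were weakened somewhere in the supply chain."""
    material = b"master-key-v1|" + puf + b"|" + onchip_trng + b"|" + buyer_trng
    return hashlib.sha256(material).digest()

# A compromised on-die source alone doesn't let the attacker predict the
# key while the buyer's off-die entropy remains secret.
k = derive_master_key(b"puf-bits", b"weak-onchip-bits", b"buyer entropy")
assert k != derive_master_key(b"puf-bits", b"weak-onchip-bits", b"other entropy")
```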
As for the cryptographic remote attestation claim itself, such as a TPM Quote, that doesn't have to be only one signature.
As for detectability, discoverability and deterrence, transparency logs make targeted attacks discoverable. By tlogging all relevant cryptographic claims, including claims related to inventory and provisioning of master keys, an attacker would have to circumvent quite a lot of safeguards to remain undetected.
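The mechanism underneath a tlog is just a Merkle tree: every logged claim feeds into one root hash, so altering any historical claim changes the root that monitors have already recorded. A minimal sketch (leaf/node domain separation in the spirit of Certificate Transparency, tree shape simplified):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash over an append-only list of logged claims."""
    level = [h(b"\x00" + leaf) for leaf in leaves]  # 0x00 = leaf prefix
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(b"\x01" + level[i] + level[i + 1])  # 0x01 = node prefix
                 for i in range(0, len(level), 2)]
    return level[0]

claims = [b"provisioned master key for SoC 0001",
          b"provisioned master key for SoC 0002",
          b"provisioned master key for SoC 0003"]
root = merkle_root(claims)
# Tampering with any logged claim changes the root every monitor holds.
assert merkle_root([claims[0], b"swapped claim", claims[2]]) != root
```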
Finally, if we assume that the attacker is actually at Apple - management, a team, a disgruntled employee, saboteurs employed by competitors - what this type of architecture does is it forces the attacker to make explicit claims that are more easily falsifiable than without such an architecture. And multiple people need to conspire in order for an attack to succeed.
Hello! I'm afraid I don't recognize the username but glad to know we've met :) Feel free to email me if you'd like to greet under another name.
Let's agree that Apple are doing state-of-the-art work in terms of internal manufacturing controls and making those auditable. I think actually the more interesting and tricky part is how to manage software evolution. This is something I've brought up with [potential] customers in the past when working with them on SGX related projects: for this to make sense, socially, then there has to be a third party audit for not only the software in the abstract but each version of the software. And that really needs to be enforced by the client, which means, every change to the software needs to be audited. This is usually a non-starter for most companies because they're afraid it'd kill velocity, so for my own experiments I looked at in-process sandboxing and the like to try and restrict the TCB even within the remotely attested address space.
In this case Apple may have an advantage because the software is "just" doing inferencing, I guess, which isn't likely to be advantageous to keep secret, and inferencing logic is fairly stable, small and inherently sandboxable. It should be easy to get it to be audited. For more general application of confidential/private computing though it's definitely an issue.
The issue of multiple Apple devs conspiring isn't so unlikely in my view. Bear in mind that end-to-end encryption made similar sorts of promises that tech firm employees can't read your messages, but the moment WhatsApp decided that combating "rumors" was the progressive thing to do they added a forwarding counter to messages so they could stop forwarding chains. Cryptography 101: your adversary should not be able to detect that you're repeating yourself; failed, just like that. The more plausible failure mode here is therefore not the case of spies or saboteurs but rather a deliberate weakening of the software boundary to leak data to Apple because executives decide they have a moral duty to do so. This doesn't even necessarily have to be kept secret. WhatsApp's E2E forwarding policy is documented on their website, they announced it in a blog post. My experience is that 99% of even tech workers believe that it does give you normal cryptographic guarantees and is un-censorable as a consequence, which just isn't the case.
Still, all this does lay the foundations for much stronger and more trustworthy systems, even if not every problem is addressed right away.
>backdoors inevitably leak? Well, now you have a trivial jailbreak vector
The discoverability of an exploit vector has little to do with how trivial it is to use, especially in this context (nation-state APTs).
For years, you could hold the enter key down for 40 seconds to log in to certain Linux server distros. No one knew; ez to do.
You can have a chip inside your chip that only accepts encrypted and signed microcode and has control over the main chip. Everyone knows - nothing you can do.
Nation state actors, however, can facilitate either; APTs can forge fake digital forensics that imply another motive/state/false flag.
This is an interesting idea. However what does open hardware mean? How can you prove that the design or architecture that was “opened” is actually what was built? What does the attestation even mean in this scenario?
Great question. Most hardware projects I've seen that market themselves as open source hardware provide the schematic and PCB design, but still use ICs that are proprietary. One of my companies, Tillitis, uses an FPGA as the main IC, and we provide the hardware design configured on the FPGA. Still, the FPGA itself is proprietary.
Another aspect to consider is whether you can audit and modify the design artefacts with open source tooling. If the schematics and PCB design are stored in a proprietary format I'd say that's slightly less open source hardware than if the format were KiCad EDA, which is open source. Similarly, in order to configure the HDL onto the FPGA, do you need to use 50 GB of proprietary Xilinx tooling, or can you use open tools for synthesis, place-and-route, and configuration? That also affects the level of openness in my opinion.
We can ask similar questions of open source software. People who run a Linux distribution typically don't compile packages themselves. If those packages are not reproducible from source, in what sense is the binary open source? It seems we consider it to be open source software because someone we trust claimed it was built from open source code.
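The check a reproducible build enables is deliberately boring (a sketch with made-up artifact bytes): if an independent rebuild from the same source is bit-identical to the shipped binary, the hashes match, and trust in the distributor reduces to trust in the source code.

```python
import hashlib

def same_artifact(shipped: bytes, rebuilt: bytes) -> bool:
    """Reproducibility check: an independent rebuild from the same
    source should be bit-identical to the shipped binary."""
    return hashlib.sha256(shipped).digest() == hashlib.sha256(rebuilt).digest()

distro_package = b"\x7fELF...bytes as shipped by the distro"  # made-up contents
my_rebuild     = b"\x7fELF...bytes as shipped by the distro"
assert same_artifact(distro_package, my_rebuild)
assert not same_artifact(distro_package, my_rebuild + b"one extra byte")
```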
You're right. It is very hard, if not impossible, to get absolute guarantees. Having said that, FPGAs can make supply chain attacks harder. See my other comments in this thread.
This is my thought exactly. I really love the idea of open hardware, but I don’t see how it would protect against covert surveillance. What’s stopping a company/government/etc from adding surveillance to an open design? How would you determine that the hardware being used is identical to the open hardware design? You still ultimately have to trust that the organisations involved in manufacturing/assembling/installing/operating the hardware in question haven’t done something nefarious. And that brings us back to square one.
This website in particular tends to get very upset, and is all too happy to point out irrelevant counterexamples, every time I point this out. But the actual ground truth of the matter is that you aren’t going to find yourself on a US intel targeting list by accident, and unless you are doing something incredibly stupid you can use Apple / Google cloud services without a second thought.
> How would you determine that the hardware being used is identical to the open hardware design?
FPGAs can help with this. They allow you to inspect the HDL, synthesize it and configure it onto the FPGA chip yourself. The FPGA chip is still proprietary, but by using an FPGA you are making certain supply chain attacks harder.
> How do you know the proprietary part of the FPGA chip performs as expected and does not covertly gather data from the configured gates?
We don't, but using an FPGA can make supply chain attacks harder.
Let's assume you have a chip design for a microcontroller and you do a tapeout, i.e. you have chips made. An attacker in your supply chain might attack your chip design before the design makes it to the fab, maybe the attacker is at the fab, or they change out the chips after you've placed them on your PCB.
If you use an FPGA, your customer could stress test the chip by configuring a variety of designs onto the FPGA. These designs should stress test timing, compute and memory at the very least. This requires the attacker's chip to perform at least as well as the FPGA you're using, while still having the same footprint. An attacker might stack the real FPGA die on top of the attacker's die, but such an attack is much easier to detect than a few malicious gates on a die. As for covertly gathering or manipulating data, on an FPGA you can choose where to place your cores. That makes it harder for the attacker to predict where on the FPGA substrate they should place probes, or which gates to attack in order to attack your TRNG, or your master key memory. Those are just some examples.
If you're curious about this type of technology or line of thinking you can check out the website of one of my companies: tillitis.se
If this is your position then you might as well stop using any computing devices of any kind. Which includes any kind of smart devices. Since you obviously aren't doing that, then you're trying to hold Apple to a standard you won't even follow yourself.
On top of which, your comment is a complete non-sequitur to the topic at hand. You could reply with this take to literally any security/privacy related thread.
No one should consider this any protection against nation state actors who are in collaboration against Apple. That doesn't mean it's pointless. Removing most of the cloud software stack from the TCB and also protecting against malicious or compromised system administrators is still very valuable for people who are going to move to the cloud anyway.
The Bloomberg SuperMicro implant in its various forms is an exceptionally poor example here: it has been widely criticized and never corroborated, and Apple's Private Cloud Compute architecture has extensive mitigations against every type of purported attack in the various forms the SuperMicro story has taken. UEFI/BIOS backdoors, implanted chips affecting the BMC firmware, and malicious/tampered storage device firmware are all accounted for in the Private Cloud Compute trust model.
iirc, no real proof was ever provided for that bloomberg article (despite it also never being retracted). many looked for the chips and from everything I heard there was never a concrete situation where this was discovered.
Doesn't make the possible threat less real (see recent news in Lebanon), but that story in particular seems to have not stood up to closer inquiry.
Transparency through things like attestation is capable of proving nothing unexpected is running; for instance, you can provide power/CPU-time numbers or hashes of arbitrary memory, and this can make it arbitrarily hard to run extra code, since doing so would take more time.
And the secure routing does make most of these attacks infeasible.
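The "hashes of arbitrary memory" idea is essentially challenge-response attestation. A toy sketch (hypothetical names and contents): the verifier supplies a fresh nonce so old answers can't be replayed, and any change to memory contents changes the digest.

```python
import hashlib

def respond(memory: bytes, nonce: bytes) -> bytes:
    """Device side of a toy attestation: hash all of memory mixed with
    a fresh verifier nonce, so stale answers can't be replayed."""
    return hashlib.sha256(nonce + memory).digest()

good_image = b"known-good firmware" + b"\x00" * 64   # free space zero-filled
implanted  = b"known-good firmware" + b"evil" + b"\x00" * 60
nonce = b"fresh verifier challenge"
# The verifier, knowing what memory should contain, computes the same
# digest locally; any implant shifts the answer (or the time it takes).
assert respond(good_image, nonce) == hashlib.sha256(nonce + good_image).digest()
assert respond(implanted, nonce) != respond(good_image, nonce)
```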
There's been some limited research in this space; see for instance xoreaxeaxeax's sandsifter tool which has found millions of undocumented processor instructions [0].
Yeah, but, considering the sheer complexity of modern CPUs and SoCs, this is still the case even if you have the silicon in front of you. That ship sailed some time ago.
It depends on what you want to do. If all you're trying to do is produce an Ed25519 signature you could use something like the Tillitis TKey. It's a product developed by one of my companies. As I've mentioned elsewhere in this thread it is open source hardware in the sense that the schematic, PCB design _and_ hardware design (FPGA configuration) are all open source. Not only that, the FPGA only has about 5000 logic cells. This makes it feasible for an individual to audit the software and the hardware it is running on to a much greater extent than any other system available for purchase. At least I'm not aware of a more open and auditable system than ours.
You're right that it isn't. I assumed that your "..sheer complexity of modern CPUs.." statement was in response to "Without open silicon, there's no way to detect..". That's what prompted my response.
I realise now that you were probably responding to "This _does_ increase the trust that the VMs are safe from other attackers".
You do have to trust the SEP/TPM here, it sounds like. That is verified by having a third party auditor watch them get installed, and by the anonymous proxy routing thingy making it so they can't fake only some of them but would have to fake all of them to be reliable.
If they were okay with it being unreliable, then clients could tell via timing because some of the nodes would perform differently, or they'd perform differently depending on which client or what prompt it was processing. It's surprisingly difficult to hide timing differences, eg all those Spectre cache-reading attacks on browsers.
It does look like there's room to add more verification (like the client asking the server to do more intensive proofs, or homomorphic encryption). Could always go ask for it.