With our findings, we prove that SEV cannot adequately protect confidential data in cloud environments from insider attackers, such as rouge administrators, on currently available CPUs.
---
It is an interesting attack but is the above goal ever achievable? To protect against adversaries from the inside.
> It is an interesting attack but is the above goal ever achievable? To protect against adversaries from the inside.
People have gotten very close to achieving similar goals.
For example, modern games consoles' anti-piracy measures guard against the device owner who has physical control and unlimited time. [1]
iPhone activation locks likewise prevent stolen phones from being used, even by thieves with physical control and unlimited time.
And neither of these systems relies on the clunky 'brick the device if the case is opened' methods of yesteryear.
(Of course there have also been a great many failed attempts - almost every console since the dawn of time has eventually been hacked, as have things like TPMs and TrustZone, many versions of the iPhone were rooted, etc etc)
There's a significant asymmetry in motivation and resources available to compromise hardware between Jimmy and his Xbox vs. Google and their cloud infrastructure.
Yes, someone with an Xbox hack has tens of millions of potential customers who can save $60 a game, with complete physical access to the hardware and no chance of getting fired or arrested.
Whereas someone with a Google cloud infrastructure hardware fault injection attack has only a tiny number of spy agencies or rogue admins as potential customers, the servers are all locked up in data centres, and anyone who got caught making an attack would get fired and/or arrested.
Jimmy is only willing to spend less than he'd otherwise spend on games. Even with a large number of Jimmys, there might not be a market unless the cost of an individual attack gets low enough.
On the other hand, there for sure is a market for cloud based attacks, and nation states that can apply a stick to go along with the carrot of millions of dollars in "consulting fees".
Especially as we move more key infrastructure into the cloud. If people start trusting these sorts of remote systems with things like financial data, the payoff of a clandestine compromise could be hundreds of billions of dollars.
Doubly true when you consider the history of Google working with the USG.
> It is an interesting attack but is the above goal ever achievable? To protect against adversaries from the inside.
Yes. To expand: to a function on the CPU, an administrator is just another user. The operating system is responsible for managing those designations.
These trusted computing pieces across all kinds of CPUs are specifically aimed at protecting against people with host-root, so it would seem like it's a goal they've set for themselves and should be reasonably achievable.
Do you mean that “adversaries from the inside” could be defined in more detail, with reasonable limits on access and resources imposed by the external systems (e.g. cameras, guards, searches) securing the machines?
> It is an interesting attack but is the above goal ever achievable? To protect against adversaries from the inside.
No, safe execution of untrusted code is impossible by the very definition, not without undoing 40 years of IC design practices.
It's almost a physical limitation: it is very hard to compute something without some electromagnetic leakage from/to the die.
Take a look at secure CPUs for credit cards. They have layers upon layers of anti-tampering and anti-extraction measures, and yet TEM shops in China do firmware/secret extraction from them for $10k-$20k.
It is very hard to perform a physical process while making it impossible to observe it. Similarly it is very difficult to have some object with permanent physical properties that you (the chip) can measure yourself, but no one else can, like a cloud of electrons trapped on an island, or a metal connection between two places.
>> It is an interesting attack but is the above goal ever achievable? To protect against adversaries from the inside.
> No, safe execution of untrusted code is impossible by the very definition
I think this is more about data processing while hiding the data from whoever operates the hardware. Homomorphic encryption could be a partial answer to that.
The idea is to use a special encryption scheme (and associated operations). If I take 50 numbers and multiply them by two before asking you to add them, I just have to divide the result by two to get the correct answer, and you see neither the data nor the result. Of course, actual schemes are more complex than that.
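To make that toy example concrete, here is a minimal sketch in C of the "scale before outsourcing" trick described above, with made-up numbers. To be clear, this is just the blinding idea from the comment, not a real homomorphic encryption scheme; the untrusted party could trivially undo the scaling.

```c
#include <stdio.h>

/* The "untrusted party": it only ever sees the blinded values. */
static double untrusted_sum(const double *values, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += values[i];
    return s;
}

int main(void) {
    double secret[] = {4.0, 15.0, 23.0};   /* data we want to keep private */
    double blinded[3];
    int n = 3;

    for (int i = 0; i < n; i++)            /* "encrypt": multiply by two   */
        blinded[i] = secret[i] * 2.0;

    double s = untrusted_sum(blinded, n);  /* outsourced addition          */

    printf("true sum = %f\n", s / 2.0);    /* "decrypt": divide by two     */
    return 0;
}
```

The point is that addition still works on the transformed values; real schemes (Paillier, lattice-based FHE, etc.) give the same property with actual cryptographic hardness instead of a reversible scale factor.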
What is a TEM shop? I'm curious about this topic; the threat model for some chips in the secure payments space assumes a secret value much higher than $10k for something like a root encryption key that blows open the payment processing security of multiple cards.
Also, just because something is physically possible, doesn't mean that the barriers to doing so are irrelevant. If it costs you $10k to unbrick a locked & stolen iPhone, then those countermeasures have likely succeeded at their intended purpose. This is why threat models try to quantify the time and/or monetary value of what they're protecting.
A single TEM facility comes with a $1,000,000+ price tag, and there are usually only a few dozen per developed country, in use in places like universities and research institutes.
China has probably more of them than the rest of the world combined.
That the CPU should be able to cryptographically prove that a VM has been set up without any interference from an inside attacker who controls the hardware.
At the very least, SEV massively raises the barrier to such attacks. It's now beyond the ability of a rogue administrator or technician, requiring complex custom motherboards. But a well-funded inside attacker can target something with high enough value.
> It's now beyond the ability of a rouge administrator or technician, requiring complex custom motherboards
The end of the abstract explicitly refutes this. It is claiming that a software-only solution, using keys derived with this technique, can pretend to be a suitable target to migrate a secure VM to, which then allows the rogue admin to inspect or modify anything in the VM.
A bit unclear from the abstract whether the keys they learned how to derive (and the secret material they're derived from) are per individual chip or for all chips ever produced. If it's the former, that means the rogue admin still needs to electrically mess with the hardware once.
The part about "without requiring physical access to the target host" would seem to imply that they only need access to a machine on their end for some attacks.
This still excludes a wide range of possible rogue-admin attacks.
At a minimum, it takes shutting down and powering off the physical machine, then starting it up again, which would not go unnoticed in the highly controlled environments where SEV makes the most sense.
One potential use of SEV is to provide a secure environment to run a VM at an untrusted provider. That provider could do lots of things with funky motherboards and forced migrations without notice by their clients.
If it's an insider attack on company-owned and -operated hardware, there's always some reason to have a long downtime, and you can piggyback on that to attack the CPUs... Or just put it in a new system and use the migration setup.
Suggested downtimes, organic or sabotaged to fit the attacker's timeline:
HVAC failure: have to shut down many/most/all servers to manage temperatures until HVAC techs can fix.
Automatic transfer switch failure: these things love to fail at the same time as a utility failure, and aren't always easy to bypass.
It does mean, though, that a system integrator could extract the keys ahead of time, likely without any way to know this has happened. Adding a way to generate a new key or otherwise rotate the key material should fix that issue, though.
My understanding is that this is part of the threat model of TEEs (Trusted Execution Environment). Whether or not this will ever be achievable is a different story.
It's not plug-and-play. It still needs a custom firmware:
"(...)The presented methods allow us to deploy a custom SEV firmware on the AMD-SP, which enables an adversary to decrypt a VM's memory.(..)"
Voltage glitching is no double-click. It would be a huge embarrassment to AMD if just double-click defeated the secure processor's firmware authentication. This requires electrically messing with the power supply of the processor.
So this means the secure VM feature is secure up to the threat model of someone able to crack open the hardware.
Honestly that's kind of what I would have expected. Just making it almost impossible to get VM memory remotely by owning the hypervisor is pretty good and reduces your attack surface to people who can get into the data center and have electronics expertise.
While its goals are a bit different from confidential computing, people saying "no" here have apparently never heard of the Xbox One. More generally, securing a device against its physical owner is notoriously difficult. Tony Chen gave a talk about how the Xbox One was secured against physical attack: https://www.youtube.com/watch?v=U7VwtOrwceo
Chen makes it very clear that their threat model only includes attacks costing less than the attach rate of the system (about $600). He doesn't consider it an achievable goal in the general case.
Does anyone actually use SEV in cloud environments? My impression was that its lineage (my understanding is that it's basically AMD's equivalent of Intel SGX) is to enable DRM for stuff like Netflix. I know for a time there was a lot of talk about using SGX in the cloud, but I was under the impression that trust in SGX has been eroded over time to the point where no one thinks it's a good idea.
SEV is completely different from SGX, and doesn't (to my knowledge) have an equivalent on Intel chips currently on the market. Google Cloud's confidential compute feature makes use of SEV under the covers.
I've only spun up an SEV instance for the novelty, but I'm considering using it for things like HashiCorp Vault, where performance isn't critical but extra privacy assurance is nice.
Fundamentally, though, system security hasn't caught up with the promise of SEV. It's far more likely that a VM will be compromised by 0-day attacks than by insiders at the cloud companies. But if you really need to run a secure kernel on someone else's machine, then SEV is the way of the future. This includes using SEV on-premises against hardware attacks. I've wanted hardware RAM encryption for a decade or two to avoid cold-boot attacks and similar hardware vulnerabilities.
What benefit would you get from having the fTPM keys? I don't own any PCs with TPM or fTPM as far as I know, so am not very familiar with what having it does as far as user experience is concerned and what having the keys would do to improve that.
It would allow me to fake any measured boot attestation. Right now this infrastructure is only provided to companies looking to secure their network[0] but if you look at Android's SafetyNet and the trends in IT, companies may force you to only use software they approve of to use their services.
On Android it's already a choice between banking apps or a device you fully control. I fear that this will include all internet-connected devices in the future.
Oh god yes. I don't want a device where I have to choose between full services and full control (for myself). The introduction of SafetyNet really annoyed me for those reasons.
Similar attacks have already been demonstrated for other TEEs, so nothing majorly new here (although the details are obviously different). The first work I'm aware of is an attack paper called CLKSCREW on ARM TrustZone. There were also some similar attacks published subsequently on SGX (Plundervolt). It's a hard problem to solve, I think. One of the major dividing lines is whether the attack can be performed remotely (i.e. software-only, using OS power management APIs) or whether it requires physical access. The former obviously has more impact but most likely is much easier to mitigate than physical attacks.
Wow, this is just really basic stuff in the secure IC world: you need to monitor the supply voltage for glitch attacks [0]. The glitch they are injecting is in the 20 µs range, which isn't even that fast. Whatever part of the chip you want to keep secure (the SP in this case, I guess) probably needs a dedicated voltage regulator, preferably an on-chip LDO with a droop monitor on the output. I only skimmed the paper, but it seems like there is no supply monitoring being done that would cause the SP to bail out of its hash checking.
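Agreed, and the firmware side of consuming such a monitor isn't complicated either. Here's a rough sketch in C (hypothetical names, nothing to do with AMD's actual SP code) of what "bail out of the hash check on a droop" could look like, assuming the hardware exposes a droop-monitor latch:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hardware hooks -- stand-ins for an on-chip droop-monitor
 * latch and the SP's firmware signature check; not AMD's actual interface.
 * The latch is assumed to be set by hardware whenever the supply dips
 * below threshold, and to stay set until explicitly cleared. */
extern void clear_droop_latch(void);
extern bool droop_latched(void);
extern bool verify_firmware_signature(const uint8_t *img, uint32_t len);

/* Verify the firmware image, but refuse to accept the result if the
 * droop monitor caught a supply glitch at any point during the check. */
bool glitch_hardened_verify(const uint8_t *img, uint32_t len)
{
    clear_droop_latch();

    bool ok = verify_firmware_signature(img, len);

    /* Test the conditions twice so that a single skipped or corrupted
     * branch instruction is less likely to flip the final outcome. */
    if (droop_latched() || !ok)
        return false;
    if (droop_latched() || !ok)
        return false;

    return true;
}
```

The real fix is in the analog domain (the dedicated regulator / on-chip LDO you mention); the firmware just has to make sure it actually checks the monitor's output before trusting the verification result.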
I'd like to imagine that the engineers tasked with developing these systems are aware of management's evil endgame, and leave in whatever obvious bugs they can to slow down the loss of computational freedom. The spec for the next version will include an on chip regulator, and they'll have to sabotage it a different way.
Uh, this is completely unethical behavior from an engineer. In no way, shape, or form is the insertion of a deliberate, hidden flaw that breaks the intended security properties of a system an acceptable form of protest.
It wouldn't be a "protest". Rather, it would be directly preserving individual human autonomy against emergent entities. Would you also consider it "unethical" for a farm animal to break out of its pen?
The true ethics violation here is creating devices to be "sold" while retaining control over their new supposed owner. Unfortunately, the digital/software engineer's main recourse to ethical violations is to quit, and someone else will just take their place. As the digital honeymoon wears off and we become keenly aware of communications technology's authoritarian potential, I hope there is a different type of resistance forming within all of these systems of control.
From a skim through the article, it seems to require soldering onto the power line between the voltage regulator and the processor to trigger the voltage drop. I don't believe the VR circuitry could be caused to glitch in the desired way by software / easily accessible physical components (i.e. plugging into USB).
Someone might be able to develop a method of causing this to occur by targeting a draw elsewhere, but this will likely be motherboard-specific (or even entire-platform-specific).
It definitely means that SEV isn't going to save you if your vendors conspire against you, but unless you're dealing with a determined state-level actor, I doubt there is much risk to most of us. Internal actors (rogue staff) are likely to compromise you in a simpler way.
However, this is great research. I imagine that in CPU designs being planned now there will be some work to ensure the secure processor is protected, likely by making the processor fault when the SP input voltage drops. Alternatively, they might be able to move some power regulation onto the processor package, providing buffering against voltage manipulation.
One of the more commonly misspelled words. In this case it's just a little, easily fixable typo in an article online, but I've seen a few businesses where people wanted "rogue" and had "rouge" instead, which is kinda mind-blowing. You've registered a business, set up branding and signage, etc., all the while repeatedly messing up the same word at each step of the way. Boy, that's embarrassing.