Intel Software Guard Extensions for Linux (01.org)
93 points by snaky on June 25, 2016 | 47 comments



>protection

Except from ever more sophisticated side-channel attacks.

I can see why someone would want to have it in cloud machines. But I hope it never makes it into end-user devices; it's only going to be used for DRM there.


  > it's only going to be used for DRM there.
It will also be useful for spyware developers.


Thinking about it, an alternative would be the ability to use user-supplied debug keys (in the BIOS) instead of intel signing keys.

That way neither DRM nor malware would be possible, because the user could enter debug mode to inspect the enclaves if he wanted to, but he could still use SGX to secure his own applications, because malicious code would not have access to the private keys.


Intel clearly wants complete control over what gets to run on the PC platform --- which is rather disturbing, but seems to be the way a lot of other companies are moving these days. Introducing SGX and "verified" software and promoting it as the best thing for everyone is only the first step. It's only a matter of time before they convince everyone to eventually deprecate "insecure" (i.e. free) software that doesn't use it, completely killing the "P" in "PC".

I'd consider SGX a small step forward for security and a big step backward for personal freedom... it makes me sad just how accurate Stallman was nearly 20 years ago:

https://www.gnu.org/philosophy/right-to-read.en.html

As such, I don't consider this announcement great news at all --- it's downright scary, an indicator of where things are going.


It's been a slow boiling frog ever since the Palladium debacle.


The issue is that secure computing isn't happening and the current approach to security is broken: malware, viruses, firmware hacks, ransomware. And this laziness is spreading to IoT and will lead to massive national infrastructure failures. State-sponsored attacks by the US via the NSA, by Russia, by China and by stateless organizations will yield digital carnage. The more our everyday things are integrated with compute, the greater the propensity for hacks, loss of data, loss of identity, etc.

The current OS platforms don't even offer you any form of real protection of your person. And companies like Facebook, Google et al. are all about mining YOU.


It can be used as a programmable secure element. Think disk encryption (with protection against brute forcing), storage of SSH and TLS keys, credit card authentication, etc.
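For example, the brute-force protection works because the disk key only ever exists inside the enclave, which enforces an attempt counter the host can't reset. A toy sketch (the class and limits here are made up; this is not the SGX API):

    import hmac, hashlib, os
    from typing import Optional

    class KeyVaultEnclave:
        """Stand-in for an enclave holding a disk encryption key."""
        MAX_ATTEMPTS = 10

        def __init__(self, passphrase: str):
            self._salt = os.urandom(16)
            self._tag = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                                            self._salt, 200_000)
            self._disk_key = os.urandom(32)   # never leaves the "enclave" unwrapped
            self._attempts = 0

        def unlock(self, guess: str) -> Optional[bytes]:
            if self._attempts >= self.MAX_ATTEMPTS:
                return None                   # enclave refuses further guesses
            self._attempts += 1
            tag = hashlib.pbkdf2_hmac("sha256", guess.encode(),
                                      self._salt, 200_000)
            if hmac.compare_digest(tag, self._tag):
                self._attempts = 0
                return self._disk_key
            return None

    vault = KeyVaultEnclave("correct horse battery staple")
    print(vault.unlock("hunter2"))                                  # None
    print(vault.unlock("correct horse battery staple") is not None) # True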


I did not say that it will have zero potential security applications.

What I'm saying is that its current threat model also defends against any access by the owner of the machine, even though that is not necessary to achieve application security.

Not even boot time configuration can override any of it. So it is unnecessarily user-hostile because it can be used to implement DRM and similar schemes where the user has no control over the execution on his hardware.

It does not provide an escape hatch when it should.


Note that I don't know anything about SGX, but skimming very briefly through the first part of the slides below [1], it looks like it's just the ability to create — at runtime — special memory regions that are unreadable outside of the process space (as in, once the region is created, even root can't unscramble it).

Or are we talking about execution of pre-encrypted code on disk that the machine owner can't disassemble? Because the former sounds like a good idea, while the latter sounds like an insanely bad one.

[1] https://software.intel.com/sites/default/files/332680-002.pd...


The former combined with the authentication aspect enables the latter.

1. plain-text code gets loaded into enclave

2. enclave generates a keypair

3. enclave authenticates itself against a 3rd party (the DRM/malware mothership) and sends it the pubkey

4. mothership sends additional secrets / code, only decryptable by the enclave

5. you now have uninspectable code running on your machine

And since the enclave can persist itself with the sealing key, the handshake only has to happen once, e.g. during the installation phase, which often happens with elevated privileges and thus also includes network access.

Oh, and the enclave also has access to the rest of the process memory, i.e. the system is not shielded from the actions of said uninspectable code.
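In rough Python, with the attestation step stubbed out (real SGX attestation goes through EPID quotes and Intel's attestation service; the helper names here are made up, only the flow matches the steps above):

    # pip install cryptography
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes, serialization

    # 1-2. plain-text loader runs inside the enclave and generates a keypair
    enclave_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    enclave_pub_pem = enclave_key.public_key().public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)

    # 3. enclave sends the pubkey plus an attestation quote to the mothership
    def attest(pubkey_pem: bytes) -> bytes:
        return b"QUOTE||" + pubkey_pem        # placeholder, not a real SGX quote

    quote = attest(enclave_pub_pem)

    # 4. mothership verifies the quote (omitted) and answers with a payload
    #    encrypted to the enclave's public key
    payload = enclave_key.public_key().encrypt(
        b"secret code / DRM keys",
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))

    # 5. only the enclave can decrypt and act on the payload; the host OS and
    #    the machine owner only ever see ciphertext
    print(enclave_key.decrypt(
        payload,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None)))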


> defends against any access by the owner of the machine, even though that is not necessary to achieve application security

Sure it is. Why shouldn't a laptop be able to protect itself against a nation-state the way an iPhone can? The only real way to do that is to make the enclave keys write-only. Anything less (e.g. giving the user master recovery codes) is vulnerable to the nation-state's favorite tactic—rubber-hose cryptanalysis.


> The only real way to do that is to make the enclave keys write-only.

That's where you are wrong.

You could install a pubkey in the BIOS that allows decryption (by making the symmetric key of enclaves available, encrypted to that pubkey). Then the user can choose between debuggability (by keeping the private key) and security against rubber-hose cryptanalysis (by discarding it).
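A sketch of what that could look like, with the owner's "debug key" modelled in plain Python (SGX has no such escrow mechanism today, so every name here is hypothetical):

    # pip install cryptography
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # owner generates a debug keypair and installs the public half in the BIOS
    owner_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # when an enclave is created, the CPU derives its symmetric sealing key...
    enclave_symmetric_key = os.urandom(32)

    # ...and additionally exports it wrapped under the owner's public key
    escrow_blob = owner_key.public_key().encrypt(
        enclave_symmetric_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))

    # keep the private key    -> you can unwrap the blob and inspect the enclave
    # discard the private key -> the blob is useless and you keep the
    #                            rubber-hose resistance of stock SGX
    recovered = owner_key.decrypt(
        escrow_blob,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    assert recovered == enclave_symmetric_key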


In a threat model where the Bad Guy (intelligence organization? police? criminals?) have physically taken the machine, how exactly do you define owner?

What about a cloud service that wants to deny themselves access to your data, despite the fact that they own the physical hardware?

The iPhone secure element, for example, is specifically designed to defend against attackers with physical possession.


> What about a cloud service that wants to deny themselves access to your data, despite the fact that they own the physical hardware?

This scenario is fundamentally flawed. If you can't trust them with your data then you can't give it to them, otherwise the first side channel attack against the hardware gives them access to it.

On top of that, all you're really doing is moving the party you have to trust from Amazon/Google/Apple/Microsoft/Rackspace/yourself to Intel. Why is Intel supposed to be any more trustworthy? They're inherently worse because they have less competition, so you have less choice in who you're willing to trust.


See my other post: https://news.ycombinator.com/item?id=11977403

The cloud provider could operate by accepting a pubkey provided by you, thus they would not have the private key.


But can they prove that they are operating using your pubkey? If so, then you're back to square one -- your malware will prove that it's operating with Intel's (or the malware author's) pubkey. If not, then there's no security.


I think you do not understand my proposal.

I did not propose to replace intel's signing key, although that would also be possible[0]. I'm suggesting to add an additional keypair that can be used to decrypt a secure enclave's memory. Let's call it the backdoor key. Because this is a legit backdoor, to be used by the owner of the house.

Attestation would be performed with intel's key (so you know it's not an emulator) but also indicate which keypair could be used to break the enclave (so you know who has access to the backdoor).

By default that could be an invalid (unusable) key. If you want to debug enclaves, e.g. because you suspect they run malware, you would add your own. If you want to run on a cloud provider, you send your own to the cloud provider. If you want to protect your own stuff from malware or rubber-hose cryptanalysis, you don't need one and can leave the invalid key or one with a discarded private key in place.

To preempt a possible objection: This backdoor does not work retroactively. Only enclaves created after changing the key will be affected by it and it will show up in their attestation. So an attacker with physical access would not gain access to past encrypted data, forward secrecy remains intact.

[0] To fully replace intel's key it would either require a write-only procedure to get it into the hardware at boot time, which would only make sense with physical access to the machine, or one would have to replace the old key with the new one from within an enclave, that way you would ensure trust-continuity and thus avoid the emulation problem.
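For illustration, the extended attestation report might carry a field like this, so a remote party can see who, if anyone, holds a backdoor (entirely hypothetical; real SGX reports have no such field):

    import hashlib
    from dataclasses import dataclass

    INVALID_KEY = b"\x00" * 32               # default: no usable backdoor

    @dataclass
    class AttestationReport:
        mrenclave: bytes                     # measurement of the enclave contents
        backdoor_key_hash: bytes             # hash of the installed backdoor pubkey
        cpu_signature: bytes                 # signed with the Intel-provisioned key

    def make_report(measurement: bytes, backdoor_pubkey: bytes) -> AttestationReport:
        return AttestationReport(
            mrenclave=measurement,
            backdoor_key_hash=hashlib.sha256(backdoor_pubkey).digest(),
            cpu_signature=b"<signed by CPU key>")   # placeholder

    # a cloud customer would only accept reports naming their own key;
    # a DRM vendor would only accept the no-backdoor hash
    report = make_report(b"enclave-measurement", INVALID_KEY)
    print(report.backdoor_key_hash.hex())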


> Attestation would be performed with intel's key (so you know it's not an emulator) but also indicate which keypair could be used to break the enclave (so you know who has access to the backdoor).

> By default that could be an invalid (unusable) key. If you want to debug enclaves, e.g. because you suspect they run malware, you would add your own.

This wouldn't solve the malware problem. The problematic kind of malware will use attestation, because otherwise you would simply emulate it from start to finish. If the malware author is paying any attention, then their malware will simply refuse to run (C&C won't provision it with a payload) if a backdoor key is installed, just as it would refuse to run if emulated.

You would gain the ability to use SGX for your own internal purposes with a form of escrow, but I think you could achieve this even on existing SGX by writing an enclave that emulates other enclaves and tweaking the key derivation process a bit. Admittedly it would be awkward.


> If the malware author is paying any attention, then their malware will simply refuse to run (C&C won't provision it with a payload) if a backdoor key is installed

That looks like a solution to the malware problem to me. Only people who want to run SGX-based DRM would leave the default key in place. And DRM is structurally indistinguishable from malware.

In theory you could also use a whitelist approach. Write an open-source enclave loader which initializes an enclave and then pulls signed code into the enclave (from within) to execute. But in practice malware would just trick the user into adding something to the whitelist. "Want to play this porn video? Just add our DRM!"

So really, just choose: owner-only debuggability and no malware, or DRM and malware. The choice is yours.


It's already here; I have it on my i7-6700K.

Edit: I've disabled it in UEFI, but the functionality is definitely there.


Genuine question, since I haven't read the SGX docs in detail:

What's to stop us from implementing a broken SGX via trap-and-emulate?


I suppose a CPU would contain some secret key material of its own, that can be used to authenticate the hardware.


I don't have any problems with it being in end-user devices. Let the end user decide whether to use products with DRM or not.


End users have no idea what DRM is. In a short period of time there won't be any products without DRM at all.


So if end users are fine with it, then what's wrong?

Intel Software Guard Extensions just allow for DRM, they don't do DRM.


Skimming through the docs, I'm still not sure how they can guarantee that an enclave can't be meaningfully emulated. Do they simply mean that you can't emulate a specific processor (since each processor uses a unique key)? That wouldn't be the same guarantee at all. Or is it just impractical to emulate SGX with acceptable performance? I'm probably missing something. (Note, I know SGX can be virtualized properly, but I'm specifically referring to emulation that would allow you to effectively debug an SGX-protected program).


https://software.intel.com/sites/default/files/332680-002.pd...

Page 36. After building the enclave you can have the CPU sign the state with an Intel key. You can then check the signed state against a known-good state and only submit your secret data to the enclave if the signature can be verified against Intel's pubkey.

Additional overview: https://software.intel.com/sites/default/files/managed/3e/b9...
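The verifier side of that would look roughly like this; the Intel key is simulated locally and the real flow goes through Intel's attestation service, so treat it as a sketch of the logic only:

    # pip install cryptography
    import hashlib
    from typing import Optional
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    intel_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    KNOWN_GOOD_MEASUREMENT = hashlib.sha256(b"enclave code pages").digest()

    # what the CPU would produce after building the enclave: a signed measurement
    reported = KNOWN_GOOD_MEASUREMENT
    signature = intel_key.sign(reported, padding.PKCS1v15(), hashes.SHA256())

    def release_secret(measurement: bytes, sig: bytes) -> Optional[bytes]:
        if measurement != KNOWN_GOOD_MEASUREMENT:
            return None                              # not the enclave we expect
        try:
            intel_key.public_key().verify(sig, measurement,
                                          padding.PKCS1v15(), hashes.SHA256())
        except InvalidSignature:
            return None                              # not genuine hardware
        return b"the secret data"                    # safe to provision

    print(release_secret(reported, signature))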


So, I only have to take apart one single CPU to destroy the whole concept?


Probably not. I would guess that they're using something similar to a certificate chain, i.e. a per-CPU key which is signed by Intel and could then be revoked when the leak gets noticed.


You can emulate an enclave, but, if you do, the emulated enclave doesn't get access to its seal key, etc. The idea is that you provision secrets accessible only with access to the key and then write an enclave to access those secrets.

An enclave can verify itself to another enclave on the same machine, and Intel supplies a mechanism by which licensees (sigh) can verify their enclaves remotely.


I think the question is, is there any way to verify—from the userland of a cloud VM instance, where the enclave is talked to via hypercalls—that you're "installing" an enclave and its key into the processor itself (where it would be protected from access by rogue datacenter ops staff) rather than just into a hypervisor-emulated processor that is actually fully accessible to the machine owner?

The sibling post mentions that the processor signs its enclaves' outputs with an Intel private key, which would help a bit—but if that key is static, that'd be pretty easily thwarted by decapping a single processor to get the key, just like extracting the DRM keys from set-top boxes or game consoles. (TPMs are supposed to "self destruct" when you try to decap them, but that's only scary for the individualized keys in a TPM; if you're willing to sacrifice 1000 chips to reconstruct one key common to them all, it becomes just a matter of persistence.)


This is, indeed, a problem. I'm not aware of any mechanism to protect against it.


So is SGX open to everyone? We don't need a key signed or blessed by Intel to use it? If so that's great. Provably secure mixing services. Secure data processing clouds. Sounds neat.

You could build a secure webmail system and verify it's running as designed.


Probably yes, kind of.

On newer CPUs (unclear which ones yet), there is a set of MSRs called IA32_SGXLEPUBKEYHASH. If available and unlocked by BIOS, then SGX is open to everyone.

Looking at the provided Launch Enclave source (https://github.com/01org/linux-sgx/blob/master/psw/ae/le/lau...), it appears that, even on existing CPUs, the LE can be configured to launch anything. There's a file in the git tree (https://github.com/01org/linux-sgx/blob/master/psw/ae/data/p...) that sounds promising, but I haven't checked whether it's signed with production keys or whether it works for this purpose.
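If you want to check your own machine, something like this reads the relevant MSRs (assumes Linux with the msr module loaded and root; the MSR numbers 0x3A and 0x8C-0x8F are my reading of the SDM, so double-check them):

    import struct

    def rdmsr(msr: int, cpu: int = 0) -> int:
        # /dev/cpu/N/msr exposes MSRs at their number as a file offset
        with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
            f.seek(msr)
            return struct.unpack("<Q", f.read(8))[0]

    feature_control = rdmsr(0x3A)                    # IA32_FEATURE_CONTROL
    print("SGX enabled:        ", bool(feature_control >> 18 & 1))
    print("launch control open:", bool(feature_control >> 17 & 1))

    # if launch control is open, the OS can rewrite the launch-enclave pubkey
    # hash, i.e. SGX can be used without an Intel-blessed launch enclave
    le_hash = [rdmsr(0x8C + i) for i in range(4)]    # IA32_SGXLEPUBKEYHASH0..3
    print("LE pubkey hash:", "".join(f"{w:016x}" for w in le_hash))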


This would have been very, very useful and immensely popular in the Embedded world (and all those Internet-Of-Thingies).

Intel: make this work on some future generation of Baytrail, with a proper version of Linux (Yocto comes to mind), and you might become relevant again.


Is my understanding correct that this is basically a built-in virtual HSM, with the advantages of being a) fully programmable and b) not performance-limited?


I'm really impressed by how easy it is to get a NISTP256 key from the enclave. I think Intel got a lot of things right and then completely failed on the signing and attestation process. Hopefully we can see a future iteration of SGX that builds on this early promise.


We're working on versions of Java, Nginx, MySQL, etc. that can run inside the secure enclaves; it's interesting and challenging work! http://www.serecaproject.eu/


Does anyone know how much room one of these things would have anyway?


It's yet to be determined what best practices will be, but I have a hunch SGX will be used along the following lines:

1) load an encrypted symmetric key into an enclave

2) decrypt the symmetric key in the enclave

3) create a private key in the enclave, and encrypt it with the symmetric key inside the enclave. Give the encrypted data back to the user for storage.

4) All operations using the private key (sign, decrypt) are marshaled to the SGX enclave. You'll give the enclave the encrypted private key and the operation to be performed, and the enclave will return the result. The private key is decrypted by the symmetric key inside the enclave, and unloaded from the enclave memory as soon as the operation is completed.

There's obviously some churn copying the encrypted private key to the enclave each time, but the private key is typically used for very few operations until an ephemeral symmetric key is negotiated. If you're super-paranoid, the ephemeral key can marshal its operations to the enclave, but I think most people will agree that the only thing you can realistically protect without sacrificing performance is the private key.
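In code, the lifecycle might look like this, with the enclave boundary faked as a Python class (the class and key names are illustrative; a real implementation would use the SGX SDK's sealing and ECALL/OCALL machinery):

    # pip install cryptography
    import os
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    RAW = (serialization.Encoding.Raw, serialization.PrivateFormat.Raw,
           serialization.NoEncryption())

    class Enclave:
        def __init__(self, wrapped_sym_key: bytes, unwrap_key: bytes):
            # 1-2. the symmetric key arrives encrypted and is only decrypted inside
            nonce, ct = wrapped_sym_key[:12], wrapped_sym_key[12:]
            self._sym = AESGCM(unwrap_key).decrypt(nonce, ct, None)

        def create_private_key(self) -> bytes:
            # 3. generate a signing key inside and hand back only the wrapped form
            priv = ed25519.Ed25519PrivateKey.generate()
            nonce = os.urandom(12)
            blob = AESGCM(self._sym).encrypt(nonce, priv.private_bytes(*RAW), None)
            return nonce + blob

        def sign(self, wrapped_priv: bytes, message: bytes) -> bytes:
            # 4. each operation unwraps the key, uses it, and forgets it
            nonce, ct = wrapped_priv[:12], wrapped_priv[12:]
            raw = AESGCM(self._sym).decrypt(nonce, ct, None)
            return ed25519.Ed25519PrivateKey.from_private_bytes(raw).sign(message)

    # outside the enclave the host only ever holds wrapped blobs
    provisioning_key = AESGCM.generate_key(bit_length=256)
    sym = AESGCM.generate_key(bit_length=256)
    n = os.urandom(12)
    enclave = Enclave(n + AESGCM(provisioning_key).encrypt(n, sym, None),
                      provisioning_key)
    wrapped = enclave.create_private_key()
    print(enclave.sign(wrapped, b"hello").hex())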


So you would load the symmetric key into the enclave out of band, i.e. via the BIOS or IPMI? Could this be used to encrypt filesystems or block devices using something like LUKS, I wonder?


Around 100MB of physical RAM, to be shared among all enclaves active at a given time in the platform. Enclaves can be swapped out (if the OS/VMM is sophisticated enough), so in principle memory can be overbooked.


That's not a hardware limitation; SGX supports arbitrarily large enclaves. The encrypted region of physical memory has to be reserved during early boot (using the PRMRR_BASE and PRMRR_MASK MSRs), so typically it's a small portion of physical memory to avoid taking too much memory from the OS; however, that could be configured to be larger if needed.

And as you noted, SGX supports the kernel paging memory in and out of the enclave (encrypting it before giving it to the kernel to store), which allows for arbitrarily large enclaves regardless of the amount of reserved physical memory.

Typically you do want to minimize the amount of code that you put inside the enclave to minimize your attack surface and the amount of code you trust, but there isn't any architectural limitation on size.


What current Intel CPUs contain this secure enclave?


Looks like all Skylake processors, even the low-end ones that miss out on a lot of other features: http://ark.intel.com/products/codename/37572/Skylake#@All


Except for the very first models, from like the first two months or so of last fall. They had an issue with it.


It also needs BIOS support. Luckily it's off by default and not supported by most BIOSes.



