To me, the creation and standardization of the capability are evil in themselves, even if it takes a further party's decision to cause actual effects. And I see those subsequent decisions as pretty much inevitable, given how the market functions.
I would draw an analogy to surveillance. I have a problem with surveillance itself, even when it isn't being used to facilitate any bad effects. Just building the systems normalizes the paradigm and puts us in a precarious situation.
How is this different from every other instance of signing for verification? Are you opposed to package managers, publicly posted hashes, and GPG? Even TLS works with the same method. What about verifying the operating system makes it more prone to being misused?
It's not the mechanism itself, but how the mechanism is deployed, that mandates specific types of policy. It's a matter of where the trust is anchored:
> publicly posted hashes
The method you used to retrieve the hash and ...
> package managers
The method you used to obtain the initial install and ...
> GPG
The web of trust (how you came to associate a given key with an identity) and ...
> TLS
As commonly used, the CAs. Which we're presently having a problem with, because the list is too damn fixed. In the case of TLS applied to other protocols (e.g. OpenVPN), the trust lies in how the keys are distributed. And ...
The "..." is of course the integrity of your machine. Which, unless you always keep it in your sight, is a big if. A large part of this is what boot image verification is aiming to solve. But to do this, one needs to choose somewhere else to anchor the trust. "Secure Boot" specifies that this trust should be anchored in manufacturer-designated entities using public key signatures. On x64 this this would raise antitrust hackles, so Microsoft mandates (for now) that its primary security property be destroyed, leaving the anchor back to possession/integrity of the machine.
What I'm advocating is that this trust anchor could also be something non-trapdoored like a proof of work (or simple waiting time, since we're dealing with trusted hardware). For example, imagine if the specification mandated that all conforming implementations allow changing the keys after waiting in an offline "key provision mode" for a week. The trust root would then be "possession of the hardware for a week" (defeating an evil maid), rather than a fixed set of manufacturer-designated signers.
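A rough sketch of how a conforming implementation of that idea might look. Everything here is hypothetical, since no such mode exists in the spec: the state struct, `monotonic_seconds`, `store_platform_key`, and the one-week constant are all assumptions made for illustration.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PROVISION_WAIT_SECONDS (7ULL * 24 * 60 * 60)   /* one week */

struct provision_state {
    bool     waiting;      /* currently in offline key-provision mode */
    uint64_t entered_at;   /* monotonic timestamp when the mode was entered */
};

/* Assumed firmware primitives: a monotonic clock that survives the wait,
 * and a routine that writes a new platform key to protected storage. */
uint64_t monotonic_seconds(void);
void     store_platform_key(const uint8_t *key, size_t key_len);

/* Only permit key replacement after the full uninterrupted wait, so an
 * evil maid with an hour of access can't re-anchor the machine's trust. */
bool enroll_platform_key(struct provision_state *st,
                         const uint8_t *new_key, size_t key_len)
{
    if (!st->waiting)
        return false;                               /* must opt in first */
    if (monotonic_seconds() - st->entered_at < PROVISION_WAIT_SECONDS)
        return false;                               /* week not yet elapsed */
    store_platform_key(new_key, key_len);
    st->waiting = false;
    return true;
}
```

The trust root then becomes "possession of the hardware for a week" rather than a fixed set of manufacturer-designated signers.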
You could root your trust in a TPM or another hardware security module. You still have to trust the manufacturer of the HSM chip, but that's their entire business model, unlike Microsoft's.
You can't really "root" your trust in an HSM. Yes, the HSM is trusted, in the sense that if it is broken then the security of the system is too. But the "trust root" is what the HSM uses as a specification for what to trust. This still boils down to a public key, physical possession, proof of work, immutable hash, etc.
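Put another way (names purely illustrative, not from any real HSM API), the HSM enforces whichever anchor it's given, but one of these still has to be chosen as the root:

```c
/* Illustrative only: every "trust root" specification the HSM could be
 * handed ultimately reduces to one of these kinds of anchor. */
enum trust_root {
    ROOT_PUBLIC_KEY,          /* a manufacturer- or owner-enrolled signer */
    ROOT_PHYSICAL_POSSESSION, /* presence at the machine (button, jumper) */
    ROOT_PROOF_OF_WORK,       /* work done, or a mandated waiting period */
    ROOT_IMMUTABLE_HASH       /* a digest burned in at manufacture */
};
```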
>Secure Boot" specifies that this trust should be anchored in manufacturer-designated entities using public key signatures.
No, it doesn't. It doesn't specify how the keys should be dealt with at all. The implementation currently has manufacturers controlling that aspect, which the author views as flawed.
As long as the trust root consists of public keys and not physical possession of the device, the manufacturer inherently controls those public keys.
A package manager is installed by the user of the system. It's protecting against attackers elsewhere on the internet tricking the user into installing malware. "Secure boot" is installed by the manufacturer before selling the computer. It's "protecting" the computer against running the software that the user wants. One of them gives users more control over what programs they run; the other gives them less.
That just isn't true, especially on x86. I can and have loaded my own keys and made secure boot validate against my keys to force it to only run what I wanted it to. The standard actually mandates this, so it's pretty disingenuous to state that secure boot on a computer prevents the user from running whatever software they want to.
The user is always free to control Secure Boot, except on Windows phones; you might have a point about it being a bad thing for mobile.
It's protecting users from running unsigned operating systems on their computer, which protects them from a malicious operating system having full access to their machine. You're arguing against the implementation, which both the author and I agree is flawed, where hardware manufacturers are the only ones regularly controlling keys.
> Are you opposed to package managers, publicly posted hashes, and GPG? Even TLS works with the same method.
A lot of people are opposed to centralized package managers - not the (desktop) Linux ones, since Linux distros aren't monopolish enough to have political power, but their proprietary equivalents, the Windows Store and Mac App Store (also mobile app stores but that's a bit different). Nothing stops you from installing software from outside the store on Macs (in fact the store's pretty dead), but there's no end of complaints about walled gardens and speculation about an iOS-style lockdown being implemented in the future. Ditto on Windows, but Gabe Newell famously called Windows 8 a catastrophe (and was widely cheered for it) and started a big Linux push, just because Microsoft created a store to compete with his own company's centralized store.
A more precise analogy might be surveillance cameras. While an argument could be made for their immorality, the vast majority of society accepts at this point that, while they can be used for bad ends (mass surveillance), their uses are varied and beneficial enough that we accept their costs as one of the costs of living in society.