Secure Boot: Who will control your next computer? (fsfe.org)
114 points by mariuz on June 20, 2012 | 114 comments



Secure Boot is only going to hinder legitimate users in the long term.

iPhone boot exploits, Wii boot exploits, and PS3 boot exploits have all been published freely, mostly because they are very hard to capitalize on (these machines all have secure boot implementations, albeit not the UEFI one).

If UEFI Secure Boot really does protect against malware, exploits for it will be traded like 0-days and will be worth a lot (I suspect it will be harder to update the firmware against these 0-days than to run Windows Update).

It's a landgrab; slow and cunning on the x86/AMD64, quick and merciless on the ARM. But it has a lot more to do with grabbing land than with end user security.

edit: drivebyacct2 claims that none of these 3 had security comparable with UEFI SecureBoot. I don't know enough to argue about that - my assumption is that it will be implementation flaws that will be exploited, rather than theoretical flaws. But I might be wrong.


You've probably heard this a lot lately, but:

"everything Stallman said was true"...and now it's happening on a huge scale.


Let's not get ahead of ourselves. Stallman is cool and everything but he is NOT 100% right. Most of us here, for instance, do not believe it is unethical to create "proprietary software". There is still a lot of basic philosophy on which to contest Stallman.


You may disagree with Stallman's philosophy (many do, me too). But his predictions WERE spot on, and they were made more than 30 years ago.

He predicted DRM for consumer media, and that DRM violations would carry harsh punishments. As the laws are written (and occasionally practiced), he was more than spot-on.

He predicted being unable to run unapproved software on commodity hardware you own. This has started happening with iPhone/iPad and somewhat with Win7 and (boot locked) Android devices. This post is about it coming to a PC near you in the near future via Win8 UEFI Secure Boot requirements.

In the beginning, you'll have to pay more to get hardware that can run anything you fancy, and at some point you might not be able to.

I'd say his predictions were right. Philosophy and ethics are never a well-defined right or wrong, so I'm not sure what you are arguing with.


Those were not predictions. Those things were already happening -- to paraphrase, the rest of the world got caught up in the future of 30 years ago.

The "F* u nvidia" seems also to catch on slowly in Finland -> "there was a disturbance that they needed to take care of. The officer then asked what the disturbance was and the faculty member relented - they were worried that there would be an incident, but that it hadn't yet happened." -- http://www.fsf.org/blogs/community/rms-ati-protest.html


Was anything like DRM (sanctioned in law or not) happening in 1985 when Stallman wrote "The right to read"?

I'm very interested in the details.


DRM is as old as the floppy. Frequently, bad sectors were created on the original media and the software checked for those bad sectors. If you copied the software, you couldn't copy the bad sectors, so the software wouldn't load.
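
To make the mechanism concrete, here's a minimal sketch of what such a check boiled down to (the device path and sector number are made up for illustration; real schemes varied a lot). The original disk ships with a deliberately unreadable sector, so a clean read of that sector means you're running from a copy:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define SECTOR_SIZE 512
    #define BAD_SECTOR  37   /* hypothetical sector deliberately damaged on the original */

    int main(void)
    {
        char buf[SECTOR_SIZE];
        int fd = open("/dev/fd0", O_RDONLY);          /* hypothetical floppy device */
        if (fd < 0) { perror("open"); return 1; }

        lseek(fd, (off_t)BAD_SECTOR * SECTOR_SIZE, SEEK_SET);
        ssize_t n = read(fd, buf, SECTOR_SIZE);
        close(fd);

        if (n == SECTOR_SIZE) {
            /* The "bad" sector read back cleanly: this is a copy, refuse to run. */
            fprintf(stderr, "Not an original disk.\n");
            return 1;
        }
        puts("Original media detected, loading...");  /* read failed as expected */
        return 0;
    }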


That's a very distant cousin to DRM: if you had two machines, you could just move the floppy or dongle between them (or lend them to a friend, or resell them). You cannot do this with your DRM'd music or ebooks. Furthermore, your software and data were usable even if the authorization server went down. (Google PlaysForSure if you are not familiar with a modern counterexample.)

Furthermore, Quaid Software's "CopyWrite" and Central Point's Option Board were able to copy just about everything, and were legally sold and marketed.

The French and US three-strikes, accusation-based penalties are very much in line with what Stallman was describing. Copy protection of the '80s isn't.

The guy deserves credit for quite accurately predicting a non-trivial future.


Right. It was called "copy protection" back then, not DRM.


According to Wikipedia, Stallman wrote "the right to read" in 1997, not 1985:

"The Right to Read" is a short story by Richard Stallman, the founder of the Free Software Foundation, which was first published in 1997 in Communications of the ACM.

That said, I remember several copy protection measures were en vogue for games, on Amiga and Atari, and even business software. It wasn't called DRM at the time, but DRM is just an acronym describing any similar practice, not a specific technology.

From Wikipedia we also learn: "A very early implementation of DRM was the Software Service System (SSS) devised by the Japanese engineer Ryoichi Mori in 1983 [135] and subsequently refined under the name superdistribution. The SSS was based on encryption, with specialized hardware that controlled decryption and also enabled payments to be sent to the copyright holder."


I stand corrected.

For some reason, I remembered reading it as early as 1992; that should tell you something about trusting my memory...

Still, he was discussing these things before "The right to read" (in which the ideas were well enough crystallized to put into literature), and modern DRM is in line with "right-to-read" (e.g. Japanese penalties passed this week) and dissimilar to anything that was there (in theory) before 2000 and (in practice) before 2005.

> That said, I remember several copy protection measures were en vogue for games, on Amiga and Atari, and even business software. It wasn't called DRM at the time, but DRM is just an acronym describing any similar practice, not a specific technology.

I see a huge difference between software copy restriction (a.k.a "protection") and DRM. With copy restriction, you owned your copy and could do anything you wanted with it, including lending and reselling it. Many of these schemes would still let you make a backup, so long as the dongle/original was available for a 1 second check. That is, the restriction was on distribution.

Modern DRM restricts use.

> From Wikipedia we also learn: "A very early implementation of DRM was the Software Service System (SSS) devised by the Japanese engineer Ryoichi Mori in 1983 [135] and subsequently refined under the name superdistribution. The SSS was based on encryption, with specialized hardware that controlled decryption and also enabled payments to be sent to the copyright holder."

That's more in line with modern DRM. But Japan had always been in the future :)


There is no good reason to deny anyone the four basic Freedoms. The world will eventually realize this, and get rid of proprietary software for good, in addition to all the other horrors RMS has accurately predicted and warned us about for over three decades.

Too bad people didn't listen, and continue to trample everyone's Freedoms under their feet for their petty, selfish greed while having the audacity to assert their actions to be perfectly ethical. We are currently seeing where this has led us, and it is the duty of every good man or woman to do anything in their power to stop it, lest evil triumph.


> There is no good reason to deny anyone the four basic Freedoms.

Sure there is. First good reason: Provides the ability to turn a profit on the sale of software licenses, which is a tried and true method for producing better quality consumer software than we've ever seen from OSS products.

> Too bad people didn't listen, and continue to trample everyone's Freedoms under their feet for their petty, selfish greed while having the audacity to assert their actions to be perfectly ethical.

I write closed-source software, and sell it to people that want to buy it. If they prefer OSS solutions, that's fine -- they're under no obligation to buy.

They want what I'm selling.

If making a decent wage while giving people what they want is evil, well, call me Satan.

> ... it is the duty of every good man or woman to do anything in their power to stop it, lest evil triumph.

Yeah, this right here? This is why I can't stand the GPLers.


>Provides the ability to turn a profit on the sale of software licenses, which is a tried and true method for producing better quality consumer software than we've ever seen from OSS products.

a) "Tried and true" does not mean it will work forever. It's also bullshit to sell copies of non-scarce resources. Besides, you can sell Free software. Your argument is invalid.

b) "better quality [...] than ever seen from OSS products" is bullcrap, and you know it. There's a few select niche markets where proprietary software is still entrenched (eg, Graphic Design), but this too will change. Also, if you don't like something or you feel like a feature is missing, stop complaining and start contributing.

>I write closed-source software, and sell it to people that want to buy it.

And again you are missing the fact that this neither requires your software to be proprietary nor does it mean you need to trample on your users' Freedoms. If they want what you are writing, they will still want it if it's Free (as in Speech).

>If making a decent wage while giving people what they want is evil, well, call me Satan.

Making a decent wage with unethical practices is still evil. The circumstances do not matter. You're essentially saying that the ability to turn a profit somehow excuses wrongdoing. It doesn't.

>Yeah, this right here? This is why I can't stand the GPLers.

Hate us all you want, in the end we are right and we'll eventually win. :)


> a) "Tried and true" does not mean it will work forever. It's also bullshit to sell copies of non-scarce resources. Besides, you can sell Free software. Your argument is invalid.

It works now, and as of yet, there's been no better model demonstrated, despite no lack of trying.

> And again you are missing the fact that this neither requires your software to be proprietary nor does it mean you need to trample on your users' Freedoms. If they want what you are writing, they will still want it if it's Free (as in Speech).

If my users don't want me to trample on their "Freedoms", they don't have to buy my software.


> If my users don't want me to trample on their "Freedoms", they don't have to buy my software

This is true of most anything, but doesn't stop consumer protection laws from existing. Moreover, most people agree that consumer protection laws are reasonable and necessary.


Most consumer protection laws have to do with curtailing dishonest or misrepresentative behaviors -- eg, selling a defective product, misrepresenting product features, etc.

http://en.wikipedia.org/wiki/Consumer_protection#United_Stat...


Yes. And a program you can't modify to suit your needs is defective. This isn't enshrined in the law at the moment, but it's the same idea.

However, my real point was just that people's being able to not buy your product is no defense: it doesn't work in consumer protection cases and shouldn't work here.

Moreover, consumer protection isn't just about dishonesty: even if you represent your product perfectly accurately, you still can't sell certain (mostly dangerous) things. For example, if you design a really cheap car and market it as "cheap but dangerous", you still can't sell it.


> Yes. And a program you can't modify to suit your needs is defective. This isn't enshrined in the law at the moment, but it's the same idea.

It's not defective according to the people buying it. That's what matters.

> For example, if you design a really cheap car and market it as "cheap but dangerous", you still can't sell it.

This analogy is not particularly applicable to consumer software, and applies mainly to liability. A lawyer would need to step in here, as I'm guessing neither of us are experts in consumer product liability.


I'm pretty sure it's not a matter of liability--it's simply against the law to sell a car without certain safety features (like airbags). Moreover, there are good reasons for a customer to not want airbags--they make the car heavier and that kills acceleration and handling.

So there is plenty of precedent--just because a customer would be fine with something (like less safety) does not necessarily justify it.


> So there is plenty of precedent ..

No, for software, there isn't.


>If my users don't want me to trample on their "Freedoms", they don't have to buy my software.

Most likely they could not make an informed choice, because you do not educate them about denying them their freedom. And still there is no argument to do it.


> Most likely they could not make an informed choice, because you do not educate them about denying them their freedom.

Users don't care about their "freedom", and neither do I, so why is educating them my problem?

Also, you're not giving users credit. I'm informed, and yet I still choose to buy Mac OS X, iPhones, and even DRM'd media (iTunes video rentals).

> And still there is no argument to do it.

I don't see any other viable economically proven method to fund our consumer software development.


> I don't see any other viable economically proven method to fund our consumer software development.

Apparently you have never heard of RedHat. You missed a billion-dollar company. They release their software for free as well, as CentOS. But they sell customer support on top of that. It WORKS.


RedHat doesn't sell consumer software.


Right. They don't sell software - they sell expert advice.

You sell software to uninformed users. I can see the difference.


Consumer software that requires significant 'support' would be bad consumer software.


Do you consider Windows consumer software or not? I see a pretty big need for support for that one. Just look at all the forums dedicated to solving its shortcomings.


Hmm. Should Rovio sell Angry Birds support contracts?

Windows occupies a rather unique market position of ubiquity and scale, and it's still debatable whether such a thing would be viable given the R&D costs in producing a modern desktop OS, mobile OS, programming languages and runtimes, Metro, etc.

I'm not a Windows expert, but it's pretty undeniable that Apple and MS pour bucket loads of cash into their respective platforms. Linux, on the desktop, has barely caught up to the last decade's state of the art, and in many places (such as graphics drivers), it relies on closed-source software.

Underneath it all is closed source hardware (like those graphics chipsets and proprietary processor cores). Nobody tends to complain about that, since spending millions on hardware development is out of reach. Making use of software source code is equally out of reach to nearly all users: thus, they just don't care.


>it's still debatable whether such a thing would be viable given the R&D costs in producing a modern desktop OS, mobile OS, programming languages and runtimes, Metro, etc.

Bullshit. The estimated cost to redevelop the Linux kernel in a proprietary environment exceeds 600 million USD, and has probably even reached the billion USD mark by now [1]. It's perfectly possible.

The only reason Windows is still so entrenched on the market are shady monopolist tactics, intentional lock-in practices and closed, shitty formats such as the Office pseudo-standard. In other words, the very things RMS warned about and which the free software community is fighting against.

>Linux, on the desktop, has barely caught up to the last decade's state of the art, and in many places (such as graphics drivers), it relies on closed-source software.

Your ramblings are so dishonest it's cringe-worthy. GNU/Linux is perfectly viable on the desktop. The greatest hurdle it faces is exactly the kind of FUD that you are spreading.

As for the drivers: yes, and this is a problem. The evil of proprietary hardware and closed specs is something that needs to go the way of the dodo, too, and it needs to do so fast. We could have drivers vastly exceeding anything proprietary if the specs for nVidia or ATI/AMD cards were accessible. For the record, the best graphics drivers available for GNU/Linux are for the Intel graphics chips, and they are perfectly free.

[1]: https://en.wikipedia.org/wiki/Linux_kernel#Estimated_cost_to...


You clearly have no idea what you're talking about if you're equating the full desktop technology stacks of Mac OS X, Windows, and Linux.

Best of luck in your quixotic quest.

You never did answer as to whether Rovio should sell Angry Birds support contracts.

Oh, and the problem isn't just ATI and nVidia. Your ARM and Intel cores are quite proprietary too.


> If my users don't want me to trample on their "Freedoms", they don't have to buy my software.

Would you please expose yourself so your customers can know who's trampling on their freedoms? Or will you remain hidden behind a nickname?


> Or will you remain hidden behind a nickname?

Yes, I will remain anonymous. What does it matter? Non-GPL software licensing is hardly a rarity, and if customers care, they can use 'Free' software.


Maybe he works for Google. Or are they magically excused like Apple is?


>a) "Tried and true" does not mean it will work forever. It's also bullshit to sell copies of non-scarce resources. Besides, you can sell Free software. Your argument is invalid.

While you technically can sell free software, you can't make much of a profit off of it in the current climate. Red Hat et al DO NOT make money off the software -- they make the money off incidentals like support and consulting. The software is a prerequisite, but if Linux went away, Red Hat could survive by switching its techs and consultants to whatever replaced it. People who make OSS do not make money off of their software directly.

This happens because Stallman believes that no creator should be granted a limited monopoly over the distribution of his product. This is a completely reasonable "freedom" to disavow; copyright has been around for hundreds of years and is explicitly authorized in the US Constitution. You can easily argue that copyright has gone out of control, but the GPL essentially mandates removal of the profit-bearing portions of copyright by legally releasing your privilege to control distribution (and thus become the sole supplier).

Perhaps this would work better in a world where copyright didn't exist at all; then there'd be no exclusive right and the numbers wouldn't skew so deeply negative as compared to proprietary options.

Personally I fully believe computer users have a right to a readable copy of the code they expect to execute on their machines. I simply do not believe that there is a moral imperative that software vendors allow anyone and everyone to redistribute that package. If a vendor chooses to do this, that is fine and good, but I don't believe it's immoral to try to make some money developing complex software by restricting licensees and utilizing copyright law to a reasonable extent (i.e., as a limited monopoly on distribution intended to promote useful progress in science and the arts).


>I simply do not believe that there is a moral imperative that software vendors allow anyone and everyone to redistribute that package.

The moral imperative here is not that they need to allow it - it's that they should not be able to forbid it. You can't force someone to share something, but neither can you stop others from doing so.


This is an argument against copyright. I think it's a rather extreme position. Copyright is useful as a concept even if it's gotten completely out of control.

The issue with the GPL is that it takes the most critical component of copyright away -- the ability to control the distribution of your work to any meaningful degree (e.g., by charging money for access to it). You can tinker with the copyright code to say that works transformed away from direct human readability (i.e. things that require a machine to be understood, like binary compiled code) are not copyrightable, requiring everyone to release source code, etc., but the premise of freedom 2 is neither self-evident nor inviolable. In the real world, people have to make money, and without some form of intellectual property law, any digitized work immediately has an infinite supply, which will always outstrip any level of demand no matter how significant, making it impossible to profit off of the digitized work directly.

I don't believe there's a moral requirement to allow everyone access to the fruits of your labor for free. If the software generates value for the end-user, it seems fair to expect some recompense for the work you've done. Why do you believe it is unfair to actually make money off the product you build? Do we see people giving grills away for free and attempting to make a living off of "selling support" for that grill?


I'm a fan of specifics.

Got any specific predictions of his that haven't panned out / have panned out wrong? Other than HURD / microkernel architecture.


Sorry but what authority do you have to speak for the HN readers and to say that "most of us do believe" this or that?


People who are ahead of their time always get shot down by the majority.


And oftentimes so do people who are just kooks. So getting shot down by the majority doesn't really tell us much.


So being ahead of their time AND having people try to shoot them down like they are kooks doesn't make them kooks. I see your point.


And it's nice to see the FSF expressing the issues thoughtfully here, rather than pulling insipid stunts like harassing Apple Store employees.

Maybe it's because this is from FSFE - the European counterpart.


well, it's not like anyone has made a real effort to stop it...


Today it's Secure Boot. Next, say, some governments make it illegal to buy a machine without a secure boot feature and forbid you to run any OS without a backdoor. Now, I'm sure your government will never do that.

I personally will never buy, as long as I can, as long as it is still legal, any machine with any "secure" feature. My machine is mine; it does not belong to the original manufacturer, not to Microsoft or to Redhat or to anyone else who believes that it is theirs. Even though I know that I live in a great democracy, I have nothing to hide, and my government is my good friend.


Couldn't all hardware that has this secure boot functionality simply include a physical switch that grants full access to the hardware? When the switch is on you can install any OS you like; when the switch is off, no rootkits can install themselves.

What do you think?


While not a physical switch, I believe for x86 / AMD64 computers, Microsoft is mandating that the user must have the option to disable secure boot via firmware switch (see http://blogs.msdn.com/b/b8/archive/2011/09/22/protecting-the... ).

The opposite is true of ARM, however.


> The opposite is true of ARM, however.

There isn't even a non-proprietary standardized ARM platform. You have things like OMAP, but that's defined by TI:

http://en.wikipedia.org/wiki/OMAP


Fair enough. Microsoft is demanding that no ARM-based platform running Win8 can have its locked-down boot mechanism disabled. In other words, Microsoft is demanding manufacturers make machines that can only run software Microsoft allows them to run.

Sounds like a really boneheaded move.


> Sounds like a really boneheaded move.

Or a market opportunity. Hard to say -- in many respects, the standardized PC is a fluke, created through clean-room reverse engineering and the resulting clone market.


Nicely put. Also, another huge factor in the standardized open PC was Microsoft licensing DOS to Compaq. That triggered the PC clones and the PC revolution, which Linus leveraged with Linux, and then Apple did a decade and a half later.


> Microsoft is mandating that the user must have the option to disable secure boot via firmware switch

They are not mandating it one way or the other. <sarcasm>In their infinite benevolence</sarcasm> they leave that at the discretion of x86/AMD64 machine makers. For now.

They do not for ARM machine makers that want to run Windows.


> They are not mandating it one way or the other. <sarcasm>In their infinite benevolence</sarcasm> they leave that at the discretion of x86/AMD64 machine makers. For now.

Wrong. The Windows 8 logo requirement specifically mandates that secure boot can be disabled by the end user, or (perhaps and...not sure on the correct conjunctive) that the end user can add new keys.


I agree that a compromise giving the best of both worlds would be the ideal outcome here.


I believe this is known as "the ChromeOS solution".


Low level vulnerabilities are indeed something that need to be addressed - such vulns are a real (and growing) threat and by their very nature are incredibly hard to deal with. It's a shame countermeasures are taking this manifestation, however.


Is there any actual evidence for this? Why are they any harder to deal with than cleaning existing malware? If anything they're easier to detect (just read the boot sectors on the drive and compare vs. a small list of known bootloaders). "Clean up" just requires installing a new bootloader and rebooting. Obviously the malware could try to take steps (load compromised storage drivers, say) to defeat this, but that's equally an option for traditional malware too (e.g. hack the filesystem to hide itself).

Where's the urgency here? What makes this so important?
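
For what it's worth, here's roughly what "read the boot sectors on the drive and compare" looks like, assuming you keep a known-good dump of your own boot code to compare against (440 bytes is the boot-code area of a classic MBR, before the disk signature and partition table; the file names are just examples):

    #include <stdio.h>
    #include <string.h>

    #define BOOT_CODE_LEN 440   /* MBR boot code, before disk signature and partition table */

    /* Read the first BOOT_CODE_LEN bytes of a file/device into buf; 0 on success. */
    static int read_boot_code(const char *path, unsigned char *buf)
    {
        FILE *f = fopen(path, "rb");
        if (!f) return -1;
        size_t n = fread(buf, 1, BOOT_CODE_LEN, f);
        fclose(f);
        return n == BOOT_CODE_LEN ? 0 : -1;
    }

    int main(int argc, char **argv)
    {
        unsigned char current[BOOT_CODE_LEN], reference[BOOT_CODE_LEN];

        if (argc != 3) {
            fprintf(stderr, "usage: %s <disk-or-image> <known-good-dump>\n", argv[0]);
            return 2;
        }
        if (read_boot_code(argv[1], current) || read_boot_code(argv[2], reference)) {
            fprintf(stderr, "read failed\n");
            return 2;
        }
        if (memcmp(current, reference, BOOT_CODE_LEN) != 0) {
            puts("WARNING: boot code differs from the known-good copy");
            return 1;
        }
        puts("boot code matches the known-good copy");
        return 0;
    }

Run it as something like checkmbr /dev/sda mbr-backup.bin (device and file names are illustrative). And as noted above, malware that already controls the storage drivers can simply lie to a program like this, so it only really helps when run from clean external media.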


Without Secure Boot, systems, even with full disk encryption, would be vulnerable to repeated evil maid attacks.

>If anything they're easier to detect (just read the boot sectors on the drive and compare vs. a small list of known bootloaders)

That is what Secure Boot essentially does.

edit: to expand: Boot sector malware is easy to detect from outside the system, but you need to be actively looking for it. How many users regularly check their boot sectors with an external OS?


So your counterexample is that secure boot can protect against hard drive encryption breaks. I guess that's fair. But it's not perfect protection (a compromised kernel will give up the keys too). Honestly, given the frequency with which we see kernel exploits in the wild I'd say it's at best incrementally better protection. It's also a hypothetical: are there active evil maid attacks in the wild?

And, of course, my point was that secure boot comes with very non-trivial costs (to freedom of use, primarily). I don't think it's worth it, and your example hasn't convinced me.

And I don't understand the edit. Of course no one checks their bootsectors, just as no one checks hashes of their system files. There's a whole industry of (vaguely useful) software to do this for them. How does a potential new vector change the concept of AV software?


It's only a hypothetical that secure boot is going to have any impact at all on the freedom to use x86 hardware.


It's also only a hypothetical that implementing an "Internet killswitch" or government-mandated ISP website blocking ("for the children" of course) will impact our freedom.

The idea is that with the infrastructure in place, we're ever closer to being impacted.


Sure. I was riffing on the other poster calling the malware prevented by a secure boot scheme a hypothetical.

It's certainly a hairy issue. The ability to run a verified software stack is a real benefit to users. Central control of that verification is bad for users, but a shadow of that central control is the obvious way to make it easy for users that don't know they should care.


In context, I called it a hypothetical because I'd asked for specific examples of boot-time malware that justify the rush to secure boot. This wasn't one.


My point is that you are anticipating problems with secure boot (there is no evidence that it will worsen the situation on general purpose hardware; there are certainly reasons to believe it might) and then dismissing some of the justification because it merely anticipates an attack.


The problem is rootkits booting even before your OS or antivirus can. That way they can run a hypervisor and boot the OS on it, or load a malicious filesystem driver that returns a benign boot sector if it's requested, and no application or antivirus running on that OS can even know that it's running on top of malware. Secure boot can stop this from happening.

Why is this so hard for everyone to understand?


Stop. I assure you I understand the issues, your patronization isn't appreciated.

Malware can defeat an installed AV already, it certainly doesn't need a hypervisor to do it. Likewise it can (and just did, c.f. the news last week) install a malicious driver without pre-OS hooks.

These are exploits that exist in valid, already-authenticated and running OS code. Secure boot can do nothing to stop it. So why is that so hard for you to understand?


> But it's not perfect protection (a compromised kernel will give up the keys too). Honestly, given the frequency with which we see kernel exploits in the wild I'd say it's at best incrementally better protection.

There is no such a thing as perfect protection. All security is incrementally better protection.


Which is exactly my point.


No, your point is –in your own words– that, since it’s at best incremental protection, it isn’t worth the non-trivial costs. Which is not a good point because, as I said and you agreed, all security is at best incrementally better protection. If that was really your point, we could as well just drop any protection, since none of it is perfect.

You may say the increment in protection is not enough to justify the costs, but that would be a different story.


You've completely lost me. That last sentence? That's what I meant.

The first bit is either a terrible mistake or a deliberate misreading. For the life of me I can't figure out how you think "it isn’t worth the non-trivial costs" (which, by the way, are not my words) and "not enough to justify the costs" mean different things.


You explained this far better than I could - only dedicated, tech-savvy folks would be able to do this.


I was thinking of malicious code in the BIOS firmware, but I'm sure there are other examples. It's hard for the average user to deal with such things - even the supposedly easy example you gave of comparing a drive's boot sectors to a known list of trusted boot loaders.


What other countermeasure would you suggest? If these countermeasures were OSS, would you complain?


I'm no expert - but shouldn't such countermeasures really be in the higher-level software side of things, anyways? Low-level exploitation could never happen in the first place if higher levels were secure enough to prevent it (another huge issue itself). The only other avenue of attack would be physical exploitation, and there's no real way to stop that (if it's in front of you, you own it).


No, you want to be as close to bare metal as you can - and as soon as possible. Dedicated hardware is even better, but you need to be the first one to gain access to it.


This doesn't make any sense. For example, Secure Boot can help protect against an evil maid attack against Full Disk Encryption. How would something "higher-level" be able to do this?

People should take a look at the rather secure Xbox 360 to look at how chain of trust is established from the hardware through boot stages to hypervisor and OS. Were it not for a tiny mistake and literal electrical glitching of the processor, the Xbox would remain an impressively secure platform. Hell, it's one regardless.

Secure Boot, or something like it, is necessary to protect your bootloader, and thus, everything else (See: Flame and friends).

Quite frankly, UEFI/Secure Boot, itself, is not that bad. People are upset that Microsoft gets to be pre-enrolled, but they seem to not point out that it's disableable, that people can sign their own bootloaders if they want, or better yet, can enroll their own keys and sign bootloaders themselves for ultimate security. Even if MS were working with higher authorities they wouldn't be able to sign a malicious bootloader if you enroll your own keys.

edit: Instead of downvoting, maybe you could explain how SecureBoot could possibly work without pre-enrolling MS keys, or how some other form of verified boot would work without pre-enrolling someone's keys. In fact, there's a whole consortium of very smart individuals who would like to know.


I think the issue most people take with it is the ability of the manufacturer to lock down what a user can do with the hardware they paid for. In my mind it's similar to Apple's hold on the iPhone hardware (whether or not that's still the case I don't know, I haven't kept up on it in a while) - but with general purpose computers.

Also, my mention of the issue stemming from the higher-level realm is founded in the idea that exploitation of a device's firmware can't take place without either A) physical access or B) exploitation of higher level software to work its way down. As I mentioned I'm no expert, so I don't know if there are other methods available to attackers. Please elaborate if there are.


I don't understand your point. If higher level software is exploited, they don't need physical access and not only are you screwed, without Secure Boot, you have no way of knowing you're screwed.

Again, Microsoft is not locking down what you can do. It's a feature of UEFI that is there to protect users. The ONLY way that Microsoft intersects with this is that they have the privilege of being pre-enrolled on computers, because let's face it, 99.999% of people buy computers with Windows on them and they expect it to run. Not that they have to boot into a special OS, enroll keys and install Windows themselves. (Again, see my earlier post about how they can disable Secure Boot or enroll their own keys)

With Secure Boot, your bootloader can't be compromised. Not even with physical access, not even with a higher level software escalation. That's kind of the point. I'm not sure how else to explain it. You basically named the only two ways to affect a computer, so I really don't know what you're getting at, I guess.

Downvotes for explanations, jeez. If your takeaway is that I don't believe in personal device freedom, you're either not understanding the issues at hand, or you've conveniently ignored the explanation I offer in this post.


You aren't getting downvoted for explaining, you're getting downvoted for the ridiculous assertion that we need to throw out basic Freedoms like the ability to do whatever the fuck with hardware we own (including running whatever the fuck we want on it) because there's a chance we might be compromised.

    Those who would give up essential liberty 
    to purchase a little temporary safety
    deserve neither liberty nor safety.


You can run whatever you want. If you don't care about security, go to the firmware settings and turn off secure boot. If you do care about security, go to the firmware settings and add your own key, and then sign the boot loader for your operating system with the corresponding key.


> go to the firmware settings and turn off secure boot.

Except that you cannot do that with any ARM-based machine.


It's almost like I addressed that.

1. This isn't about Microsoft having control. This is the only way to ship devices with Secure Boot enabled. What do you suggest exactly? That OEMs ship with Secure Boot enabled but without MS keys? Great. Everyone goes out, buys a new Windows laptop... and Windows doesn't boot.

2. You conveniently ignored everything about being able to disable it and enroll your own keys.

Your false ad hominem attack is insulting and wildly inaccurate. You'll note that I don't defend the use of Secure Boot on ARM where user-key-enrollment is forbidden. Not only is it insulting because it's blatantly ignoring half of my last post, it's also insulting because I've spent years campaigning against things like the Patriot Act with that quote and I'm well aware of the sentiment and enjoying freedom on my personal devices (as I tout my Galaxy Nexus with CM9 and unlocked bootloader).


The easiest way to handle this would be to enrol keys on initial OS boot. If the user wants to wipe the preload then they can do that.


I think that's a good idea. With a big "JUST PRESS ENTER" for the confused or unknowing folk.


> The ONLY way that Microsoft intersects with this is that they have the privilege of being pre-enrolled on computers, because let's face it, 99.999% of people buy computers with Windows on them and they expect it to run. Not that they have to boot into a special OS, enroll keys and install Windows themselves. (Again, see my earlier post about how they can disable Secure Boot or enroll their own keys)

99.999% of people buy computers with preinstalled Windows. If they are computer-literate enough to install Windows, they are literate enough to authorize a new Windows boot sector (the hash/fingerprint of which would be printed on new Windows media or a sticker).

I wouldn't have a problem with Microsoft's preinstalled key if I had reason to believe this will work well when you authorize other keys. But so far, no manufacturer cares that anything other than Windows works on their machine/bios, and I wouldn't be surprised if adding more secure boot keys is somehow broken (the way a lot of BIOS power-management/apic tables are broken and no one cares because they work well enough on Windows)

> With Secure Boot, your bootloader can't be compromised. Not even with physical access, not even with a higher level software escalation.

I have a bridge in Brooklyn that I'm willing to sell for a good price if you believe that.

Yes, it will be harder to do, but ...

PS3, XBox, XBox 360, Wii, iPhone {2G,3G,3GS,4,4S}, iPad {1,2} and many other devices all have secure boots. And all have, in the past, been rooted by software or a combination of software and minimal physical access. (Of these, I think only the XBox 360 required physical access after two months had passed since the first boot exploit was released.)

Theory, meet practice.


>99.999% of people buy computers with preinstalled Windows. If they are computer-literate enough to install Windows, they are literate enough to authorize a new Windows boot sector (the hash/fingerprint of which would be printed on new Windows media or a sticker). I wouldn't have a problem with Microsoft's preinstalled key if I had reason to believe this will work well when you authorize other keys.

I'm assuming that you didn't understand what I wrote. For pre-installed Windows to work, it has to have the Microsoft keys enrolled. That's why their keys come preinstalled. If they weren't preinstalled, Windows wouldn't boot. That was my point about the 99.99%. I'm not sure what you're getting at, I assume you didn't understand. (And no, pressing "Next", "next", "next" in an installer is not the same as enrolling private keys into a write-only area of your computer's BIOS).

>I have a bridge in Brooklyn that I'm willing to sell for a good price if you believe that.

Ok, I'll put you in the category of people that swore for years that Motorola's bootloader protection would be hacked. That was... 3 years now since they introduced that and nary a vulnerability found?

>PS3, XBox, XBox 360, Wii, iPhone {2G,3G,3GS,4,4S}, iPad {1,2} and many other devices all have secure boots. And all have, in the past, been rooted by software or a combination of software and minimal physical access.

Oh, you just don't know what you're talking about (or what Secure Boot is, one of the two). Only the Xbox 360 had boot verification in the style of Secure Boot, and it was never compromised. [1] The others did NOT use hardware-based bootloader verification. The only other mainstream usage of this style (that I'm aware of) is Motorola's signed bootloaders.

[1] While the Xbox was attacked via a hypervisor vulnerability, a timing attack (both of which were fixed remotely), and now through electroshocking the CPU, the verified boot itself was not compromised.


The point isn't that Microsoft also has its keys loaded.

But right now, nobody can ensure that any other keys will be able to be added, mostly because it is up to the hardware vendors to implement that, and Windows right now is the only one giving them an actual incentive, i.e. money.

Most people will agree that SecureBoot itself isn't evil, quite the contrary, that it is useful, and that it is useful to everyone. But right now, the minimum hardware vendors have to implement is "boot Windows with SecureBoot and be able to disable SecureBoot". The point is, how do we get others to be able to use SecureBoot just like Microsoft is allowed to from the very get go.

The problems in user freedom do not arise from SecureBoot as a technology; they arise from Microsoft being in from the get-go, giving incentives to hardware vendors to ensure that things work for Microsoft, and that's it. Unless a way can be found to also reliably sign other systems (Linux, BSD etc.), SecureBoot and Microsoft's position as the a priori trusted software vendor make for two classes of software: software working out of the box (=Windows) and software that doesn't (=everything else).

There is no incentive whatsoever for manufacturers to give people control over their computers, and that is the crux.


> There is no incentive whatsoever for manufacturers to give people control over their computers, and that is the crux.

The incentive is that the Microsoft hardware certification requirements demand that they do (point 17 of System.Fundamentals.Firmware.UEFISecureBoot). Whether that proves to be a good (or even enforced) incentive is hard to know until the hardware ships, but saying there's no incentive is inaccurate.


>There is no incentive whatsoever for manufacturers to give people control over their computers, and that is the crux.

I agree, and I think this is the interesting part of the discussion (not the vilify-Microsoft part). I guess I don't see any reason why they wouldn't. They could have disallowed users from reinstalling their OS, or forbidden non-HDD boot, in the past by forcing it in the BIOS.

It's hard to explain because this is another step where they will have to provide the ability but to me, they could have done something like this at any point in time (the OEMs, that is) and they didn't. Will they now? I guess that remains to be seen, but I see it as an issue almost separate from UEFI. Maybe the UEFI folks could have made a stronger recommendation and required licensing that included forced terms of user key enrollment? I certainly would be in favor of that in the interest of user freedom!


I think the important point is that Secure Boot flips the default.

We always expect manufacturers to "do nothing" if they can get away with it. Pre-Secure Boot, doing nothing meant you could install whatever OS you wanted (subject to other hardware limitations, of course). Post-Secure Boot it will mean that you probably can't (even if there's a mandated escape hatch, how well will it be tested? And so on).


> Ok, I'll put you in the category of people that swore for years that Motorola's bootloader protection would be hacked. That was... 3 years now since they introduced that and nary a vulnerability found?

Yes. The trajectory on companies getting secure bootloaders right points in a clear direction: they're on their way to getting it right!

This makes me unhappy, but unfortunately, it's taking longer and longer to jailbreak devices, with the exception of Apple devices. (Unfortunately, this seems to indicate a need to brush up on security on their part.)


Secure boot provides no way of knowing that you're screwed. It's not a measured boot. There's no independent confirmation that you're still using the same root of trust as you were before. If someone is able to compromise the key database in any way then they win.

Of course, the point is that this is only supposed to be possible if the attacker has access to your firmware. You can password protect the UI, but if they've got an SPI programmer and enough time to pull your machine apart you're still going to lose.

A fully measured boot has the root of trust in the hardware rather than the firmware, and that protects against most of the technical attacks. Someone can still stick a hardware keylogger in somewhere, but then no level of computer security is going to protect you against a camera stuck to your ceiling.
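
To make the "measured" part concrete: a measured boot doesn't block anything; each stage hashes the next into a TPM Platform Configuration Register before handing over control. PCRs can only be extended, never set directly, so the final value depends on everything that ran along the way. A toy sketch of the extend operation (real TPMs do this in hardware, and TPM 1.2 actually uses SHA-1; this just shows the shape of the math using OpenSSL):

    /* Toy illustration of a TPM-style PCR extend: new = SHA256(old || measurement).
       Build with: cc pcr.c -lcrypto */
    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    static void pcr_extend(unsigned char pcr[SHA256_DIGEST_LENGTH],
                           const unsigned char *component, size_t len)
    {
        unsigned char measurement[SHA256_DIGEST_LENGTH];
        unsigned char buf[2 * SHA256_DIGEST_LENGTH];

        SHA256(component, len, measurement);          /* hash the code about to run */
        memcpy(buf, pcr, SHA256_DIGEST_LENGTH);       /* old PCR value ...           */
        memcpy(buf + SHA256_DIGEST_LENGTH, measurement, SHA256_DIGEST_LENGTH);
        SHA256(buf, sizeof(buf), pcr);                /* ... extended with the hash  */
    }

    int main(void)
    {
        unsigned char pcr[SHA256_DIGEST_LENGTH] = {0};    /* PCRs start at all zeroes */

        pcr_extend(pcr, (const unsigned char *)"bootloader image", 16);
        pcr_extend(pcr, (const unsigned char *)"kernel image", 12);

        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            printf("%02x", pcr[i]);
        putchar('\n');    /* change any stage and this final value changes */
        return 0;
    }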


I see your point. I suppose in the end if the idea of all of these security measures is to prevent unauthorized access of your data, then this is another hurdle for an attacker to get over. But the way I see it, if someone has physical access to your machine, and you aren't using full disk encryption, this isn't going to stop them from doing just that.

Secure Boot goes a long way in the scenarios where your boot loader can be changed by a software vulnerability, or if you're using full disk encryption and an attacker actually needs to alter your boot loader to get at your data (i.e. evil maid).


Like I said, I'm only aware of two other major implementations of this style of security (Xbox 360 and Motorola's locked bootloaders) and the bootloaders themselves were never compromised, even with high/low-level OS access.

So you're right, they can simply wait until I've booted and then compromise Windows in some fashion. That will continue to be the case until that chain of trust is extended and enforced further and further.

The mere fact alone that this could have mitigated the stealthiness and harm of Flame makes it easy for me to conclude that it can enhance security for individuals.


> The mere fact alone that this could have mitigated the stealthiness and harm of Flame makes it easy for me to conclude that it can enhance security for individuals.

Did either Flame or Stuxnet (or Duqu or any other worm) override the boot process? I remember reading that Stuxnet used a kernel driver signed with a bona-fide certificate issued to some Asian hardware maker, and I assumed Flame did the same (and also used some other MD5 code-signing hack that exploited trust farther away in the chain).

How would Secure Boot have mitigated the stealthiness? You seem to be knowledgeable, so I assume you are right that Flame did use a boot-level exploit -- but what did it give Flame that a Stuxnet-like (bona fide) signed kernel driver couldn't?


I (feel like I) understand the basics of how Secure Boot is supposed to work at least, heh. My knowledge of Flame and it affecting the boot loader came from something I read either in a comment here (and trusted) or read in an article posted here. I can look for it. (I'll be completely honest, a quick Google doesn't seem to support this notion, so take the Flame side of things at your own volition.)

Additionally, my understanding is that the keys in UEFI's Secure Boot storage can also be applied against signed drivers, so assuming the keys are better than the MD5 collided Microsoft certificates, it would also help secure against malicious drivers. (Note, this post is more speculative, I don't know if Windows will use this driver-related feature or if I'm explaining it entirely accurately).

(And not to be repetitive, I apologize, but this would still help prevent evil maid attacks on full disk encryption.)


It's all down to implementation faults; e.g., the Wii "trucha" signing bug (http://wiibrew.org/wiki/Signing_bug) would never have existed had they used memcmp rather than strcmp.
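
To spell the flaw out: the check treated a raw hash as a C string, and str* comparisons stop at the first zero byte, so two hashes that merely share a leading 0x00 compare as "equal" - and brute-forcing content whose hash starts with 0x00 takes about 256 tries on average. A toy illustration (the hash bytes are made up):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Two different 20-byte "SHA-1 hashes" that both happen to start with 0x00
           (remaining bytes default to zero; values are made up). */
        unsigned char signed_hash[20]   = { 0x00, 0xde, 0xad, 0xbe, 0xef };
        unsigned char computed_hash[20] = { 0x00, 0x12, 0x34, 0x56, 0x78 };

        /* Broken check: strcmp stops at the first 0x00, so nothing is really compared. */
        if (strcmp((char *)signed_hash, (char *)computed_hash) == 0)
            puts("strcmp: signature accepted (oops)");

        /* Correct check: memcmp compares all 20 bytes no matter what they contain. */
        if (memcmp(signed_hash, computed_hash, sizeof signed_hash) != 0)
            puts("memcmp: signature rejected");

        return 0;
    }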

There's nothing magic about the bootloader check; they have to get it right, which they might -- but it's much harder than one would expect.

> Additionally, my understanding is that the keys in UEFI's Secure Boot storage can also be applied against signed drivers, so assuming the keys are better than the MD5 collided Microsoft certificates, it would also help secure against malicious drivers.

Not really. You just have to get an exploitable driver signed (possibly through official channels of some sort), and you are good to go. Whoever wrote stuxnet had the means to acquire a driver signing certificate; they probably had the means to get a driver certified even if Microsoft had to sign it.

Without a proper revocation setup, you aren't really better off, and a proper setup is much harder than one would expect, because a well-equipped attacker can DOS/DDOS the revocation list response.

> (And not to be repetitive, I apologize, but this would still help prevent evil maid attacks on full disk encryption.)

It would stop a specific class of evil maid attacks on full disk encryption, yes, and would make other attacks harder but not impossible (ASLR, W^X/NX and many other features were supposed to make buffer overflows unusable for attacks; they made them harder, but apparently not much harder and definitely not impossible).

I don't think it is worth the price in freedom that I suspect will come with it (and yes, I know you disagree; my distrust of PC hardware vendors is based on a lot of continuing frustrating experience, but they might surprise me in the end).


by trusting MS, surely RH can spare an HPC to sign the kernel every time...

http://blogs.technet.com/b/msrc/archive/2012/06/03/microsoft...

Not to mention that Mr. Sh. was in the random-number-selling and certification business in the first place. "I am what I am because of who we all are." -- VeriSign


This would seem to suggest that the days of dual booting are numbered. Either if ARM becomes the key player or if the policy on x86 changes.

I'm not sure that general purpose computers will die though, geeks and developers are a big enough market to support manufacturers creating systems that are more powerful and flexible.

Assuming that virtual machine software is not blocked, with good enough virtualization software and fast enough hardware (enough RAM especially), running a Linux VM on Windows 8 may be indistinguishable from running it directly on the chip.

If the OS becomes just a way to bootstrap a browser or VM I'm not sure how important it is anyway?


How important is the foundation of a house? After all, there is also a floor on the second storey of the building.

The entire point of the secured boot process is that you can only boot a signed kernel. And in the case of iOS today, the kernel will only load signed programs. (And a VM is against the App Store guidelines, as is a Python interpreter.) So there is no guarantee that you will be allowed to run a VM on your secure boot machine.


If every computer in the world is vulnerable to malware signed by a specific Microsoft key, how long do we think this key would remain secret? What will Microsoft do when (not if) that happens? Will they pay for the replacement of every PC and ARM device built to that point?

The benefit of breaking it and immediately gaining permanent undetectable access to every Windows-capable computer on the planet can offset a lot of cost.


If you want to make up stuff about Microsoft, at least make up something believable instead of straight up nonsense FUD.


With the correct signing keys, you could make every UEFI secure-boot-enabled machine in the world seamlessly run whatever you want them to. You could infect them with undetectable malware.

Now, imagine every computer in every office vulnerable to your malware because you have the signing keys used by Microsoft.

How much computing power would you dedicate to get those keys? How much money would you spend? A billion? Ten? That's the price of a single fully-loaded bomber these days. How can you be absolutely sure the keys are kept secure enough from someone willing to spend a fraction of their military budget to get what could amount to the ultimate cyberweapon?


>With the correct signing keys, you could make every UEFI secure-boot-enabled machine in the world seamlessly run whatever you want them to. You could infect them with undetectable malware.

>Now, imagine every computer in every office vulnerable to your malware because you have the signing keys used by Microsoft.

Even if that doomsday scenario comes into play, things would just go back to... the present.. where there is no locked bootloader.

So I really fail to understand your hype.


The article nicely ignores the fact that on non-ARM platforms Secure Boot will be disableable.


For some definition of "will", I guess. It's part of the windows logo requirements for x86. For now.

But what happens when someone screws up? I buy a laptop with a windows logo and try to install OpenBSD or whatnot, only to discover that the "disable secure boot" option is missing or broken. What's my recourse? Wait for a firmware update (which we know from experience will never arrive -- it boots fine in windows)? Return the laptop (which works perfectly within its warranted behavior)? Whine and look sad?

The incentives are all wrong for this to be a stable situation. Over time and platform changes, "disable secure boot" is going to rot. We all know it.


Also, we don't want secure boot as ARM moves into the more mainstream device space. We want Linux on ARM machines just as much as on x86. M$ is intentionally driving ARM boxes without the ability to change the OS to try to stop that inevitable growth.


Shop more carefully?


Well, there will be some incentives for the manufacturers who do NOT support SecureBoot: they will get a better share of the market than their current one.


Sadly, that isn't enough. For everyone who understands how UEFI limits their freedoms, there are a dozen people who will say "look! shiny!".

Idiots will always outnumber smart people. Microsoft relies on that.


Well, the 4% share (or more) of future Linux users who need a reliable Linux PC is not so small if you consider the potential number of clients.

Surely some will take advantage of this situation, if everyone else rushes to have SecureBoot.


Who cares? There are going to be millions if not billions of Secure Boot ARM devices manufactured. x86 doesn't make this not a problem.


There seems to be a bit of confusion over exactly what Microsoft is requiring, so I did some Googling. The requirements are here: http://msdn.microsoft.com/en-us/library/windows/hardware/jj1...

The relevant parts for this discussion are:

    17. Mandatory. On non-ARM systems, the platform MUST implement
    the ability for a physically present user to select between two
    Secure Boot modes in firmware setup: "Custom" and "Standard".
    Custom Mode allows for more flexibility as specified in the
    following:

        1. It shall be possible for a physically present user to use the
        Custom Mode firmware setup option to modify the contents of the
        Secure Boot signature databases and the PK. This may be
        implemented by simply providing the option to clear all Secure
        Boot databases (PK, KEK, db, dbx) which will put the system into
        setup mode.

        2. If the user ends up deleting the PK then, upon exiting the
        Custom Mode firmware setup, the system will be operating in
        Setup Mode with SecureBoot turned off.

        3. The firmware setup shall indicate if Secure Boot is turned
        on, and if it is operated in Standard or Custom Mode. The
        firmware setup must provide an option to return from Custom to
        Standard Mode which restores the factory defaults. On an ARM
        system, it is forbidden to enable Custom Mode. Only Standard
        Mode may be enabled.
PK is the "platform key". It is the key used by the firmware to identify the platform owner, and is used by the platform owner to enroll the "key exchange key" (KEK). The KEK is used for the firmware and the operating system to establish a secure channel. DB is the database of authorized signatures and certificates. DBX is the database of banned signatures and certificates.

UEFI has two modes, "User Mode" and "Setup Mode". It is in User Mode when there is a PK enrolled, and in that mode the secure boot policy is enforced. When in Setup Mode, no secure boot policy is enforced, and PK, KEK, DB, and DBX are writeable without any authentication required.

In other words, in Setup Mode, you can put your own set of signatures and certificates in. You should be able to even put Microsoft's certificates and signatures in DBX and thus prevent people from installing Windows on your box. (Or, more practically, put Microsoft's keys in DB along with yours, so you can dual boot).
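
To make the db/dbx relationship concrete, the decision the firmware makes for each image under Secure Boot is roughly the following (a sketch only, assuming hash-type entries; real firmware also checks certificates and Authenticode signatures):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define HASH_LEN 32   /* SHA-256 of the image, as stored in hash-type db/dbx entries */

    struct sig_db {
        const unsigned char (*hashes)[HASH_LEN];   /* list of allowed/forbidden hashes */
        size_t count;
    };

    static bool db_contains(const struct sig_db *db, const unsigned char hash[HASH_LEN])
    {
        for (size_t i = 0; i < db->count; i++)
            if (memcmp(db->hashes[i], hash, HASH_LEN) == 0)
                return true;
        return false;
    }

    /* dbx (the forbidden list) always wins over db (the authorized list). */
    static bool image_may_boot(const struct sig_db *db, const struct sig_db *dbx,
                               const unsigned char image_hash[HASH_LEN])
    {
        if (db_contains(dbx, image_hash))
            return false;                        /* explicitly revoked */
        return db_contains(db, image_hash);      /* must be explicitly authorized */
    }

    int main(void)
    {
        static const unsigned char authorized[1][HASH_LEN] = { { 0x11 } };  /* toy values */
        static const unsigned char revoked[1][HASH_LEN]    = { { 0x22 } };
        struct sig_db db  = { authorized, 1 };
        struct sig_db dbx = { revoked, 1 };

        unsigned char image_hash[HASH_LEN] = { 0x11 };   /* pretend hash of a bootloader */
        printf("image %s boot\n", image_may_boot(&db, &dbx, image_hash) ? "may" : "may not");
        return 0;
    }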

Also:

    18. Mandatory. Enable/Disable Secure Boot. On non-ARM systems,
    it is required to implement the ability to disable Secure Boot
    via firmware setup. A physically present user must be allowed to
    disable Secure Boot via firmware setup without possession of
    PKpriv. A Windows Server may also disable Secure Boot remotely
    using a strongly authenticated (preferably public-key based)
    out-of-band management connection, such as to a baseboard
    management controller or service processor. Programmatic
    disabling of Secure Boot either during Boot Services or after
    exiting EFI Boot Services MUST NOT be possible. Disabling Secure
    Boot must not be possible on ARM systems.


Matthew Garrett's posts on Secure Boot do a pretty good job of covering how all this works: http://mjg59.dreamwidth.org/12368.html



