Linux kernel lockdown, integrity, and confidentiality (mjg59.dreamwidth.org)
142 points by JNRowe on April 22, 2020 | 86 comments



Linux has been criticised[1] for not enabling security by default, and for allowing users to disable the security measures. This leads to users never setting minimal-rights policies, and leaving the doors open. When security is optional, policy issues in distributions also aren't fixed quickly.

To limit applications' rights to a minimum, SELinux, firewalls, and (systemd) sandboxes are all tools that could be used, but aren't in most installs. However, I think we are still lacking user-friendly interfaces (opensnitch is ported from MacOS).

One idea could be to let the desktop environment restrict one more capability of an application on every run, inform the user before running it, and ask the user afterwards whether the software worked correctly. That would gradually lead to a minimal set of software rights.

[1] https://www.youtube.com/watch?v=OXS8ljif9b8


> One idea could be to let the desktop environment restrict one more capability of an application on every run, inform the user before running it, and ask the user afterwards whether the software worked correctly.

That would only work reliably if one used every single feature of a program, every time one opened it.

I have tried to enable sandboxing for my services, but it is not easy to know which permissions can safely be restricted without negative effects.
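
For anyone else trying this: "systemd-analyze security <unit>" scores how exposed a service is, and you can tighten a unit step by step with a drop-in. A minimal sketch, assuming a hypothetical myservice.service (the directives themselves are standard systemd options):

  # /etc/systemd/system/myservice.service.d/harden.conf
  [Service]
  NoNewPrivileges=yes
  ProtectSystem=strict     # whole FS read-only for the service (except /dev, /proc, /sys)
  ProtectHome=yes
  PrivateTmp=yes
  PrivateDevices=yes
  SystemCallFilter=@system-service

  $ sudo systemctl daemon-reload && sudo systemctl restart myservice
  $ systemd-analyze security myservice   # exposure score drops as you tighten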


An interactive sandbox UX I like a lot is an app giving me a dialog with "Allow" and "Deny" plus an additional "Remember" button when it does something new (perhaps modulo some time window, as long as it's clear). In certain situations I'd like to be more granular than that, but it would be much better than what we have today. A good way to be more granular is an "Advanced" button that lets you drill down depending on the specific resource being accessed.

Today if I want to send someone a photo I took on <insert newfangled application that can send messages here> I need to grant it permissions to read all of my storage. That's dumb. The OS should delegate access in such a way that I can give the app access to a single file's contents AND be performant/not get in the way for accessing all files if that's what I prefer.

What should NOT be possible is an app asking "hey can I read '/*'?". It should attempt to access specific resources in the namespace, and the OS should be responsible for saying "should this app bug you again about reading your <namespace>?" This gives an only-slightly-more complex UX to users that don't know/care, and gives a lot of flexibility to those that do.

I'd even be fine if it was one-time opt-in like Android "developer mode" which is basically impossible to enable if you're not looking for it.


But how does this interact with Unix command line tools? Any "sandboxing" system that either A) makes command line usage inconvenient or B) completely ignores command line usage is going to create a rift.

Most of these "interactive sandbox UX" approaches basically create a "developer sandbox" where the command line tools can all play together but cannot access external data. And this is where things go downhill. Developers (or even users) DO want from time to time to write a script that accesses their contacts, gets the current GPS position and then does some munching with Perl for whatever obscure reason. Developers DO want from time to time to read whatever stuff the Netflix program is storing on their private storage (oh noes!), or see what the PDF reading program wants to send to the net.

And then you hit either A or B from above. If A, developer is annoyed and disables your sandboxing, and you are back to stage 1. If B, you are already at stage 1 and developer is annoyed seeing that random Perl scripts can apparently read your contacts list.

I find that any sandboxing approach that fails to actually think of command line usage is just falling in the trap of the "Android/iOS-centric world view". "Apps" may be glorified websites which are trivial to sandbox, but the more generic concept of "programs" is not. This is not only about command line scripts. Command line scripts interact with pipes. Programs, however, interact between themselves in ways we cannot even think of right now.

Which is why year after year you still see completely unsandboxed PCs being used for "productivity" despite tablets and anything else with the Android/iOS model.


The problem with sandboxing is it only works for server processes with very narrow behaviors - it's completely unable to express broad ideas.

My file browser should be able to see my whole system - that's what I want it to do. But I really don't want it to scoop up a list of files on my system, and send it wholesale to a network address I didn't type in specifically, after some specific actions.

AFAIK no security mechanism anyone currently proposes properly captures this sort of intent: there isn't a firewall which defines what can be done with the actual bytes of data an application has picked up in those terms - when they're in memory.

Of course this is a huge challenge: proving that my file browser doesn't have a way to, without gating through user interaction, transform my file list into any code path which can send it out via network traffic.

But it's what we desperately need.


> Today if I want to send someone a photo I took on <insert newfangled application that can send messages here> I need to grant it permissions to read all of my storage

Yes

> That's dumb

In that context, yes.

But:

> An interactive sandbox UX I like a lot is an app giving me a dialog "Allow", "Deny" with an additional "Remember" button

Every time you'd want to access any file (e.g. upload a saved photo to Instagram), you'd get a permissions prompt, and everybody clicks "allow all" after the second time.

The workaround is to have the app ask the trusted OS module to open the file selector; the user then selects the file, and the app only gets access to that one file. This works for Instagram, for example.

...but! This doesn't work for file browser apps, file synchronisation apps, backup apps, etc. It also doesn't fully work for Instagram, which offers to upload the last photo taken when you select "photo" (instead of "camera"): without permission to load it, that becomes an additional step just to select the photo. And if you want Instagram photos saved to your device storage, Instagram either needs permission to write to some folder, or it has to ask the OS (and then the user) where to save the photo every time.

If you want security, features and user friendliness, you're basically fscked.


As an example, Flatpak has "portals" for e.g. file access, which mirror how this works on mobile OSes. The app asks the environment to let the user pick a file, then the file gets mapped into its sandbox.
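
A sketch of poking at that from the CLI (the app ID here is hypothetical): you can inspect an app's static grants and drop its blanket filesystem access, forcing it through the file-chooser portal:

  $ flatpak info --show-permissions org.example.Chat
  $ flatpak override --user --nofilesystem=host org.example.Chat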


This suggestion shows little understanding of the difference between Linux the kernel and Linux the OS distribution. It is the role of distributions to implement security defaults. The idea of asking the user is naive - most Linux software never interacts with a GUI or console user.


I think that is why you should choose your Linux flavour well? Perhaps the argument is that developers (especially ones that deploy to production) should choose theirs more carefully?

I think being able to change anything and everything is a philosophy for programming in general and that it shouldn't be played off against security. I do think that for mission critical deployments you should have custom security (I think the Eurofighter Typhoon uses a microkernel not unlike MINIX) but I am in two minds about whether systems are insecure because of the default software or because of the lack of interest in security by people who are trying to pay their bills.


Opensnitch is not a port from macOS. It is a GUI layer-7 firewall, like good ol' ZoneAlarm or -indeed- Little Snitch. Code-wise, it has nothing to do with Little Snitch. The development is also stalled (there's an active fork though).

Does OpenBSD's pledge require or have user-friendly GUIs? Are there good GUIs for PF?


Link to fork, for those that were confused like me:

https://github.com/gustavo-iniguez-goya/opensnitch


I think the reason security on Linux is so underused is because most of the software desktop Linux users run is very well behaved (compared to popular software on most consumer OSes.) I'd be willing to bet the majority of users aren't running anything closed source outside of a VM (and most that are probably only have one or two apps.) Closed desktop Linux apps are pretty unpleasant even when they run well.

So there's little need for tools like SELinux and add to that how unpleasant they are to use and that's why no one uses them.


> I'd be willing to bet the majority of users aren't running anything closed source outside of a VM

That smacks of anecdote; as a counter-anecdote, I run plenty of closed source stuff on Linux and have no complaints.


The key word is "should", but it's too hard sometimes (like sandboxing anything with a UI).


That key word appears nowhere in the quote.


It's not at all hard to sandbox things with a UI on Linux. There are a dozen ways.

Sandboxing something that needs performant OpenGL is a little harder.
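
Firejail is one of those dozen ways; a quick sketch (profiles vary by distro, so treat the flags as illustrative):

  $ firejail --private --net=none firefox   # throwaway home dir, no network
  $ firejail --x11=xephyr firefox           # isolate X11 in a nested server (needs Xephyr)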


This lockdown made it a huge PITA to run WireGuard in Fedora[1]. It broke existing installs when the kernel upgraded, and then it became very difficult to insmod or modprobe the WireGuard kernel module unless you blacklisted all new kernels (not safe). I hope this matures in a way that doesn't destroy some of the reasons I love Linux (that I can hack on my system and do cool stuff).

[1] https://unix.stackexchange.com/q/543576/34855


You can sign modules using these instructions: https://docs.fedoraproject.org/en-US/fedora/f31/system-admin...

Though it's definitely a faff, and it's required every time there's a kernel upgrade.
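
For reference, the gist of those instructions (file names arbitrary; the sign-file path varies by distro):

  $ openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
      -subj "/CN=Local module signing/" \
      -keyout MOK.priv -outform DER -out MOK.der
  $ sudo mokutil --import MOK.der      # set a one-time enrollment password
  $ reboot                             # enroll the key in the MOK manager
  $ sudo /usr/src/kernels/$(uname -r)/scripts/sign-file \
      sha256 MOK.priv MOK.der wireguard.ko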


WireGuard is merged into mainline and should appear in Fedora 32, which ships kernel 5.6 and will be released in a few weeks.

The underlying problem is that many distros just use a throwaway pubkey, which makes it impossible for users to sign their own modules. Maybe a better security model is needed.


yeah for this reason I've been super excited for kernel 5.6 (I don't usually get excited for new kernel releases cause it means work for me, but this one is different :party: )


Why is this considered a good thing?

The only "security" this improves, is of devices where the manufacturer has decided to lock you out!


> The only "security" this improves, is of devices where the manufacturer has decided to lock you out!

Aside from me wanting to be able to verify the integrity of the boot chain and running OS (on platforms like servers, notebooks, etc), this has little to do with manufacturers locking you out. Secure Boot is already in all machines, and if the manufacturer wants they can already lock you out. They don't need a bunch of code in the Linux kernel for that.


From the post:

> Even if root can't modify the on-disk kernel, root can just hot-patch the kernel and then make this persistent by dropping a binary that repeats the process on system boot.

Lockdown is intended as a mechanism to avoid that, by providing an optional policy that closes off interfaces that allow root to modify the kernel.

> Don't use confidentiality mode in general purpose distributions. The attacks it protects against are mostly against special-purpose use cases, and they can enable it themselves.


Why doesn't it improve the security of devices where you decide to lock yourself out to protect against faulty applications?

Don't you want your hosting provider to lock you and your neighbors out from the kernel running your processes, even if one of you manages to attack a privileged process?


Your second line is the answer to the first.

Look at who works on the Linux kernel, and their affiliation, and you will see why things are the way they are.


We are drowning in cynicism these days. Let's make an effort not to add more.


It all depends on the hardware vendor. A lot of modern Android smartphones have encrypted bootloaders, for example, so nothing prevents vendors from locking down your device right now.


Not exactly. That's why those same vendors work so hard at disabling root access, for exactly the reasons outlined by the article; root can wreck it, or maybe read out keys, or a million other little things, because ALL of it is predicated on no one being able to just arbitrarily read/write everything on the system like root/uid 0 can.


It’s a good thing when you have control over whether it’s enabled, and how.


Apparently some people miss UAC when they use Linux.


I think Fedora Silverblue takes a very different approach here - https://docs.fedoraproject.org/en-US/fedora-silverblue/

It basically doesn't try to fix the building blocks with all their legacy...instead it simply makes the entire operating system immutable.

I wonder what the pros and cons of each approach are. Or are they complementary?


Complementary. Silverblue’s immutability may be aimed purely at stability and therefore may not have strong guarantees, but if it did then the next attack vector would be from root to kernel (in memory), and that’s where lockdown comes in; it’s designed to prevent attacking a running kernel as root.


I find all the attempts at "curtailing down the powers of root" to be the modern equivalent of https://xkcd.com/1200/ .

At least with the "user vs admin" distinction, I could argue that it is useful because my backups could be stored in a separate account with restricted access, so ransomware running at the "user level" would not be able to touch them.

While if my "admin" account gets compromised then all bets are off as the backups could be wiped out (even if they are offline, they could be wiped out the next time I connect the storage device).

However, I have a much harder time finding a user justification for a separate "root" vs "manufacturer root", unless you happen to be a manufacturer with questionable motives. If root is compromised, the attack surface becomes absolutely huge, and as a user it doesn't seem very useful to know that at least the kernel and bootloader are not compromised when everything else is dubious.

Sure, now I can trust that the builtin "restore to factory" functionality on the device works (and even that may not be true). But unless I use it frequently (and who does?), the malware with root access would still be able to destroy all of my files anyway, compromise my backups, etc.


This is pretty much it. Let's frame the proposition differently to get people to consider it from a new perspective.

I can verify the OS install media against trusted publishers using signing keys and PKI. I can't do a god damn thing about the cheap, back-doored PCI controller from China.

What protection does secure boot really offer the end user at that point? The PCI controller can just pass the right signatures to secure boot, or just wait until after the secure boot checks, so it's not helping with bad hardware. I already verified the OS media at install, so it's not super useful there either. Did my boot code change? How would I know? Did the bad PCI controller fake it? Do I have any additional trust in my system? I can't go probing the system to try to find out.

A black box you have zero control over told you you were safe, and there is no way to look at or modify the system now, so you can trust it. Your hardware was never on a TAO workbench. Who doesn't feel safer?


> I can't do a god damn thing about the cheap, back-doored PCI controller from China.

Of course you can - that's what IOMMUs are for.


Surprised no one has yet commented on eBPF, and the impact this has. eBPF already lets users run code in the kernel, and this is a security win in many cases - you can audit your system better.
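
(Concretely, bpftool makes that auditing easy - a sketch, with a made-up program id:)

  $ sudo bpftool prog list                # every eBPF program currently loaded
  $ sudo bpftool prog dump xlated id 42   # inspect one program's bytecode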

On a server, I think I'd rather assume the attacker has root (or even kernel!) but have good auditing, and do specific service sandboxing, than assume that I can separate root from kernel.

My understanding is that confidentiality mode breaks eBPF. I'm unsure about integrity - if integrity works with it, great. Then I'd wonder where you'd want confidentiality - maybe a box that's handling CC info / payment processing?

I have mixed feelings overall.


Integrity doesn't restrict eBPF. Confidentiality is for cases where you're doing stuff like using EVM to prevent offline attacks, which involves the kernel holding a key and using it to sign all files. This can be circumvented if you're able to just scrape the secret out of the kernel.


Cool, that sounds reasonable then. Good stuff.


> Various interfaces make it straightforward for root to modify kernel code (such as loading modules or using /dev/mem), while others make it less straightforward (being able to load new ACPI tables that can cause the ACPI interpreter to overwrite the kernel, for instance).

What exactly is the "ACPI interpreter" referenced here? I am familiar with ACPI, but this is the first time I have heard this term.


ACPI contains code!

https://docs.microsoft.com/en-us/windows-hardware/drivers/br...

> ACPI defines an interpreted language (ACPI source language, or ASL) and an execution environment (ACPI virtual machine) for describing system devices and features, and their platform-specific controls, in an OS-agnostic way.

> ASL is used to define named objects in the ACPI namespace, and the Microsoft ASL compiler is used to produce ACPI machine language (AML) byte code for transmission to the operating system in the DSDT.

Linux drivers run this code via the acpi_evaluate_* functions.

https://lwn.net/Articles/367630/

Microsoft defined another interface on top of ACPI called WMI. Unfortunately it seems to be widely used.

https://docs.microsoft.com/en-us/windows-hardware/drivers/ke...

https://lwn.net/Articles/391230/

On Linux it's easy to extract and decompile the DSDT:

  $ cat /sys/firmware/acpi/tables/DSDT > dsdt.dat
  $ iasl -d dsdt.dat
https://wiki.archlinux.org/index.php/DSDT


ACPI is powered by a Turing-complete programming language. It compiles to ACPI Machine Language (AML), an assembly-like language executed on the ACPI virtual machine, a.k.a. the ACPI interpreter. Power management actions are actually the result of executing this code. The justification is that it enables a lot of flexibility to adopt ACPI on different types of hardware, without the limitation of being purely a configuration file. Also, the complicated power management code can be operated by the OS kernel without writing any specific driver or calling into the BIOS.

Naturally, this power has faced a lot of criticism. Technically, you can decide what your power button does based on the next digit of pi. This is why Mark Shuttleworth called ACPI a Trojan horse and Linus Torvalds called ACPI brain-damaged.

Broken power management is often the result of low code quality. In the Hackintosh community (and Linux to a lesser extent), decompiling the DSDT table and manually fixing all the compiler warnings and bugs in the code is a critical step to get proper power management running.
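
Continuing the extraction snippet upthread, the fixed table can be recompiled and prepended to the initrd so the kernel picks it up at boot - a sketch (paths illustrative), and note this override path is exactly the kind of interface lockdown's integrity mode closes:

  $ iasl dsdt.dsl                                # recompile the fixed source
  $ mkdir -p kernel/firmware/acpi && cp dsdt.aml kernel/firmware/acpi/
  $ find kernel | cpio -H newc --create > acpi_override
  $ cat acpi_override /boot/initrd.img > initrd.img.new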


ACPI tables contain byte code (AML), which needs to be interpreted by the kernel.


Since this is controlled via a kernel parameter, can root just unset the kernel parameter and reboot? Or are there systems where kernel parameters are also 'locked down' by some other mechanism?

(these are genuine questions - I don't know much about linux security and am seeking to understand).


Most distributions carry a patch to automatically enable this if you have a verified boot process. Otherwise, it's up to the admin to ensure that their verified boot process applies the appropriate policy (e.g., by ensuring that the bootloader appends the argument regardless of configuration).


Thanks!


I believe it's a set of kernel build flags and not a kernel parameter, in which case: not really. I initially said you could just replace the kernel with one built without the flags and reboot, but apparently the second paragraph says it prevents that, though I'm not sure how updating the kernel would work in that case.


There are build config options, but there's also a kernel parameter

> lockdown= [SECURITY] { integrity | confidentiality }

https://www.kernel.org/doc/Documentation/admin-guide/kernel-...
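
You can also check (and one-way raise, but never lower) the mode at runtime via securityfs - a sketch:

  $ cat /sys/kernel/security/lockdown
  [none] integrity confidentiality
  $ echo integrity | sudo tee /sys/kernel/security/lockdown   # irreversible until reboot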


You need GRUB's help to do this. There is a 'verify' module you can use that makes GRUB only load files that are signed with a given GPG key.

You build a GRUB efi binary that contains your key and only loads signed config files, initrds, and kernels and then sign that binary so that it can be loaded by UEFI.
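
A sketch of that flow, from memory of GRUB's "Using digital signatures" docs (the key id and paths are made up):

  $ gpg --export 0xDEADBEEF > grub.pub
  $ grub-mkstandalone -O x86_64-efi --pubkey grub.pub \
      -o grubx64.efi "boot/grub/grub.cfg=./grub.cfg"
  $ gpg --detach-sign /boot/vmlinuz-5.6.8   # likewise for the initrd and cfg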


I missed that part, so I'm honestly as puzzled as you are (I suppose the answer would be yes then).


This is laughably useless.

It's extremely unlikely they have patched all existing ways for root to change the kernel, and anyway, since the kernel is written in C, it almost surely has plenty of memory safety exploits.

Also, requiring a hypervisor is much simpler than doing this work, and actually has a reasonable chance of achieving the objective of not allowing arbitrary ring 0 code.

At any rate, all this work is mostly pointless. If you let people run arbitrary user space code, they can do almost anything with the hardware anyway (like erasing all disks, etc.). And if you lock that down, you need to force them to use a particular user space, in which case there is no need to also lock down the kernel, since without being able to run arbitrary user space code you can't interact with the kernel anyway.


How is requiring a hypervisor much simpler than this? I've seen no implementations that make strong integrity guarantees.


> Also requiring an hypervisor is much simpler

I guess you are using Qubes OS, aren't you?


Possibly a dumb question -- how can one perform kernel/firmware updates if root is "locked out"?


The integrity mode applies to the running kernel in RAM. You generally don't actually upgrade that; instead you replace the kernel image on disk (vmlinuz), which is still possible. On next boot, the new image will be loaded. But if you have trusted boot enabled, the new image will only be booted if it is appropriately signed.

Live patching still works if the updates are signed. The kernel can still do whatever - with integrity enabled it just refuses to do certain things, such as loading kernel modules or updates that aren't signed.


Great explanation, thanks!


Another option might be to allow a "rootless" mode in which direct access to the root account itself is impossible (but where there are still ways for non-root users to gain a subset of root capabilities).


This is a serious improvement.


Do people realize what they are doing here? I'm happy not touching my Windows kernel; stuff generally just works. Stuff never just works on Linux. The last time I tried to load a kernel module I had built for my Ubuntu system, it refused because, duh, it wasn't signed by some UEFI key I had never even seen. So now of course, that Ubuntu is fucking gone, because it makes it impossible to do things that on Linux are still a monthly requirement.

Not to mention the impact this stuff has on kernel and driver development. Even Windows lets you boot into a development mode; it might scream at you on the desktop, but it will allow you to modify your system as you wish. I wasted a few hours when that Ubuntu thing happened trying to figure out where that switch was, and didn't find it.

Right now, this should be strictly the domain of Google and Amazon that can actually have a trusted chain from bootloader to userland. Not enabled on any vanilla Ubuntu because hey, we detected UEFI!


If you're using a modern machine, then you probably are using Secure Boot. For better or worse, Secure Boot requires Microsoft[+] to sign your binaries and they have certain requirements which you cannot break or they will revoke the signature. One of these requirements is that you cannot have a signed binary load unsigned code into ring-0.

So (ignoring whether these features are useful or good) in order to be able to run Linux on modern hardware these types of features are necessary. And I'm sure you'd be just as annoyed if you couldn't run Ubuntu on a machine that was less than 6-8 years old.

> I wasted a few hours when that Ubuntu thing happened to figure out where that switch was and didn't find it.

It's a shame you didn't manage to find it, because it's pretty trivial to create your own signing keys and enroll them in the MOK. You can then use those to sign your kernels. You just need the sbsign package.
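
The kernel-signing half of that looks roughly like this (file names arbitrary; mokutil wants the DER form of the cert):

  $ sbsign --key MOK.priv --cert MOK.pem \
      --output /boot/vmlinuz.signed /boot/vmlinuz
  $ sbverify --cert MOK.pem /boot/vmlinuz.signed   # sanity check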

If you want to just turn it all off, it's even more trivial -- go into your BIOS and disable Secure Boot.

I would hope someone who wishes to do some kernel development would be able to overcome this fairly minor hurdle.

[+] Technically it's whoever owns the keys that the manufacturer has approved to run software on the machine. On basically all modern machines that list is just "Microsoft" but you can enroll your own keys or remove Microsoft's keys if you want.


I see arguments (like one of the replies to your comment here) that you can turn off Secure Boot - so simple, much wow.

Not all firmware allows you to turn off Secure Boot or enroll your own keys. You'll see plenty of this on bios-mods.com if you want to know what that looks like. It also really throws into sharp relief questions around things like device ownership.

Let me tell you about my experience with an Amazon DeepLens device (x86-64) that I’ve been trying to get stock Ubuntu installed on. The only keys on this device are Amazon ones. This means I cannot install any OS other than the one they supply (a modified Ubuntu 16.04 install). If I own the device, shouldn’t I be free to install my own OS? If I own the device, and have physical control of it, I should be able to bypass secureboot, period - but not always the case today.


> Not all firmware allow you to turn off secureboot or enroll your own keys.

Being able to disable Secure Boot and install your own keys is part of the Windows 8 and 10 advertising requirements, so manufacturers almost always allow it so they can get some money from Microsoft for advertising.

That doesn't mean it's always possible, but I would humbly suggest that we shouldn't purchase such devices so that companies who make those devices learn to stop doing that. The fact that Microsoft managed to pull this shit with Windows RT is disgraceful.

> It also really throws into sharp relief questions around things like device ownership.

I don't disagree at all, and I do think that it's something we need to be very mindful of. But Secure Boot does solve real security problems.

> If I own the device, shouldn’t I be free to install my own OS? If I own the device, and have physical control of it, I should be able to bypass secureboot, period - but not always the case today.

I completely agree. Amazon shouldn't be allowed to sell such devices. But that doesn't invalidate Secure Boot as a concept, nor is it the fault of Ubuntu or anyone other than Amazon.


I can see how Secure boot solves real security problems. And I am definitely not blaming Ubuntu here.

However, it’s unfortunate that the Secure Boot technology (or maybe this is a licensing thing) by default does not make prescriptions, and that we’re reliant on the device manufacturer’s good will to see it implemented correctly.


How could a technology itself make prescriptions about the ways that the manufacturer lets you configure it?


Through licensing and/or certification requirements. Large companies take compliance seriously.


> It also really throws into sharp relief questions around things like device ownership.

There's no question about it: it's not ownership if the user doesn't have the keys to the device. The purpose of this technology is to ensure users can't run unauthorized software. Whoever authorizes the software is the true owner of the machine.

There are legitimate applications for this. Whether it's empowering for the user or not depends on how it's implemented. If people can use their own keys to sign the software they trust, it's fine. If they can disable the security, it's fine.

It's a problem when software is authorized by corporations or governments. That means the users of the machine are merely guests who are allowed to use the hardware provided they follow the rules. This is the true purpose of this technology, regardless of any potential benefits for users. The multi-billion dollar copyright industry would love it if this was the default for all computers. It's the only way they can guarantee the artificial scarcity of copyrighted works in the 21st century. Governments would really like to regulate software as well: encryption is far too powerful, it has the potential to frustrate even intelligence agencies and they can't deal with the fact civilians have free access to it.


> it's pretty trivial to create your own signing keys and enroll them in the MOK

Aleksa, I know you are probably aware, but if keys can be added by a user then the mechanism is not really achieving verified boot. The keys need to be burned into a read only portion of memory if we hope to protect against evil maid style attacks. Unfortunately this conflicts with the "user freedom" side of the hardware. Wish there was a good solution for device owners and only device owners to own their keys.


You can password the MOK infrastructure.


> One of these requirements is that you cannot have a signed binary load unsigned code into ring-0.

I think the requirement is slightly looser - MS won't sign binaries which in turn allow arbitrary unsigned binaries to be loaded/run at ring-0 without any user interaction/confirmation.

The shim and PreLoader loaders used by most Linux distributions allow you to boot arbitrary ring-0 code by requiring user interaction first, and both have been signed by Microsoft. The distribution-specific versions of shim/PreLoader also usually allow booting any code signed by the distribution's own key without needing to enroll that in the MOK.

The process you need to go through, as you describe, is to use shim/PreLoader to first enroll the hashes/keys you want into the MOK variable, then after that's been done once it will all happen without interaction.


> So (ignoring whether these features are useful or good) in order to be able to run Linux on modern hardware these types of features are necessary.

TBH, I find the argument "we are locking down your choice here because of Microsoft pressure" to be ridiculous at best.

The correct arguments should be more in line of "you can turn this off", but these arguments do not answer the very valid question of "what if it is my hardware vendor --like Google -- that is forcing me to turn this on?".


Which Google devices force you to turn this on? (genuine question, I work at Google but not on any hardware product teams)


I don't know either, but they will use this sooner or later. And Google specifically: they are not the worst, since they tend to address these questions; but they are not the best either, since they tend to fall into the "my way or the highway" camp where you can "enable security and let Google control it" or "disable security", lacking the obvious middle option (e.g. the developer switch on Chromebooks).


If you remove the write-protection from the read-only flash on Chromebooks, you can replace the verification keys and re-enable security based on your own root of trust.


And then can you turn the write-protection back on with a physical switch?


On modern systems the write-protection is gated via the security chip, which responds to various key combinations. My understanding is that it's possible to re-enable the write protection after flashing and the machine will behave identically (other than that updates from Google will fail to apply due to the system no longer considering Google a trusted authority)


> these types of features are necessary

No, they're not necessary. As you mention yourself in a later paragraph, it can either be turned off or you can install custom keys.


They are necessary to get Microsoft to sign your keys, which is necessary to get Linux distributions to boot on modern hardware without getting the user to install custom keys (and that is a requirement because there is no standard way to configure UEFI Keys -- that's why the mokutils exist).

And I have to stress that they do actually solve real security problems and aren't of themselves a bad idea. The fact that you can inject unsigned code into the kernel as root is not a good thing.


I don't disagree that secure boot can be useful, but the distinction is critical. If you omit all those conditionals and shorten it into a necessity you're basically saying that subjecting yourself to microsoft is unavoidable. It's ceding the entire PC ecosystem to a single vendor, similar to vendor-locked android devices for example.


Unfortunately, it is a practical necessity if you want to have an operating system that the majority of the public can just pick up and use. Yes, I think it's utterly ridiculous that Microsoft acts as the primary signatory for most hardware by default, but pretending that isn't the case won't help distributions ship software.

I wish it wasn't the case, and it is crazy that we have given this power to Microsoft. But that is the current state of the world.


Microsoft is the only company that could have done it. They are the only entity that has teeth, through (as you mentioned above) advertising dollars, logo certification programs, etc. Only they had the infrastructure to get the ball rolling. The biggest kicker, the one thing only Microsoft, and I really mean only Microsoft, could do, was provide the infrastructure for revocation. Without their ability to strong-arm (for good or bad), the industry just would never have agreed.

When all it takes is a rogue USB drive and a power cycle to own a machine, it presents serious problems in high security environments.


> Stuff never just works on Linux

This is the worst argument ever because it carries the false pretense that stuff "just works" on Windows and macOS, which is blatantly false.

Just yesterday I had to use regedit on Windows to stop its auto update from overwriting my manually installed video drivers. The only way to stop windows from doing that is to stop it from updating _all_ drivers. How dumb is that? Which normal user can figure out that this is why their video performance suddenly goes into the gutter?


A normal user shouldn't have to install a driver manually


I can't remember when I needed to install a driver on Linux (e.g., GPU drivers come in with system updates).

I can remember when I needed to install a driver on Windows (any GeForce XpErIeNcErS???).


Read his comment… Windows auto update fucked things up. This problem would be even worse for users who never installed a driver manually, as they'd have no idea at all why things suddenly run worse.


If he didn't have to install drivers, because they're available / checked, then it wouldn't have messed things up.

On macOS, I have zero need to install drivers. On Windows, I'm not sure because I haven't used it in a while, but as far as I can remember, Windows Update has updated drivers supplied by vendors. And that actually works.

His comment is similar to: I compiled a kernel module and loaded it. Then I do my regular upgrade / dist-upgrade, and now I'm stuck with the vanilla drivers. The only way I can fix this is to alter the Makefiles and compile from scratch.

Oh look, one Google search gives the exact situation actually:

https://askubuntu.com/questions/492217/nvidia-driver-reset-a...


No, Windows installed a driver update that broke my graphics performance. When I manually installed a driver, Windows decided to override it _without_ telling me that it did so or giving me any choice at all.


> And now the last time I tried to load a kernel module I had built for my Ubuntu system, it refused because, duh, it wasn't signed by some UEFI key I had never even seen. So now of course, the Ubuntu is fucking gone.

There is no conspiracy here. The option you are looking for is in your BIOS/UEFI, not in the OS. You should have known better and read the fucking manual. Your machine comes with Secure Boot enabled by default; by the policy of Secure Boot, there are two possible outcomes.

1. If the OS doesn't support Secure Boot, you cannot boot the OS until Secure Boot is disabled (or you add your own signing key to BIOS/UEFI).

2. If the OS supports Secure Boot and is signed by a big firm, you can boot it by default, but by policy, it cannot execute any unsigned driver until Secure Boot is disabled (or you add your own signing key to BIOS/UEFI).

By supporting Secure Boot, Ubuntu has made OS installation work out-of-the-box; otherwise you cannot even install the system (if so, I bet you'd complain how Ubuntu is gone for you, and how free and open source software has a terrible user experience due to its absurd priority on the notion of freedom, does not even boot, and blablabla). TL;DR: Your rants are unwarranted. You can always disable Secure Boot; try disabling it first.

If the UEFI on your PC doesn't allow you to disable Secure Boot, you can write a warranted angry post then; the conspiracy would be real (there used to be some evil embedded/tablet PCs that did this). Otherwise it's not - on the vast majority of PCs you are free to disable Secure Boot, add your own signing key, and optionally even delete Microsoft's signing key to fully control Secure Boot at your own discretion.

> Even Windows you can just boot into a development mode and it might scream at you on the desktop, but it will allow you to modify your system as you wish.

This is plain falsehood. Again, read the manual from Microsoft: if Secure Boot is enabled, you cannot boot into a development mode - the BCD setting is disabled. Windows does not allow you to modify your system as you wish. You cannot run, develop, or debug an unsigned kernel-mode driver unless you disable Secure Boot [0]. Disclaimer: I run Linux/BSD exclusively and don't do any development on Windows; don't take my word for it - check my citations to Microsoft's documentation. It's for Windows 8 32-bit, which happened to be the first hit in my search, but I believe the policies are the same in Windows 10.

Finally, locking down a system is not only used for DRM. Maybe you don't want it on a PC, but it's often desirable in production systems. For example, on my server, I completely disabled kernel modules, hotpatching, /dev/mem, etc., so that no code can even be loaded into the kernel (other people do the opposite, it's a tradeoff between uptime and security). There may still be some exploits that make it possible - PaX/grsecurity has better countermeasures and blocks additional attack vectors - but it's not available to the general public anymore.

[0] https://docs.microsoft.com/en-us/windows/win32/w8cookbook/se...
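
(For what it's worth, the module-loading and kexec parts of that can be done with plain sysctls - a sketch; both knobs are one-way until reboot, while the /dev/mem restriction is a build-time option, CONFIG_STRICT_DEVMEM:)

  # /etc/sysctl.d/99-lockdown.conf
  kernel.modules_disabled = 1      # no module loading, even for root
  kernel.kexec_load_disabled = 1   # no kexec'ing into a new kernel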



