The researchers are keen to note things about this, but also likely want to avoid giving attackers "more ideas", which I feel limits the discussion. Plus, I highly doubt these attackers don't know everything we should be discussing.
This is obviously low-hanging fruit and a first PoC implementation. The fact that secure boot can "mitigate" some of this attack right now is mostly due to the attacker being lazy or deploying an unfinished product. The researchers describe this as "unless they install the attackers certificate", which is a nice way of saying that the attacker has not spent much time fishing through DKMS and abusing the keys used for this purpose.
There are a lot of systems that are affected by this type of attack because for various purposes they have to sign their own modules. The most common example of this (until extremely recently, sort of) is Nvidia.
>The fact that secure boot can "mitigate" some of this attack right now is mostly due to the attacker being lazy or deploying an unfinished product. The researchers describe this as "unless they install the attackers certificate", which is a nice way of saying that the attacker has not spent much time fishing through DKMS and abusing the keys used for this purpose.
Can you explain, or link to a source explaining this?
If you can add keys and sign things on the fly, secure boot doesn't matter: it only protects you from payloads further down the chain. If the layer above the one that cares about secure boot is compromised, it's useless. You're confused because it's sold as something different from this.
Seems to me that in an ideal world, you would only have to add the public key, and an attacker wouldn't be able to forge a signature without the private key...
The point of DKMS is to compile kernel modules on the same host where they'll be used, so it needs the private key to be accessible. And isn't DKMS a rather common thing on Linux, e.g., for Nvidia drivers and for VirtualBox?
On Arch most DKMS packages have a separate package that is compiled directly against the stable kernel (and some against the LTS kernel). IIRC none of them support loading with SB though, since the keys that are embedded in the kernel for other modules are discarded after the kernel build.
This is to say, it's not impossible for those to be signed by the distro; Arch just doesn't.
Check out "sbctl" by Foxboron, it's a UEFI key and signature manager [1] that's pretty nice.
But other than that I agree with you there, I wish that upstream kernel builds would be signed by the distro for secureboot usage. Maybe this should be part of the archlinux-keyring package?
But that relies on having the private key available locally, so it doesn't help with the scenario discussed here. Ideally, you'd want to sign the image on a different machine than the one booting it.
True, but that also requires some way of distributing the bootable binaries, e.g. a netboot image served over TFTP.
I usually store these keys on a LUKS encrypted flash drive. Not the best opsec, but at least good enough to prevent this kind of malware from spreading around. Can't update the kernel without the flash drive though :D
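For the curious, the workflow is roughly this (the device name, mount point, and key filenames below are placeholders; adjust for your setup):

```shell
# One-time: encrypt the flash drive that will hold the Secure Boot signing keys
cryptsetup luksFormat /dev/sdX

# At each kernel update: unlock, mount, sign, lock again
cryptsetup open /dev/sdX sbkeys
mount /dev/mapper/sbkeys /mnt/sbkeys
sbsign --key /mnt/sbkeys/db.key --cert /mnt/sbkeys/db.crt \
       --output /boot/vmlinuz-linux.signed /boot/vmlinuz-linux
umount /mnt/sbkeys
cryptsetup close sbkeys
```

The point is that the private key is only readable while the drive is plugged in and unlocked, so always-resident malware can't quietly sign itself.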
> I usually store these keys on a LUKS encrypted flash drive. Not the best opsec
Why would it not be the best opsec?
I replied to your other comment suggesting encrypting your local signing keys. I am not sure I would use a flash drive though; why not just use the local disk?
I haven't looked into the tooling much, but does it at least support pkcs11? That way you'd at least be able to store the key on a smart card or Yubikey.
Yes. Edit /etc/dkms/framework.conf, set mok_signing_key to something like "pkcs11:id=%01", and mok_certificate to point to a file containing the certificate. You can extract the certificate using e.g. "pkcs11-tool -r -y cert -d 01 > .../cert.der".
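Spelled out with long options (the token id and certificate path here are just examples; match them to your own token):

```shell
# /etc/dkms/framework.conf (illustrative values):
#   mok_signing_key="pkcs11:id=%01"
#   mok_certificate="/etc/dkms/mok.der"

# Export the certificate from the token so DKMS can reference it:
pkcs11-tool --read-object --type cert --id 01 --output-file /etc/dkms/mok.der
```

The idea being that the private key never leaves the token; the kernel's sign-file is handed a PKCS#11 URI and asks the token for signatures.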
I don't know. I actually asked myself this very thing while typing the above comment, but I'm too busy/lazy to look it up.
One issue I can see with this, though, is that if the malware is already present on your system and can run things, nothing would prevent it from hijacking the modules or the boot image before they're signed.
Unfortunately, using your own keys is a massive pain because it involves the command line. Nobody has made good user-friendly tooling for it yet. The systemd tooling has improved things a lot, but it's not yet in a place where it can be part of the normal install wizard.
It's kinda ridiculous reading the comments in here.
This is a persistence stage exploit mechanism, meaning in order to install it, privilege escalation happened before that and it already got root rights.
To the people here claiming "secureboot prevented that": no, it didn't. A simple call to sbctl to sign the rootkit is missing because, as on every Linux device, you have to have the signature keys available locally. Otherwise you can never update your kernel.
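To make that concrete: with sbctl's keys sitting in their default location on the local filesystem, the "missing call" is a one-liner (the payload path below is hypothetical):

```shell
# Hypothetical: what the bootkit would do on a box that signs its own boot chain.
# sbctl locates the locally stored signing keys on its own.
sbctl sign -s /boot/efi/EFI/BOOT/bootkit.efi
# -s also saves the file in sbctl's database, so it gets re-signed on future updates.
```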
That is the conceptual issue that cannot be fixed, and also not with TPM or whatever obscurity mechanism in between.
Linux needs to be a rootless system, meaning there needs to be a read-only partition that root can never write to. That would limit this kind of attack to physical access, or to the kernel ring at the very least. Technically, this was the intent of efivarfs, but look at where vendor firmware bugs got us with that.
> To the people here claiming "secureboot prevented that": no, it didn't. A simple call to sbctl to sign the rootkit is missing because, as on every Linux device, you have to have the signature keys available locally. Otherwise you can never update your kernel.
The majority of Linux machines out there are running vanilla, distribution-signed kernels. For most people, the only reason to build your own kernel (modules) is Nvidia.
> To the people here claiming "secureboot prevented that": no, it didn't. A simple call to sbctl to sign the rootkit is missing because, as on every Linux device, you have to have the signature keys available locally. Otherwise you can never update your kernel.
If, hypothetically, you were using a system without custom keys, e.g. with a third party kernel trusted via the Microsoft / Red Hat shim program, [1] wouldn't you be safe, so long as secure boot was enabled? The bootkit would not be able to sign itself with a trusted key since the private key would never exist on the system to begin with.
Obviously, I'm aware that this approach has other problems and has had vulnerabilities in the past.
You don't need to do your signing locally, it is possible to build your network around a build machine that does the signing for you. That being said, SecureBoot has always been security theater for anyone that isn't a major OS manufacturer or industry player. The fact is, as soon as cryptography comes into the picture the majority of the computing populace have already left the conversation.
If you roll your own keys, yes; and MS happens to lose some of theirs, too. But as I replied to the parent, storing them on a FIDO2 device or in an encrypted file would alleviate the issue. If not, please educate me.
What makes you think that? Secure Boot prevents this rootkit from running and is the recommended mitigation:
> Bootkitty is signed by a self-signed certificate, thus is not capable of running on systems with UEFI Secure Boot enabled unless the attackers certificates have been installed.
> To keep your Linux systems safe from such threats, make sure that UEFI Secure Boot is enabled
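FWIW, checking whether Secure Boot is actually enabled on a given machine takes one command (both of these read the EFI variables, so they need a UEFI system):

```shell
mokutil --sb-state                        # prints "SecureBoot enabled" or "SecureBoot disabled"
bootctl status | grep -i "secure boot"    # systemd-based alternative
```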
In fairness, the blog post confusingly says this in the next bullet point:
> Bootkitty is designed to boot the Linux kernel seamlessly, whether UEFI Secure Boot is enabled or not, as it patches, in memory, the necessary functions responsible for integrity verification before GRUB is executed.
However, this would still require Bootkitty to have gained execution already, which it couldn't have if Secure Boot were enabled and the malicious actor's certificates weren't installed.
Secure boot prevents this proof of concept but it doesn't prevent all UEFI boot kits and this particular kit will likely evolve.
On Windows: it took several years until the first two real UEFI bootkits were discovered in the wild (ESPecter, 2021, ESET; FinSpy bootkit, 2021, Kaspersky), and it took two more years until the infamous BlackLotus – the first UEFI bootkit capable of bypassing UEFI Secure Boot on up-to-date systems – appeared (2023, ESET).
It was just a way for Microsoft's partners to limit the ease with which one can install alternative OSes. Try explaining to your mother how to disable SecureBoot to install Ubuntu. It used to be a single sentence - pop the CD in and follow the instructions, but Microsoft couldn't have that. As is always the case with Microsoft, security is never the goal unless they gain a competitive advantage or make it harder for their customers to move away in the process.
"It was just to keep people from installing something other than Windows" seems very counter-indicated by it taking ~7 years for a Windows UEFI bootkit to come out, and 13 years for one for Linux.
...and this bootkit is not able to work if Secure Boot is set up.
UEFI is also a godsend in terms of fixing a lot of the legacy BIOS crap
And my bloody computer is potentially trying to make god-blessed network calls before the OS has even loaded, and before my machine even provides the bare minimum human interface, you want me to navigate cryptography?
The trusted computing initiative was a disaster to the learnability of the computing field.
Devs are users too. Especially the unskilled/ignorant ones.
Most distros will run just fine without disabling secure boot. I don't think the *BSDs are supported by the shim loader yet, but even Gentoo boots with secure boot enabled, without loading any user keys.
Because it can lock the door behind itself in an opaque hardware-dependent layer users have no control over.
If I were to design security from the ground up, it would be a small external SD card for firmware and kernel (with a hardware R/W toggle), and optionally an external SD card adapter that verifies the hash of the content.
Everything else is as dumb as bricks and gets its firmware loaded from the SD card.
We didn't do that because secure boot was solving the problem of large orgs with remote administration in mind, and designed by orgs happy to sell yearly advanced cybersecurity protection shield plus certification subscriptions.
Designing for remote administration by an IT department will.. increase the attack surface for attackers to remote administrate my device.
You only need to disable it until you've got that OS installed, and then you can re-enable it. All the major Linux distros have supported Secure Boot for years (which I was not aware of, and will now look into setting up!)
Is the implication that anything that is more complicated is necessarily less secure? Because I think that turns security on its head. A deadbolt is more complicated than a door with no lock.
We can argue about whether there is sufficient user demand and benefit to make secure boot easier for lay people. But that is completely orthogonal to whether it increases or decreases security of the system.
And identity: most of the world has now replaced your credit card and government ID with apps that rely on the OS's assurances to prove you're yourself, with vendor keys, mandatory selfies, and such.
I agree the move to UEFI added a huge new attack surface and that most UEFI implementations (notably, even the open source ones) are teeming with horrible bugs.
And yes, then linking the trust architecture for Secure Boot so deeply with UEFI means that UEFI bugs are also Secure Boot bugs.
But to say this is less secure? No way. Traditional BIOS-based MBR backdoors are like 1980s oldschool classic stuff. Most adversaries would require a good degree of development work to backdoor / root kit a PC they were given with Linux, Secure Boot, and an encrypted filesystem. With a BIOS based PC there would literally be nothing to do.
I think UEFI has many problems. However, you should not confuse separate (but related) issues with each other. If the initial booting functions can be altered by the operating system, that is a different issue (which UEFI perhaps makes more severe). An internal hardware switch to disable this function would be helpful, and possibly a software function that the BIOS disables once the system starts (so it can only be altered via the BIOS settings menu, or by a BASIC or Forth in ROM, or something like that). Functions being restricted by internal hardware switches would improve security, especially if the initial booting functions are also made less complicated; if you are paranoid you could also use glitter or whatever to detect hardware tampering.
> An internal hardware switch to disable this function would be helpful
For desktops and mobos, maybe. Gonna be hard to make that work for laptops and phones.
But generally I'm in agreement. By the time I'm booting into and using the system the BIOS is no longer a discussion point; if I need to update it then I need to shut it down and get under the hood.
UEFI itself is way too complex, has way too much surface (I'm surprised this didn't abuse some poorly written SMI handler), and provides too little value to exist. Secure boot then goes on to treat that place as a root of trust, which is a security architecture mistake, but it works OK in this case. This all could be a lot better.
Hello everyone, I am the developer of BootKitty. I am studying IT in Korea and I am making the bootkit as a private project in BOB, a security training program. If you find it hard to believe that I am the developer, I can prove it. If you have any questions about BootKitty, please ask me :)
Nothing. This is just a proof of concept that is ridiculously easy to detect. If your attackers can drop files in your /boot or /boot/efi directory, I think you have much worse things to worry about than this.
In fact, this bootkit would be about the last thing I would worry about, because by the time an attacker can write to /boot, they can also write to /etc/init.d. And the latter is not protected by "secure boot".
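And detecting tampering there is routine, e.g. by verifying /boot against package-manager metadata, or against an offline hash baseline (package names and paths below are illustrative):

```shell
# Debian/Ubuntu: check installed files, including /boot, against package checksums
dpkg --verify

# RPM-based equivalent (Fedora-style package names):
rpm -V kernel-core grub2-efi-x64

# Or keep an offline baseline of the ESP, taken from a known-good state:
sha256sum /boot/efi/EFI/*/*.efi > /media/usb/esp.sha256
# ...and later, from trusted media:
sha256sum --check /media/usb/esp.sha256
```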
How is an infection hidden somewhere in the friggin' entire rootfs easier to detect and remove than one that literally replaces the one file for your kernel in /boot? What advantage could the latter possibly have? Not to mention that a bootkit bootstrapping an infection in the root filesystem is the realm of useless tech demos like this one, while for something that can already write to your rootfs, infecting the kernel is trivial.
The entire boot chain has far fewer places for malware to hide compared to the entire "rootkit" OS attack surface, which is astronomically larger. Secure Boot has always targeted the smallest and most useless of the swiss-cheese holes.
No, I wasn't "asked" by the browser publisher to trust them unless you use the word "ask" in a very broad (almost to the point of meaninglessness) sense: when I installed my browser, it simply started using its pre-packaged bundle of CA certificates. Which it regularly updates, I imagine, although it also never asked me about what the update source I'd like to use either.
You can say that I implicitly trust the browser vendor's judgement in what CAs to trust, by the virtue of using the browser, and I'd agree with that. But saying that I was asked by the browser publisher to trust them? No, I disagree, I wasn't. It was a silent decision.