If we just had flash that was write protected unless a jumper was moved, this kind of thing just would not happen.
"Oh, but the average user is incapable of that" and "Corporate IT are unwilling to do that" are the two biggest excuses I hear to not make it so.
Here's a crazy idea: you connect said jumper to a front panel key switch. The 'average' consumer could probably sort that out. And corporate IT could just leave it enabled, or require you to turn the key for them, or have IT interns on site that run around with the keys or whatever.
A Key. A literal physical key. But I guess that's too low-tech for the crypto-obsessed purveyors of TPMs and the like.
> If we just had flash that was write protected unless a jumper was moved, this kind of thing just would not happen.
This would make it incredibly hard to manage a bunch of servers in various remote data centers.
> connect said jumper to a front panel key switch
Ah yes, I do miss the good old days of front-panel key switches, where rubbing your feet and touching the front would reboot your system. I've never seen a smartphone or ultrabook with one, though.
Parameters could be stored in separate memory from the operating system that interprets those parameters. This is somewhat similar to what Apple does with the secure enclave. That way, wearout parameters and non-risky user tunables would still be customizable without risk of compromise. Heck, even if that had to be implemented with an entirely separate hardware chip for parameter storage on the motherboard, the added cost would be negligible.
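As a rough illustration of that split (purely a sketch; the region addresses, `fw_params` layout, and helper functions are hypothetical, not any real vendor's firmware), the parameter sector could stay writable at runtime while the code sector stays behind the jumper:

```c
/* Sketch only: a dedicated flash sector for tunable parameters, kept apart
 * from the executable firmware. All names and addresses are illustrative. */
#include <stdbool.h>
#include <stdint.h>

#define PARAM_REGION_BASE  0x000FF000u   /* small, dedicated parameter sector */
#define PARAM_REGION_SIZE  0x1000u

struct fw_params {
    uint32_t magic;          /* sanity marker for a valid parameter block */
    uint16_t fan_curve[8];   /* user tunable, data only, never executed */
    uint16_t wear_count;     /* wear-leveling bookkeeping */
    uint32_t crc32;          /* integrity check over the fields above */
};

/* Provided by the platform; assumed here, not a real API. */
extern bool jumper_present(void);                         /* write-enable jumper state */
extern bool flash_erase(uint32_t base, uint32_t size);
extern bool flash_write(uint32_t base, const void *src, uint32_t len);

/* Code-region writes require the physical jumper... */
bool code_region_writable(void)
{
    return jumper_present();
}

/* ...but parameter writes touch only the parameter sector, so user tunables
 * and wear-out counters stay adjustable without exposing the code region. */
bool param_store_write(const struct fw_params *p)
{
    if (sizeof *p > PARAM_REGION_SIZE)
        return false;
    return flash_erase(PARAM_REGION_BASE, PARAM_REGION_SIZE) &&
           flash_write(PARAM_REGION_BASE, p, (uint32_t)sizeof *p);
}
```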
I don't understand this either. A physical presence requirement eliminates all remote threats. The vast majority of these attacks occur from the safety of some far away adversarial country. They'd have to have agents on the ground carrying out these attacks and getting caught.
Point is, if you Don't turn the key, you sleep well at night knowing it's Not Being Written.
Contrary to the apparent belief of the developers of New Products, a good many of us are sick to death of everything "as a service", and all the headaches that ensue.
I am sure one of these "New Products" developers will come here and share the story of the Evil Hotel Maid that comes to your room and toggles the physical switch on your computing products.
Anybody that peddles you absolute safety in IT security is an idiot who should be swiftly ignored. If they work for you, they should be fired and all their work thoroughly reviewed.
The flip side of that argument is just as true: anybody that denigrates a security measure because they can come up with a single convoluted compromise is an idiot. I'm hoping that's not you; it sounds like you're saying that some hypothetical strawman developer would say such a thing (a particularly plausible strawman, which is a sad state of affairs!) – but that's the defense if you meet such dangerous idiots.
With this proposal, there's a key involved, which is some extremely limited defense against evil maids. If it's truly a key as in 'keys in doors', you can pick it, trivially (see Lockpicking Lawyer's video stream for how trivial it is to pick or otherwise circumvent locks).
A well thought through system can fix many of these warts - in this case all you need is an unresettable flag that shows: Compromised! Designing a lock that can detect but not defend against picking is a lot easier than making an unpickable lock. But I'm sure LPL or somebody else with skills, experience, lots of time, and a budget to buy a bunch and open em up to look at how they work can come up with a scheme to pick em open without triggering the flag.
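For what it's worth, that kind of sticky "compromised" flag is cheap to model. A minimal sketch, assuming a hypothetical write-1-to-set latch register that hardware refuses to clear from software:

```c
/* Sketch of a tamper-evident latch: software (or the lock sensor) can set
 * the bit, but nothing short of the physical key / blowing a fuse can ever
 * clear it. The register address and bit are made up for illustration. */
#include <stdbool.h>
#include <stdint.h>

#define TAMPER_LATCH_REG  (*(volatile uint32_t *)0x40001000u)
#define TAMPER_SET_BIT    (1u << 0)

void tamper_latch_set(void)
{
    TAMPER_LATCH_REG = TAMPER_SET_BIT;   /* write-1-to-set; writes of 0 are ignored by hardware */
}

bool tamper_latch_is_set(void)
{
    return (TAMPER_LATCH_REG & TAMPER_SET_BIT) != 0;
}

/* Boot code only has to refuse to attest a clean state once the latch is set. */
void boot_tamper_check(void)
{
    if (tamper_latch_is_set()) {
        /* e.g. show a warning and decline to unseal secrets */
    }
}
```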
You could also make it a digital key (or a combination even).
Point is, presupposing a well-tested design with input from experts is mostly just begging the question: if I had that, we might as well posit 'let us assume no bugs in any firmware implementations'. And for an encore, let's posit "world peace", which is about as useful.
It's all swiss cheese: _EVERYTHING_ has holes. Nothing is perfect. But, layer enough slices of swiss cheese on a slice of bread and pretty soon no bread is visible. A physical key rather obviously covers precisely those holes that schemes involving entirely digital and remote-accessible setups have. Thus it is a great idea, and the fact that an evil maid can attack it indicates someone either doesn't understand the fundamentals of security (they don't get that it's all imperfect, and combining to cover the gaps is almost always the winning move), or is intentionally coming up with pithy oversimplified toss to cheerlead their product or pet preference.
> Note that at the time of writing we lack sufficient evidence to retrace how the UEFI firmware was infected in the first place. The infection itself, however, is assumed to have occurred remotely.
And given that it's done in flash memory (which persists over reboots, Windows reinstalls, and even HDD and SSD replacement), it wasn't done by trivial script kiddies.
Call me old fashioned, but I'd prefer to just make it impossible to write the firmware without physical access. Verification just brings its own issues and is not a panacea.
Who owns the key(s)? If not you, how do you trust them? If you, how do you write them? How do you prevent attacker from writing them? Or is it one-time programmable and your hardware becomes brick / unresellable / un-updateable if you lose your keys or don't want to pass them on to next owner? How do you make sure your keys are more secure than the machine that somehow got its firmware infected? What if the code that checks your signature turns out to have a bug? Are you stuck with it, or can you update the code? What if an attacker can infect that code..?
Making this all convenient and safe is very hard and in practice either takes freedoms from you (you must trust your vendor and their signed binaries), or gives a potential attacker the same freedoms that you get. Giving you full access without enabling an attacker to have the same access is quite impractical (read: unlikely to ever happen for consumer hardware that you can order over the internet), and you still have to trust the hardware is what it claims to be when you get your hands on it.
In practice, I can't audit the hardware, but it'd be pretty good if I could flash it myself once it's in my hands.
And here I was thinking we had all this solved already: a jumper on the motherboard that has to be shorted to update the BIOS. But I'm too old to understand why that isn't good enough, I guess :-)
How about this -- a jumper to update the firmware keys. System ships with third party certificate(s) (similar to the well-known certs your browser ships with). But with physical access you can update / supplement / remove keys as needed.
Now that means normal users and IT depts can do all the updates they need, but if you have a need for your own private key you can do that too (after physically touching hardware).
Oh and for bonus points, a motherboard switch can enable the keyboard on a physical designated USB slot to be able to authorize a cert modification. So you get convenience of not having to open the system, or if you want additional security then open the case and remove that jumper.
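In code, that policy is basically a one-line gate. A sketch under those assumptions (the physical-presence and keystore helpers are hypothetical, and routine vendor-signed updates would bypass this path entirely):

```c
/* Sketch: changes to the trusted-key store are only accepted while a
 * physical-presence signal (jumper, back-panel switch, or the designated
 * USB-port keyboard) is asserted. All names are illustrative. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum key_op { KEY_ADD, KEY_REMOVE, KEY_REPLACE };

extern bool physical_presence_asserted(void);   /* jumper / switch state */
extern bool keystore_apply(enum key_op op, const uint8_t *cert_der, size_t len);

bool keystore_modify(enum key_op op, const uint8_t *cert_der, size_t len)
{
    /* Firmware updates signed by the shipped vendor certs never reach this
     * path; only changes to the certs themselves are gated on presence. */
    if (!physical_presence_asserted())
        return false;   /* a remote attacker can't get here without hands on the box */

    return keystore_apply(op, cert_der, len);
}
```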
That's kinda what Chromebooks do, though even better. There is a physical write-protect screw that you have to remove before you can rewrite the read-only part of the firmware (which of course contains the secure boot keys).
Perhaps implement it as a touch button on the back plane of the motherboard rather than something completely internal? Or keep it as a jumper and leave case designers to decide where it goes, so it is just an extra thing to have a connector for like power and reset switches. For cases that don't support it, a switch could be put on an otherwise blank slot filler at the back of the case and connected the same way just like extra USB ports are (and extra serial/parallel ports used to be).
Enterprises love security, but they also love easy operations. Having to press a button on 15,000 computers, especially if they're laptops and people work from home, can be a headache. And if you rely on the user pressing it for you, then a determined attacker like the one in this case getting them to press it is just one easy scam call away.
> But the average person (or even the average IT dept) does not want to open their machine to update the BIOS.
I happily updated BIOS until I experienced a catastrophic failure that bricked the board, don't know how. Since then I'm terrified of updating BIOS. I've also experienced UEFI conflicts(?) loading Linux onto new boards that took forever to sort out. It seems BIOS development has not progressed as well or as far as one might expect, but that may just be me and my klutzy bad luck.
> I happily updated BIOS until I experienced a catastrophic failure that bricked the board, don't know how. Since then I'm terrified of updating BIOS.
My experience with BIOS updates has been mixed.
The first time I did a BIOS update, it added an option to tell it that the HDD connection had 80 wires instead of 40 wires, which enabled a faster transfer mode. The second time I did a BIOS update (on that same machine), however, it made the realtime clock lose touch with reality, apparently running twice as fast, so I had to revert this update. That put me off from doing BIOS updates for a while.
My latest laptop can be updated through LVFS (using UEFI capsule updates), and it's new and under warranty, so I decided to risk it. One of the first BIOS updates I did made, according to its release notes, an important-looking change to the battery management. Some time later, the laptop started crashing randomly (the crashes happened when the display turned off, but not every time). After a lot of searching, I found out that a later BIOS update made some unknown change which causes these crashes, unless you add "i915.enable_dc=0" to the kernel command line.
For now, I'll keep risking it (at least while it's still under warranty), and hope no other obscure BIOS-update-caused bugs happen (and I'll probably keep using "i915.enable_dc=0" on this laptop forever, since there's no way to know whether a later BIOS fixed it).
Note, though, that plain Secure Boot cannot prevent attacks like this. This is more of an area for BootGuard. The thing is, it's also not impenetrable and has its share of flaws, same as Intel ME and similar technologies. If anything, it helps to hide the code from the watchful eye. Current security solutions are just a gimmick; they would have to be properly designed AND deployed in the actual hardware for that. What we currently have offers mostly protection for vendors and DRM, not the end users. It's broken by design.
You may not like it, but the future of computing is a signed, verified chain of executables from power-on to user-level application code. That's the future we're lurching toward in fits and starts.
And it could do so while still giving the owner tamper-evident control. Yet that never seems to happen - security is too good of an excuse for expanding vendor control at the expense of consumers.
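For concreteness, one link of such a chain looks roughly like this: the running stage checks a signed manifest against an immutable root key, then checks the next image against that manifest before jumping into it. A sketch with hypothetical crypto helpers, not any particular vendor's implementation:

```c
/* Sketch of one link in a verified boot chain: verify the manifest's
 * signature against a root key baked into ROM, then verify the next stage's
 * image against the hash in that manifest. Helper functions and the
 * manifest layout are illustrative, not a real firmware API. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct manifest {
    uint8_t next_stage_hash[32];   /* expected SHA-256 of the next image */
    uint8_t signature[64];         /* vendor signature over that hash */
};

extern void sha256(const void *data, size_t len, uint8_t out[32]);
extern bool sig_verify(const uint8_t *msg, size_t msg_len,
                       const uint8_t sig[64], const uint8_t *pubkey);
extern const uint8_t ROM_PUBKEY[];   /* immutable root of trust */

bool verify_next_stage(const struct manifest *m,
                       const void *image, size_t image_len)
{
    uint8_t actual[32];

    /* 1. The manifest must be signed by the root key. */
    if (!sig_verify(m->next_stage_hash, sizeof m->next_stage_hash,
                    m->signature, ROM_PUBKEY))
        return false;

    /* 2. The image on flash must match the signed hash. */
    sha256(image, image_len, actual);
    if (memcmp(actual, m->next_stage_hash, sizeof actual) != 0)
        return false;

    return true;   /* only then hand control to the next stage */
}
```

Whether that root key can belong to the owner or only to the vendor is exactly the control question raised above.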
A TPM allows you to detect this tampering if it's used properly for that use case. (To clarify, Windows with BitLocker bound to the TPM and Secure Boot on does _not_ do this; it takes a shortcut.)
UEFI Secure Boot doesn’t have much to do with this, because this is about loading an altered UEFI firmware.
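Mechanically, the TPM's "detect" property comes from hash-chaining: each boot component is measured into a PCR before it runs, and secrets are sealed to the expected PCR values. A rough sketch (the sha256 helper is a stand-in; a real implementation uses the TPM's extend operation, and the very first measurement has to come from a hardware root of trust rather than the firmware measuring itself):

```c
/* Sketch of measured boot: measurements are folded into a PCR one after
 * another, so any altered component changes the final value. */
#include <stdint.h>
#include <string.h>

extern void sha256(const void *data, size_t len, uint8_t out[32]);   /* assumed helper */

/* PCR extend: new_value = SHA-256(old_value || measurement). */
void pcr_extend(uint8_t pcr[32], const uint8_t measurement[32])
{
    uint8_t buf[64];
    memcpy(buf, pcr, 32);
    memcpy(buf + 32, measurement, 32);
    sha256(buf, sizeof buf, pcr);
}

/* The firmware image is hashed and extended into PCR0 before it executes.
 * A rewritten flash produces a different digest, so the final PCR value
 * changes and anything sealed to the old value no longer unseals. */
void measure_firmware(uint8_t pcr0[32], const void *fw_image, size_t fw_len)
{
    uint8_t digest[32];
    sha256(fw_image, fw_len, digest);
    pcr_extend(pcr0, digest);
}
```

If the firmware measurements aren't part of what the secret is sealed to (the BitLocker shortcut mentioned above), a rewritten flash goes unnoticed.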
The problem with AMD's PSB is the one-time eFuse on the CPU. If they just provided some way (perhaps electrically connecting a couple contacts on the chip) to reset the eFuse, it would be fine. Instead it causes CPUs to be locked to the motherboard/vendor the moment they are inserted.
"Oh, but the average user is incapable of that" and "Corporate IT are unwilling to do that" are the two biggest excuses I hear to not make it so.
Here's a crazy idea: you connect said jumper to a front panel key switch. The 'average' consumer could probably sort that out. And corporate IT could just leave it enabled, or require you to turn the key for them, or have IT interns on site that run around with the keys or whatever.
A Key. A literal physical key. But I guess that's too low tech for the crypto obsessed purveyors of TPMs and the like.