Obviously people might screw up, but the spec included a way to revoke any signed components that turned out not to be trustworthy.
"trustworthy" according to who? Remember that dystopia does not appear spontaneously, but steadily advances little-by-little.
What's the summary? Microsoft (understandably) didn't want it to be possible to attack Windows by using a vulnerable version of grub that could be tricked into executing arbitrary code and then introduce a bootkit into the Windows kernel during boot. Microsoft did this by pushing a Windows Update that updated the SBAT variable to indicate that known-vulnerable versions of grub shouldn't be allowed to boot on those systems.
Who is Microsoft to decide what others do on their machines? Should they have the right to police and censor software they have no control of? In the spirit of Linus Torvalds: Microsoft, fuck you!
We are seeing the scenario Stallman alluded to over 2 decades ago slowly become a reality. He wasn't alone either.
Things like TPM and "secure" boot were never envisioned for the interests of the user. The fact that it incidentally protects against 3rd party attacks just happened to be a good marketing point.
"Those who give up freedom for security deserve neither."
The alternative dystopia is one where the NSA can grab your laptop, rip out the storage, write some code into the boot chain, put the storage back, leave, and you have no evidence to know who did that.
Signed code fixes this by requiring someone actually put their name to the code. If it's not someone I recognize, I don't boot. And yes, the NSA could theoretically compromise a signing key with a $5 wrench. But then they blow their cover. Signatures create a paper trail that makes plausible deniability vaporize.
There's no state actor that any of that would protect against. You, and everyone else, are already compromised at a level so deep there is no hope of digging out if that is your adversary.
What these technologies protect is market share, nothing more.
Targeted attacks against individuals or small groups from state actors are basically impossible to protect against. Widespread compromises of all operating systems at the boot level should be fought against.
I don't really think malice explains grub being limited because of Microsoft's software at the boot level. There are conflicting objectives at play, and that will inevitably produce, well, conflicts.
If the NSA grabbed your laptop, you've already lost. For instance, they could replace all input and output devices (keyboard, mouse, screen, audio, etc) with ones that not only log everything you do, but also allow them to remotely control your machine as if they were physically present. They could then pretend the laptop was opened (by falsifying the hall effect sensor which detects the lid state), power it on (by forging a press of the power button), log into your account (by replaying the password they logged earlier), and do anything they wanted, as if they were you. They could even use the camera to detect when you looked away for a second while logged into the laptop, and quickly do some input, bypassing any extra validation (like fingerprints or a smartcard) logging into your user account might have required. No need to modify or even touch the boot chain and storage.
Whether the government was allowed to compel a company to write and sign code was going to be determined in the "Apple-FBI encryption dispute", but the FBI withdrew the day before the hearing, since they had found another way to crack the phone without Apple's help. I wonder if this will ever be re-litigated, or if the government just learned it's easier to pay someone to write an exploit than it is to pay a company to write a backdoor.
kompromat? pffff money talks. somewhere there is someone who will take a bribe, and 300k to completely compromise every toolchain in the world is a pittance.
I mean can you actually protect against the NSA? After Stuxnet, I fully trust that nation/state actors can infect whatever they put their mind to - I'd rather at least have control over my machine
If your adversary is a nation state, you've already lost.
Which gives me another opportunity to quote from my favourite Usenix paper:
"In the real world, threat models are much simpler (see Figure 1). Basically, you’re either dealing with Mossad or not-Mossad. If your adversary is not-Mossad, then you’ll probably be fine if you pick a good password and don’t respond to emails from ChEaPestPAiNPi11s@virus-basket.biz.ru. If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFINITELY US,” and then they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them."
Figure 1:
Threat: Ex-girlfriend/boyfriend breaking into your email account and publicly releasing your correspondence with the My Little Pony fan club
Solution: Strong passwords
Threat: Organized criminals breaking into your email account and sending spam using your identity
Solution: Strong passwords + common sense (don’t click on unsolicited herbal Viagra ads that result in keyloggers and sorrow)
Threat: The Mossad doing Mossad things with your email account
Solution:
• Magical amulets?
• Fake your own death, move into a submarine?
• YOU’RE STILL GONNA BE MOSSAD’ED UPON
Most of that time he was in a series of caves located within a fairly apathetic nuclear power's borders.
He was also trained and equipped by the CIA.
So, if you're willing to live in caves where they can't easily search for you after being trained and equipped by the best of the best, sure, you might live slightly longer.
Doesn't seem like a tenable circumstance to me though.
to be fair, he did lose eventually, and it took the CIA impersonating a vaccine distribution program to take blood samples to find him, which is pretty fucking omnipotent if you ask me, although sowing distrust in vaccine distribution did have some unintended consequences...
You're right, that's a fair call, but still: he was a person with possibly the most recognised face on the planet at the time, and really, it took that long? He wasn't in a cave; he was living in a mansion in a city with servants and staff.
There's plenty of completely unknown actors who I'm sure are on their radar, along with modern serial killers who, despite leaving physical evidence, have still evaded capture.
I've had brief dealings with the cyber side of policing from reporting incidents, and a few friends in the services; they all seem incredibly capable but have a questionable amount of resources to do the job (along with not getting private-sector wages).
Some seem to repeat this phrase like it's a done deal, but their job ain't easy; there's a huge amount of bad people out there in the world and there's only so much focus an agency can have. I think a little bit of realism is needed when someone mindlessly repeats such things.
zzz, this guy who wrote this piece is either a tool or an agent.
people give up their security too easily...
the same applies to the threat model absolute bullshit. the threat model makes people think inside the box, meaning, they already accepted, by thinking inside that box, that there are people/entities they can't defend against.
I don’t know what country you live in but it’s impossible to decrease your attack surface when targeted by a Nation State Actor. Even more impossible if you live in the country in which the Nation State Actor controls through a plethora of agencies and relationships with corporations.
Yeah, that'll work for everybody who never ever touched any cloud service and whose friends and family never ever touched any cloud service (nobody in the real world).
I guess in their defense, the same attack can be used against any other OS, so they're unintentionally protecting Linux as well, since they stated this was supposed to be a Windows-only system change. You can disable Secure Boot if you don't want to be secure. And there is a way to drop the SBAT policy and keep Secure Boot if you want that, which is also insecure: disable Secure Boot, log in, run `sudo mokutil --set-sbat-policy delete`, then reboot and re-enable Secure Boot. But then you're susceptible to the attack.
I think understandably, everyone is concerned because it felt like an affront by MS against Linux. But, I don't think that was their thought process at all.
> I think understandably, everyone is concerned because it felt like an affront by MS against Linux. But, I don't think that was their thought process at all.
Given Microsoft's history, it's hard to be really sure. It's been a quarter century since the Halloween Documents, and Microsoft definitely gives the air of contributing to the open source ecosystem today, but giants like having a big moat to defend, and old habits die hard. And Microsoft definitely has a reputation, even if it is, technically, undeserved.
There was nothing to be gained in this except ill will. Hanlon's Razor suggests they were in a hurry to fix a security issue and didn't dot their i's on checking for dual boot systems.
It's a trolley problem, and it's not in Microsoft's locus of control to keep dual boot systems dual booting. So they don't try.
They have never, ever supported anything other than the Microsoft bootloader[s], and if you work around that, it's pretty trivial to blow up your data, for instance by hibernating Windows and booting into a different partition. Resuming from hibernation loads the old MFT onto the since-modified partition and you pretty much lose everything.
>hibernating Windows and booting into a different partition.
Definitely to be avoided, along with a few other considerations.
But experienced multibooters can usually reboot so quickly that they have had no need for hibernation since forever. Hibernation is almost more a valid excuse not to fully reboot a sluggish machine than an energy-saving success. But I don't blame them.
Keep in mind that since the beginning of Linux, the default has been someone who wants their PC to be completely Linux and has never had a need for anything originating from Microsoft whatsoever. Something in firmware would really be the worst, and it was immediately obvious when UEFI & GPT were foisted on us, with Microsoft Secure Boot to boot, that something was rotten somewhere.
A bigger threat than Linux was actually Windows 7, but with this exact hindsight it can now be seen how the knife was twisted much further for Linux, well beyond the effective lifetime of W7. This was not just collateral damage, and it keeps on giving as if booby-trapped or time-bombed.
Also remember that until Windows Vista, motherboards and business machines from all major manufacturers commonly offered no way to alter the BIOS itself without physical access, like a jumper on the board internal to the PC, or sometimes special key combinations on laptops accepted only from the built-in keyboard.
BIOS settings were accessible to non-local users only occasionally, on specialized enterprise models, depending on the options present.
When you wanted to upgrade your BIOS, or "re-flash" it due to something like power-line corruption, you booted from the floppy containing the desired firmware after manually enabling the delicate flashing operation. By the time Vista arrived it was often a bootable CD-ROM, or a USB stick formatted FAT32 with DOS substituting for a floppy. Whichever way you did it, you wrote the same binary file into the BIOS chip, and with further access completely disabled after that, you never needed to worry about malicious firmware as long as you used a clean binary.
The only possible way for a rootkit to infect your machine was to reside on your HDD. Usually in some of the spaces outside your filesystem that were so commonly unused it could lurk there and persist in spite of re-formatting.
But no rootkit or preboot contamination could withstand a complete HDD zeroing, or replacement HDD if needed under emergency conditions.
Well, one day Microsoft must not have wanted people to ever boot DOS again, so they developed a need to access every BIOS from within Windows NT6, and manufacturers conformed. It only bricked machines significantly for a few years, while the DOS way continued to be flawless for a while there.
It's a slippery slope, this got much worse once they forced UEFI on consumers, and malware can now reside in the preboot environment itself, which can often also access the web if connected.
Plus the motherboards have much more space for this kind of thing.
Windows Server machines and Macs in general were well established beforehand at using EFI to restrict booting to only the exact OS they shipped with.
And everything on that Microsoft webpage introducing the advent of UEFI & GPT as an advantage in every way looked suspicious, and turned out to be completely false without even waiting for the 20/20 hindsight there is now. The whole thing!
The cloak has now been further removed from this "false sense of security" system but surely not everybody wants to say out loud how sparsely-clad the emperor has become as he struts as if to demand the full respect once deserved.
So there hasn't been a physical way or available setting to prevent malicious access to sensitive PC firmware for quite some time, and who's most likely to blame for it?
No Windows "update" has ever made sense to change anybody's BIOS or UEFI firmware without being absolutely stupid as shinola, not like there was any question before this either.
This is also beyond most user recovery if you get malware in your UEFI.
Zeroing a HDD or SSD won't help you now like it would with BIOS.
>But, I don't think that was their thought process at all.
Intent, being squishy and debatable, matters far less than the outcome.
I can say that I never intended X, but in the end X still happened. That it happened unintentionally assuages exactly none of the injury from X having happened. Intent, therefore, can only be considered as, at best, an aggravating factor on top of the outcome.
"Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety"
Is the ability to run an insecure bootloader on a system that has an installed OS with a security policy built around it not running insecure bootloaders an essential liberty? Let's say it is, for the sake of argument. Have you given up that freedom? Given that you can disable secure boot, or boot a live image and remove the SBAT entry, or boot an updated image and recover your existing install, I think it's hard to say that you've actually given it up. Is that security temporary? A well-maintained secure boot chain provides you long-term security against a variety of threats, so I don't think it's clearly temporary.
It's fine to disagree, but please don't do so by pretending that a misquote is meaningful.
> Who is Microsoft to decide what others do on their machines?
That would be an amazing rant had it only ended with "Sent from my iPhone".
Since the Blaster worm incident two decades ago, we're in a new era where security at scale has become a forefront responsibility of the companies developing the product. That includes writing more secure code, having more verifications in place, and adopting more secure technologies, but also limiting user capabilities in order to avoid at-scale security incidents.
This isn't about Microsoft. Some of these "forced" limitations are: UAC (User Account Control)/sudo, BitLocker/full-disk encryption, app sandboxing/on-demand permissions, signed firmware and boot mechanisms, signed release binaries, jailbreak protections, limitations on raw packet operations, auto-installed updates, forced security updates, closed source code, and built-in anti-malware.
When you have a billion devices running around the world, you can't say "hey we'll let this arbitrary group of billion people do what they think is best for them", because you then end up with Blaster worm, and the whole Earth falls apart.
Think about the more recent CrowdStrike incident. That deployment was performed by professionals, not even regular people, and yet it managed to bring the entire world to its knees. People might have died because of CrowdStrike.
CrowdStrike happened because of one of the "user-empowering" features: the ability to install kernel drivers on a machine. Now people are begging Microsoft to adopt a more isolated, user-mode-only device driver system, so this kind of incident won't happen. Yes, some users who want to install their precious kernel driver could have problems, but at least the world would keep running.
Microsoft is in no way to blame for this. Secure defaults are the responsibility of every product that intends to be used at scale.
If you'd like, you can disable Secure Boot, keep your data in plaintext on your hard drive, let all applications run as root, and you'd be the most powerful user in the universe. I'm all for personal freedom to disable the security features, but, at scale defaults must always prefer security over capability. That's not about Microsoft, or Google, or Apple. That's about at scale risk management.
The Blaster worm did not in fact make the whole earth fall apart. Stop scaremongering.
> When you have a billion devices running around the world
This is exactly the point: Microsoft does NOT have those billions of devices, their users do.
> CrowdStrike happened because one of the "user-empowering" features: ability to install kernel drivers on a machine.
CrowdStrike happened because the corporation behind it had direct control over the computers it was running on and the ability to install security updates without the user's consent. They even ignored configuration that was supposed to delay updates for critical machines. Spinning this as some kind of failure of user empowerment, instead of a consequence of the same kind of ownership inversion that Secure Boot and other DRM bring, is absurd.
> at scale defaults must always prefer security over capability
And that's exactly how you end up in a dystopia. Because the demand for increased security never ends and can be used to justify any and all loss of freedom.
> The Blaster worm did not in fact make the whole earth fall apart. Stop scaremongering.
Blaster was a wake-up call, caused DDoS attacks on servers, and kickstarted similar variants like SQL Slammer, Sasser, and Conficker that hindered many services around the world. Stop dismissing real threats because you haven't personally been affected by them.
> This is exactly the point: Microsoft does NOT have those billions of devices, their users do.
Do you prefer a billion unpatched systems roaming around with all ports open and running all programs as admin? Why are you against secure defaults?
> Because the demand for increased security never ends and can be used to justify any and all loss of freedom.
If you don't like secure defaults, just turn them off. If you don't like how Windows does something, use an alternative. What dystopia are you talking about?
Blaster was Microsoft's own incompetence. CrowdStrike was CrowdStrike's own incompetence. They are free to fix the problems of their own doing. But messing with software you do not own, on machines you do not own, crosses a line and should be considered an act of aggression. What if some Linux distro releases an update that deletes any installations of Windows it finds "because Windows is insecure" (according to them)?
people are begging Microsoft to adopt a more isolated, user-mode-only device driver system, so this kind of incident won't happen
Those people are, to put it bluntly, either authoritarian idiots or corporate shills. They want to give more control to Microsoft, but it's not like M$ is all that competent either, as what this article and past fiascos (like the Blaster you mentioned) have already shown, so they're going to just make things worse for everyone.
CrowdStrike happened because of one of the "user-empowering" features: the ability to install kernel drivers on a machine
And crimes happen because people still have freedom. Doesn't mean we should start imprisoning (or enslaving to the machine) everyone from birth.
"Freedom is not worth having if it does not include the freedom to make mistakes."
All security bugs are the result of incompetence. Massive DoS incidents are the result of scale. Use your magic wand, bring Linux to 90% desktop OS market share, and see how one malware destroys an order of magnitude more Linux devices than Windows.
> They want to give more control to Microsoft
No, they want secure defaults, not less control.
> And crimes happen because people still have freedom.
Okay, let me extend that whataboutism with "hey, why do we have laws that limit people's freedom? Let's remove all the laws, if people are entitled to infinite freedom and can be trusted with their judgement".
> When you have a billion devices running around the world, you can't say "hey we'll let this arbitrary group of billion people do what they think is best for them", because you then end up with Blaster worm, and the whole Earth falls apart.
The bug is in the fact that billions of machines are running exactly the same proprietary software.
Following the "virus" metaphor, having billions of identical organisms is how you get pandemics, mass die-offs, and extinctions.
At the time of opening this page this was the top-ranked comment and that is a bit depressing. If you read Matthew Garrett's blog in full, you can learn quite a lot about what went into the process of building out secure boot for Linux.
* The UEFI Forum, via their spec, mandates nothing, but Microsoft (not mentioned in the spec) requires carrying their db keys if you want to stick Windows stickers on your boxes and get WHQL certification for your hardware: https://mjg59.dreamwidth.org/9844.html
* You can take control of the process yourself and evict Microsoft's keys: https://mjg59.dreamwidth.org/16280.html the details are sort of in here, but let me summarize it for you: by default the platform key is provided by your manufacturer, which signs a key-signing-key, which itself signs updates to the DB (what you can boot) and DBX (what won't boot even with valid signatures). As the article says, x86 specifications explicitly require that this database be modifiable, so you can always install your own keys. I did this for a while, and on my laptop I evicted Microsoft's keys entirely. Ultimately you can bypass this if you can bypass the BIOS password simply by resetting the database or disabling secure boot and... well, https://bios-pw.org/ .
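The PK → KEK → db/dbx delegation described above can be sketched as a toy model. This is illustrative only: real UEFI variable updates carry X.509/PKCS#7 asymmetric signatures, not HMACs, and the key values below are made up; only the structure (the Platform Key authorizes KEK changes, a KEK authorizes db/dbx changes) mirrors the real mechanism.

```python
import hashlib
import hmac

# Toy stand-in for asymmetric signatures. In real UEFI Secure Boot
# these are X.509/RSA signatures over authenticated variable updates.
def sign(key: bytes, payload: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(key: bytes, payload: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, payload), sig)

# Owner-controlled keys (hypothetical values)
pk = b"platform-key"        # PK: root of the hierarchy
kek = b"key-exchange-key"   # KEK: signed into place by the PK

# Enrolling/replacing a KEK: the update must be PK-signed
kek_update = b"enroll:" + kek
kek_sig = sign(pk, kek_update)
assert verify(pk, kek_update, kek_sig)

# A db update (e.g. adding your distro's signing cert) must be KEK-signed;
# dbx (revocation) updates work the same way
db_update = b"db-add:my-distro-cert"
db_sig = sign(kek, db_update)
assert verify(kek, db_update, db_sig)

# An update signed by any other key is rejected
assert not verify(pk, kek_update, sign(b"attacker", kek_update))
```

Evicting Microsoft's keys, as the comment describes, just means installing your own PK and KEK so that only updates signed by you pass these checks.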
* The whole thing was built so that you can re-sign your own kernels and other bits if you want (you could just sign your distribution's db keys with your KEK, which will make OS upgrades smoother): https://mjg59.dreamwidth.org/12368.html
* Here is an article on secure versus restricted boot: https://mjg59.dreamwidth.org/23817.html - I said above that the x86 specifications explicitly allow the key database to be modified (Microsoft's ARM devices were the inverse).
Now some non-Garrett points:
* To be affected by Windows Update, you need to run Windows. Tautological and true!
* If you update your firmware via, say, LVFS (https://fwupd.org/) and your distribution via its standard tools you get updates to things like dbx all the time. All from your hardware vendor and friendly FOSS folk, no Microsoft involved. You might even be using SBAT right now.
* Those Talos II boards people like? They also have secure boot. It is entirely optional and since Microsoft only implemented a "kinda" version of NT for PowerPC, they're definitely not involved. It is not UEFI, since there's no UEFI for POWER (there is for ARM and RISCV though). You also aren't getting anything from LVFS and barely anything from your distro, but, secure boot is there. You can turn it on.
Personally, provided I can control the keys and decide what is and is not trusted and whether I use it, I am fine with it. Depending on what you want to achieve, secure boot is not always unreasonable, and neither are TPMs. As a small example, software exploits won't be able to successfully modify the boot chain if you have good key management (i.e. you sign elsewhere). They also have their limitations: as usual, physical access is hard to defend against, and remote attestation is a hard problem all around.
A recent Linux Unplugged episode went into how one can use the TPM to set up a secure and trusted boot chain on Linux [0] using Clevis [1]. Very interesting!
- use a unified kernel image (UKI), which means I boot the kernel directly from EFI (and place it in the EFI partition)
- sign the image with that platform key (I use sbctl)
- have everything else, including the swap partition for hibernation, fully disk encrypted. I could set it up to auto-unlock using TPM2, but I would recommend using a long password; TPM2+password would be optimal. There have been too many cases of leaky TPMs, and especially on a laptop you don't want to fully rely on it (though you could in turn decide to auto-login if PCRs are unchanged, or log in using only the (often not so secure) fingerprint reader, etc.)
- set an EFI password. I mean, if you don't set one you lose most Secure Boot benefits... EDIT: not really most, there are still a bunch of ways it helps, but it's a bad idea anyway to rely on Secure Boot and not have an EFI password
As bonus tip:
- include vfat in your initramfs (i.e. `MODULES=(vfat)` in `/etc/mkinitcpio.conf`); if your booted kernel and installed kernel modules ever mismatch, that is nice to have to fix the issue
> I could set it up to auto unlock using TPM2 but I would recommend using a long password. TPM2+password would be optimal.
Personally, I trust LUKS with passphrases far more than I trust some random proprietary hardware implementation nobody can audit...
It's also important to me to be able to recover the disk contents with the passphrase on another machine if the motherboard dies. Maybe that's what you meant (backup passphrase), but I think you meant requiring both?
In case of systemd-cryptenroll (and other LUKS-related systemd infra, even without TPM) it's systemd that handles the passphrase to generate a key to unlock LUKS device - possibly combining with a PIN or passphrase or also a FIDO-compatible device or a smartcard.
- but it would be optimal to require PCR values and password
Note that in any case where you use PCR values, you should always set up a secondary way to unlock the partition. Otherwise you will lose your data if some of the hardware measured into a PCR breaks.
Requiring both is optimal, as it 1. doesn't rely on TPM/PCRs but 2. prevents certain attack vectors that are possible with password-only but not with PCRs. Though you now also have to manage a backup unlock method, which is annoying, and the security benefits are negligible/irrelevant for most people. Which is why I don't use it.
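For anyone wondering why a changed or broken component locks you out: a PCR can only ever be extended, never set directly, so its final value is a running hash over every measurement in boot order, and the TPM releases a key sealed to PCR values only when they match. A minimal simulation of the extend operation (the SHA-256 chaining is the standard PCR behavior; the component names are made up):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR extend: new = SHA-256(old || SHA-256(measurement)).
    # There is no operation to set a PCR to an arbitrary value.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(components: list) -> bytes:
    pcr = b"\x00" * 32  # PCRs start zeroed at power-on
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good_chain = [b"firmware-v1", b"shim-15.8", b"grub-2.12", b"vmlinuz-6.9"]
expected = measure_boot(good_chain)

# The same chain reproduces the same PCR value -> the TPM would unseal
assert measure_boot(good_chain) == expected

# One tampered (or merely updated/broken) component changes the final
# value -> no unseal, hence the need for a backup unlock method
tampered = [b"firmware-v1", b"shim-15.8", b"grub-evil", b"vmlinuz-6.9"]
assert measure_boot(tampered) != expected

# Order matters too: extending is not commutative
assert measure_boot(list(reversed(good_chain))) != expected
```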
Nit: It's useful to distinguish between passwords (checked against a hash for auth) and passphrases (used for decryption). It's an important practical distinction because a lost password can in general be bypassed out-of-band somehow while a backup strategy for passphrases is essential.
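That distinction can be made concrete with stdlib primitives (a sketch only; the iteration counts and scrypt parameters are illustrative, not tuned recommendations):

```python
import hashlib
import os
import secrets

# Password for authentication: the system stores only a salted hash and
# compares at login. An admin can reset it out-of-band; the data survives.
salt = os.urandom(16)
stored_hash = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)

def check_password(attempt: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", attempt, salt, 100_000)
    return secrets.compare_digest(candidate, stored_hash)

assert check_password(b"hunter2")
assert not check_password(b"wrong")

# Passphrase for decryption: the KDF output *is* the key material.
# There is nothing to "reset" if it's lost; the ciphertext is only
# recoverable with this exact key, which is why a backup (e.g. a second
# LUKS keyslot or recovery key) is essential.
key = hashlib.scrypt(b"correct horse battery staple",
                     salt=salt, n=2**14, r=8, p=1, dklen=32)
# A different passphrase derives a different key, so decryption fails
other = hashlib.scrypt(b"incorrect horse",
                       salt=salt, n=2**14, r=8, p=1, dklen=32)
assert key != other
```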
A more common definition of passphrase is a password which is a phrase, which makes it longer but also more predictable in its structure.
Similarly, prompts for decryption will ask you for "passwords" in most cases, as non-technical users shouldn't need to understand the underlying technical differences (nor do they normally want to, or do).
sbctl with package manager hook for automatically signing on updates etc.
keys are just stored on the device, for the typical laptop use-case this is good enough (platform key only used by a single device, no MDA or anything like that)
The "new" way of doing this would be using systemd-cryptenroll [0]. I did this recently on Ubuntu 24.04. I actually tried the default LUKS+TPM shipped with Ubuntu 24.04 at first [1], but it was a bit disappointing because it locks you into using snap-based kernels. This means you cannot install custom DKMS modules (which I needed). Although Clevis is very interesting software (you can even unlock based on some other computer in your network [2]), it's not absolutely required anymore for LUKS+TPM.
> Microsoft's stated intention was that Windows Update would only apply the SBAT update to systems that were Windows-only, and any dual-boot setups would instead be left vulnerable to attack until the installed distro updated its grub and shipped an SBAT update itself.
I wonder what went wrong here? If you read the EFI boot order, wouldn't it clearly say to boot shim first? Or were these dual-boot setups where the user would use the firmware menu to select Linux or Windows?
Anyway, this comes at a time when I want to install Linux on my work PC; since it has two NVMe slots, I think I'll go with installing it on a completely separate drive. That would not have prevented this issue, though, which seems like a legitimate fix from Microsoft, just with bad communication.
From the people reporting this affecting their Linux boots in various IRC/Matrix forums and my diagnostics with them, very often they weren't dual-booting in the Microsoft sense, in that they were booting using the UEFI Removable Media Path so there was no entry in the motherboard firmware's Boot menu.
I suspect the MS installer simply scans the EFI BootXXXX entries and looks for a non-Windows boot-loader path like, for example, /EFI/$distro/shimx64.efi
If one-such doesn't exist the installer likely assumes it is not a dual-boot system.
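If that guess is right, the heuristic might look something like this sketch (entirely hypothetical, not Microsoft's actual code; the loader paths are just illustrative). The point is that a system booting via the removable-media fallback path has no distro-specific BootXXXX entry for such a scan to find:

```python
# Hypothetical reconstruction of the suspected check: scan the loader
# paths from the firmware's BootXXXX entries and treat the machine as
# Windows-only if no non-Windows boot loader appears among them.
WINDOWS_LOADER = r"\efi\microsoft\boot\bootmgfw.efi"

def looks_windows_only(boot_entry_paths):
    """True if every BootXXXX entry points at the Windows boot manager."""
    return all(p.lower() == WINDOWS_LOADER for p in boot_entry_paths)

# Explicit dual-boot entry -> correctly detected as not Windows-only
assert not looks_windows_only([
    r"\EFI\Microsoft\Boot\bootmgfw.efi",
    r"\EFI\ubuntu\shimx64.efi",
])

# Linux booted via the removable-media path (\EFI\BOOT\BOOTX64.EFI)
# leaves no BootXXXX entry of its own -> wrongly classified Windows-only
assert looks_windows_only([r"\EFI\Microsoft\Boot\bootmgfw.efi"])
```

Under this assumption, the SBAT update would then be applied to exactly the dual-boot systems it was supposed to skip, matching the reports above.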
MS has zero vested interest in caring. If they break booting for Linux users, how does that hurt them in any meaningful way? Sure, they get some press, but is it bad press if most people are never affected by this?
I worked for Microsoft for 17 years, most of that in and around Windows.
I can tell you that you are wrong. Whatever the company’s flaws, the people in Windows care deeply about compatibility and about not breaking things with updates. I have hours of stories from the trenches, and could probably talk at length about how such a point of view would be suicidal for the Windows business.
I don’t know what went wrong here, and I’m not saying Microsoft is blameless. I am saying that whatever went wrong was NOT due to lack of caring about breaking things, even non-Microsoft stuff sharing the same computer.
It makes Linux more robust. Since Microsoft is the king of vulnerability, making Linux more robust is NOT in their best interest. I actually think Microsoft did a GOOD THING. This should create a mad scramble to tighten up security at all those lackadaisical distros!
Microsoft's bootloader is clearly intended to be a pain in the ass. There is nothing new about this situation. They have been doing it since the 90s when any windows update would write over your MBR without a care in the world. We all hoped that UEFI boot menus would resolve the situation. They would have, if only Microsoft were willing to stop intentionally polluting everyone's partitions. Instead, it is not only the default, but the only option, for the windows installer to squat in the first EFI System Partition it sees. That means that if you install linux first and windows second, windows will install its bootloader to your ESP. Even if it's too small. There is no way to disable this behavior. It's asinine.
P.S. The ultimate irony of this situation is that it actually ends up breaking concurrent windows installs more often than anything else.
People who dual-boot are probably also people who run random debloat scripts that disable telemetry. So when such a system broke, there was no signal that it had happened.
I really hate the error message from shim (or SB in general) when a security check fails. At least tell me what exactly failed and what I could do to fix it.
I hate error messages from most software. Recently my system failed to boot because systemd told me a start job is running for a certain disk. And it doesn't tell me what the nature of the start job is, why the start job is needed, and why the start job is not finishing. From the disk UUID I could guess the first two, but there was no way to guess the third.
This is why I strongly prefer working on software made by developers for developers. That is to say, internal tooling. You can just show the entire error message in as much detail as possible, without a PM stepping in and saying you can't show this much scary text to the typical user. Especially if the user of the software also has easy access to your source code so they can search for the exact string and find the exact location of the error, and understand exactly what checks are being done to emit that error.
Good error messages are hard. You want to tell the user what to do, but if you knew the error could be thrown, you probably should have been gracefully handling the problem. You don't know what information is useful to a hacker, and you don't know how your error will be propagated. A meaningful error at one level ("incorrect parameters passed" when calling an API) is perfectly useless at another ("incorrect parameters passed" when interacting with a React UI). And if you respect all of the above, at some point you'll end up with an error message that basically says "I can't tell you what, why or how something went wrong, but it did."
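One way to keep the message meaningful at each level is to wrap the low-level error with context as it propagates, instead of surfacing it raw. A minimal Python sketch of that pattern (the config path is made up):

```python
def read_config(path: str) -> str:
    try:
        with open(path) as f:
            return f.read()
    except OSError as exc:
        # Translate the low-level failure into the caller's vocabulary,
        # but chain the original so developers can still see the root cause.
        raise RuntimeError(f"could not load configuration from {path!r}") from exc

try:
    read_config("/nonexistent/app.conf")
except RuntimeError as err:
    print(err)            # layer-appropriate message for the user
    print(err.__cause__)  # original OSError preserved for debugging
```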
All information is useful to a hacker. If you can find a way to use information beyond its creator's intent to achieve your goals, regardless of hat color, you are hacking.
There are two directions that can go in: highly specialized error codes with zero results on search engines, or overly generic errors with a billion results and underlying reasons.
Error design needs to be its own subject / specialization. Errors need to say what the problem is and how to fix it, in an ideal world, or what the user can do or should google to solve it.
And of course, any error code of any public software should be listed on a website or a locally accessible resource.
This is where I really love mainframes. The errors all contain a consistent code and a description. The codes are all published, are easy to find nowadays, and have generally good documentation attached to them.
It's not a new trend - error-code based software would propagate that ERROR_INVALID_PARAMETER all the way from the function with the invalid parameter back out to the return value of the user operation, then helpfully tell the user "Invalid parameter!"
Exceptions with string messages and full stack traces might be yet another underrated Java invention.
God that error is nearly useless even to the developer. Last couple of times I've gotten it, I've dropped the DLL that the error originates from into Binary Ninja and run the debugger to figure out which parameter failed a check.
Even that at least helps rule out the most common issues, and is less frustrating than an update pooping out 0x80070070 and making you manually translate it to ERROR_DISK_FULL.
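The translation is at least mechanical: for Win32-facility HRESULTs (the `0x8007xxxx` family), the low 16 bits are the plain Win32 error code, so a tool could do the lookup for you. A sketch, with only a tiny illustrative subset of the error table:

```python
WIN32_ERROR_NAMES = {112: "ERROR_DISK_FULL"}  # illustrative subset of winerror.h

def decode_hresult(hr: int) -> str:
    facility = (hr >> 16) & 0x07FF
    if facility == 7:  # FACILITY_WIN32: low word is the Win32 error code
        code = hr & 0xFFFF
        return WIN32_ERROR_NAMES.get(code, f"Win32 error {code}")
    return f"HRESULT 0x{hr:08X}"

print(decode_hresult(0x80070070))  # the dreaded disk-full update failure
```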
One of the humorists I worked with at Bell Labs in the 1980s would always report error code 13 as the error when no more specific error code was available. When one looked that up in the man page, it just said "you are unlucky".
shim has an EFI variable to control its verbosity; you can set it to output all the gory details with e.g. `mokutil --set-verbosity true`, and at a glance there are some tools on Windows too for modifying EFI vars.
Because most users are afraid of gory details. And the people who know enough to fix it are expected to somehow know how to turn on logging. It's the modern equivalent of "please contact your administrator"
The first thing the "administrator" will need is all the details. If they were printed, the person reporting could at least send a screenshot or similar.
No please don't do this. I have lost count how many times I tried to follow a link only to get a 404 page. If there is an issue where the app gives the user an error, show the error details & context directly and list the possible mitigation steps right then and there.
A URL with specific content is just another thing that now needs to be maintained along with the code and failure modes.
I think the Windows BSOD including a QR code was a pretty clever idea, although unfortunately it's half-baked in that it's just a fixed generic URL instead of something specific to the error.
The problem with bootloaders is they really can’t spare a lot of storage. Storing different QR codes for all the common errors might be asking too much.
You don't need to store the whole QR code, just code to convert a URL into a QR code. Or a good, short URL that can easily be typed, e.g. "microsoft.com/errors/1234".
I believe humans are vastly more stupid than you give them credit for.
It's the HN effect. The pool of geniuses here suggests to the reader that the pool of intelligence outside of this microcosm is similar. It's not.
Criminals are overwhelmingly stupid. It's why LEO catches so many of them. Smart people don't tend to do crime, they tend to sell their skills to more legitimate enterprises.
This has been my stance for years, but I am open to being persuaded why this is a terrible practice that will lead to kitten murder.
I saw someone else give a similar reasoning that if there were a booting error, they would never assume it was a rootkit, but some breakage between all of the booting cruft. I certainly lack any expertise to understand what happens during boot to be able to diagnose problems.
My stance is similar: I insist that any computer I use to run my main OS uses the CSM (Compatibility Support Module) method of booting. This effectively eliminates UEFI's role completely after control is handed to the bootloader, using the pre-UEFI boot method of locating the first sector of the boot device and executing that.
As a user, I see very little benefit to using UEFI.
I have a concern that if I find malware or suspicious activity on a system, when I report it through Virus Total or another channel I won't be believed if secure boot is disabled and that appears in the logs. Or if another threat actor got in during the brief period it was disabled and invalidated the audit trail.
"We won't accept any reports because your secure boot chain is too short." To put it bluntly and crudely. Not to dismiss a very real and important real-life issue.
I've lost people I love in my life over similar things. I don't want to be in a similar situation in other Walks or Paths of life.
It's a powerful incentive mechanism. Even if you're believed in the end how long did it take? How old? And now that difficult thing becomes a talking point when you just wanted to build.
You could if you want to, but if your distribution provides a UEFI bootloader (shim / grub / systemd-boot / whatever) signed by the default MS-trusted cert, or you're willing to set up everything yourself with your own certs, it doesn't hurt to enable it either (except when an incident like this happens).
The Mint forums pretty much tell everyone to blanket disable secure boot because nobody seems to know how to make it work, certainly not well enough to explain it to a beginner.
I accidentally checked "install media codecs" on the Mint installation which requires secure boot. Didn't think much of it but something went wrong later on in the setup causing a restart. Well, it left the secure boot stuff in a weird state and forced me to reset the CMOS because nothing was working or booting.
Yeah I thought Red Hat fought to get their keys installed in there, too. Which is why Fedora and RHEL (and derivatives like CentOS or Rocky) work alright OOTB
I installed Linux on a new laptop yesterday, and couldn't get either NixOS or Debian to install until I turned off secure boot. So I guess these distros don't bother getting every release signed by Microsoft.
At least it was easy to turn off. I just wish the error message mentioned Secure Boot -- it took me a few minutes to figure out what was wrong. At first I thought I had a corrupt USB stick or something.
There are two separate Secure Boot keys Microsoft uses: one to sign Windows, and another to sign everything else (the "Microsoft 3rd Party UEFI CA"). AFAIK, some recent laptops with Windows preinstalled come by default with the second one disabled in the BIOS (it's a new Microsoft requirement). To install Linux on these laptops without disabling Secure Boot, you have to go into the BIOS and enable that key.
Most mainstream distros work fine with secure boot still enabled. You can disable it if you want, but if you use Bitlocker, disabling secure boot will require you to enter the recovery key, which is a massive pain.
You can always disable secure boot if you want to, but in this case installing the patches released two years ago would probably be a better fix.
<Tinfoil hat> I think there's more than meets the eye here. I think part of the reason MS is enforcing TPM2.0 and now this SBAT update is that there is widespread rootkit level malware and they are trying to stay ahead of the curve. </Tinfoil hat>
When it comes to the realities of dual-booting, I had tons of problems with Win7/8/10 with suspend-to-hiberfile.sys issues and updates 10 years ago breaking grub. 10 years ago I finally decided, "You know what, I'm just going to run Linux, if I really need Windows or Mac, I can run a VM or use a separate spare computer."
Since then I have successfully setup Secure Boot for my distro, learned how to tweak QEMU for performance and passthrough, got a working QEMU macOS VM (although having to update every few months to keep XCode working is a pain), and generally pretty happy with the state of affairs.
The German government caused Let's Encrypt to issue fraudulent certificates to xmpp.ru and jabber.ru by physically intercepting the server's network connection. https://news.ycombinator.com/item?id=37961166
IMHO, those aren't fraudulent certificates; they established effective control of the hostname, which is all a certificate implies. They didn't have authorization from the owner of the domain, but Let's Encrypt doesn't include ownership information, so there's no fraud there. Of course, this means someone who can MITM a whole server can also have a certificate issued to show everyone they're authentic.
You could potentially protect against this by cert pinning to a CA that won't issue to an interloper, or possibly using CAA records in DNS if you can be confident your DNS won't be MITMed or changed out from under you by your registry. DNSSEC helps, if your registry (and the root) won't fold under pressure, but not if they do ... and DNSSEC is in the top 3 causes of high-profile DNS failures in my estimation.
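For reference, a CAA restriction is just a pair of DNS records in the zone; a sketch (the domain and contact address are illustrative):

```
; allow certificate issuance only by Let's Encrypt
example.com.  IN  CAA  0 issue "letsencrypt.org"
; ask conforming CAs to report violating requests
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

As the comment above notes, this only binds CAs that honor CAA at issuance time; it doesn't help against an attacker who can rewrite your DNS answers.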
Because browsers can require certificates to be in the certificate transparency logs to be valid. Chrome already does this. If a government convinces a CA to create a malicious certificate and publishes this cert to the CT logs to perform MITM, it will get found out and that CA can close its doors.
Also, if someone DOES have this ability and gets found out, e.g. someone finds the certificate, it makes it clear that someone had that ability. You'll know that root CA is compromised one way or another, and it potentially gets burnt.
Thus, they'll only use it under the strictest smallest of circumstances where the reward outweighs the risk, in a high profile scenario, rather than rolling it out willy nilly.
Similar to when threat actors use a 0day.. if they use it all the time it eventually gets discovered and fixed. If they save it for a special case they may manage to use it a couple of times before it gets patched.
Browsers enforce that certificates are signed by two independent CT logs, whose public keys are shipped with the browser. So a MITM would need to compromise a trusted CA and two CT logs to pull off an attack undetected. Maybe not impossible, but much more difficult than a single CA compromise.
The browser is verifying that the certificate appears in public certificate logs. So if a TLA forges a certificate (whether with the cooperation of a certificate provider, DNS provider or domain owner) that is now part of the public record. And if they do it with any domain that has enough eyeballs, someone would presumably notice. Not to mention that it's an easy way for agencies from rival countries to tip a reporter or security researcher off that it happened.
Of course in reality most browsers don't actually check the certificate logs but only require timestamps signed by certificate logs that prove that at least two certificate logs know of the certificate. A TLA that can pressure at least two logs to provide those timestamps without actually publishing the certificates isn't really stopped. But at least that widens the circle of people who have to be in on the conspiracy.
In a perfect world browsers would do spot checks against the actual certificate logs, and require that the signed timestamps come from logs unlikely to be influenced by the same actor (e.g. a Western, a Russian-sphere and a Chinese-sphere certificate log). Your guess as to why we don't do either is as good as mine.
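The diversity policy described above could be as simple as a predicate over the logs that signed a certificate's SCTs. Everything here is hypothetical: the log names and their sphere assignments are made up for illustration:

```python
# Hypothetical mapping from CT log operator to sphere of influence.
LOG_SPHERE = {
    "argon": "western",
    "nimbus": "western",
    "ru-log": "russian",
    "cn-log": "chinese",
}

def diverse_enough(sct_logs: list[str], min_logs: int = 2, min_spheres: int = 2) -> bool:
    """Require SCTs from enough distinct logs AND enough distinct spheres."""
    logs = set(sct_logs)
    spheres = {LOG_SPHERE[log] for log in logs}
    return len(logs) >= min_logs and len(spheres) >= min_spheres

print(diverse_enough(["argon", "nimbus"]))  # two logs, same sphere: rejected
print(diverse_enough(["argon", "ru-log"]))  # two logs, two spheres: accepted
```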
That would be compromising the domain owner, rather than the threat model of Certificate Transparency which is compromised Certificate Authorities, especially given the number of government owned, publicly trusted (sub-)CAs.
The Snowden leaks made it clear that so long as the government has the means and motive to perform some kind of surveillance, they'll do exactly that. It may not be through the exact methods people are suggesting, but rest assured it is happening.
That’s another foundation of conspiracy theory: one specific example can serve as evidence for universal truth. Sure, the specific claims of theory A might collapse, but it might as well be true because it could be true because of past example B that is along the same lines.
I don’t doubt there is secret government surveillance we’d all be upset about. I’m not willing to use that general belief to assert the truth of specific unsupported claims.
The Snowden leaks weren't one specific example, they were dozens, involving every single big US tech company of any significance, and involving tons of different methods of surveillance.
I think you underestimate how close big tech and telecom companies are to three letter agencies. See the "Protect America Act" of 2007 which covered everyone's asses for warrantless spying.
Wasn't it the FISA Amendments Act of 2008? Or did the Protect America Act of 2007 also have immunity provisions?
edit: oh I see, the immunity provisions were first introduced with the Protect America Act of 2007 but they had a sunset date under that law so they were later made permanent by the FISA Amendments Act of 2008.
Congress already granted retroactive immunity for telecoms acting in cooperation with the US government with the FISA Amendments Act of 2008. I don't see why they couldn't do the same for Microsoft (assuming the law doesn't already apply to them).
> Release from liability - No cause of action shall lie in any court against any electronic communication service provider for providing any information, facilities, or assistance in accordance with a directive issued pursuant to paragraph (1).
- Section 702, subsection h, paragraph 3;
> Release from liability - No cause of action shall lie in any court against any electronic communication service provider for providing any information, facilities, or assistance in accordance with an order or request for emergency assistance issued pursuant to subsection (c) or (d), respectively.
I'd be shocked as well, but the small paragraph doesn't seem to preclude it.
People make mistakes, equipment can have unexpected behaviour, and people lie.
I'm curious whether this would be considered compelled speech if someone said, "No, you can't MITM my service unless there is an extant activity of concern."
It's got to be addressed in some other section or paragraph.
Oh, you mean like the time Microsoft was the first company in the Prism program uncovered by Snowden, later followed by Yahoo, Google, Facebook, YouTube, Skype, AOL, and Apple? The program allowing the NSA to decrypt any traffic* or data of these vendors? The publication of which had, like, no consequences for Microsoft or the others?
Yeah. I don't think they're really afraid of repeating that.
The sad and depressing part is that along the way we lost all possibilities of running coreboot or libreboot as an open alternative.
The only real option is to buy a used laptop from before the T44x generation (if you really want it secure)... or newer machines that come with other perks like soldered-on batteries that destroy the mainboard along with them when they leak out eventually.
I am not sure what the consumer rights protection agencies on the planet are doing, but seemingly they've been asleep at the wheel for way too long now.
> (Tinfoil hat) (...) I think part of the reason MS is enforcing TPM2.0 and now this SBAT update is that there is widespread rootkit level malware and they are trying to stay ahead of the curve.
The only vendors that seem to do something against it are, somewhat, System76, Frame.Work, Purism and maybe Starlabs. But the huge majority of devices are under the absolute control of Microsoft's signing process now. So I would argue that this isn't a tinfoil conspiracy, but a strategic decision MS made to re-grab their lost power on x86 systems.
Framework comes with Intel ME enabled, with no way to disable it, and barely updates their firmware. For example, they left LogoFAIL unpatched for a year.
As I said, the better option would be a pre-Haswell era CPU so that you can flash libreboot on it and don't have to worry so much about intel-ucode, but that would also imply a laptop more than 10 years old.
I just wish there would be more free and open options.
The RISC V meme of the Hackers movie from the 90s is now so old that it's never gonna happen anyways. Those CPUs are nice and all, but you're even better off using a Pentium CPU performance wise, and that's a 20 years old CPU.
>Those CPUs are nice and all, but you're even better off using a Pentium CPU performance wise, and that's a 20 years old CPU.
This is out of date information. Currently purchasable RISC-V CPUs (e.g. in the Milk-V Jupiter) are already at the level of Intel Core 2, with the important difference that the Jupiter has 8x of them, whereas the top Core 2 chips were only quad-core.
Cores expected to ship in early 2025 on 16-core Milk-V Oasis are at the level of Intel Haswell or AMD Zen 1.
Akeana, Tenstorrent, SiFive and Ventana have IP available for licensing whose performance is similar to or above Apple's M1.
There isn't much of a performance gap left to close.
> [...] there is widespread rootkit level malware and they are trying to stay ahead of the curve.
There literally is. The BlackLotus bootkit actively abuses a vulnerability Microsoft has been trying to patch (by updating the blacklist of vulnerable bootloaders) for the past two years, and it's still ongoing AFAIK.
Ubuntu regularly locks up and black screens when I try to sleep/hibernate. It's a very common problem that has nothing to do with Windows or Microsoft. I also have had 0 issues with dual booting for roughly 10 years now. HN wouldn't be HN without some baseless MS bashing.
I have had occasional issues with Windows and various flavors of Linux hibernating but nothing that happens with any regularity - at all - and nothing that can't be solved by simply rebooting.
This is also the only reason I ever thought of buying an Intel GPU, but then I realized "Wait, if I am buying a new GPU I can just use my old GPU for host/passthrough. I don't need a new GPU that is roughly as good as my current one just for SR-IOV; I'd want one at least much better than my current one" (RX 5600XT, not really top, but it does its job).
Hibernate has always been more trouble than it's worth, especially now that booting takes less time than loading your webmail.
It just screams you have no data hygiene. It's the extra step after living years with 723 open tabs.
QEMU passthrough is the way. And if you don't own expensive hardware (i.e. only integrated graphics, like all feasible laptops), just dual boot with your own signing keys so you don't have to worry about revocation crap. Either it's signed or it's not; revocation is just replacing the root PK keys.
> because otherwise they're shipping a vector that can be used to attack other operating systems and that's kind of a violation of the social contract
I see the end of the chain still ends up at "trust" in humans/companies at some level. Microsoft broke dual boot systems because they think they know what's best for someone else's system and that's not okay.
Major question for me is, are the grubs that are getting rejected completely unpatched, or were they patched by distros without updating the "security generation"?
I'd be also really curious to hear how MS was attempting to do dual-boot detection, I hope someone (more skilled than I) would reverse engineer that bit from the update.
> Major question for me is, are the grubs that are getting rejected completely unpatched, or were they patched by distros without updating the "security generation"?
> I'd be also really curious to hear how MS was attempting to do dual-boot detection
I'm in the boat that they shouldn't be doing dual-boot detection at all; it sounds like everyone agreed to use SBAT to stop vulnerable bootchains from being exploitable, and some Linux distributions got caught slacking.
I agree, this is the key question. It seems like all the distros, and microsoft, need to coordinate on the "security generation", whenever grub (or other linux boot component) releases an update or patch? Maybe that's an extra annoyance they didn't have before, so until now they just left that number alone, and kinda forgot about it?
Interesting. The question that immediately popped into my head was: How does the secure boot system determine the “security generation” of GRUB exactly? Sounds like just based on the assertion of GRUB itself (and trusted signature of the distribution that built GRUB)?
The fact that the list of allowed GRUB versions is itself manageable via a Windows Update points to some other issues with this particular security scheme, given Microsoft’s own recent history of mishandling private keys.
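Roughly, yes: each signed component carries a `.sbat` section of CSV metadata that includes a per-component generation number, and shim compares those generations against the minimums recorded in the firmware's SBAT policy variable. A simplified sketch of that comparison (the CSV layout follows the SBAT scheme, but the version strings and generation numbers here are illustrative):

```python
def parse_sbat(csv_text: str) -> dict[str, int]:
    """Map component name -> generation from SBAT-style CSV (name,generation,...)."""
    entries = {}
    for line in csv_text.strip().splitlines():
        fields = line.split(",")
        entries[fields[0]] = int(fields[1])
    return entries

# Generations claimed by a (hypothetical) grub binary's .sbat section.
component = parse_sbat(
    "sbat,1,SBAT Version,sbat,1,https://github.com/rhboot/shim\n"
    "grub,2,Free Software Foundation,grub,2.06,https://www.gnu.org/software/grub/"
)

# Minimum generations demanded by the SBAT policy variable,
# e.g. after a revocation update bumped grub's floor to 3.
policy = parse_sbat("sbat,1\ngrub,3")

def allowed_to_boot(component: dict[str, int], policy: dict[str, int]) -> bool:
    # Each generation claimed by the binary must meet the policy's minimum
    # for that component (components absent from the policy are unconstrained).
    return all(gen >= policy.get(name, 0) for name, gen in component.items())

print(allowed_to_boot(component, policy))  # grub generation 2 < required 3: refused
```

The point is that revocation never names individual binaries: bumping one number in the policy variable refuses every binary whose claimed generation is below it, which is why a Windows update could sweep up old-but-patched distro bootloaders.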
> It goes the other way too. An Ubuntu Update could put the Windows bootloader on the deny list.
I don't think this is generally true. Since most computers don't ship with Ubuntu's CA directly trusted their signed components rely on a chain of trust that goes up through Microsoft's 3rd party UEFI CA cert to their root. I don't know the specific details of UEFI's implementation but it seems incredibly unlikely that it'd allow a subordinate CA to sign an update that distrusts components upstream of it.
If an OEM does ship Ubuntu's root or if a system owner has manually installed it then sure, but that's not the majority of systems.
I don't understand what the expected behaviour is here? Let's say you dual-boot two copies of Arch so we don't drag M$ into this. You update one, get the latest bootloader update that increments this security generation thingy. You reboot to go update the other one as well but its number is too low, so it's unbootable. What now?
From what I understood, the parent's question is not about Microsoft updating grub; it's about a person hit by the bug, and thus in a situation where Windows boots but Linux doesn't, using Windows to copy the correct file (probably extracted from an updated package from the Linux distribution they're using) to the correct place in the EFI partition by hand.
(The first obstacle would be that AFAIK the EFI isn't mounted by default on Windows, but I believe it should not be hard to tell Windows to mount it and give it a drive letter.)
This hit me 2 days ago as I was shuffling dual boot systems around trying to recover some old data for a client. Kind of hilarious timing tbh, right after I was done laughing arrogantly about CrowdStrike
Yah, I got hit by this as well. Was pulling some stuff off of windows, it updated overnight, rebooted, and I woke up to my default Ubuntu boot being horked. A bit of a WTF till I started searching for it. I'll be backing up and leaving that box as Linux + a vm.
Although MS' stance to block old vulnerable grub installs seems reasonable here, I've come to run Windows only for games and a single piece of legacy software (as a backup for my aging x86 Mac) without net access at all. The moment you allow Win updates, everything is up to chances. MS moving around registry keys and other shenanigans to force "telemetry" (aka ads and behavioral data scanning for ML) onto users, even on Windows Pro, should be telling enough. Needless to say, I'm running Win 10.
Fun fact. Just as this story was unfolding, I was installing a Debian system on an Acer desktop machine. The Debian installer wouldn't start with secure boot enabled because of this, but also, once I fixed up things, I couldn't get the firmware to recognize any entry added by Debian. It would hide and deactivate them on its own. I ultimately had to use a copy of the EFI partition on a USB key for it to work.
> Or does installing Windows contaminate the TPM module permanently?
It's not the TPM, it's a simple UEFI variable. AFAIK, there's a way in the BIOS to reset all these variables to their original default value, though you might have to use the "clear CMOS" jumper to do it.
Something seems to be wrong with the whole security model.
> those versions of grub had genuine security vulnerabilities that would allow an attacker to compromise the Windows secure boot chain
This feels like a "my secure compartments are all connected together" moment.
If Microsoft want to verify that they're in an all-Microsoft boot chain, sure, whatever, fine. But somehow the compromise of any loader allows compromise of Windows? And in turn Microsoft are able to break grub installations? Why is that acceptable?
(also, I feel a bit "I told you so" about this. Back when all this was being introduced I felt that (a) secure boot increases the risk of locking you out of your machine and/or data loss and (b) a situation where Linux is dependent on the collaboration of Microsoft in order to boot is very dangerous long-term.)
This is vaguely the experience that should have been present in an Empowered User centric BIOS.
First cold boot; BIOS verifies the hardware isn't broken, checks for a boot preference, finds none.
Present the User with a set of choices: Check for BIOS Updates (manufacturer), Check for OS Choices (manufacturer), Begin installing an OS (options list). Locally cached (present with the system) choices would be listed first. Microsoft Windows (installer) is probably OEM shipped (might not be). Linux / DistroName plugged in USB device, etc... 'Local Network boot (search)', and 'Install from the Internet' (shipped by manufacturer or added by local preference).
The BIOS would also support enrolling ANY signing keys of local preference with user confirmation. This should happen even at first boot for the keys known by the manufacturer; they shouldn't just be in there for free, confirming the key with the user should be part of the flow.
The BIOS _MUST_ also support multiple bootable entries, even if one is the default (without a timeout, even with only manual selection E.G. F12 / F11 / whatever... though this too should be standardized).
The best BIOS would be no BIOS, just find a drive, check for a boot sector, then boot from it. Have an internal USB slot that always gets boot priority for service and for advanced use cases.
The point of personal computers is to make _personal_ computing easy. Everything else can just be an add on.
> find a drive, check for a boot sector, then boot from it
And how would you call the System code that does this? Would you want such a piece of code to be able to Output something to the screen in case it can't find such a boot sector? Should it be able to take user Input (e.g. in case multiple valid boot sectors are found)? These are quite Basic requirements for any early-boot phase.
> This feels like a "my secure compartments are all connected together" moment. If Microsoft want to verify that they're in an all-Microsoft boot chain, sure, whatever, fine. But somehow the compromise of any loader allows compromise of Windows?
Exactly how would you propose starting software securely from an unknown environment?
> Back when all this was being introduced I felt that (a) secure boot increases the risk of locking you out of your machine and/or data loss
What you need is a source of trust, and right now it's signatures, which are outside of the user's control.
A 5-cent hardware button that gives you a small time window to install a new trusted bootloader could achieve the same thing without trusting Microsoft.
This doesn't actually address some of the scenarios SB is intended for. I.e. you're an IT administrator, you manage a fleet of 1000 machines, you want to ensure that they are all running secure bootloaders and secure kernels and secure software, top to bottom. In that scenario, every end user having a little "security vulnerability" button they can press if they get bored (or feel like being malicious) isn't appealing.
Having to send someone out to press the button at a thousand desks in order to update the bootloader? Also not appealing.
So don't do secure boot at all rather than saying "when one step in the boot chain is compromised that can compromise all later steps"? How is that a better security model?
Giving up is certainly an option, but it is not the preferred option for some people (myself included). A partial option is definitely better than giving up, as long as it is well understood.
In this scenario, people who are ready to give up can simply stop updating their software, which will solve their issue. YMMV of course.
I have seen recommendations not to dual boot with non-obsolete Windows, because its updates have a high risk of screwing up grub, but instead to give Windows its own hard drive and boot it 'manually' by selecting the boot drive at startup in the 'BIOS'. Sounds like that was good advice?
This sort of thing is exactly why I have automatic updates disabled on my Windows partition. I've been burned so many times by bad Windows updates breaking stuff. My favourite is when stuff breaks during the "configuring updates" stage after a reboot, leaving Windows in a boot loop with no error codes or anything to help you figure it out. And of course the documentation from MS is utter garbage. Most of the time the only solution I found was to reinstall Windows.
Now I always google around a bit before applying any fresh Windows updates to see if there's any breakage reported.
My Windows install is stuck in a boot loop like this - it spends 10 minutes trying to update and then fails, except maybe 1/3 times it then boots normally. I don't even try to do anything about it, I just marvel at it.
I have a Thinkpad that did something like this, it would try to install updates, fail and eventually boot into some kind of recovery wizard that demanded the bitlocker key. That wizard wasn't able to actually fix anything either but after failing a few times the system finally would uninstall the update. The whole process took over an hour with zero feedback.
I had to switch to Linux just to get a machine I could rely on.
Yeah, I dual boot. I think my efi partition is around 100mb. I forget if Arch puts just one backup kernel in there, but I feel like I saw a lot of garbage in there once that I had to clear out. Maybe that's the problem, will investigate, thanks.
Yeah, 100 MB has been insufficient for some Windows updates for me in the past. New Windows installs create a 500 MB EFI partition, but Windows 7/8 created a 100 MB one and kept it when upgraded to Windows 10. Unfortunately, resizing it is a pain, as the EFI partition normally sits before your Windows partition.
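Before resorting to resizing, it's worth checking how full the ESP actually is and what's eating the space. A minimal sketch, assuming the ESP is mounted at /boot/efi (the mount point varies by distro; /efi and /boot are also common):

```python
import os
import shutil


def esp_usage(path="/boot/efi"):
    """Return (total, used, free) for the filesystem at `path`, in MiB."""
    total, used, free = shutil.disk_usage(path)
    mib = 1024 * 1024
    return total // mib, used // mib, free // mib


def largest_files(path="/boot/efi", n=5):
    """List the n biggest files under `path`, largest first,
    as (size_in_bytes, full_path) tuples."""
    sizes = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            try:
                sizes.append((os.path.getsize(full), full))
            except OSError:
                pass  # file vanished or unreadable; skip it
    return sorted(sizes, reverse=True)[:n]
```

On setups that keep kernels on the ESP, stale kernels and initramfs images tend to dominate the list; files under the Microsoft directory are best left alone.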
Yeah, it turns out applying updates during boot is bad design. I'm sure plenty of people at MS realise it is, but I guess they don't care enough to fix it.
The last time I had to manage Windows machines, I used Unattended to wipe and reinstall to a base level. I found that diagnosing and troubleshooting was not worth the effort.
no idea. This was the early 2000s. I'm sure it's based on the same thing.
I set up a NetBSD box as the server and could hook up as many laptops as I had network ports. I would then just hit the enter key or perform a few manual steps when things couldn't be automated.
I'm sure it's all based on silent install or the /s switch for install.bat.
If my memory is working.
If you're at a point where you need either of them, just hire someone to work on the OEM scripts.
for personal use, not really worth it imo
If you're installing the right version of Windows (Enterprise LTSC), it's already a one-click install. And your applications will change every week anyway.
This is really bad advice; don't follow it. Zero-day vulnerabilities are a thing, and you'd be intentionally preventing yourself from getting those fixed quickly. Running critical software without updating may have been possible in some distant past, but it isn't any longer: you will catch an exploit or crypto locker at some point.
Microsoft abusing its update mechanism to push crap is nothing new, but outright refusing updates isn't the answer either.
When a windows update destroys your install, is it really any different from actual malware? I consider it one and protect myself accordingly.
At least you can be careful about the rest with adblocking, sandboxing and being irrelevant enough to not make your machine a target for anyone competent, which gives you a pretty great chance at avoiding them. If you keep built-in malware (and in recent versions, also spyware) running, then getting screwed by it is a certainty. Personally, I'll take my chances and I think the average HN user would not have any problems doing this, but I wouldn't really recommend this approach to someone that's not tech savvy. I'd give them a Chromebook instead.
> At least you can be careful about the rest with adblocking, sandboxing and being irrelevant enough to not make your machine a target for anyone competent, which gives you a pretty great chance at avoiding them.
That may have been a thing once, but it isn't really anymore: there only needs to be a single unpatched vulnerability in your network stack, and any of the multitude of devices around you (at home, at work, or in a cafe, none of which you control) might exploit it.
And one more little piece of trivia: high levels of expertise usually come with increased negligence on the basics, because you're less careful. This affects pilots and nerds alike; just think of Ross Ulbricht.
Windows updates are too dangerous to trust automatically. I've been burned to various degrees too many times to think otherwise. If Windows is too dangerous to use without automatic updates, then it's just too dangerous to use, period.
Yeah, all it takes to drop dead is a single blood vessel bursting in one's head, one careless driver, one wrong thing eaten, or one wrong step where you fall and break your neck.
It's always one unlikely thing. I don't think living in such paranoia is a life worth living tbh. Some small risks you just accept to live normally, and 99.9% of the time it'll be alright. With 2FA and other multi device safeguards the risk is acceptable. Frankly authentication for things has gotten so bloated that even the actual user has a hard time logging into things these days.
Frankly I'm more worried about losing or damaging my phone, if that happens then I'm far more screwed and it's a risk we all accept every day. I keep it in aluminium armour to de-risk :)
> I thought flatpak would fix this on linux, but every time I flatpak itself updates half of its apps break with mysterious error messages and refuse to launch until they're also updated.
Linux oldheads could've told you this would happen before the project was even created. We solved package management and dependencies in the 90s and no one has improved on it since. Just stick with stuff in your distro's repos. If it's not in the repos, don't use it. Problems gone.
Going for a windows build with wine instead of the Linux build sounds completely crazy, but then again Proton works exceptionally well on Steam so this might genuinely be the more long term stable option. I'll have to try that out lmao.
IMO secure boot is a waste of time for most scenarios. If there's closed-source UEFI code running god knows what in the background, it doesn't matter how signed and secure your OS kernel is.
I've never successfully been able to dual boot Windows and Linux on a mobo with secure boot turned on. It seems that's a feature, not a bug; I'm sure MS would never influence hardware vendors to disadvantage a growing number of Linux users.
TLAs from major powers probably have backdoors in your UEFI, mainboard or OS. But even if they do that doesn't mean they will use them on everyone, they probably keep the good stuff for the most valuable cases. Each use of an attack carries the risk of the attack vector being discovered and prevented in the future. And besides, there are threat actors besides TLAs of the USA, Russia and China.
If you use full disk encryption secure boot is pretty essential, otherwise an attacker can modify the code that asks for your credentials to also log them somewhere easily accessible, circumventing your entire encryption. If you don't do full disk encryption it's still a decent protection against some bootkits.
It can absolutely be more trouble than it's worth. It's not that useful in most desktop computers. But if you are traveling with a laptop it's probably worth some effort to keep secure boot working on that system (and make it more difficult to disable)
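On such a laptop it's also worth verifying that the firmware is actually enforcing Secure Boot before relying on it. On Linux, `mokutil --sb-state` reports this; as a sketch, the same answer can be read from the standard SecureBoot EFI variable exposed through efivarfs (the GUID below is the EFI global-variable GUID):

```python
from pathlib import Path

# SecureBoot variable under the standard EFI global-variable GUID.
SB_VAR = Path("/sys/firmware/efi/efivars/"
              "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")


def secure_boot_enabled(var_path=SB_VAR):
    """True if Secure Boot is enforcing, False if off,
    None if the variable is absent (e.g. legacy BIOS boot)."""
    try:
        data = var_path.read_bytes()
    except OSError:
        return None
    # efivarfs prefixes 4 bytes of variable attributes; the payload is
    # a single byte, 0x01 when Secure Boot is enabled.
    return data[-1] == 1
```

If this returns False or None while you believed the chain was locked down, the rest of the measures (signed shim, signed kernel) are decorative.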
> If you use full disk encryption secure boot is pretty essential, otherwise an attacker can modify the code that asks for your credentials to also log them somewhere easily accessible.
In what threat model? If the attacker has access to your PC they can just as well install a physical keylogger intercepting the signals from the keyboard.
The main use case for disk encryption is preventing data loss when the device is stolen. That's a realistic threat that people face, not boogeymen coming into your house and replacing your bootloader with a malicious one.
If I am ever traveling to US, I am wiping the system, installing a clean, stock Linux distribution without any encryption, keeping everything valuable at home.
Once I am behind the border, I am reinstalling the system with encryption, then proceed to download key material and other important stuff from home over the internet.
I am never letting anyone near my unlocked laptop, and if I ever find it turned off, e.g. while visiting the office toilet, I just assume it has been infected with a firmware-level rootkit and wipe it without decrypting.
If it's removed from my sight during the border check, I assume the same, purchase a new one in a brick-and-mortar shop and sell the infected one when I am back home.
Agree it's a waste of time, but we pay the paranoia cost on special occasions. It does make breaking FDE just a little bit more annoying/expensive.
The only time it's worth the hassle for us to enable it: travel to the USA, Russia, and most of Africa (if the country has US-backed airport security, like Uganda). Pause updates, enable secure boot with a disposable key we don't store anywhere. That's on top of the usual FDE with plausible-deniability dual boot.
But we still prefer to just fly contributors in with blank devices if we can.
"trustworthy" according to who? Remember that dystopia does not appear spontaneously, but steadily advances little-by-little.
What's the summary? Microsoft (understandably) didn't want it to be possible to attack Windows by using a vulnerable version of grub that could be tricked into executing arbitrary code and then introduce a bootkit into the Windows kernel during boot. Microsoft did this by pushing a Windows Update that updated the SBAT variable to indicate that known-vulnerable versions of grub shouldn't be allowed to boot on those systems.
Who is Microsoft to decide what others do on their machines? Should they have the right to police and censor software they have no control of? In the spirit of Linus Torvalds: Microsoft, fuck you!
We are seeing the scenario Stallman alluded to over 2 decades ago slowly become a reality. He wasn't alone either.
https://www.gnu.org/philosophy/right-to-read.en.html
https://www.cl.cam.ac.uk/~rja14/tcpa-faq.html