To trigger the crash, you need to write a bad file into C:\Windows\System32\drivers\CrowdStrike\
You need Administrator permissions to write a file there, which means you already have code execution permissions, and don't need an exploit.
The only people who can trigger it over network are CrowdStrike themselves... Or a malicious entity inside their system who controls both their update signing keys, and the update endpoint.
Anyone know if the updates use outbound HTTPS requests? If so, those companies that have crappy TLS terminating outbound proxies are looking juicy. And if they aren't pinning certs or using CAA, I'm sure a $5 wrench[1] could convince one of the lesser certificate authorities to sign a cert for whatever domain they're using.
Even if the HTTPS channel is compromised with a man-in-the-middle attack, the attacker shouldn't be able to craft a valid update, unless they also compromised CrowdStrike's keys.
However, the fact that this update apparently managed to bypass any internal testing or staging release channels makes me question how good CrowdStrike's procedures are about securing those update keys.
Depends when/how the signature is checked. I could imagine a signature being embedded in the file itself, or the file could be partially parsed before the signature is checked.
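To make the ordering concern concrete, here's a minimal sketch of the safe pattern (verify every byte before the parser ever sees it). This is purely illustrative: it uses HMAC as a stand-in for whatever real asymmetric signature scheme CrowdStrike uses, and the file layout (signature appended to the payload) is an assumption, not their actual format.

```python
import hmac
import hashlib

KEY = b"example-shared-secret"  # stand-in for a real signing key
SIG_LEN = 32                    # HMAC-SHA256 digest size

def sign(payload: bytes) -> bytes:
    """Append an authenticator to the payload (illustrative format)."""
    return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

def parse(payload: bytes) -> bytes:
    # Stand-in for a real (and possibly fragile) binary parser.
    return payload

def verify_then_parse(blob: bytes) -> bytes:
    """Safe ordering: authenticate all bytes before interpreting any.
    The fragile parser only ever runs on authenticated input."""
    payload, sig = blob[:-SIG_LEN], blob[-SIG_LEN:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return parse(payload)
```

The dangerous variant is the mirror image: call `parse()` first (say, to locate an embedded signature field), so a malformed file can crash the parser before any check runs.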
It's wild to me that it's so normal to install software like this on critical infrastructure, while the details of how they do code signing are a closely guarded/obfuscated secret.
Though, I prefer to give people the benefit of the doubt for this type of thing. IMO, the level of incompetence required to parse a binary file before checking the signature is significantly higher (or at least different) than simply pushing out a bad update (even if the latter produces a much more spectacular result).
Besides, we don't need to speculate.
We have the driver. We have the signature files [1]. Because of the publicity, I bet thousands of people are throwing it into binary RE tools right now, and if they are doing something as stupid as parsing a binary file before checking its signature (or not checking a signature at all), I'm sure we will hear about it.
We can't see how it was signed because that's happening on CrowdStrike's infrastructure, but checking the signature verification code is trivial.
Kind of a side tangent, but I'm currently (begrudgingly) working on a project with a Fortune 20 company that involves a complicated mess of PKI management, custom (read: non-standard) certificates, a variety of management/logging/debugging keys, and (critically) code signing. It's taken me months of pulling teeth just to get details about the hierarchy and how the PKI is supposed to work from my own coworkers in a different department (who are in charge of the project), let alone from the client. I still have absolutely 0 idea how they perform code signing, how it's validated, or how I can test that the non-standard certificates can validate this black-box code signing process. So yeah, companies really don't like sharing details about code signing.
This wasn't a code update, just a configuration update. Maybe they don't put config updates through QA at all, assuming they are safe.
It's possible that QA is different enough from production (for example debug builds, or signature checking disabled) that it didn't detect this bug.
Might be an ordering issue, and that they tested applying update A then update B, but pushed out update B first.
The fact that it instantly went out to all channels is interesting. Maybe they tested it for the beta channel it was meant for (and it worked, because that version of the driver knew how to cope with that config) but then accidentally pushed it out to all channels, and the older versions had no idea what to do with it.
Or maybe they thought they were only sending it to their QA systems but pushed the wrong button and sent it out everywhere.
That's assuming they don't do cert pinning. Moreover, despite all the evil things you can supposedly do with a $5 wrench, I'm not aware of any documented cases of this sort of attack happening. The closest we've seen are misissuances seemingly caused by buggy code.
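For reference, pinning is the reason a coerced CA doesn't help the attacker: the client rejects any certificate (however validly signed) that doesn't match a value baked into the client. A minimal sketch, pinning the whole certificate fingerprint; the pinned value here is a hypothetical placeholder, and a real deployment would more likely pin the SPKI hash of the public key so the cert can be reissued without a client update.

```python
import hashlib
import socket
import ssl

# Hypothetical pin: a real client would ship the SHA-256 of the
# expected certificate (or, better, of its public key).
PINNED_FINGERPRINT = hashlib.sha256(b"example-cert-der").hexdigest()

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection, then reject any peer certificate that
    doesn't match the pin, even if a trusted CA signed it."""
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if fingerprint(der) != PINNED_FINGERPRINT:
        sock.close()
        raise ssl.SSLError("certificate does not match pin")
    return sock
```

With this in place, the TLS-terminating proxies mentioned upthread would also break the connection outright rather than silently intercept it.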
If you have a privilege escalation vulnerability, there are worse things you can do, like making the system unbootable by destroying the boot sector/EFI partition and overwriting system files. No more rebooting into safe mode, and no more deleting a single file to fix the boot.
This would probably be classified as a terrorist attack, and frankly it's just a matter of time until we get one some day. A small dedicated team could pull it off. It just so happens that the people with the skills currently either opt for cyber criminality (crypto lockers and such), work for a state actor (think Stuxnet), or play defense in a cyber security firm.
Microsoft has leaked keys that weren't used for code signing. I've been on the receiving end of this actually, when someone from the Microsoft Active Protections Program accidentally sent me the program's email private key.
Microsoft has been tricked into signing bad code themselves, just like Apple, Google, and everyone else who does centralized review and signing.
Microsoft has had certificates forged, basically, through MD5 collisions. Trail of Bits did a good write-up of this years ago.
But I can't think of a case of Microsoft losing control of a code signing key. What are you referring to?