Platform certificates used to sign malware (chromium.org)
684 points by arkadiyt on Dec 1, 2022 | 234 comments



My understanding is that these certs are vendor-specific (and potentially more finely grained than that - I think it'd be fine to have a separate cert per model, for instance) and are rooted in the firmware, not the hardware - that is, it's not the secure boot signing key, and a new firmware update can simply use a different platform certificate.

Having said that, there appear to be multiple vendors affected by this (looking at those hashes on VirusTotal, there are signatures that claim to be owned by Samsung, LG, and Mediatek, at least), and that's certainly concerning. My entirely unsupported guess is that they may all share manufacturing for some models, and certificates may have been on site there?

But the more significant question is how many phones trust these certificates. As I mentioned before, these could be model specific, and any leaked certificates could be tied to a very small number of phones. Or maybe Samsung used the same platform cert for all phones manufactured in the past decade, and that would be a much larger problem. With the information presently available we simply don't know how big a deal this is.

Edit to add: given Mediatek certs appear here, and given all the vendors linked to this have shipped low-end phones based on Mediatek SoCs, it wouldn't surprise me if it turns out Mediatek were the source.


>Edit to add: given Mediatek certs appear here, and given all the vendors linked to this have shipped low-end phones based on Mediatek SoCs, it wouldn't surprise me if it turns out Mediatek were the source.

Not surprising either, given that you can far more easily find MTK's leaked datasheets and other useful design documents than those of any of the other major Android SoC vendors. I was able to get hold of the detailed documentation for my phone (which is of course rooted) thanks to that insecurity, so you can't say it's entirely a bad thing.


The post title here should really be corrected. It's editorialized and is completely wrong.

There is no 'Android platform signing key'. Android is not a single OS provided by Google; it's an OS family. Each OEM has their own OS and keys, often per device or device family. Several non-Google OEMs either got tricked into signing malware or had at least one of these keys (the platform key) compromised. The issue is clearly marked as a partner issue discovered by Google's program for reporting issues to partners (non-Google Android OEMs). Google detected it, and that's why it's on a Google issue tracker.

The platform signing key is used to sign app-based components in the OS which are able to receive platform signature permissions, including some quite privileged access. Any app can define signature permissions; the platform signing key is the one used by the highly privileged app-based components of the platform itself, such as the Settings app.

Platform key is only really meant to be used for first party code, normally all built with the OS but potentially updated outside it. It's not the verified boot key (avb key) or the key for signing over-the-air updates (releasekey by default but can be a dedicated key). There are also similar keys for several other groups of OS components with internal signature permissions (media, shared, etc.). All of these keys can be rotated, but aren't rotated in practice as part of the normal process of maintaining devices, partly because supporting phones for 5+ years is a recent development. The rotation that's done in practice is using a separate set of keys for each device / device family. For example, Google used new keys for each generation of Pixel phones and they currently use separate keys for separate device models although they've sometimes used the same keys for XL and non-XL variants in the same generation.

Even the verified boot key for the stock OS could be rotated by firmware updates since they're all shipped with OS updates and it's completely normal to do breaking changes requiring both new firmware and new OS. It'd require adding code to handle the rotation in several places (TEE and secure element pin it and use it as a bonus input for key derivation, which would need to continue using the old value) along with breaking anything pinning the hash for attestation. Only the SoC firmware signing key and firmware for other secondary components with the key burned into the hardware can't be rotated.


> Only the SoC firmware signing key and firmware for other secondary components with the key burned into the hardware can't be rotated.

Side note: those components do have fuses burned via updates that are used to implement downgrade protection dynamically. In theory, they could provide key rotation support. That may make sense as device support gets longer than 5 years. They should also really support more downgrade protection counter increments than they currently do. Snapdragon used to support 16 increments (16 fuses) but I'm not sure what they support in current generation devices. They called it a '16 bit version number' but you can't unset a bit.

OS downgrade protection is generally done with special replay-protected storage from the TEE (Pixel 2), the secure element (Pixel 3 and later), or simply by hard-wiring the version in firmware (a very common approach before Android Verified Boot 2.0). It's a lot more flexible, both in terms of being able to do key rotation in theory and in not having any limit on the number of updates. Pixels simply convert the Android / Pixel patch level such as 2022-11-05 into a Unix timestamp, and that's the OS rollback counter stored in the secure element after successful boot. This happens alongside updating the firmware rollback counter, which rarely increases. The Pixel 6 only bothered shipping the code to increment the firmware counter with Android 13, when they increased it. It's so uncommon that the external development community largely wasn't prepared to cope with it, even though it should be done much more regularly.
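
For the curious, here's a minimal sketch of that patch-level-to-counter conversion (the exact date format and time zone handling are my assumptions, not Pixel's actual implementation):

    from datetime import datetime, timezone

    def patch_level_to_rollback_counter(patch_level: str) -> int:
        """Convert an Android/Pixel patch level like '2022-11-05' into a Unix
        timestamp, which per the description above is what gets stored as the
        OS rollback counter after a successful boot."""
        dt = datetime.strptime(patch_level, "%Y-%m-%d").replace(tzinfo=timezone.utc)
        return int(dt.timestamp())

    print(patch_level_to_rollback_counter("2022-11-05"))  # 1667606400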


Ok, we've changed the title to be that of the article.

(Submitted title was "Android platform signing key compromised".)


Take a look at this slideshow: I found some of the strings mentioned on VirusTotal for the malware in question (drv.androidsecurityteam.club) in it. Sounds like malicious updaters are perhaps being signed by some vendors?

https://maldroid.github.io/docs/vb_2022.pdf

I'll be curious when more details come out, but there's not much to go off yet.

EDIT: Sounds like this may all tie back to a contractor providing OTA update applications, either being compromised or directly malicious themselves: https://wuffs.org/blog/digitime-tech-fota-backdoors

"They provide Android ODMs/OEMs with the SystemFota updater APK, instructions on how to build OTA packages and a web portal where they can upload them and view statistics."


>My understanding is that these certs are vendor-specific (and potentially more finely grained than that - I think it'd be fine to have a separate cert per model, for instance) and are rooted in the firmware, not the hardware - that is, it's not the secure boot signing key, and a new firmware update can simply use a different platform certificate.

Correct, affected OEMs can still rotate the cert used to sign their system apps and then push an OTA update to deliver the updated apps. Then they can push app updates with that new cert.

However, V2 versus V3 signatures complicate things a bit, I think.


You can freely update from a v2 signature to a v3 signature.

If the minimum SDK version is below when v2 was introduced, Android will include both a v1 and v2 signature by default.

If the minimum SDK version is above when v2 was introduced but below when v3 was introduced, Android will only include a v2 signature by default.

It will only include a v3 signature by default if the minimum SDK version is above when v3 was introduced, which is rare.

The reason it works this way is that v2 was a significant security upgrade over v1, while v3 is just v2 with rotation support. If you do a key rotation, you get a v3 signature included to handle it for versions with key rotation support. If you support older versions, the users on those versions are still relying on the original keys. This is supported transparently, in a way that's backwards compatible.

It would simply waste space to include v2 + v3 when no rotation has happened yet. I don't know why they bothered to micro-optimize to this extent but they did.
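
To make the defaults above concrete, here's a rough sketch of the selection logic (illustrative only, not the actual apksigner/AOSP code; the API levels, 24 for v2 and 28 for v3, are from memory):

    def default_signature_schemes(min_sdk: int, rotated: bool = False) -> list:
        """Sketch of which APK signature schemes the build tooling includes by
        default, per the description above. v2 arrived with API 24 (Android 7.0),
        v3 with API 28 (Android 9)."""
        if min_sdk < 24:
            schemes = ["v1", "v2"]   # old devices only understand JAR signing
        elif min_sdk < 28:
            schemes = ["v2"]
        else:
            schemes = ["v3"]         # rare: only devices with v3 support
        if rotated and "v3" not in schemes:
            schemes.append("v3")     # rotation info only lives in a v3 block
        return schemes

    print(default_signature_schemes(min_sdk=21))                # ['v1', 'v2']
    print(default_signature_schemes(min_sdk=26, rotated=True))  # ['v2', 'v3']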

For some reason the Play Store doesn't make use of Android's support for key rotations yet. They only support key rotations with Play Signing since they're phasing out non-Play-signed apps. If you switch from non-Play-signed to Play Signing, they keep signing the app for existing installs with the old key but switch to a new key for fresh installs. They do the same thing if you trigger a key rotation with the UI. They could rotate the key for existing users on Android versions with v3 support. They might as well rotate it even for users on versions predating it, since if they did get an OS upgrade it will start working right away.


I assume that malware running at this level couldn't simply modify the OTA file and inject itself back in?


Potentially but then you get into “reflections on trusting trust” type attacks, which are pretty hard to sustain.


According to the OP of the report, the key used to sign an OTA image and the key used to sign system apps are different.


If your device is rooted, it seems like maybe the malware could change the key used to verify the OTA update to a key controlled by the malware authors? (I don't know how that key is stored.)


Yes, but actually no. While they may be able to install a bogus update, unless they also compromised the keys for verified boot, the system won't boot if they change anything interesting, as AVB (typically) has a hardware root of trust.


I believe the author of this report and the discoverer are different people.


Oh yeah, that's true. Here's who I was citing then: https://twitter.com/maldr0id/status/1598467755887529990


Depending on the age of the devices in question, 'simply' may not accurately reflect the solution as Android vendors generally stop updating devices as soon as they can get away with it, security issue or otherwise. Unless these are newer devices, I'd expect that a fair number of them will never get an update.


All the more reason to highlight the simplicity. Any vendor who decides not to do something simple to protect their customers should be relegated to the “never buy” heap.


>Edit to add: given Mediatek certs appear here, and given all the vendors linked to this have shipped low-end phones based on Mediatek SoCs, it wouldn't surprise me if it turns out Mediatek were the source.

This implies that all the affected vendors shared their private keys with Mediatek. Why would they need to do this? (Genuine question, I don't know much about firmware. But it doesn't seem like it should be necessary at first glance.)


It's possible Mediatek (or an unrelated contractor) did the firmware work for them all so they're the ones holding the keys in the end.


> My entirely unsupported guess is that they may all share manufacturing for some models, and certificates may have been on site there?

I think manufacturing only needs the public key, so the compromise can’t really happen there.


Firmware builds may be outsourced to the ODM in some scenarios.


Is the "First Time Seen in the Wild" accurate? (2016-02-13 11:50:54 UTC)

https://www.virustotal.com/gui/file/b1f191b1ee463679c7c2fa7d...

If so, this has been around for 6 years.

EDIT: A number of listed APKs target API 22 (Android 5.1). This isn't definite (apps can be run on later phones), but it also implies some compromises occurred in ~2015.


Targeting an old API level can cause the compatibility system to kick in and expose APIs and behaviour that are restricted on newer platforms. This is how Termux is able to execute executables in ways that shouldn't normally be permitted by the OS, for example.

The Play Store doesn't allow apps targeting old APIs, but the system itself will try not to break old apps that you installed before your OS updates.


This thread leaves a lot of unanswered questions:

1. This was likely mitigated through a device update. What version did it roll out with? Which devices are still unpatched?

2. How was it compromised? Was it an OEM? An internal leak at Google?

3. What is the attack vector? It sounds like it was likely side-loading apps used by some attacker, but did any of these make it onto the Play Store?


1. We don't know what mitigation steps have been applied. However, it seems that at least some affected vendors are still signing their apps with compromised platform certs: https://www.apkmirror.com/?post_type=app_release&searchtype=...

2. Unknown. Could be multiple independent hacks of the OEM or an ODM, could be an insider, etc.

3. The attack vector is usually sideloading.


The latest November Samsung firmware for a phone in front of me has android.uid.system signed by the compromised certificate with SHA256 fingerprint 34df0e7a9f1cf1892e45c056b4973cd81ccf148a4050d11aea4ac5a65f900a42. This certificate is provided by com.samsung.android.svcagent version 6.0.01.6, which is also signed with the same compromised certificate. The latest version of com.samsung.android.svcagent I could find is 7.0.00.1 [1], which has a creation date of 2 September 2022 and also provides the same compromised certificate.

On top of that, at least for the S21 series, Samsung phones in their Common Criteria evaluated mode seemingly use the same compromised certificate, provided by the earlier version 5.0.00.11 of com.samsung.android.svcagent [2]. I couldn't find a published application list for S22 series phones in Common Criteria mode, but I suspect com.samsung.android.svcagent 7.0.00.1 from 2 September 2022 would be more recent anyway.

[1] https://apkcombo.com/svc-agent/com.samsung.android.svcagent/...

[2] https://www.google.com/search?q=inurl%3Adocs.samsungknox.com...
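
If anyone wants to check a device themselves: the fingerprints in the tracker appear to just be the SHA-256 of the DER-encoded signing certificate, so once you've extracted a cert (e.g. with apksigner) the comparison is trivial. Rough sketch, with a placeholder set you'd fill in from the APVI issue:

    import hashlib

    # Placeholder: only the fingerprint discussed above; add the rest from the tracker.
    COMPROMISED_FINGERPRINTS = {
        "34df0e7a9f1cf1892e45c056b4973cd81ccf148a4050d11aea4ac5a65f900a42",
    }

    def cert_fingerprint(der_path: str) -> str:
        """SHA-256 fingerprint of a DER-encoded X.509 certificate."""
        with open(der_path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def is_compromised(der_path: str) -> bool:
        return cert_fingerprint(der_path) in COMPROMISED_FINGERPRINTS

    print(is_compromised("platform_cert.der"))  # path is hypothetical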


Good to know!


It seems some of the malicious applications were pre-installed system update tools on cheap MediaTek devices, so no sideloading needed if that's the case.


I think the attack vector cannot be simply said to be "being able to run software of your choice on your device". We don't describe victims of emails with malicious attachments as having been compromised via the attack vector "sideloading" (or whatever the non-mobile equivalent of the word is). With this framing, it gets really easy to see sideloading as an evil that must be disallowed for our own good and we end up with devices that can do nothing that is not, with every update of every app again and again, judged to be allowable by an overseas vendor from another culture. What would be a better description though, something like installing software from a malicious source?


Android controls its app store better than email controls sideloading. I wouldn't recommend people sideload unless they're capable of auditing the packages they're loading; the equivalent recommendation in the desktop world is application allowlisting.

The general public has proven that they’re not capable of any type of sanity check to the point where I would call sideloading dangerous.


It is a matter of practical fact that only a small proportion of the Android-using public chooses to use the software of their choice by sideloading. Mentioning sideloading in this context shouldn't be interpreted as an attack upon or a disparagement of the practice of sideloading. I, as an Android user, am happy to learn that the (main?) vector of attack happens to be sideloading (if true). This informs the broader conversation and helps individual Android users gauge the likelihood of whether they may have been directly affected.


2. State actor. Otherwise you'd have to either a) hack every vendor independently for its most cherished keys, or b) find some intermediary that keeps every vendor's most cherished keys. The latter is feasible for a private actor but unlikely, the former is feasible for a state actor and we already know they do this kind of thing.


Makes sense -- but how was it discovered?


Pretty straightforward, honestly: if the signature on a piece of malware verifies against a corporation's certificate (i.e. it was made with that corporation's private key), then unless that corporation is a remarkably inept bad actor, that corporation and its signing key have been compromised.

Not saying it's how these were picked up, but that's the most obvious way.


I think you’d be surprised how many large companies have such poor control of their signing servers that anyone in the company with a valid login and engineering group membership can generate signatures for arbitrary artifacts.


> anyone in the company with a valid login and engineering group membership can generate signatures for arbitrary artifacts.

Trust me, I'm not at all surprised, but my point stands: it's either a compromise of the company or the key.


What would an actual secure workflow for signing artifacts look like?

I'm thinking: Final round of "code review" by security engineer on high-security single-purpose device, build artifact on that device, sign using hardware security module.

I put "code review" in scare quotes because code changes are potentially expensive at this point. For minor issues, turn to your standard workstation and file an issue for next release. For a major security problem, call off the release.


One guy named Jeff and his boss Clem have access to an offline PC with some signing software in a closet behind a badged door. Not TEMPEST secure unless they have govt contracts. For hardware stuff it might be at the factory.


Thanks, interesting.

I don't see how TEMPEST is relevant?


I mean, if you don't want state actors to steal your super secret keys, maybe secure the power lines and RFI


I searched on Google for RFI, not sure what it stands for?


I believe that RFI here is "RF interference." Here are links to learn more about the references above [1] [2].

[1] https://en.wikipedia.org/wiki/Tempest_(codename)

[2] https://en.wikipedia.org/wiki/Van_Eck_phreaking


Yep. It is pretty trivial to extract encryption keys via either control of voltage or detection of electromagnetic radiation emanating from a server


1: Hopefully the delay was so the device updates with time-based activation triggers would shut out the bad actor in as many places as possible at once.

2: I don't see any reason to share the master key with an OEM, the master key could be used to sign certificates down-chain but they should never share it.

This leaves 2 options:

- Google had waaay too sloppy key management (sitting on servers or even possibly developer laptops)

- Google had proper management by putting the keys on HSMs or some virtual HSM with multi-party activation; in that case, unless there were weaknesses in the HSMs (or the virtual HSM OS), some person(s) would've had to gain physical access to extract it.

3: Universal 2nd stage rootkit?


These aren't Google's keys. They are vendor-specific keys (e.g. Samsung's) used to sign their releases.


The bug doesn't mention any vendor; rather, it specifies "Partner-Multiple". Have you seen reports elsewhere that point to a particular vendor?


It's not hard to figure out which vendors are affected. Just search for the SHA256 hash of each malware sample on VirusTotal.

eg. https://www.virustotal.com/gui/file/b1f191b1ee463679c7c2fa7d...


Search Google for the certificate SHAs. Most don't turn up anything but this report, but one is for Samsung and another for LG.


Google isn't at fault, their certs weren't leaked (as far as we know).


Why would Google have these keys?


(posterity) Sure, the primary OS/bootloader key probably falls under vendors, but weren't many of the core systems (phone, Play, browser, etc.) moved to be under Google's control due to the vendors being sloppy with providing updates for devices in the olden days? (And as such, this key would be the thing that the OS/loader grants privileges to.)

Edit: Noticed the comment about them being low-end (Mediatek?) certificates.


My understanding is that with v3, Google started requiring that app developers send their private keys to Google as a requirement for inclusion in the Play Store (using the 'Play Encrypt Private Key (PEPK) tool').

In light of that I guess it wouldn't be shocking if there were similar requirements for vendor's platform keys.


This is about non-Google OEM OS signing keys.

v1/v2/v3 APK signing has nothing to do with the Play Store requiring Play Signing for newly published apps. App bundles / split APKs also don't inherently have to be used the way they're using them. Entirely possible to use them outside the Play Store with your own signing keys. v3 signing is just v2 with key rotation support. v2 is proper whole file signing instead of v1 which was just the flawed JAR signing system.


Platform keys are only required to be v2, as far as my understanding goes.


Yeah that's the minimum since Android 11 for system apps targeting API level 30+.


I wonder if any of these leaked keys were used to sign the base android installers for the phones themselves?

If so, this might be a way to get new versions of AOSP onto Samsung phones that are bootloader-locked and have no current support by Samsung. Can anyone experienced with Android+Samsung comment here?


That was my first thought after "oh fuck". Does this mean any current Samsung phone can now be rooted by the user?


Well, I think you could potentially sign any rooting app at least.

I'd like to see OS-level signing so we can take older devices like Samsung tablets and put Samsung-less Android on them.


I'm guessing it's different depending on model, but on my SM-X900 (US model Tab S8 Ultra), the key used to sign the vbmeta image (Android verified boot's root of trust) is different from every key used to sign the system apps.


Yeah that's my understanding as well


Why is this on the Chromium issue tracker?

Or is the same tracker used across all Google projects?


Monorail is a fork of Google Code, which was Google's public source control offering for a while.

Repos from Google Code were mostly transitioned onto more popular platforms (like GitHub). Chromium wanted to keep using the issue tracker, so they took it over to create Monorail:

https://chromium.googlesource.com/infra/infra/+/refs/heads/m...

Chrome is the biggest one, but many of Google's public-facing issue trackers are on Monorail (which is hosted at bugs.chromium.org).


It just happens to be where vulnerabilities are disclosed under the Android Partner Vulnerability Initiative: https://security.googleblog.com/2020/10/announcing-launch-of...


For various historical reasons a lot of the device-related security work is housed on Chrome infrastructure.


>Why is this on the Chromium issue tracker?

Others have already answered the historical reason, but a more direct and technical answer: this is not the Chromium issue tracker, since that would be: https://bugs.chromium.org/p/chromium/issues/list

This is the issue tracker for "apvi" (whatever that means); it just happens to be hosted on the "bugs.chromium.org" domain like a batch of others.


APVI is the Android Partner Vulnerability Initiative. It's a working group (?) that deals with security issues of third-party vendors.


Here's the real reason: nobody in a position to make it more consistent ever notices the inconsistency, because internally the use of short URL redirectors is ubiquitous. I can't be sure because I don't work there any more but I would be surprised if the canonical location of this issue for Googlers is something other than go/avpibug/100 or something similar to that.


What risks does this cause me as an android user? Does this mean my phone might accept auto updates that install malware onto my device?

What can I do to mitigate those risks?


No, you should be safe so long as you only accept updates to system apps through official app stores, like Google Play or the Galaxy Store. Do not sideload updates to system apps through third-party websites.


Malware on the Google Play store seems fairly common.

This article from Malwarebytes reported on a family of malicious apps with over 1M downloads: https://www.malwarebytes.com/blog/news/2022/11/malware-on-th...

This paper analyzed 1238 malicious apps grouped into 134 families: https://people.ece.ubc.ca/mjulia/publications/GooglePlayMalw...

An attacker who can steal these private keys can certainly get malware uploaded to the Google Play store; getting malware onto Google Play is way easier than stealing keys.

And if Samsung's private key is stolen, I would certainly not be inclined to trust their Galaxy Store.

Now is a great time to go through your phone & uninstall apps you don't use or don't trust -- especially bloatware from the compromised OEMs. If I understand other comments in this thread correctly, the stolen keys allow the thief to escalate privileges from "ability to issue an update for a random app you have installed" to "ability to root your device".


Android has a global user base of around 2.7 billion[0] and nearly 100,000 NEW apps are released on the platform every month[1]. Given these facts, it simply does not follow that a family of malicious apps with 1M downloads or an analysis of 1238 malicious apps demonstrates "Malware on the google Play store seems fairly common."

The working hypothesis (gathered from other comments here) is that the main vector of attack for this case may be restricted to the sideloading of system-level components (not end-user apps).

It's the end of the world as we know it, and I feel fine.

[0] https://sortatechy.com/android-users-are-there-worldwide/ [1] https://www.statista.com/statistics/1020956/android-app-rele...


>nearly 100,000 NEW apps are released on the platform every month[1]

Am I supposed to believe Google thoroughly vets all 100,000 of those apps?

My assumption is that any automated vetting system can be defeated by a serious attacker (the sort of attacker who can steal private keys). Just keep tweaking your malware until it gets past the filter.

>Given these facts, it simply does not follow that a family of malicious apps with 1M downloads or an analysis of 1238 malicious apps demonstrates "Malware on the google Play store seems fairly common."

Not sure 100K is the right denominator here -- how many of those 100K receive any attention at all by security researchers? The numbers I quoted appear to demonstrate that when security researchers look for this stuff, it isn't hard to find.

>The working hypothesis (gathered from other comments here) is that the main vector of attack for this case may be restricted to the sideloading system-level components (not end-user apps).

Check out this article: https://www.pcmag.com/news/study-reveals-googles-play-store-...

If 67% of unwanted app installs originate via the Play store, wouldn't it be most natural for attackers looking to exploit a stolen private key to take that most common route?

An attacker who can steal multiple private keys from large multinationals can also get inside the software supply chain for your favorite fart app.

>It's the end of the world as we know it, and I feel fine.

If you're writing software that people use, you have a special obligation to take security seriously. An attacker who gains root access to your phone could e.g. sniff passwords and steal 2FA codes, use them to log into Github/AWS, and do a ton of damage to people who are depending on you.


You missed my point about the number of new apps published per-month. I was not making any particular claims about how well they are each vetted, I was rather contrasting that to the "1238 malicious apps" you had referenced. Now, the perfect number of malicious apps in an official app store would certainly be zero, but sadly, we don't live in a perfect world.

What world do we live in, then? We live in a world where, even if 1238 new malicious apps were released _per month_, that would still represent fewer than 1% of all apps released. Yet, according to the report you mentioned, the 1238 malicious apps were published between 2016 and 2020. Rough math, then, gives us 1238 malicious apps out of 6M apps over that period: 0.02% of all apps. Not perfect, for sure. But I'm willing to live in that world. By the way, I'm not fixated on the 1238 number as if that's all the malware that existed during that time. But on the other hand, the writers of that report took their time and did the best they could to find as many malicious apps as possible for their research. So we have no evidence the true number was significantly higher.

>An attacker who gains root access to your phone could e.g. [do terrible things]

This is true, yet you still miss the point. If "malware on the Google Play store seems fairly common" as you claim, and malicious apps are therefore commonly ripping off passwords, 2FA codes, etc., where is the avalanche of tragic end-user stories we would be compelled to expect, by the laws of mathematics? 0.02% of the 2.7B Android users is still 55 MILLION end users. Yet nowhere near 55M, or 5M, or even 50,000 end users have had remotely catastrophic outcomes due to malicious apps hosted on the Google Play Store in any given sliding 5-year window (such as the one studied in the report).

Anyway, I've been generous with my words here since I sense you are sincere. But only a very few words are needed to reinforce my original point: your quoted numbers do not support the assertion that malware is fairly common on the Google Play Store.


From the description in the bug "Any other application signed with the same certificate can declare that it wants to run with the same user id, giving it the same level of access to the Android operating system.", so this applies to all app installations and updates.


Presumably the Play Store will reject any app signed with these keys. I'd appreciate them making a statement to this effect. And someone double checking.


One would certainly hope. But until we know how these private keys were stolen, I would be suspicious of any updates signed by the affected organizations. If they just rotate the key without improving their security, the new key could get stolen too.


Yeah, I'd be surprised if Google Play wasn't already on the lookout to flag newly uploaded apps signed with these keys.


Honestly, if it's true that the Android security model allows an OEM-signed app to escalate privileges, Google should always be monitoring OEM-signed apps on the Play store very carefully. There shouldn't be more than a few hundred of them. Can't be that hard.

And when installing a OEM-signed app, instead of the normal permissions dialogue, there should be a giant red warning that says the app can basically root your device. If the OEMs don't like it, they can set up a second key pair with no privilege escalation capability to sign updates for most of their apps (the ones that don't need elevated privileges).


Another idea: Before installing an OEM-signed app, send the hash to Google and check if it's on an allowlist.


Easier than that. Only the vendor's publisher account should be allowed to upload apps signed with the vendor's keys. Implement the same policy for everybody, so that if one publisher uploads a package signed with the same keys as another, the original publisher gets notified.


Or F-Droid, or other stores you trust. It's about trusted sources, not it originating from Google specifically, right? Theoretically that includes third-party websites, if you can be sure that the download wasn't compromised in transit, that the server isn't compromised, that the uploader is benign...


Any new app can get system privileges if it's signed by a platform key. This doesn't just apply to updates of existing apps.

I'd go one step further, don't install any apps from third party websites. Also think very critically about trusted app stores, make sure you know their signature policy.


I have YouTube vanced. Should I remove it?


What's the blast radius of this? Are only specific models of phones affected (if so, which?), or does this impact entire brands or the whole ecosystem?


I'm no expert but judging from what seem to be informed comments in this topic, the main attack vector might be restricted to the side-loading not of regular apps, but of system-level components. If that's true, the blast radius would be fairly small indeed (though still not good in those cases).


It's isolated to devices from the specific brands whose platform certs were leaked and are still being used to sign apps.


Here's a great example of the pitfalls of forgoing real security in favor of centralized "security" that exists to secure platform owners' bottom lines and to allow them to decide what can and can't run on devices.


One interesting thing is that 2abeb8565c30a3d2af0fc8ea48af78a702a1c3b432845e3f04873610fac09e1b was mentioned in the original description (revision 1) but removed in revisions 2/3.

That package is signed with an Android key instead of a vendor one...

https://www.virustotal.com/gui/file/2abeb8565c30a3d2af0fc8ea...


I doubt that is a real Google-issued certificate, based on the issuing CA certificate having an e-mail address identified as "nbbsw@nbbsw.com". That's likely the reason this sample was removed from the post?


Same thought here. The domain appears to be associated with Ningbo Sunning Software, a Chinese vendor and more likely a Mediatek partner than anything to do with Android itself.


Good catch. After looking into it more, this may be related to subcontractors providing vendors with malicious update tools that they then sign.

https://maldroid.github.io/docs/vb_2022.pdf


Good catch!


So Google's private key was stolen?


I'm not sure of the validity of the key, it's possible it was simply an untrusted key that had "Android" in the common name and got removed from the post as it was unrelated, but I'm curious if any more details surface on that one.


I'd love to have this key for my device so I can actually have full ownership of it.


These aren't the secure boot signing certificates. If your bootloader is locked this isn't going to let you replace the vendor firmware.


I would be able to use this key to sign terminal or file manager apps and get full access to my file system.


So would anyone else. Your filesystem, that is.


Only if he installs the "anyone else"'s app explicitly though, no?


Only if I sideload their apps.


So this was disclosed November 11 (edit: or maybe May 13 as per green text?) and became public yesterday November 30. Leaves little time for Android devices to get the new key no?


This isn't the kind of security issue where disclosing it allows new attackers to start exploiting it - because new attackers haven't compromised the keys.

So there's little reason to keep the compromise secret, except to let your partners save face.


>Leaves little time for Android devices to get the new key no?

It's not so simple; rotating the keys could also affect how devices get app updates, depending on whether or not an app has a V3 signature. The V3 signature scheme supports key rotation, older schemes do not. OEMs are not required to sign system apps with V3 signatures. The minimum signature scheme version for apps targeting API level 30+ on the system partition is V2.

Affected OEMs can still rotate the cert used to sign their system apps that have V2 signatures and then push an OTA update to deliver the updated apps. Then they can push app updates with that new cert, but devices that haven't received OTAs won't receive those app updates.


If May 13, which would make more sense, then there have been several Android security patches since. Devices no longer supported are toast, though.


I think for the industry as a whole there should be some penalty or disincentive for going "we're no longer providing updates" and then "we lost the key that gives full access to the system".


Root for everyone? Maybe that's a good thing, finally restoring the balance of power from the megacorps to the actual owners.


Have any private keys actually been made public? I don't see any evidence of that. The only claim I see here is that they're being used to sign malware.


Do you realize that it's mandated by Google that a phone sold under the name "Android" has to be rootable?


Why can't I root a Qualcomm Samsung phone for the US market? What about carrier locked phones? Et cetera.


There seems to be some confusion about this, so I'll just copy/paste what I wrote on Twitter:

>Folks, this is bad. Very, very bad. Hackers and/or malicious insiders have leaked the platform certificates of several vendors. These are used to sign system apps on Android builds, including the "android" app itself. These certs are being used to sign malicious Android apps!

>Why is that a problem? Well, it lets malicious apps opt into Android's shared user ID mechanism and run with the same highly privileged user ID as "android" - android.uid.system. Basically, they have the same authority/level of access as the Android OS process!

>Here's a short summary of [shared UID](https://blog.esper.io/android-13-deep-dive/#shared_uid_migra...), from my Android 13 deep dive.

>The post on the Android Partner Vulnerability Initiative issue tracker shared SHA256 hashes of the platform signing certificates and correctly signed malware using those certificates. Thanks to sites like VirusTotal and APKMirror, it's trivial to see who is affected...

>So, for example, this malware sample: https://virustotal.com/gui/file/b1f191b1ee463679c7c2fa7db5a2...

>scroll down to the certificate subject/issuer, and whose name do you see? The biggest Android OEM on the planet? Yeah, yikes.

>Go to APKMirror and just search for the SHA256 hash of the corresponding platform signing certificate... https://apkmirror.com/?post_type=app_release&searchtype=apk&...

>Yeah, this certificate is still being used to sign apps.

>That's just one example. [There are others at risk, too.](https://twitter.com/mszustak/status/1598406354464829440)

>In any case, Google recommends that affected parties should rotate the platform certificate, conduct an investigation into how this leak happened, and minimize the number of apps signed with the platform certificate, so that future leaks won't be as devastating.

>Okay, so what are the immediate implications/takeaways for users?

>- You can't trust that an app has been signed by the legitimate vendor/OEM if their platform certificate was leaked. Do not sideload those apps from third-party sites/outside of Google Play or trusted OEM store.

>- This may affect updates to apps that are delivered through app stores if the OEM rotates the signing key, depending on whether or not that app has a V3 signature. V3 signature scheme supports key rotation, older schemes do not.

>OEMs are not required to sign system apps with V3 signatures. The minimum signature scheme version for apps targeting API level 30+ on the system partition is V2. You can check the signature scheme using the apksigner tool: https://developer.android.com/studio/command-line/apksigner

>Affected OEMs can still rotate the cert used to sign their system apps that have V2 signatures and then push an OTA update to deliver the updated apps. Then they can push app updates with that new cert, but devices that haven't received OTAs won't receive those app updates.
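
For the apksigner check mentioned above, something like this works (a sketch; it assumes apksigner from the Android build tools is on your PATH and just dumps the human-readable report, which indicates whether v1/v2/v3 signatures verified):

    import subprocess

    def signature_report(apk_path: str) -> str:
        """Run apksigner and return its verbose verification output."""
        result = subprocess.run(
            ["apksigner", "verify", "--verbose", "--print-certs", apk_path],
            capture_output=True, text=True, check=False,
        )
        return result.stdout + result.stderr

    print(signature_report("some_system_app.apk"))  # filename is hypothetical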


Any idea if signing up for Google's Advanced Protection program would mitigate/prevent potential attacks from this security issue?

My understanding is that signing up for this program blocks the usual methods of installing sideloaded apps (you can't install an app's apk file from your phone's local storage), and instead requires you to physically connect your Android phone to an external computer and use the adb CLI tool to sideload apps that are not on the Google Play store.

https://landing.google.com/advancedprotection/


If you're speaking from the perspective of an enterprise making recommendations, yes that'd be an option. As a user, though, you could just avoid sideloading.


Just trying to think if there are any other potential immediate recommendations for non-technical friends and family with Android phones from these vendors other than "don't sideload any apps" and "make sure to install any security updates as soon as they're available".


A possible way this occurred was through a hacker compromising a bunch of OEMs like Samsung and LG.

If that's your threat model, "don't sideload" seems insufficient as a response. A hacker who's able to steal the private keys of Samsung and LG (the "crown jewels") may also be able to replace the official apps they upload to the Play store with apps that contain malware.

Plus if I understand other comments correctly, a stolen key allows the thief to privilege escalate from "ability to issue an update for a fart app on your phone" to "ability to root your device".

So if you're serious about security, I would uninstall apps very aggressively, especially apps from the affected OEMs. You can fool around with fart apps on a separate device if you want.


Google “recommends” - that gives off strong IBM vibes. No longer very end-user centric, but business-customer centric, if that makes sense.


What should they do?


> Do not sideload those apps from third-party sites/outside of Google Play or trusted OEM store.

Is that you Messrs. Pichai and Cook?

So you're saying this "security" isn't just a smokescreen to lock us into Google and Apple App Stores?


What is the risk here? Presumably Google Play would not accept an apk that was signed by "android", right? Assuming they do reject such apks, the only risk would be installing apks from alternative sources, like F-Droid, which should probably check for strange signings too, IMHO.

Or there is "something else", some sort of bullshit that goes over the baseband, where updates cannot be refused (other than by putting your device in airplane mode, or off), and now, because the "platforms" couldn't protect their private keys from malefactors, kids with SDRs are going to effortlessly pwn people's phones by pushing their own software over a possible, nay probable, proprietary baseband channel.

(Tangential scifi rant: And then you add the risk of manufacturer shenanigans at the PCB or chip-image level, and you can't really do due diligence on chips without your own electron microscope (and possibly not even then). I've had this worried thought with software, too, that there is too much complexity to really understand it. In the same way our computer hardware is actually too hard for any one person to understand "completely" - that is, possess the skill-set of every individual contributor on the engineering team of a company that makes smartphones.)


What is "signed by Android"? The leaked Samsung and LG keys are used in dozens of apps. Someone could in theory upload apps but claimed by Samsung on the Play Store, sign them with those keys, and compromise millions of users whose Play Store would suddenly update to the unofficial APKs (assuming that they get around Google's own Play Protect).


I assume (hope) that Google is smart enough to check for these vendor signatures, and possibly tie them to a specific vendor account.

With the switch to aab app bundles (https://android-developers.googleblog.com/2020/11/new-androi...), it's effectively impossible for most people to upload custom APKs to the Play Store anyway. I'm sure there are backdoor and special treatment for vendor applications to keep their signatures.

In fact, with the exception of private applications, Google requires you to hand over your signing keys when you upload to the Play Store. I doubt you'd get away with just uploading vendor keys to the Play Store console.


Where do you see that Google requires you to hand over the signing keys? That's one of the options, but you can also sign and then upload a signed apk.


https://developer.android.com/studio/publish/app-signing#enr...

Since 2021, GPlay requires you to use Play Signing. You can have Google generate the keys or you can upload your own keys, but the private key ends up with Google.

You can create a key pair for uploading artifacts (the "upload key") for which you only need to upload the public key, but the signing keys need to end up over at Google.

Older apps (uploaded before August 2021) are exempt, though. Apps distributed through other channels (F-Droid, Amazon App Store, etc.) are also exempt, of course.


Ah, so having to upload the key is new as of Aug 2021. If Google is indeed smart about it, they'd have blocked these compromised keys from being used by developers other than the whitelisted ones.


This is confidence-inspiring, considering that Google now requires letting them sign your apps for some features.

> OEMs have mitigated the issues above in previous updates.

Yeah, sure they have. Because OEMs are so good at updates. Would be interesting to see how many users such updates are available to. My guess is between 20 to 40%.


In my opinion, mobile authentication should be performed along with the registered mobile's IMEI number + SIM unique number or the telecom provider's unique token number. This can be easily done by Microsoft Intune. This would be one additional layer of protection for your mobile auth.


So phones are heavily locked down by manufacturers for our own security, but apparently they cannot secure their own keys. If anyone outside a few million nerds really understood this and saw all the vulnerabilities we have seen over the past few years, locking users out of their devices would be completely indefensible.


Are there any VirusTotal employees here who can help figure out the answer to this so that we can find all the affected APKs? https://twitter.com/ArtemR/status/1598444589269909504


The title of the post is super scary but I have no idea what it means. There is no description about it in the linked page. Reading through the comments doesn’t help either. I would just wait for a proper write up.


OEMs have a key they can use to sign apps. Some of those keys leaked. This was discovered when malware signed with those keys showed up in the wild. This is bad because apps signed with those keys have the "system" authority, which means they bypass many permission checks.


Based on what I’ve read over the past year about security issues, it’s coming into focus how some personal data of mine may have gotten leaked: it was never safe to begin with.


Is this like Google's central "root key"? Or does each Android OS distributor, e.g. Samsung, LineageOS, etc., have their own certificate?


The public evidence doesn't seem to suggest that any Google-owned keys are affected. Platform certificates are owned by the vendor.


I expect keys to be per software vendor, because otherwise LineageOS and pretty much all small device manufacturers would not be able to sign their Android kernel (required for pretty much anything secure, like Google Pay or SafetyNet), and giving them a central key would be a significant security risk compared to, say, Samsung. That said, the private key for the CA could be compromised, and in that case everything not up to date is toast.


>That said the private key for the CA could be compromised and in that case everything not up to date is toast

I don't follow. Do different Android distributors make use of the same CA? If so why? I don't even see why they would need to make use of a CA if their public key is shipped with the device.


These platform certificates are all from different vendors, but the problem is that one of the leaked certs appears to be from the biggest Android vendor...Samsung. And that cert is seemingly still being used by them to sign apps.


There are different certificates for each vendor, therefore there are multiple hashes listed in the Bugtracker: https://bugs.chromium.org/p/apvi/issues/detail?id=100


Well that's a bummer


Interesting for whoever is curious:

> Fri, Nov 11, 2022 at 8:01 AM PST (20 days ago)

So it took 20 days to disclose.


The tags in the left sidebar have "Reported-2022-May-13"


Dang


I had speculated for a while that Secure Boot, Widevine, Trusted Computing, all of it seems like they have some pretty serious central points of failure. So much so, that it would be a modern heist of the century if they were stolen.

If someone (for example) got Apple's iOS signing key and Apple's HTTPS certificate, Apple could suffer catastrophic damage. If someone got the PlayStation 5 signing key or the Xbox One signing key, catastrophic damage there. In a way, it's a beautiful, super-secure house... built on a single ludicrously powerful point of failure. Good thing we don't have any corrupt government agencies who might want to bribe someone for keys... yet... hopefully...

This is actually something I would fear for the future. There have been countless physical heists - most famously in Antwerp, Belgium, where over $100 million in diamonds were stolen in 2003. We haven't had a major signing key stolen yet, but there's always that first day... if you can't keep $100M in diamonds safe, can you really be sure that you can keep a hardware signing key safe forever? Heck, the logistics of stealing the diamonds were insane - but stealing a key only takes a pencil and a piece of paper.


I would also like to add to this the recent paper documenting the attacks on the 4K Blu-ray ecosystem for another reason this could happen.

It was actually really secure, until... PowerDVD. It included the AACS 2.0 copy-protection scheme as an encrypted, obfuscated DLL. Which would've been fine - except they were accidentally shipping an unencrypted, unobfuscated DLL containing all the algorithms alongside it. Big oops there. As for the key material itself - it used SGX [a secure enclave for Intel CPUs], but it turns out it didn't check for known old, insecure, hacked SGX firmware; any SGX firmware would do, despite Intel strongly warning that isn't safe. So for pirates, inadvertently unencrypted DLL + bad SGX provisioning = ripping movies ~1 year after release despite completely new DRM.

There's always the possibility for stupidity to leak a key.


One engineer's "stupidity" is another engineer's "plausible deniability". ;)


A passing thank you to gjsman-1000, as his words resurfaced some history. As one with many years behind the curtain, I recall an old trick that many glossed over, since only 'advanced developers' long ago understood the impact: compiling public releases as debug. Many a software vendor would fail to strip the debug symbols from their code, and so the games began. Using SoftICE and OllyDbg along with one's favorite hex editor capable of sprinkling a few nop nops where needed, and boom. The really bad ones were the companies that failed to comprehend what the .pdb file was and shipped it with releases in debug mode - OOPS, there goes the IP.


Similar to how Bethesda shipped the DRM-free version of Doom Eternal alongside the download for the DRM version.


AACS is designed to expect that some keys will leak, so they have a revocation mechanism for new titles.


Right... but here's the hilarious (for a pirate) bit.

AACS knew player keys would leak and hardened their system against it... but they didn't realize that BDXL readers, which existed before 4K Blu-ray was released, could actually read 4K discs with modified firmware. Also, said BDXL readers had no Secure Boot and could be easily modified.

With a modified player (which they didn't expect), you can read the data directly from the disc, which is (at the end of the encryption chain) encrypted with a per-disc key that is unchangeable. Normally this wouldn't matter because you can't read the disc directly - but with the modified player, you can. The player key goes through a long, complex process to end up with that unchangeable, unrevokable, unique disc key. The pirates' solution? Keep the drive keys they discover secret, and just generate the disc-unique keys and post them online for every disc. Then, with the modified drives, every disc in the list is permanently broken, and AACS LA has no idea which drive key is leaked and can't revoke it.

And so here we are... AACS LA doesn't know which drive key the pirates are using, and when disc keys are generated, the disc is cracked forever. Worst of all worlds for AACS...


>but stealing a key only takes a pencil and a piece of paper.

You need to have a serious discussion with your HSM vendor if that is the case.


In 2019, Ledger investigated their HSMs, and got it to the point where they achieved a persistent backdoor. On the HSM. Because the HSM they targeted didn’t actually have working Secure Boot(!). It was also FIPS-certified if that comforts you.

https://i.blackhat.com/USA-19/Thursday/us-19-Campana-Everybo...

Better hope Apple didn’t use that one - and that they regularly apply updates… odds of Apple trucking along with a several-year-old HSM don’t seem that low…


"only takes a pencil and a piece of paper"

The attacker:
- controls the host
- can communicate with the PCI card


I wasn't saying you could just steal it as though it was on paper - you'd need to run terminal commands on even the least-secured system. I'm more talking about how, if you were an insider at a company (or managed to sneak in), you could potentially attack the HSM on-premises and dump the key, then smuggle it out with pencil and paper instead of having to sneak out the entire HSM (which would be quite bulky, likely to be detected as stolen)...

If Apple was using that HSM model (which is only FIPS 140-2 Level 3 - still bad, and maybe not as secure as the Level 4 device they would use), an employee could potentially acquire a similar model at home, discover the vulnerabilities, and then exploit it on-premises under the guise of a firmware update or maintenance. After that, jot the key down on an index card and put the HSM back like nothing happened. Sleep on it for 3 years so nobody remembers, and then wreak havoc.


OK, but even then ... an offline root CA should be powered down and locked inside a safe. An online CA should be inside a data-center that has specific staff with badge access, audit trails, surveillance video and probably be in its own separately locked cage.


I don't believe they do.


Getting to it might be very, very hard. But once it's in your hand, you've got it. Diamonds can be found as you're leaving the building, or you can be stopped at the airport, or the big pile in your house can be found, etc.

A signing key is like, 4096 bits at most? 4 sms messages will do it.
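
Back of the envelope, taking the "4096 bits" at face value (an exported RSA private key blob is larger since it carries p, q, d, etc., but the raw secret is this size):

    import math

    key_bytes = 4096 // 8        # 512 bytes for a raw 4096-bit key
    sms_payload_bytes = 140      # max user data in a single 8-bit-coded SMS
    print(math.ceil(key_bytes / sms_payload_bytes))  # 4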


So the plan is to ... what ... break the physical security precautions of the HSM during a burglary and then leave without the operator noticing? When I say "break the physical security precautions", that typically involves things like grinding open the casing, depotting the innards, avoiding triggering the self-zeroing mechanism, etc., and doing so while avoiding the anti-tamper sensors.


The threat is more of an insider one, and the human element is always the weakest link in the chain.


If the key is located only in the HSM, and there are no known flaws in the HSM, yep, that's a hard nut to crack, I'm not going to claim I could do it. But given enough resources, with time and planning and insiders? Take a look at https://cryptosense.com/blog/how-ledger-hacked-an-hsm - and that's a remote attack.


Or have a corporate spy join the security team 3 years prior, and do just one subtle thing?

The Googles and Apples of the world are probably safe against even their own people, but do you trust Tracfone?


No hand waving: what “one subtle thing” will this insider do to make this attack possible? I’m guessing FIPS 140-2 has already thought about it.

HSMs are hardened against individual bad actors. Their threat model envisages the presence of nation state actors.

Is it possible that an HSM attack happened here? I wouldn’t bet on it.


I think their point is that it's easier because you don't need to do the "leave without the operator noticing" that you need to for physical items. Once you have access to those 4096 bits you don't have to escape with them, you just need to transmit 4 sms messages.


An HSM is going to make that difficult. The whole point of an HSM is that you can’t get the key from the HSM, you can only take the entire HSM with you.

Of course it is possible to read a key from an HSM, it’s just designed to be incredibly difficult. I don’t think I would be able to do it, at all, even if I could bring the HSM home with me for a couple weeks.


And in the case of the HSMs I've dealt with, just taking it home is enough to render it useless without some incredibly specific and difficult precautions.


Are you familiar with the threat model an HSM is designed to counter?


Are they though? I doubt that Apple, or Xbox, or PlayStation, use HSMs for storing their most sensitive keys. If something goes wrong with the HSM, can you recover the keys, or did you just bork your update system forever? In which case, does the key exist outside the HSM? If it only exists on multiple HSMs, how is it protected in transit between them, and is that vulnerable?


All this, and more, have been successfully implemented many times. Watch the videos of the DNSSEC key ceremonies to get a bit of an idea about how deeply thought out these processes around hardware roots of trust work.

Unfortunately, the HSM vendors are generally just as shifty as a typical SaaS vendor, so don't believe their promises that the secrets are inviolable by someone who can get uninterrupted time alone with a device.

Realistically, any decent shop manages all of these risks pretty well, but there's a lot of rigmarole involved in doing it robustly.


That's why your HSM supports, for example, the ability for an appropriately entitled crypto officer to export the private key onto multiple different smart cards for separate custody, with M of N required to restore the key, and all that other good stuff.
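For anyone unfamiliar with the "M of N" idea, here's a toy sketch of the underlying scheme (Shamir secret sharing) in Python. Purely illustrative: real HSMs implement the split in hardware, keep the shares wrapped on PIN-protected smart cards, and never expose raw key material like this.

  import secrets

  PRIME = 2**521 - 1  # field modulus; the secret must be smaller than this

  def split(secret: int, n: int, m: int):
      """Split `secret` into n shares; any m of them can reconstruct it."""
      coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(m - 1)]
      shares = []
      for x in range(1, n + 1):
          y = 0
          for c in reversed(coeffs):      # Horner evaluation of the random polynomial
              y = (y * x + c) % PRIME
          shares.append((x, y))
      return shares

  def combine(shares):
      """Recover the secret from m (or more) shares via Lagrange interpolation at x=0."""
      secret = 0
      for i, (xi, yi) in enumerate(shares):
          num, den = 1, 1
          for j, (xj, _) in enumerate(shares):
              if i != j:
                  num = (num * -xj) % PRIME
                  den = (den * (xi - xj)) % PRIME
          secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME  # needs Python 3.8+
      return secret

  key = secrets.randbits(256)    # stand-in for the real signing key material
  shares = split(key, n=5, m=3)  # e.g. five custodians, any three suffice
  assert combine(shares[:3]) == key
  assert combine([shares[0], shares[2], shares[4]]) == key

Each custodian's card holds one (x, y) pair; any two cards on their own reveal nothing about the key.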


Right... but at the same time, does that mean that if I break into the homes of Phil Schiller, Tim Cook, Craig Federighi, Greg Joswiak, Jeff Williams, and Johny Srouji and steal their keys, I could reassemble the key that would normally be stored on an HSM in Apple Park?

In some ways attacking the homes might be easier than attacking Apple Park. (Just thinking out loud...)


Assuming that you also can get those people to disclose the eight-digit PINs that were required by the crypto officer at the time the custodians enrolled their smart cards?


Getting the pins would be the easiest part.


If you think attacking the homes of billionaires is easy or novel, I suggest that perhaps you have not considered that you are not the first person with that idea.

Private homes of the ultra wealthy and powerful are usually protected in non-obvious ways that most houses on most streets are not, to put it mildly. It turns out that machine guns are legal for private citizens (such as a 24/7 armed private security team), even in California, if you are wealthy and connected enough.


Yes... and no. Worth remembering that, according to the Steve Jobs biography, he had no security at his suburban home, left the back door unlocked every night, and all the neighbors knew that's where Steve Jobs lived. (Bill Gates once visited and remarked, "You live here? All of you?") On the other hand, for an attacker, this may have been an obvious sign that Steve Jobs didn't keep anything useful at home.

But assuming that's not the case now, my point still stands about how it would be maybe easier than attacking Apple Park, relatively speaking. Though, according to Apple's WWDC videos, invading Apple Park to swap an M1 between a MacBook and an iPad is easy when you are the CEO and wearing a rubber mask ;)


I doubt that they are entrusted with keys.


I've heard that Xbox used HSMs for the signing keys way back in the days of the original Xbox. I'm not sure why they'd have stopped.

And if Xbox was doing that 20 years ago, I'd expect Apple to be as well.

It'd make sense for Sony, but some of their approaches to software security have been lol inducing, so it wouldn't shock me if they aren't.


I know that at least some of those companies use HSMs. Multiple, actually.


This is why they make backup HSMs.


But if I understand it right, it was Google that screwed up this one. I always thought they weren't taking their responsibility as an OS vendor seriously, and that they should have taken this horse behind the barn and shot it years ago.


Given what we know so far, there's no indication that Google is associated with this at all.


If I was a betting man: some smaller fish in the Android ecosystem practiced exquisitely terrible key management outside an HSM, got burnt, and will have to slip KPMG or Deloitte an extra big kickback if they ever want to see a clean SOC 2 again.


A theoretical piece of paper and pen


> We haven't had a major signing key stolen yet

The DigiNotar case seems fairly close ...

https://en.wikipedia.org/wiki/DigiNotar

https://darknetdiaries.com/transcript/3/


There’s also the HDCP key getting leaked.

https://www.themarysue.com/hdcp-master-key-confirmed-blu-ray...


This summarizes a thought I had for a while.

Muggings and robberies happen every single day, probably by the millions.

Hacks also happen, although not that frequently.

Now, what would happen if you combined both? Say a rogue criminal group threatens/kidnaps/holds a sysadmin at gunpoint and forces them to export all the private keys, the keys to the kingdom.

It would be like the explosive mints in Mission Impossible, where if you make the two ends touch, BOOM!

Make the hacking world and the physical mugging/robbing worlds intersect and you probably got an extremely dangerous combination, a nuclear bomb waiting to explode.


This is a well-known, and I'd imagine well-used, technique.

see https://en.wikipedia.org/wiki/Rubber-hose_cryptanalysis


> Make the hacking world and the physical mugging/robbing worlds intersect and you probably got an extremely dangerous combination, a nuclear bomb waiting to explode.

This is easily within the capabilities of any state level actor. And unlike a nuclear bomb, it can be kept out of the news. I’d wager it’s already been done.


I haven't heard the government complain about encryption since the early 2000s. I'd wager you're correct.


I think you're describing the plot of Firewall with Harrison Ford.


And you know nobody actually has their high-value signing key protected by a series of complex offline vaults and checks and balances like you'd see in Ocean's 11 - at best it's on the other side of a room on an air gapped computer.


The IANA key ceremony is pretty close to best practice: https://kimdavies.com/key-ceremony-primer


This is the second time today that I've seen reference to a "key ceremony", which I hadn't heard of before. Sure I expected root key holders to have some kind of formality around key management, but not a 5 hour event live-streamed on youtube! https://www.iana.org/dnssec/ceremonies/45


Right... because the more you lock the key down and try to secure it, the greater the risk that something in your security process goes wrong and you lose the key yourself. Losing it is not as bad as having it stolen, though still catastrophic from a corporate perspective. Imagine if Apple couldn't distribute a software update ever again. Much better not to invest in super-strong security that carries that risk... but then you have an increased risk of theft...

It's almost like cryptographic signing keys are the modern day Ring of Power...



The Simpsons got it right: https://youtu.be/eU2Or5rCN_Y


That matches my experience at an unnamed large multinational datacentre company.

To get to the floor, you needed to go through multiple layers of security, ID checks, etc.

The cleaner had left their mop wedged in one of the secure doors, bypassing most of it.


That's not strictly true. At one point, my desk was next to the room that held the vault for one particular signing key. You'd have to get through the building security, through a room guarded by one access control mechanism, and then into a vault secured by a second mechanism. It wasn't guards and guns, but it also seemed sufficient for the task at hand.


For a many-million-dollar key, a few armed robbers would solve this problem. Or, to paraphrase xkcd: get the $5 wrench.


Depends on if you are concerned mostly about covert access or overt access. I'd argue the former is quite a bit more serious in the case that keys can be revoked online.


Depends how long they need it for.

"Hi, we have your family hostage for a week and if you tell anyone we have this key we'll ship you back parts"

More than enough time for a nation state backed actor to spread a lot of damage.


Yeah covert is the main issue - you need to be sure that dgacmu didn’t wander into the office one door over and grab a copy of the keys.

Some sort of read-only memory that logged how many times it had been read might help.


more like sleeper cell agents


“But the plans were on display…”

“On display? I eventually had to go down to the cellar to find them.”

“That’s the display department.”

“With a flashlight.”

“Ah, well, the lights had probably gone.”

“So had the stairs.”

“But look, you found the notice, didn’t you?”

“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.”


That's kind of how I feel trying to access the text of this article. All I can see when I look at the page or view source is a bunch of executable code. You have to do quite a bit to be able to read this security warning.


The places that care store them in HSMs which few people have physical access to.


> If someone (for example) got Apple's iOS signing key and Apple's HTTPS certificate, Apple could suffer catastrophic damage.

...people could see what's going on on their phone. I think this could be a good thing.


Can we call this Death Star security? Invincible quintillion-dollar project, unless you hit the tiny trash vent at the back of canyon 35.


> recently in Antwerp, Belgium, where over $100 million in diamonds were stolen in 2003

One of my favourite books I’ve read was about this heist

https://www.goodreads.com/en/book/show/7071759-flawless


Given how much modern hardware is on check-for-updates-online mode by default, how long would a stolen signing key be valuable? I'd imagine the OEM would respond to the theft by cutting new keys and pushing an update to supplant the old key. There'd be some details to hammer out (retroactively declaring the old key untrustworthy past date of theft, allow-listing a bunch of software and peripherals that couldn't be updated to accept if they matched key + checksum), but it feels like stealing such keys would have a very short window of utility.

To be clear, this is not to imply resolving the theft is trivial... Some engineers will have a bad week. It's more that it's solvable from the aggrieved party's side in a way that a diamond theft isn't.


If you stole a hardware key (which is what I'm speaking of), you've just smashed Secure Boot on all the devices that use it, forever. You can boot whatever software you want on your PlayStation, or Xbox, or Windows Secure-Boot-enabled PC, or iPad, or similar. Every device with that key has had its locks smashed and there's no going back. Once you've done that, downgrading firmware, modifying firmware, undoing software updates, it all becomes a lot easier.

Bonus points if you stole a Windows signing key and a Windows Update CDN Certificate. Send out fake Windows updates, approved by Microsoft, to do whatever you want. Maybe even force them to use your own middleman update server and only recognize your own keys. Fun stuff.

Got the Apple signing key? Revoke any app you like and have it instantly lock out from millions of iPhones. Break every app for everybody on iPhone and give Apple employees and users a miserable week and cause a political panic. Sky's the limit.


UEFI Secure Boot signing keys can be updated at runtime, and old keys can be explicitly marked as untrusted. Validation of key updates is performed by the firmware, not the OS, so this can't be trivially subverted.
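A toy model of how that trust database works, just to make this concrete: db is the allow-list of signers, dbx is the deny-list (and wins on conflict), and updates to either must be authorized by a key already enrolled in the KEK. All names below are made up; real firmware stores X.509 certificates and signature lists, not strings.

  import hashlib

  def fp(cert: bytes) -> str:
      return hashlib.sha256(cert).hexdigest()

  oem_cert_2020 = b"OEM signing cert, 2020"   # stand-ins for real certificates
  oem_cert_2023 = b"OEM signing cert, 2023"
  oem_kek = b"OEM key exchange key"

  db, dbx, kek = {fp(oem_cert_2020)}, set(), {fp(oem_kek)}

  def may_boot(signer_cert: bytes) -> bool:
      f = fp(signer_cert)
      return f in db and f not in dbx          # a dbx (revocation) entry always wins

  def apply_update(author: bytes, revoke=(), enroll=()) -> bool:
      if fp(author) not in kek:                # firmware rejects unauthorized updates
          return False
      dbx.update(fp(c) for c in revoke)
      db.update(fp(c) for c in enroll)
      return True

  assert may_boot(oem_cert_2020)
  # The old cert leaks, so the OEM pushes a KEK-authorized update that revokes it
  # and enrolls a replacement. Anything signed only by the leaked cert stops booting.
  apply_update(oem_kek, revoke=[oem_cert_2020], enroll=[oem_cert_2023])
  assert not may_boot(oem_cert_2020) and may_boot(oem_cert_2023)

The catch, discussed further down the thread, is that many non-PC devices fuse their root of trust into silicon and have no equivalent of db/dbx, so there is nothing to rotate.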


Right, agree. What I'm asking is: why can't the OEM over-the-air-update Secure Boot on all the online devices? Is there no channel to re-key Secure Boot from an online source?

I agree that anyone who wants to muck about their device on purpose could just not connect to the Internet, but I'm focused on protecting the users who don't want to be victims of hacks.

(My only mental model for this is the bugs in the early protection on the original XBox and how Microsoft fixed it with OTA updates and specific games they released that patched the bug).


> Is there no channel to re-key Secure Boot from an online source?

There is no such method. Any such channel would itself be vulnerable once the device was jailbroken (via a purely software hack), and you would need some sort of permanent root key to verify the re-keying anyway, which defeats the point. As a result, the Secure Boot key is burned into the chip and is unchangeable.


The key is burned into the device.


> Got the Apple signing key? Revoke any app you like and have it instantly lock out from millions of iPhones. Break every app for everybody on iPhone and give Apple employees and users a miserable week and cause a political panic. Sky's the limit.

How are you getting to the iPhones? You need to be able to intercept any internet connection too.

nb I'm not sure there is such a single signing key.


> Send out fake Windows updates, approved by Microsoft, to do whatever you want.

Send them... How? You're planning on setting up a MITM attack on everyone's internet access?


For a nation-state-level attacker, that is completely possible.


If a state wants something from you, they'll send armed agents to your door.


If it's 'your' state, that is. There are any number of states unfriendly to each other.


Having stolen the key, the thieves themselves could push such an update and lock the OEM out. Imagine millions of Apple devices getting an update that locks Apple out while simultaneously installing nefarious code that uploads all saved credentials to the thieves' server. Sure, the time window is short (if the break-in was detected), but full access to millions of people's personal data could be a long-term money trove while simultaneously bankrupting Apple through damages. One could short-sell companies by leaking sensitive data, impersonate people and send messages, or blackmail people; even simple ransomware would be devastatingly profitable.


You’d have to impersonate Google’s web servers to push the update, wouldn’t you? That would mean both hijacking the DNS and faking or stealing the TLS cert. That’s not impossible, but it’s pretty much a moderately-well-equipped-nation level attack.


If a 'heist' to steal such a key were pulled off, adding credentials to (or stealing them from) the update server (even for a one-time update push) doesn't seem out of the realm of possibility.


If you own the OS, you should be able to install updates from anywhere.


If someone copies the key with pencil and paper, the original remains in place. There will be nothing to show that anything happened... until signed malware is detected in the field. (Which is exactly where we are now - with signed malware from a bunch of different keys.) The amount of damage depends on how long the signed malware was out there, and what its uptake rate is.

Sure, it won't destroy or infect the entire world. It's still pretty bad, though. And so far I haven't seen anything saying how long it was out there.

Also: How did so many keys get compromised? There may be a bigger (or more systemic) problem.


" ... cutting new keys and pushing an update to supplant the old key ..."

What about the keys to, say, Windows updates?

If someone managed to nick the key, you can be sure they would push a new one and revoke the "old" one whilst simultaneously deploying nasties.

$100M (whatevs) is chicken feed compared to the rewards from compromising something like Windows updates. Bear in mind that a purely pecuniary-minded criminal gang might leverage a lot of coin mining or whatever for a while and run up an awful lot of electricity usage, but a state-level actor gets the real crown jewels: information.

Anyway, that's far too complicated when there are far easier ways to crack rather a lot of the world. Look at the SolarWinds compromise. That sort of thing wasn't the first and won't be the last.

Supply chain attacks in the IT world are probably the worst in terms of fallout, and the potential losses are vast in comparison to a bit of shiny carbon falling down the back of a sofa. However, for some reason, in this most bizarre modern world we find ourselves in, we still fixate on a bloody diamond!


The beauty of stealing a signing key is you don't necessarily tip off the company by getting yourself a copy of it. (The diamonds were removed, but the key stays where it is)

If you were clever enough to get the key you might hold it until it was truly valuable for your exploit.


Bonus points for letting everyone think the diamonds are still there.


Depends on which key is stolen. You can't update the certs on devices already in the field without some root certificate key. If that key is compromised, there is no root the clients can trust anymore. The thief can revoke all the keys the original institution holds, leaving it no path for rotating keys.


To do that revocation, they'd have to be able to attack the update channel itself. And practically speaking, that attack is hard to pull off; you can probably successfully update most devices, which might be good enough.


> government agencies who might want to bribe someone for keys

Why would they need a bribe? They have a whole mechanism for "do what we want, and you're not allowed to tell anyone about it". I would be shocked if NSA/FVEY didn't already have every signing key they could get their hands on, especially from companies that are friendly to them like Microsoft https://en.wikipedia.org/wiki/National_security_letter


As you may know, the PS3 was the classic example of this kind of failure: Sony reused the ECDSA nonce, so the private signing key could simply be computed.


The comments on some issues are downright unsettling. https://bugs.chromium.org/p/apvi/issues/detail?id=34#c9


All the comments seem to be deleted. Do you remember what was unsettling about them?


One was a video of a dancing child with a cellphone.


If I understand this correctly, this is an orbital nuke on Android security.


The existence of leaked platform signing certificates breaks a core Android security feature: the application sandbox. Theoretically, a malicious actor with a leaked cert can sign an app that declares a shared user ID in its manifest, so that it runs under the same UID (and with the same privileges) as 'android', i.e. the operating system itself.
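For concreteness, this is roughly what that manifest declaration looks like (the package name and label here are made up; the trick only works if the APK is also signed with the leaked platform certificate, since the system only honours android.uid.system for platform-signed apps):

  <!-- Hypothetical manifest excerpt. With this attribute plus a platform-key
       signature, the app is assigned the shared system UID at install time. -->
  <manifest xmlns:android="http://schemas.android.com/apk/res/android"
      package="com.example.looks.harmless"
      android:sharedUserId="android.uid.system">
      <application android:label="Totally Normal App" />
  </manifest>

From there it can hold signature-level permissions and read data belonging to other system components.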


I think it’s more an orbital dung beetle.



