The two exploits in the subject line are two completely separate things. The main focus of the linked article at ps3hax.net is a key which is used to HMAC-authenticate the service mode dongle. The source code for that is here:

https://github.com/winocm/ps3-donglegen
The more interesting hack was announced at 27c3. A team of Wii hackers has discovered Sony's main boot-signing private key. This is like discovering Verisign's private key -- you can now issue any SSL cert you want. They can sign any hypervisor they want, which leads to running any code you want.
They were able to do this because (surprise) there was a crypto mistake in the implementation. Two (or more) ECDSA signatures were generated with the same secret nonce. Apparently Sony doesn't read our blog because we discussed this flaw before:

http://rdist.root.org/2010/11/19/dsa-requirements-for-random...

And before that, we discussed a variant of this attack when the Debian PRNG was broken:

http://rdist.root.org/2009/05/17/the-debian-pgp-disaster-tha...
The cool thing about this flaw is that the private key is not present in the PS3 anywhere. It (probably) only exists at some locked-down code-signing center. However, a software flaw in the way it generated the signatures was effectively painting the private key on the side of every signed code module released.

And people still think crypto isn't dangerous?
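To spell out why the reuse is fatal: each signature satisfies s = k^-1 * (z + r*d) mod q, where z is the message hash, d is the private key, and r is derived from k alone. Two signatures sharing k therefore share r, and anyone can solve the two equations for k and then d. Here's a toy sketch (made-up small numbers, not a real group; needs Python 3.8+ for pow(x, -1, q)):

    q = 10007   # toy prime "group order" (made up)
    d = 1234    # the private key (made up)
    k = 4321    # per-message secret, wrongly reused for both signatures
    r = 2727    # in real (EC)DSA, r is computed from k, so reused k => same r

    def sign(z):
        # (EC)DSA signing equation: s = k^-1 * (z + r*d) mod q
        return pow(k, -1, q) * (z + r * d) % q

    z1, z2 = 1111, 2222          # hashes of two different messages
    s1, s2 = sign(z1), sign(z2)

    # An attacker who sees (r, s1, z1) and (r, s2, z2) recovers everything:
    k_rec = (z1 - z2) * pow(s1 - s2, -1, q) % q      # k = (z1-z2)/(s1-s2)
    d_rec = (s1 * k_rec - z1) * pow(r, -1, q) % q    # d = (s1*k - z1)/r
    assert (k_rec, d_rec) == (k, d)                  # private key recovered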
I know enough about crypto to know that I need to stay away from it. I can tell people about common pitfalls and all that, but I'm nowhere near good enough to stay out of all of them myself. It's times like these that I'm happy to be on the side of analyzing this stuff rather than developing it.
It's definitely easier to look for known flaws, such as obvious spec violations like this ECDSA signing flaw. It's much harder to review a high-assurance system and be sure you've anticipated all possible ways something might fail years down the road.
One point I haven't made recently is that crypto review is very expensive in terms of time and money. So the design approach to security problems should be roughly:
1. Avoid crypto if possible. Store data on the server, for example. Doing this correctly is orders of magnitude easier than developing crypto protocols.
2. If using crypto, use something high-level. GPG is a great example of a bundle of crypto primitives with a well-understood protocol for encryption, integrity protection, and key management. PGP has been around for 20 years now. (A usage example follows this list.)
3. If none of the above works, develop a custom crypto protocol. But budget 10x as much for review as for design/implementation. So if you spend a week and $10,000 developing it, spend 10 weeks and $100,000 to review/improve it. This includes external review, not just internal. This goes for everyone, even "experts".
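For instance, point 2 in practice: encrypting and signing a file for a recipient is a single gpg command (assuming the relevant keys are already in your keyring; alice@example.com is a made-up recipient):

    gpg --encrypt --sign --recipient alice@example.com plans.txt
    # writes plans.txt.gpg; the recipient recovers and verifies it with:
    gpg --decrypt plans.txt.gpg

All the nonce handling, padding, and key management happen inside a tool that has been reviewed for two decades.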
My main point is: "Crypto costs a lot. Are you sure you want to pay for it?" Because if you do implement custom crypto, you (or your users) will pay for it one way or another.
Part of the problem is that most people who have to use it aren't mathematicians. And even those of us who have math degrees may not realize every single one of the assumptions or requirements that go into each operation (especially if they change due to new attacks being discovered).
So just knowing that something is dangerous isn't the same as having everything you need to deal with the danger.
I'm surprised that I've never heard of a cryptography wiki where people try to list, in simple terms, all the requirements for using each algorithm or technique securely. E.g., a page on each technique or algorithm telling you which numbers must be random, unpredictable, never repeated, etc. Or telling you that if a person has these numbers, they can do X, Y & Z. Or that if you don't verify that this padding is right, people can forge messages. With citations linking to the attacks.
Incidentally, I'm hoping that I'm wrong and there really is such a thing out there, somewhere.
That knowledge is spread out over a bunch of people who bill upwards of $500/hr. What's more likely to get built is a wiki that tells people just enough to feel comfortable while writing dangerously brittle crypto code -- to say nothing of the people who will get the recommendations on such a wiki wrong.
It's also all downside. No one person has all these details. The primary contributor to such a wiki is inevitably going to miss horrible faults. If you're that guy, why bother?
> The primary contributor to such a wiki is inevitably going to miss horrible faults. If you're that guy, why bother?
Even those $500/hr experts miss horrible faults from what I can see. I mean, you were telling me several months ago on HN how hypervisor-secured game consoles were one of the few places where we see high-end security in the consumer space and that we can't just assume that the hackers will always win.
But the way I see it, there aren't any magic bullets in the security world. Security is a Red Queen problem. In other words, you have to keep running as hard as you can just to avoid falling behind. If you stop running, someone will eventually catch up to you.
> No one person has all these details.
That sounds like an incentive to collaborate to me. One might expect that people would want to gather and organize important details like that. The fact that any such endeavor would inevitably be incomplete and require updates should go without saying.
The main problems would be starting with enough information for people to want to contribute to it and having moderators who were good enough to vet contributions for accuracy.
I don't think the thing you want to have happen is going to happen.
I think what could happen is, for lack of a better term, a half-assed wiki that leads people to code broken e=3 RSA implementations and feel safe doing it.
"In DSA, the k value is not a nonce. In addition to being
unique, the value must be unpredictable and secret. This
makes it more like a random session key than a nonce. When
an implementer gets this wrong, they expose the private
key, often with only one or two signatures."
There is no standard term for "nonce that must remain secret", hence I believe "secret nonce" is the best I can do. The term "nonce" is general enough that it can be public; it need only be unique.
I take great pains to create a reasonable term where none exists because I agree terminology and consistency are important in crypto. If you have a better term, I'm happy to hear it.
I believe calling it simply a "secret" or "random secret" would lead to less confusion. But my point wasn't so much to prove you wrong as to point out that the issue was more subtle than "Sony didn't read the crypto manual".
I liked your comment in general, but I think the jab at Sony for not reading your blog and the parenthetical (surprise) insinuate a level of boneheadedness that's unwarranted. Maybe that's just my reading of it.
"Random secret" doesn't capture the fact that it must not be reused across messages. A session key ("random secret") can be reused to encrypt multiple messages, a DSA secret nonce can't.
You need three concepts: unique (used only once, ever), unpredictable (pseudo-random), and secret (never revealed to anyone, before or after use).
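As a minimal sketch of satisfying all three (assuming q is the prime group order, with Python's secrets module standing in for a proper CSPRNG):

    import secrets

    def fresh_k(q):
        # Unpredictable: uniform draw from the OS CSPRNG, in [1, q-1].
        # Unique: called anew inside every signing operation, never cached.
        # Secret: used once by the signer, then discarded -- never logged,
        # stored, or transmitted.
        return secrets.randbelow(q - 1) + 1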
You should read the DSA spec, FIPS 186-3, section 4.5. They call this parameter the "Per-Message Secret Number", which doesn't capture the fact it needs to be unpredictable. (Later in that section, they mention it is "random", but the name of the parameter doesn't have that notion.)
My surprise is not that Sony didn't read our blog, but that they didn't read FIPS 186 when implementing their concrete-vault root signing tool. "Per-message" is spelled out right there in the title of section 4.5.
"Nonce" means "a token that is used once." It's sort of meaningless to say "used the nonce twice", but the only way around it is rigorously formal, but even more circuitous and confusing speech:
"Two (or more) ECDSA signatures were generated with a constant value provided where a secret nonce value should go, making it not effectively a nonce."
Apologies if this is overly simplified; I'm not sure of your level of expertise (I can give you the more complex version too if you want more information!). The dongle in this case is a USB diagnostic tool used by Sony employees and technicians to put a PS3 into service mode.
Essentially, the dongle is plugged in and the PS3 is started up. The PS3 communicates with the dongle and swaps a set of 'secure' keys to authenticate that the dongle is legitimate, then runs some code to give you access to all sorts of options you normally wouldn't be able to see/use.
What these guys have done is find the master key, present in all PS3s, that the console uses to authenticate any/all service dongles. Using this information, one can generate one's own service ID and ultimately create one's own dongle.
Basically this was possible because the mechanisms protecting the key relied on the rest of the system not being broken. Once the system was hacked, it was simply a matter of time before this key was extracted as well.
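Here's a hypothetical sketch of that failure mode (the key-derivation scheme, names, and sizes are my assumptions, not the actual protocol): because the console must hold the master key to verify dongles, anyone who extracts it can also mint them.

    import hmac, hashlib, os

    MASTER = os.urandom(16)  # stand-in for the master key baked into every PS3

    def dongle_key(dongle_id: bytes) -> bytes:
        # Assumed scheme: each dongle's key is derived from the master key.
        return hmac.new(MASTER, dongle_id, hashlib.sha1).digest()

    def dongle_response(key: bytes, challenge: bytes) -> bytes:
        return hmac.new(key, challenge, hashlib.sha1).digest()

    def console_verifies(dongle_id, challenge, response):
        expected = dongle_response(dongle_key(dongle_id), challenge)
        return hmac.compare_digest(expected, response)

    # With MASTER in hand, an attacker invents an ID and forges a dongle:
    my_id = b"HOMEBREW-DONGLE-0001"
    challenge = os.urandom(8)
    assert console_verifies(my_id, challenge,
                            dongle_response(dongle_key(my_id), challenge))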
From my own experience, service mode on a lot of embedded devices typically exposes some diagnostics and maybe lets you load some things you couldn't otherwise load; I assume it's the same on the PS3. I also assume this doesn't compromise Sony's ability to sign software or allow third parties to sign software; however, in service mode you might not need "signed" software.
Heck, there are off-the-shelf solutions for this stuff: chips you can load a set of keys into at manufacturing time that keep all the crypto on-chip, so there is close to no way a key could "leak" out. I'd assume IBM, Toshiba, and Sony would use something like that, and if they properly generated keys, the only real way the "master key" could escape would be a rogue employee leaking it. They knew people would attack the platform.
Yep, you're spot-on in this case, and as you say the software-signing keys haven't been compromised, though they aren't needed in service mode.
There are definitely ways the dongle keys could have been better protected (and I'm sure a few people are having some very serious talks about why they weren't), but you have to give Sony kudos for having a system last three years without being compromised, and even now it's only easily broken at ring 2, the gameOS level of the system.
It's simple to protect the dongle keys better: sign the dongle ID. In this way, only the public key exists on the PS3, and the system is secure (if implemented properly). As it stands, their system is equivalent to having both the public and private key sitting on the PS3. No matter how well you protect this key, the system is still broken in theory.
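A sketch of that design (Ed25519 via the Python "cryptography" package, purely for illustration; Sony's actual algorithm and formats would differ): the console ships only the public key, so dumping the console lets you verify dongle certificates but never issue them.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    signing_key = Ed25519PrivateKey.generate()  # lives only at the factory
    console_pubkey = signing_key.public_key()   # burned into every console

    dongle_id = b"SERVICE-DONGLE-0001"
    cert = signing_key.sign(dongle_id)          # issued once, per dongle

    def console_accepts(dongle_id: bytes, cert: bytes) -> bool:
        try:
            console_pubkey.verify(cert, dongle_id)
            return True
        except InvalidSignature:
            return False

    assert console_accepts(dongle_id, cert)
    assert not console_accepts(b"FORGED-ID", cert)  # forgery fails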
Agreed on the way it could be done better, though they didn't actually have the private key on the system. They implemented their crypto so badly that they might as well have, though.
I'm referring purely to the dongle attack. When you use an HMAC in that way, your secret is your "private" key. It also just happens to be the "public" key as well. That's why it's a terrible design.
It's really not so amazing. Unlike other platforms, the PS3 gave hackers a good deal of freedom to do as they wished when it was released. It was only a few months ago when OtherOS was removed that people got pissed and actually started looking at jailbreaking it in a serious way.
Citation? The latest news I could find on Google was that the nine or so class actions were consolidated into one, and that Sony has filed a motion to dismiss, but nothing more than that.
It's clear from the vulnerabilities found that the console holding out this long is more down to the community's lack of need to jailbreak it sooner than to Sony's futile attempts at protecting it. I mean, come on: using the same random value for all ECDSA keys and signatures?
Why, oh why, would you ever use an HMAC in this way? HMACs are great for validating your own data (e.g. "secure cookies"), but anyone who can validate an HMAC can also generate them. Repeat after me: HMACs are not signatures.
The key difference here is the one between a message authentication code [1] and a digital signature [2].
A digital signature uses public key cryptography: one person can generate and sign messages, everyone else can verify. A common one is DSA [3] -- but there are many others, including freakishly fast elliptic curve versions.
I think the difference between AWS and the PS3 is that in the PS3's case, every single PS3 has to be able to validate the authentication code, whereas in AWS's case, only AWS needs to validate the code. The secret material is shared only between a particular customer and AWS, not with everyone else. It's not the same use case.
I agree, though, that they have been a little sloppy with the terminology. Some of their APIs refer to "Signatures" when they should say "Authentication Codes". I suspect, though, that the correct term might have caused even more customer confusion than the current situation, as most people aren't aware of the difference between HMACs and signatures.
- It allows any currently available PS3 to be put into service mode.
- It allows users of 3.50 or below to downgrade.
- It doesn't allow 3.55 users to downgrade (currently, though they can still access service mode).
This isn't the PS3 master key, just as no one knows the PSP master key.
This will enable any user of firmware 3.5 or lower on the current hardware to run backup games that don't require a firmware greater than 3.5 -- pretty much the same story as PSP modding.
Are you sure? I think that the PS3 uses real public-key crypto to sign the OS all the way down from the initial boot. The best you could do with this is load an alternate signed OS image (possibly an earlier one that is exploitable).
I don't really know much about PS3 hacking, so this is all just a guess.
Uh, you have it backward. Since it's verified boot, you can't easily alter the signed OS image. This is better as it allows individuals to sign executables that will then execute natively in the regular OS.
This is how original Xbox and Xbox 360 softmods work. The regular OS boots, and either an exploit occurs in the font package, music player, MechAssault, or Splinter Cell (the latter two used for bootstrapping the former two), or, on the 360, the Xbox is "rebooted" virtually to sidestep the verified boot, into an alternative kernel that has signature checking removed.
Note, I know nothing about PS3 hacking either and am making assumptions based on the connotations of the words "master key" and "signing", other comments here, and my knowledge of xbox1/360 hacking.
I think you are mistaking what "master key" means here. They found the dongle HMAC secret, which means that anyone can create a new dongle for getting into service mode, which is apparently useful for downgrading to a different OS in some cases, but has no utility outside of that.
It's not the master key for cryptographically signing executables or OS images.
EDIT: WOW! Okay, looks like they screwed up big time. They used the same random number for all their signatures, which means that they effectively leaked their private key for various bootloaders in the system. The chain of trust is toast.
You're sending me on a roller coaster of emotions here, mmastrac! That's pretty cool. I picked up a 360, modified it to play Reach, and then sold it for over twice what I paid. I need something new to mess around with; this could be fun.
Not yet leaked, but apparently derived due to Sony's poor random number generator. See the presentation here: http://www.youtube.com/watch?v=X6CA4fqAdsc (part 1...money shot is in part 3)
Interesting, although I don't know what it means exactly.
It looks like they use simple dongles for entering service mode. These dongles are authenticated by an HMAC rather than public-key crypto (big mistake on Sony's part).
So this means I'll have to sit through another 20-30 minute system update when all I really want to do is play a game for 15 minutes (all the time I really have for the PS3).
My PS3 is collecting dust because the AppleTV is a better media player, and a cheap PC with Steam installed costs less than a PS3 plus just a few $70 game discs.