Breaking the Ledger Security Model (saleemrashid.com)
232 points by danr4 on March 20, 2018 | 79 comments



Saleem Rashid, the security researcher who discovered this vulnerability and developed the proof of concept, is 15 years old. [0]

[0]: https://krebsonsecurity.com/2018/03/15-year-old-finds-flaw-i...


I met him on Telegram a while back. The joke is that he developed a wallet app in X months, but could have done it in less time if he hadn't had homework.

He's definitely one of my motivations to pursue blockchain/cryptography. I used to feel secure (in my bubble) that I was the best at what I do (for my age), until I met Saleem.

Gratz Saleem :)


I was willing to tolerate Ledger's issues here (I'm a user and I've heavily recommended their products) if they fixed them -- the problems never should have happened, but they are marginally better than Trezor in a lot of other ways.

Unfortunately, the Ledger team appear to be asshats, so there's every reason to fear using them in the future, even if they've fixed this specific issue. If they're going to handle a legit issue like this (downplaying it, etc.), I'm not willing to trust them.

I'd love to see three products created:

1) Split client/server wallets (so you don't need to trust the operator or your local machine alone); some work has been done on this. Also, split PC + SGX or phone + secure element wallets. This is basically software-only in cost, but at higher security, so it would be suitable for $0+ balance accounts.

2) A low end ($20?) wallet with a cheap display and button. If you retailed them at $100 but sold them to providers at $40 they could probably give them away for a lot of accounts; basically like Yubikeys but with displays.

3) Some higher-end hardware wallets; essentially HSMs plus display/input. HSM technology has basically stagnated since 1995; there's a lot of need for something better today. This could be in the $10-50k per unit range for a lot of high-value keys if actually implemented well, in a way that had no "trust us" elements. There are hundreds or thousands of potential customers, and more by the day.


At a previous job, we made an HSM. We spent a lot of time considering manufacturing and evil maid attacks. There are solutions.

The most important leap of faith is that the manufacturer must be able to lock themselves out of the device. This involves physical tamper resistance and people processes. And lots of key management.


1) There's already some work on this. Take a look at https://carbonwallet.com

A multisignature wallet with one key encrypted browser side and another in a mobile app.

https://github.com/carbonwallet/CarbonKey

The CarbonKey app is a simple PWA (Progressive Web App), so you can deploy it yourself and completely bypass the wallet owner's supply chain.


I don't trust any hardware wallet; that's why I use a brain wallet I forked from the keybase.io team's code to use the stronger argon2 hash function. It takes 4 minutes to generate a wallet (with the strong setting that comes with the golang version), so good luck trying to brute force this one.

https://patcito.github.io/mindwallet
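
For anyone curious what that kind of derivation looks like, here is a minimal Python sketch of the general brain-wallet idea under purely illustrative parameters (this is not mindwallet's actual derivation or settings): a passphrase and a salt are stretched through Argon2id into 128 bits of seed entropy, using the argon2-cffi package.

    # Illustration of the brain-wallet idea only -- NOT mindwallet's actual
    # derivation or parameters. Stretches passphrase + salt into 128 bits of
    # seed entropy with Argon2id. Requires the argon2-cffi package.
    from argon2.low_level import Type, hash_secret_raw

    def derive_seed(passphrase: str, salt: str) -> bytes:
        return hash_secret_raw(
            secret=passphrase.encode(),
            salt=salt.encode(),      # e.g. an email address; must be at least 8 bytes
            time_cost=8,             # illustrative cost settings: heavier = slower brute force
            memory_cost=1 << 17,     # 128 MiB (memory_cost is given in KiB)
            parallelism=4,
            hash_len=16,             # 128 bits, enough for a 12-word BIP39 seed
            type=Type.ID,
        )

    print(derive_seed("correct horse battery staple", "someone@example.com").hex())

The point of the heavy cost parameters is the one made above: each guess costs an attacker significant CPU time and memory, which is what makes brute-forcing a decent passphrase impractical.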


> I was willing to tolerate Ledger's issues here ... if they fixed them

> Unfortunately, the Ledger team appear to be asshats, so there's every reason to fear using them in the future, even if they've fixed this specific issue

Your first two lines completely contradict each other.

The Nano S is #3 in your list.


The problem wasn't the breach, it was their messaging around the breach. I accept that most products have bugs, even security-critical bugs in security products (although this one was a bit extreme to tolerate...).

The Nano S isn't really an HSM-containing-wallet. It (and the Trezor) are somewhere between a smartcard containing just the keys and an HSM. There's "trusted" display and input, but not the whole wallet. The Nano S also does have an element of "trust us" vs. an easily verifiable design.


1. Can be implemented using N-of-M signing. GreenAddress does something similar.


> Before I get to the details of the vulnerability, I would like to make it clear that I have not been paid a bounty by Ledger because their responsible disclosure agreement would have prevented me from publishing this technical report.

Ridiculous. Ledger, pay him the bounty that he quite clearly deserves.


On the ledger blog:

https://www.ledger.fr/2018/03/20/firmware-1-4-deep-dive-secu...

> We would like to congratulate the three security researchers who found these bounties.

> Saleem Rashid – MCU fooling

> We fully appreciate their contribution, and they certainly deserve their rewards.

> We have asked each security researcher to sign our Ledger Bounty Program Reward Agreement, that you can review as part of our transparency process

> (this document doesn’t prevent the researcher to publish their own reports).


The "Ledger Bounty Program Reward Agreement" appears to have a clause that may allow Ledger to prevent a researcher from publish their own report.

>"You have complied and will continue to comply with the responsible disclosure process described in the Ledger Bounty Program which includes your agreement (a) not to disclose the security related bug to anyone without Ledger’s prior written consent," - [0]

I'm not a lawyer so I could be reading this wrong or maybe they never intended to enforce that clause.

[0]: Ledger Bounty Program Reward Agreement https://www.ledger.fr/wp-content/uploads/2018/03/Ledger-Boun...


I was curious whether there was any advantage to being a 15-year-old in this case, so I did a bit of research.

As a minor in the UK, he's not capable of entering into a commercial contract, so he could void the contract at any point before the age of 18. Normally when a contract is voided, the parties are returned to their prior state. But, it seems that the doctrine of restitution makes an exception for people incapable of contracting. They are not required to return benefits received under the voided contract.

I am not a lawyer, but I think he could have signed the contract, published his piece, and kept his money even if it was against the terms of the contract.


There's something quite funny about him being capable enough to discover a vulnerability like this but not being (legally speaking) capable enough to enter into a contract. :)


It's a weakness in the wording; any sane judge would use their discretion to come to a verdict that favored reality over the law's failure to account for this young man's intelligence.


Agreed, but if it's one or the other I would rather have "Discovered vulnerability with Ledger crypto wallet when I was 15" on my CV than a couple thousand dollar bounty


I got a Ledger wallet and was too paranoid to trust its seed generation, so I wrote some code to generate my own seed on an ESP8266 (after a failed attempt at doing the same thing with just a PDF and dice).

Here's the writeup, with code you can audit/try:

https://www.stavros.io/posts/perfectly-secure-bitcoin-wallet...


The checksum in a 12-word seed is only 4 bits. This means you could absolutely pick 12 random words using nothing but dice. When importing the seed on a hardware wallet, if it complains that the seed is invalid (because the checksum is invalid), just change the 12th word by picking the next one in the BIP39 list (https://github.com/bitcoin/bips/blob/master/bip-0039/english...). If it complains again, try the next one. And so on. At most you may have to try about 2⁴ = 16 different words. One of them will pass the checksum. And there you go, you have a valid seed that was generated with a simple dice roll, without the need for custom code running on an MCU or offline computer.
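
To make the checksum math concrete, here is a small Python sketch (it assumes a local english.txt file containing the 2048 BIP39 words, one per line) that checks the 4-bit checksum of a 12-word mnemonic and counts how many candidate 12th words make it valid:

    # BIP39 checksum check for a 12-word mnemonic. "english.txt" is assumed to
    # hold the 2048 BIP39 words, one per line. Illustration only.
    import hashlib

    with open("english.txt") as f:
        WORDS = [w.strip() for w in f if w.strip()]
    INDEX = {w: i for i, w in enumerate(WORDS)}

    def checksum_ok(words):
        bits = "".join(format(INDEX[w], "011b") for w in words)  # 12 * 11 = 132 bits
        entropy_bits, checksum_bits = bits[:128], bits[128:]     # ENT = 128, CS = ENT/32 = 4
        entropy = int(entropy_bits, 2).to_bytes(16, "big")
        expected = format(hashlib.sha256(entropy).digest()[0], "08b")[:4]
        return checksum_bits == expected

    first_eleven = ["abandon"] * 11   # stand-in for eleven dice-picked words
    valid = [w for w in WORDS if checksum_ok(first_eleven + [w])]
    print(len(valid), "of", len(WORDS), "candidate 12th words pass")  # 1 in 16 (128 of 2048)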

Edit: BTW your method of calling MicroPython's uos.urandom() to generate the seed is not necessarily safe! The MicroPython API does not guarantee the entropy always comes from a secure hardware RNG, just "when possible". This means that depending on your MicroPython version, how the framework was compiled, and the exact revision of the ESP8266, the entropy may or may not come from a secure hardware RNG. As a former InfoSec professional reviewing hardware/firmware/software security-related code, I often found flaws at many levels in this area.

Edit #2: In fact, after looking into it more, the current version of MicroPython relies on an undocumented register (https://web.archive.org/web/20160417080207/http://esp8266-re...) that "seems" to be a hardware RNG, but it has never been determined whether it is suitable for cryptographic purposes. It would be a lot safer to generate your seed on an offline Linux laptop booted off a live USB or equivalent (with no storage device), using the good old cryptographically secure getrandom(2) syscall, than to blindly trust an undocumented, sketchy ESP8266 register whose implementation is completely unknown. If I were you, I would discard any wallet created using your ESP8266 code.
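
For comparison, pulling that entropy from the kernel CSPRNG on an offline Linux machine really is a one-liner. A minimal sketch using Python's os.getrandom wrapper around the getrandom(2) syscall (Linux-only, Python 3.6+):

    # Gather 128 bits of seed entropy from the kernel CSPRNG via getrandom(2).
    # os.getrandom is Linux-only; os.urandom is the portable near-equivalent.
    import os

    entropy = os.getrandom(16)   # 16 bytes = 128 bits, enough for a 12-word seed
    print(entropy.hex())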


The problem with that is that inputting this in a Ledger, for example, is very cumbersome, and trying sixteen times could take an hour.


How often do you need to regenerate your seed? That hour is more than short enough for me.


For me, it compares disfavorably to just having an ESP8266 generate the seed for me.


Use the offline tool; it will take you 30 seconds.

https://www.ledgerwallet.com/support/bip39-standalone.html or https://github.com/iancoleman/bip39

PS: even a 24-word checksum takes just a couple of minutes


It'll take me 30 seconds to use, but setting up the Raspberry Pi and formatting the SD card afterwards is a hassle. You can't beat the ESP8266 method, since you can just look at the code and be sure what it's running.


I'm not sure the ESP8266 is a great way of accomplishing the goal of being able to know what's running just by looking at the code, unless Espressif has suddenly opened their firmware and toolchains. An AVR (like an Arduino) would probably be better suited to the task, as its open toolchains are mature.


What would Espressif be able to do? Somehow retain the entropy it gave to the code that you used to generate the key and then send it to their servers when you reflashed with an unrelated program and connected to WiFi? That seems a bit far-fetched.

Regardless, sure, the Arduino is also a good platform to do this on. It doesn't run MicroPython, though, so I implemented my particular program on the ESP for speed/ease of development.


Don't forget the Debian bug that created weak keys from 2006 to 2008. These sorts of things happen. It's best not to attempt to disparage the possibility.


Shit I got mine through Amazon a few months ago and was a bit worried it could be compromised. My paranoia is kicking in a bit more now after this article. Do you have any recommendations on checking that my Ledger isn't compromised? Ledger has some suggestions on their site, but as far as I understand they only check the integrity of the wallet apps.


From what the article says, you basically can't be sure, unless you check the hardware. I would advise at least flashing the original firmware that you can get from Ledger themselves.


I just saw Ledger posted this.

https://www.ledger.fr/2018/03/20/firmware-1-4-deep-dive-secu...

Sounds like it will check if it is compromised via the update.


Am I correct in understanding that the exploit would most likely be cleared by re-flashing the firmware to a known good image before you use it? Especially if you did so using JTAG it seems like it'd be very difficult (albeit probably not impossible) for the firmware modifications described in the article to persist through a reflash as long as you're reasonably confident that the hardware itself hasn't been altered. That doesn't get rid of the evil maid scenario, but it does get rid of the supply chain attack, which is IMO the more concerning one. Evil maid attacks can be mitigated by physical security, but the supply chain attack is out of your control.


Yes, and that's what the update does. Since the bootloader wasn't modified (at least with this particular attack), flashing a known-good image fixes it.


The problem is, if you flash with JTAG you're basically just trusting your host computer not to be compromised. And isn't not having to trust your host computer the entire point of a hardware wallet?


Big difference: you're only trusting the host computer (and the JTAG dongle) once. This is manageable; use an airgapped junk laptop with no HDD, or similar, if you're ultra paranoid. Sure, perhaps the firmware is compromised and leaking data through some super exotic attack, but come on. That should give you a pretty reasonable level of confidence. You can never 100% trust a device you didn't design and fabricate every aspect of yourself; there's always some risk with any hardware token.

I'd also argue that trusting your host computer is certainly better than trusting the supplier. Shifting the burden of trust from a device you don't control to one you do is at least an improvement.


Ok thanks will do.


Move your coins to a new wallet address, generated offline, and do the reflashing as others have commented.


Unsure how feasible this is, but maybe check whether they have a software version of their key generator and confirm that the codes it generates are identical to what your device displays.


That will completely defeat the purpose of having a hardware wallet, as you will be compromising the keys by divulging them to a networked computer.


run https://github.com/iancoleman/bip39 on an airgapped laptop


Matthew Green had a good analysis on why this is interesting also from outside a crypto currency perspective https://twitter.com/matthew_d_green/status/97606641626793984...


Interesting part: "For example, the iPhone SEP has a direct connection to the fingerprint reader, because the application processor isn’t trusted with that data. Weirdly FaceID departs from this but I digress."

Anyone have any info on why FaceID works a different way? Because it needs more processing power?


I have not seen any sources which explicitly state that FaceID doesn't work the same way as TouchID does, but perhaps it's the lack of specific details in the iOS Security Guide [0] that might have led to this assumption?

In fact, FaceID relies solely on the IR camera to do its work. You can cover the front-facing (normal) camera and your iPhone will still unlock successfully. Conversely, the newly touted Animoji feature does NOT rely on the IR camera at all, as evidenced in this iPhone X review [1] at 11:40. It may be the case that the OS doesn't have access to it.

[0] https://images.apple.com/business/docs/iOS_Security_Guide.pd...

[1] https://www.youtube.com/watch?v=9Ca8zWJOlFQ&t=700


I based my speculation on tweets by some developers who have been able to capture raw TrueDepth data in their apps. See eg https://mobile.twitter.com/braddwyer/status/9306828799773614...

I can’t swear to you that this is exactly the same depth data that FaceID uses. Maybe it’s been downsampled in some way that makes it safe to give to apps, without enabling attacks on FaceID. I think I’d be a bit more willing to believe that if Apple’s Security docs actually said that. To me it seems more likely that the raw depth maps are available to the app processor (and to apps!) because the SEP isn’t powerful enough to perform the recognition task on its own.


The application processor needs access to the camera so that you can take pictures and transmit them outside of the phone.


Which it could do through the SEP if the SEP were powerful enough, or through a secondary interface.


What would that accomplish? The entire point of having the SEP wired directly to the fingerprint reader is that fingerprint images never make it to the AP because there's no reason for that. If the SEP is just going to copy every image to the AP why not just have both processors connected to the camera? (which also removes the security surface of any SEP->AP image copying code)


Cool attack, but the proposed mitigations don't seem to address the threat model.

An attacker on the supply chain can always add a part that interposes on all i/o to the secure parts. This would have the same impact, unless I'm missing something, as compromising the insecure micro. The underlying problem is that there's no cryptographically secure path between the secure element and your eyes.

Ledger's verifiable erasure scheme is pretty interesting, actually. I prototyped something similar and ultimately abandoned it due to the high complexity and bandwidth requirements. From the sounds of it, the major difference is that Ledger didn't attempt to wipe and then reinitialize, but instead just tried to verify known state. It might still be made to work by changing that, although good luck over a UART.


An attack on the supply chain would have to happen at the very last stages when the device is assembled, as the assembler would otherwise notice the added component. I would even dare to say that the attack on the supply chain you're describing can only be done by the device assembler itself.



Could the manufacturer perform this type of "attack" on their own supply chain?


Seems like Ledger finally published a security advisory[0].

It also seems like they finally accepted OP's bug report as part of their bounty program.

[0]: https://www.ledger.fr/2018/03/20/firmware-1-4-deep-dive-secu...


Disclosure: I'm the creator of Mooltipass.

When creating the Mooltipass offline password keeper we actually spent a considerable amount of time thinking about solutions for the particular problem explained on this website.

We therefore opted for the following techniques:

- only allow signed firmware updates, signed using an encryption key unique to each device

- given that firmware flashing using external programmers requires a complete flash/EEPROM erase, we implemented a challenge/response protocol to check for tampering during shipping (a rough sketch of the general idea follows below)

Obviously things are way easier when you don't allow custom firmware to be flashed on a device. But as a general rule, I wouldn't trust a device that allows other programs to run on it (e.g. phones, computers...).
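
The per-device challenge/response idea mentioned above could, in rough outline, look something like the following Python sketch. This is a generic HMAC-based illustration, not Mooltipass's actual protocol: the device holds a factory-provisioned secret that any external reflash (and its mandatory flash/EEPROM erase) destroys, so a correct response proves the shipped firmware and secret are still in place.

    # Generic HMAC challenge/response tamper check -- an illustration, not
    # Mooltipass's actual protocol.
    import hashlib
    import hmac
    import os

    # Stand-in for a per-device secret provisioned at the factory; a full external
    # flash/EEPROM erase (required to reprogram the device) would destroy it.
    DEVICE_SECRET = os.urandom(32)

    def device_respond(challenge: bytes) -> bytes:
        # Runs on the device: answer a random challenge with an HMAC over it.
        return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

    def manufacturer_verify(challenge: bytes, response: bytes, provisioned_secret: bytes) -> bool:
        # Runs on the manufacturer's verification tool, which kept a copy of the secret.
        expected = hmac.new(provisioned_secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = os.urandom(32)
    assert manufacturer_verify(challenge, device_respond(challenge), DEVICE_SECRET)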


This needs much more visibility than it presently has. Device attestation is a difficult topic, and the writeup shows exactly why.


Agreed. What mostly worries me is that Ledger's high-ranking officials seem to lack the basic communication skills expected of a company selling such critical devices.

You can see that in the author's account of his dealings with Ledger, and if you go to the /r/ledgerwallet subreddit you'll see that their interactions come off as anywhere from quite condescending to downright insulting[0], considering people have lost money over some issues and Ledger shrugs it off as "it's your fault".

[0] https://www.reddit.com/r/ledgerwallet/comments/7tvyar/psa_do...


Wait... so the ledger allows any firmware to be uploaded without signature checking? damn :(


Seems insane, right? Their entire userbase only exists because of the power that PKCrypto gives us to build lasting lines of trust, and yet they don't sign firmware images...


Ledger's synchronized post[0] is a nice read.

[0] https://www.ledger.fr/2018/03/20/firmware-1-4-deep-dive-secu...


I have both a Ledger and a Trezor and would have recommended the latter even before this vulnerability. Their Ether support is meh, but for BTC it just feels more solid and well thought out. That they are open source (hardware and software) and have been very responsive to attacks that were found also helps on the trust front.

I haven't played with the newer Model T with a touchscreen; I'm talking about the original Trezor, which I believe they have no plans to EOL.


Trezor's design does have problems of its own:

https://medium.com/@Zero404Cool/trezor-security-glitches-rev...

Which to my untrained eye looks more severe.


That was addressed in a firmware update. All systems will have vulnerabilities; the important bit is how the companies react. Trezor has been pretty great in that respect IMO, and the fact that all their stuff is open gives me a bit more confidence that it has been well attacked and patched.

https://blog.trezor.io/trezor-firmware-security-update-1-5-2...


I picked up a Ledger in a raffle at a meetup a few months ago, and it's just been sitting around ever since because I was concerned about this kind of risk. It's a bit of a shame... Can't ethically sell it for the same reason. Fortunately, I was recently able to use it to introduce a curious friend of mine to Ethereum, using small amounts, so there's been at least some benefit.


If you're worried about using a secondhand Ledger, then shouldn't you be worried about using a brand new one you fetched from the assembly line yourself?


A device with unknown provenance is much more likely to be compromised than one from an organization making good money from a legitimate business model.


Sure, but this is the type of product where trust is not what you want to rely on at all.

Edit: I mean: if it's possible for the manufacturer to compromise their customers, then in the case of crypto wallets, you have to assume that they will screw their customers over. The potential profits for a company like Ledger, running a long scam, waiting for the moment their take hits a high enough sum to be worth doing a runner and relocating to some island paradise under a fake name, are just too great to allow for trust.

Until now, I assumed the whole point of HW wallets was a 100% assurance of trustlessness. Now, it seems maybe that isn't the case.


That sounds like extremist individualism. You have to trust someone. The potentially nice thing about cryptocurrency in this regard is that you have more control and transparency about who you're trusting and what they're doing with that trust. But there's still trust involved.


That seems a little abstract to me. All I can say is, a hard-learned fact for anyone who's followed the crypto scene from the early days is: strangers will take your money if it is at all possible. You do not have the protections you're used to having from society. Trust is a weakness that will be exploited.


> The potentially nice thing about cryptocurrency in this regard is that you have more control and transparency about who you're trusting

No, you don't. That's, like, the exact opposite of the whole model of cryptocurrency, which is trusting an anonymous collection of people.


The model of cryptocurrency is more accurately: trust a majority (50%+) of participants to act in their own financial self-interest.


Keep in mind that, even with these vulnerabilities, the Ledger is still the safest you can get for cryptocurrency storage.


Except an encrypted seed in cold storage. If you want to spend it you'll have different problems though.


Yeah, the problem with offline/paper seeds is that, if you want to spend them, you're going to become more vulnerable than with a hardware wallet, and they're single-use.

Not to mention that they're hard to generate securely.


Trezor is better.


How so? Trezor had a vulnerability where you could extract the private keys by measuring power draw.


Trezor seems much worse, in nearly every security write-up I have read.


I build and sell a hardware security dongle that runs 100% open-source software and generates its keys using an on-board HWRNG.

https://sc4.us/hsm/

I'm actively seeking help in porting a wallet app to the hardware. I am willing to pay to get this done.


>Physical access before setup of the seed

I can't watch the video (says unsupported mime type on my browser), but I heard the attack goes something like this: intercept the wallet, set up the wallet and copy the seed, repackage it, hope the victim uses the wallet as-is (with the seed you copied). Can't this be mitigated by resetting the device when you first get it?

>2 variants of "firmware reflash"

I thought firmware updates are signed?

edit: disregard the second point. further down it mentions that only SE firmware is signed

>Evil Maid attack

Is it even possible to mitigate this? At the very least you can steal the original and replace it with a replica that looks the same, BUT all it does is send the password back to you (via Bluetooth, WiFi, GSM, whatever). You can even hook up the stolen wallet on the other end so the replica correctly responds whether the password is right or wrong, and immediately drain the wallet once the correct password is transmitted.


> Can't this be mitigated by resetting the device when you first get it?

No, he demonstrates a device going through setup and "generating" a predetermined seed. It's not the easier "your wallet has already been set up for you" attack.


To be precise, he overwrites the entropy source with zeroes, so that all words (except the last one) of the seed are "abandon". With a non-zero entropy source, you could generate random looking but deterministic seeds.
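
A quick way to see why, reusing the english.txt wordlist assumption from the dice-roll sketch earlier in the thread: run all-zero entropy through the BIP39 derivation and the first eleven words come out as "abandon", with the last word fixed by the checksum (per the BIP39 test vectors, "about").

    # Derive the BIP39 mnemonic for all-zero 128-bit entropy. "english.txt" again
    # holds the 2048 BIP39 words, one per line. Illustration only.
    import hashlib

    with open("english.txt") as f:
        WORDS = [w.strip() for w in f if w.strip()]

    entropy = bytes(16)                                          # 128 zero bits
    cs = format(hashlib.sha256(entropy).digest()[0], "08b")[:4]  # 4-bit checksum
    bits = "0" * 128 + cs                                        # 132 bits total
    mnemonic = [WORDS[int(bits[i:i + 11], 2)] for i in range(0, 132, 11)]
    print(" ".join(mnemonic))  # eleven "abandon"s followed by the checksum word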


I can't say I understand a lot of it. Where can I read more on hardware exploits and how people figure them out?


Very logical explanation of the exploit. Thanks.



