The term of art for this kind of attack is SPA (simple power analysis). That it's possible at all suggests amateur-hour engineering with very little security QA. The more sophisticated attacks use DPA (differential power analysis), which accumulates the results of multiple runs and performs statistical analysis on them.
When I was on a team that made a similar device over a decade ago, we were up to our eyeballs in academic papers about SPA and DPA hardware and software countermeasures. That doesn't mean we didn't make any mistakes, but at least we were knowledgeable enough to hook an oscilloscope up and see what's going on.
These guys completely ignored an entire class of attacks that has been known in detail for a couple of decades (in reality since 1943, declassified in 1972 [1]). I wouldn't trust these guys to protect anything of value.
There is a software solution called blinding. The idea is to 1) scramble your input (message, private key) by multiplying it with a random blinding factor, 2) do the crypto math, and 3) divide out the random blinding factor.
Since all the crypto math happens with what is now essentially random numbers, you gain no knowledge from power analysis.
The downside of this method is that it requires additional operations, and your crypto algorithm needs to be suitable for it. There is literature on this method, but don't be confused by "blind signatures", which are something different.
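The three steps can be sketched with textbook-sized toy RSA numbers (illustrative only; the parameters are the classic n=3233 example, and a real device would use a vetted, constant-time library):

```python
import math
import secrets

# Toy RSA parameters (p=61, q=53). Purely illustrative.
n = 3233   # modulus
e = 17     # public exponent
d = 2753   # private exponent

def blinded_sign(m: int) -> int:
    # 1) blind: multiply the message by r^e for a fresh random r coprime to n
    while True:
        r = secrets.randbelow(n)
        if r > 1 and math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n
    # 2) the secret-exponent math now runs on an effectively random value
    s_blinded = pow(blinded, d, n)
    # 3) unblind: (m * r^e)^d = m^d * r (mod n), so divide r back out
    return (s_blinded * pow(r, -1, n)) % n

# The blinded computation yields the same signature as the direct one,
# but the exponentiation never touches m directly.
assert blinded_sign(42) == pow(42, d, n)
```

Since r is fresh per run, the intermediate values an attacker could observe are uncorrelated with the message across runs.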
Any crypto whiz around who can tell me if secp256k1 ECDSA can be blinded?
But this is an enclosed system. Can't those blinding factor values also be pulled via the same monitoring techniques, or at least give enough information to reduce the number of possible values for brute forcing them?
Definitely not an expert, but I think this defends against differential power analysis as you can't statistically correlate multiple runs (assuming you can't force or precisely measure the random number used each time).
In this particular attack, it looks like the attacker measured the power drawn over USB. This could probably be defeated by buffering power.
Include some kind of energy storage on the device, such as a battery or an ultra-capacitor. Your external power source (e.g., USB) charges your storage device, and your computation is powered from the storage device. Only charge the storage device between complete operations.
Someone watching power over USB would then see no power draw during key operations, and a smooth power draw between operations.
If your threat model includes attackers who will be able to probe inside the device while it is doing operations, and so could look at the power on the far side of the battery or capacitor, then buffering power won't be sufficient.
I was thinking the same thing. The equivalent of a power factor conversion stage at input makes this type of attack impossible unless they have access inside the product.
I have absolutely zero experience in things like this, but would an integrated component (so that it can't be ripped off) that dissipates a random amount of power as heat (through a controllable variable resistor) solve this?
Random noise would just mean that you have to take multiple measurements and average them. Depending on the system in question, "multiple" may or may not be impractically many.
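A quick numpy sketch of why averaging defeats injected noise (synthetic traces; the trace shape, noise level, and run counts are arbitrary): the residual noise shrinks roughly like 1/sqrt(n_runs).

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed "leakage" trace buried under injected random noise.
true_trace = np.sin(np.linspace(0, 2 * np.pi, 500))
noise_sigma = 5.0  # injected noise is much larger than the signal

def averaged_trace(n_runs: int) -> np.ndarray:
    # Each run is the same underlying trace plus fresh random noise.
    runs = true_trace + rng.normal(0, noise_sigma, size=(n_runs, 500))
    return runs.mean(axis=0)

def rms_error(trace: np.ndarray) -> float:
    return float(np.sqrt(np.mean((trace - true_trace) ** 2)))

err_10 = rms_error(averaged_trace(10))      # roughly noise_sigma / sqrt(10)
err_1000 = rms_error(averaged_trace(1000))  # roughly noise_sigma / sqrt(1000)
assert err_1000 < err_10  # more runs, less residual noise
```

Whether "multiple" is 10 or 10 million depends on how large the injected noise is relative to the leakage, which is why noise alone isn't a countermeasure.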
It's not out of scope for a hardware wallet. It is by design supposed to be losable without compromising your private key. Using an actual secure element (engineered in silicon to thwart this type of side-channel exploit) or even clever software can prevent this attack, or at least make it exponentially more difficult.
You have to enter a PIN before unlocking the Trezor, and you can add a passphrase to your wallet. Both make the attack much more expensive for an attacker who just stumbled on the device.
Because any process with I/O capability on the host OS can read the private key over USB?
Short of an oscilloscope hiding in the computer you use (totally possible), no process can derive the private key from the Trezor, in theory.
Even if you have processes that blatantly copy every USB device's contents (or even log all interactions verbosely) and log all key presses and clipboard interactions on the machine, you can still use a Trezor without compromising anything.
You can also verify that your clipboard is not being manipulated, because the Trezor shows the address it will be signing a transaction with on its own display.
I believe they are suggesting that the above comment "Short of a oscilloscope hiding in the computer you use" might not be so far fetched. The hardware needed to do the type of analysis used for this attack has been around a long time and is presumably very cheap at scale. So if for example a nation/company that has control over the manufacture of a large number of USB controllers decided they wanted to be able to do this kind of power analysis on all USB devices plugged into their controllers they could do so easily.
Essentially, adding a microcontroller with an analog-to-digital converter that can watch the power pin inside the USB controller/port itself would be relatively simple and cheap.
There are USB3 power-delivery controllers on the market today which do exactly that, although they don't have 20 MHz of bandwidth or the necessary sample rate. Some even embed the microcontroller too.
Malware on the host PC can steal your private key. An attacker who gets brief physical access (e.g., you look away for a minute) can quickly copy a USB drive.
While this attack shows the trezor is probably a bit amateur-hour, it does provide some amount of value.
Just like you can prevent a "paper wallet" from being compromised by not letting anyone casually look at it?
If I invested in one of these hardware wallets, I'd be interested in making an attack cost at least $X, where $X is greater than the value in the wallet. I'd also like some time component, Y, that would allow me to transfer the money before the private key was found.
The main selling point of hardware wallets is that they can interface with an internet-connected device to sign transactions without exposing the private key to that device, which has a much larger attack profile.
Cost $X to attack? That makes complete sense. And if you know the physical device is compromised then you can "break the seal" by restoring the seed phrase on another device to transfer the funds elsewhere before an attacker is able.
There's a whole industry that's doing just that. Look up Hardware Security Modules. It is likely that you have a security device like that in the machine you are using right now (a Trusted Platform Module). People didn't simply throw their hands up; there are solutions for various problems in this space, with physical access being present in the threat model.
>It is likely that you have a security device like that in the machine you are using right now
Funny you say that, because TPMs aren't actually mandated to be tamper resistant, only tamper evident [1]. What this means is that you won't be able to extract the keys without visibly destroying the device, but if you delid the chip and probe it, you can probably still extract them. I suspect it's the same with other HSMs you see in everyday life (smart cards, smartphones with TrustZone, etc.).
There are different devices for different security needs. When you're protecting the key material for a revocable certificate, tamper evidence is sufficient: when you detect tampering, revoke the certificate. FIPS 140-2 Level 2 devices provide this level of security and are common in end-user credentials like smart card badges and the TPMs in laptops. FIPS 140-2 Level 3 provides tamper resistance meant to defeat most attacks; that's the device you'd want to use to protect a root of trust or an important encryption key. Level 4 devices are meant to hold up against as many attacks as possible, even when the attacker can push the physical operating environment far outside normal bounds (solvents, liquid nitrogen, extreme heat, etc.).
I did a quick skim of the article for "resist" and couldn't find anything to back your claim. All the article says is that smart cards have better security because they're isolated from the host (which is a security measure, but says nothing about physical tamper resistance), and that some smart cards have tamper resistance built in.
For true tamper resistance you need to have some way to actually detect tampering and erase the secrets, which usually leads to some battery-backed SRAM and associated tamper response circuitry.
While there are some smart cards and smartcard-like HSMs with integrated batteries (Fortezza comes to mind, but it seems to use the battery primarily for the integrated RTC rather than for any tamper-detection mechanism), common smart cards do not have a battery.
If you'd like more evidence of what I'm saying, read Smart Cards, Tokens, Security and Applications or Secure Smart Embedded Devices, Platforms and Applications. Both are graduate textbooks covering smart card design and development.
The term "smart card" is frequently misused in popular nomenclature. As a technical term, it refers specifically to contact or contact-less (like NFC) cards with an embedded chip which are, at minimum, physically tamper resistant. For example, a typical credit card is not a smart card. A SIM card is a smart card (or token).
Tamper resistant does not imply tamper proof, which can also be a source of confusion.
There's no absolute solution. But that doesn't mean that they shouldn't have protected against well-understood classes of attack. Vulnerability to an attack that needs 5 minutes of physical access would be much better than vulnerability to an attack that needs 30 seconds.
This was published in 2015. Updating to a more recent firmware, which requires a PIN when computing the public key, eliminates the utility of such an attack.
As Jochen mentions in the timeline in his post, Trezor asked that he delay publication until they could fix the bug in release 1.3.3. He says they included a fix on Mar 30, which you'll find in the link above.
It's amazing how hard it is to make something secure when considering side channels. Almost any bit of information can be used to extrapolate more - just consider this type of power-based attack, timing attacks, meltdown/spectre, CBC padding oracles, etc.
Plus, suddenly people are pumping millions (or billions) into private keys stored on little devices. It is amazing.
It seems that way from the outside, I'll grant. But I'll explain by analogy why I don't think it's as crazy as one might otherwise at first glance assume.
It's possible to create gold artificially, but because of the cost of doing it vs just purchasing the natural stuff, you can be almost certain any gold you come into contact with is not artificially created.
In much the same way, even though there are indeed tons of theoretical attacks that compromise various crypto keys linked to massive fortunes, because of the way the sector has evolved in terms of not making high value keys available in a low security context, and mitigating those threats in a high security context, the likelihood that any given crypto "account" you care to mention will ever be hacked is similarly low. It's not worth the attackers to invest the necessary resources to snag your minor android wallet stash, and even if they tried to bring those resources to bear on Xapo's vaults, they would fail.
Also, the sector as a whole has basically pioneered a whole new class of security measures, at least to the extent they're now in reasonably wide circulation, and improved security consciousness and software in general around the space immensely, exactly because now a bug isn't just some time and embarrassment, it may well be losing millions of dollars in value. In that context it makes sense to invest in the security to do it right and think carefully about all possible attack surfaces and vulnerabilities.
Some impressive work in this field has been done by Tel Aviv University. They were able to extract private keys from a laptop by merely touching it [1] or placing a smartphone nearby [2].
"Silence on the Wire" [1] explores deducing the state of systems from the outside world, or even from the absence of information. I found it entertaining to read, and this article somehow reminded me of it.
A while back, I was looking at using a TREZOR as a U2F key. I liked it: the interface looked good, it confirmed sending the key on the device itself, and the software was open source. For those last two reasons, it looked nicer than using a YubiKey, which closed its source with the YubiKey 4 and doesn't provide confirmation on the device itself.
That said, this attack didn't make the TREZOR look great. What's a good option for a U2F key? On a similar topic, with WebAuthn, is it worth waiting for FIDO2 keys?
I recommend using 2FA ASAP. Any implementation is better than none, though it depends on your attack surface/vector and even on how good your first factor (your password) is.
You can get a cheap, good YubiKey which is U2F/FIDO2 certified for 20 USD/EUR [1]. These communicate over USB-A or USB-C depending on the one you buy. We're in a transition from USB-A to USB-C, which is always a rough time. Do you go for backwards compatibility, the (better) future standard, or both (buying two)?
If you prefer Bluetooth or NFC, the one I mentioned above is too limited. Google recommends a Feitian for the former, and I recommend a YubiKey NEO for the latter. The YubiKey NEO is also open source.
With regards to TREZOR/Ledger, 2FA usage is more of a nice by-product. People buy them as Bitcoin wallets, and that they can be used for more is an afterthought, albeit one that can save you a couple of bucks.
Either way, I recommend at least one 2FA backup. It could be codes, an app, a second YubiKey, or even SMS. Each of these obviously has its weaknesses, though; you could store backup codes in e.g. a Cryptosteel [2] at a notary. I plan to do that with my important passwords (i.e. the password for my password manager).
Yubikey of course, however there's no reason you can't use the TREZOR, just update the firmware and regenerate the seed if you think your device may have been compromised in the meantime.
I think it's worth considering that these attacks are problematic in theory, but they still require quite a bit of access and opportunity.
If your personal threat model includes physical attacks on a U2F key, then it's unlikely the key is the weak link. The attacker would more likely use that physical access to root your computer.
If your threat model does not include physical attacks, then any U2F device (except software-based ones) is fine.
The "oscilloscope" Jochen used for this is, quite frankly, a huge piece of junk. I wrote the library he used to interface with the scope for this attack, and I wouldn't even bother touching that "scope" unless I needed a quick two-port DAQ. It would be far smarter to spend a little more and buy a Rigol 1102 (or similar) instead.
As for why it's technically a bad piece of hardware, I can elaborate if someone wants details, but at a high level: it lacks any form of hardware triggering (everything is driven by the relatively slow on-board 8051; there's no FPGA or anything similar), data transfer only happens over bulk USB transfers, and sometimes the firmware just drops data.
Definitely. The 1054Z is a bargain for what it is. I've got a DG1022Z generator as well, and you can use the two together to do protocol injection: record with the DS1054Z, dump into the DG1022Z's arb memory, then use the DS1054Z pattern trigger to trigger the DG1022Z.
Totally amazing bits of kit, and together they cost me less than a bottom-end 100 MHz Tektronix scope. Both have Ethernet too, so I do a lot of scripting in Python with the two; you can add your own features that way. So far I've written a simple distortion analyser similar to the one Bob Pease described for ED magazine around 2001. Basically it applies an arbitrary phase shift and gain to match the output against the input, then subtracts, so you can see where in the cycle the distortion occurs.
Only complaint is that the DG1022Z csv parser is a piece of shit if you want to load your own arb data. Works better over LXI.
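The subtract-after-gain/phase-alignment idea can be sketched in a few lines of numpy (synthetic signals standing in for the captured traces; the fit is done by least squares on a sin/cos basis rather than by manually tuning knobs):

```python
import numpy as np

# Synthetic stand-in for a captured DUT output trace.
t = np.linspace(0, 1, 5000, endpoint=False)
f = 50.0  # fundamental of the test tone, Hz

y = 0.9 * np.sin(2 * np.pi * f * t - 0.3)      # gain and phase shift...
y = y + 0.01 * np.sin(3 * 2 * np.pi * f * t)   # ...plus a small 3rd harmonic

# Fit the best gain/phase at the fundamental: a sin/cos basis at f plays
# the role of the reference input, and least squares finds the "arbitrary
# phase shift and gain" automatically.
basis = np.column_stack([np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t)])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
fundamental = basis @ coef

# Subtract: what remains is the distortion, resolved over the cycle.
residual = y - fundamental
```

Here the residual comes out as just the injected 3rd harmonic (peak about 0.01), since the least-squares fit removes the fundamental exactly.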
I have a 1054Z and use it in a pretty basic manner, but when I want to take screenshots, I find that the Ultrascope software is super crashy. I'm on Win7 64bit. It basically crashes almost every time I run it. I can restart it but it's irritating.
All I need to do is take screenshots, is there a quick way to do that over USB without having to use Ultrascope?
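One possible route is pyvisa plus SCPI, skipping Ultrascope entirely. A hedged sketch: the :DISP:DATA? display-dump query and the IEEE-488.2 block framing are per Rigol's DS1000Z programming guide, and the VISA resource string below is just an example address that will differ on your machine:

```python
def strip_tmc_block(raw: bytes) -> bytes:
    """Strip an IEEE-488.2 definite-length block header, e.g. b'#9000123456<data>'."""
    assert raw[:1] == b"#"
    ndigits = int(raw[1:2])            # number of digits in the length field
    length = int(raw[2:2 + ndigits])   # payload length in bytes
    return raw[2 + ndigits:2 + ndigits + length]

if __name__ == "__main__":
    import pyvisa  # pip install pyvisa (plus a backend, e.g. pyvisa-py)
    rm = pyvisa.ResourceManager()
    # Example VISA address -- find yours with rm.list_resources().
    scope = rm.open_resource("USB0::0x1AB1::0x04CE::DS1ZA000000000::INSTR")
    scope.timeout = 10000              # screen dumps can take a few seconds
    scope.write(":DISP:DATA?")         # DS1000Z display-dump query (BMP data)
    raw = scope.read_raw()
    with open("screenshot.bmp", "wb") as fh:
        fh.write(strip_tmc_block(raw))
```

Since the scope also has Ethernet, the same code should work with a TCPIP VISA resource string instead of USB.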
Note that the 8051 core on the FX2 is mostly there just to set up and manage the FIFOs, which run at the full 48 MHz rate supported by USB 2. Building logic analyzers and other things that don't rely on basic FIFO operation was not Cypress's intention.
Yeah, I think that was it. It's kind of fun to program (in the sense that 8051 assembler is usually less crazy than x86/x64 assembler), but it's way underpowered in that way it got used for the 6022BE.
In case this could help to improve that scope, I'll share my best FX2LP trick:
The key to getting the best out of the FX2LP is to set MPAGE to 0xe6. This way you can address the USB registers at 0xe6xx through 8051 r0 and r1 -- no need to load the register address into DPTR. Most interestingly, this also works for the XAUTODAT1 and XAUTODAT2 registers, which can act as (optionally auto-incrementing) read/write pointers anywhere in xram. This enables "fast" memcpy, memfill, strlen, etc.
If r0 points to XAUTODAT1 and r1 to XAUTODAT2, and auto-incrementing is enabled, memcpy becomes just this:
memcpy_loop:
    movx a, @r0           ; read byte via XAUTODAT1 (source pointer auto-increments)
    movx @r1, a           ; write byte via XAUTODAT2 (destination auto-increments)
    djnz r7, memcpy_loop  ; r7 holds the byte count
AUTOPTR1H/AUTOPTR1L and AUTOPTR2H/AUTOPTR2L set the pointer address.
With FX2LP autopointers, 8051 native DPTR isn't typically needed at all.
My understanding is that, generally speaking, that's throwing good money after bad, and you're much better off spending the $350-$500 or so for a proper entry-level scope. Check out the EEVblog forums if you're in the market. [1]
Edit: it's worth noting that if you can deal with the design and hardware limitations, the limitations of the included proprietary software shouldn't necessarily scare you off; sigrok [2] has support for a lot of these cheap devices. Since writing this comment, I also discovered a newer project called OpenHantek [3], which seems to be very active.
I saw OpenHantek when I was making sure I'd be able to use the scope with Linux.
I have no need for an oscilloscope, nor do I even know how to use one. They've always seemed pretty interesting to me, but when I've looked before, I've managed to talk myself out of spending XXXX on something I may only use once. If we're talking the sub-100 range, then it seems like something I could use or not use, and then upgrade if I actually find it wanting (i.e. I invest enough time to find something I want to do with it but can't).
Tell me again how we're supposed to be competent enough to run our own banks, and that as long as we never make a mistake reversing transactions are not a needed feature...
I'm not sure to be honest, but I wouldn't be surprised if there would be hack attempts or people looking for those if they er, "found" a hardware wallet somewhere. Anyone that has a hardware wallet is bound to have a nontrivial amount of cryptocurrencies on there.
[1] https://www.wired.com/2008/04/nsa-releases-se/