The NSA would have to be damn slick to fit any kind of spying tool into a CPU microcode update. The CPU doesn't know what the code it's running does or what the data it's handling means. It would need to set aside memory to do pattern matching on the code/data being processed to introduce any kind of back door. Then it would need to hide that back door so nobody notices, which could involve a huge amount of extra logic to disguise the fact that the data has been modified.
An example of an attack would be for the NSA to muck with random number generation to bias it towards certain values in a very subtle way. That would weaken encryption, since the number of values an attacker has to try when brute forcing is smaller. This would be difficult to hide: if they attacked, say, RdRand, anyone using or researching RdRand would be able to see that it's no longer NIST/FIPS/ANSI compliant. They would need to attack it such that the back door was only open when nobody was looking (when there isn't a very large number of those instructions being run, or when the instruction is part of a larger piece of code). Even then, anyone studying the final output of the affected code could potentially determine that the random number source was biased.
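To make the "smaller brute-force space" point concrete, here's a toy C sketch. Everything in it is made up for illustration (the bias pattern, the function names); it is not a claim about what RdRand actually does. If a rigged generator quietly pins 16 bits of every 64-bit output to values the attacker knows, a 128-bit key built from two outputs has only 96 unknown bits left:

```c
/* Sketch: how a subtly biased RNG shrinks a brute-force search.
 * All names and the bias pattern are hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Pretend hardware RNG: looks random, but the top 16 bits of every
 * 64-bit output are forced to a fixed pattern known to the attacker. */
static uint64_t biased_rand64(void)
{
    uint64_t r = ((uint64_t)rand() << 32) ^ (uint64_t)rand();
    return (r & 0x0000FFFFFFFFFFFFULL) | 0xDEAD000000000000ULL;
}

int main(void)
{
    /* A 128-bit key built from two 64-bit draws now has only
     * 2 * 48 = 96 unknown bits instead of 128. */
    int unknown_bits = 2 * 48;
    printf("nominal keyspace: 2^128\n");
    printf("effective keyspace with the bias: 2^%d "
           "(about 2^32 = 4.3 billion times easier to brute force)\n",
           unknown_bits);
    printf("sample output: %016llx\n",
           (unsigned long long)biased_rand64());
    return 0;
}
```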
Either way, I'd think that an attack like this would be pretty damn hard to pull off, and Intel has very little reason to help the NSA with that. Can you imagine if it was found out that Intel was helping the NSA by weakening encryption? Everyone would go out and buy an AMD machine.
- continuously compute a CRC over the instruction bytes being executed, resetting the CRC to zero on some magic value, say whenever a jump instruction is encountered.
- whenever the CRC hits some magic number, shortcut the random number generator to return a few hundred zeroes.
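In rough C, the circuit described in those two bullets could be modelled like this. The opcode, the CRC polynomial and the magic constants are all invented; the point is only how little state such a trigger would need:

```c
/* Rough software model of the hypothetical trigger circuit. */
#include <stdint.h>
#include <stdio.h>

static uint32_t crc;          /* rolling CRC over executed bytes   */
static int      rig_rng = 0;  /* "return zeroes from the RNG" flag */

static uint32_t crc32_step(uint32_t c, uint8_t byte)
{
    c ^= byte;
    for (int i = 0; i < 8; i++)
        c = (c & 1) ? (c >> 1) ^ 0xEDB88320u : c >> 1;
    return c;
}

/* Called (conceptually) for every instruction byte the CPU executes. */
static void watch_instruction_byte(uint8_t byte)
{
    if (byte == 0xE9)          /* hypothetical "magic" jump opcode  */
        crc = 0;
    else
        crc = crc32_step(crc, byte);

    if (crc == 0xCAFEBABEu)    /* hypothetical trigger value        */
        rig_rng = 512;         /* poison the next 512 RNG reads     */
}

/* What a hypothetical RDRAND-like instruction would then return. */
static uint64_t rigged_rdrand(uint64_t honest_value)
{
    if (rig_rng > 0) {
        rig_rng--;
        return 0;              /* backdoor open: hand out zeroes    */
    }
    return honest_value;       /* backdoor closed: behave normally  */
}

int main(void)
{
    /* Feed a toy byte stream; the real circuit would sit on the
     * instruction fetch path, not in software.  This stream will not
     * hit the magic value, so the RNG stays honest here. */
    uint8_t stream[] = { 0x48, 0x89, 0xE5, 0xE9, 0x90, 0x90 };
    for (size_t i = 0; i < sizeof stream; i++)
        watch_instruction_byte(stream[i]);

    printf("poisoned reads pending: %d, sample RNG read: %llu\n",
           rig_rng, (unsigned long long)rigged_rdrand(12345));
    return 0;
}
```

On real silicon this would be a handful of gates watching the fetch path rather than software, which is what makes the "small, weirdly connected part" worry below plausible.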
Alternatively, one could have a similar circuit that tries to detect the password-checking code and sets a 'please let the next password comparison succeed' flag. The two circuits could even be the same: for a given sequence of 'check password' instructions, it is easy to make the hardware or microcode accept some special password as a universal key.
And yes, that is 'for a given sequence of instructions', but those won't change that often in popular OSes, and you can always issue a microcode update if, say, a service pack, security update or compiler change changes that sequence.
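A toy model of that universal-key variant, again with everything (the flag, the key, the doctored compare) made up purely for illustration:

```c
/* Sketch of the 'universal key' idea: once the detector has recognised
 * the password-checking routine, the compare also accepts one
 * hard-coded value.  All names and values here are hypothetical. */
#include <string.h>
#include <stdio.h>

static int in_password_check = 0;                   /* set by the detector */
static const char UNIVERSAL_KEY[] = "open-sesame";  /* made-up skeleton key */

/* Stand-in for the doctored compare: behaves normally everywhere,
 * except inside the recognised password-check sequence. */
static int rigged_memcmp(const void *attempt, const void *stored, size_t n)
{
    if (in_password_check &&
        n == sizeof UNIVERSAL_KEY - 1 &&
        memcmp(attempt, UNIVERSAL_KEY, n) == 0)
        return 0;                                   /* report "match" */
    return memcmp(attempt, stored, n);
}

int main(void)
{
    const char *stored = "correct horse battery staple";

    in_password_check = 1;                          /* detector has fired */
    printf("real password: %s\n",
           rigged_memcmp("correct horse battery staple", stored, 28) == 0
               ? "accepted" : "rejected");
    printf("universal key: %s\n",
           rigged_memcmp(UNIVERSAL_KEY, stored, sizeof UNIVERSAL_KEY - 1) == 0
               ? "accepted" : "rejected");
    return 0;
}
```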
Tinfoil hat off: I think Intel could easily do something like that, but I doubt they do. Some day, somebody will reverse engineer that CPU, notice a small, weirdly connected part and get curious about what it is doing.
I realize I'm a bit late on the reply, but a few thoughts:
- A CRC would easily encounter a collision within the first few minutes (at most) of running any arbitrary code (your operating system, for instance); see the back-of-envelope numbers after this list. You'd need something much longer and less collision-prone, like SHA. Otherwise you'd be getting lots of weird problems as the CPU triggers the backdoor constantly.
- More complex hashing would take more memory and a lot of CPU time, something that would doubtfully be available at the microcode level. If every instruction computed a non-trivial hash, someone would quickly notice the performance hit ;)
- Shorting the RNG would produce some very suspicious results, and anyone running fuzzed tests, testing encryption strength, or running test suites against something like OpenSSL would quickly discover the backdoor (a toy check is sketched after this list).
- Any kind of "the next comparison is always true" would easily trip even the most basic of unit tests, and it would happen every time. Making it only happen randomly might help, but the backdoor would likely still be found relatively quickly.
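On the collision point above, a quick back-of-envelope, assuming a 32-bit CRC and roughly 3 GB/s of instruction bytes (both numbers are assumptions, not measurements):

```c
/* With a 32-bit CRC, an arbitrary byte stream hits any particular
 * magic value about once every 2^32 bytes.  The 3e9 bytes/s figure is
 * just a rough stand-in for a modern core's fetch rate. */
#include <stdio.h>

int main(void)
{
    double crc_values    = 4294967296.0;  /* 2^32 possible CRC values */
    double bytes_per_sec = 3e9;           /* assumed fetch rate       */
    double secs_per_hit  = crc_values / bytes_per_sec;

    printf("expected accidental trigger roughly every %.1f seconds\n",
           secs_per_hit);   /* ~1.4 s: it would fire constantly by accident */
    return 0;
}
```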
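And on the RNG-shorting point, even the most naive check would catch the few-hundred-zeroes scenario from the earlier post. suspect_rand64 here is a stand-in I made up, not any real API:

```c
/* Counting all-zero outputs is enough: an honest 64-bit source produces
 * one roughly every 2^64 draws, so a burst of a few hundred is a giveaway. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define RIGGED 1   /* flip to 0 to model an honest source */

static uint64_t suspect_rand64(long i)
{
    if (RIGGED && i < 512)
        return 0;                                   /* the poisoned burst */
    return ((uint64_t)rand() << 32) ^ (uint64_t)rand();
}

int main(void)
{
    long zeroes = 0, draws = 1000000;
    for (long i = 0; i < draws; i++)
        if (suspect_rand64(i) == 0)
            zeroes++;

    printf("%ld all-zero outputs in %ld draws\n", zeroes, draws);
    printf("expected from an honest 64-bit RNG: ~%g\n",
           (double)draws / 18446744073709551616.0); /* draws / 2^64 */
    return 0;
}
```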