Yeah, while it seems that every modern CPU is affected by the recently discovered vulnerabilities, Intel and AMD still refuse to release their source code for the ME and PSP. Therefore, everybody on this planet has to run hardware that, by all odds, contains plenty of unpublished zero-day vulnerabilities.
And to make matters worse, at least for Intel we have indications that those security holes are the result of a calculated risk taken to afford a higher development velocity: https://danluu.com/cpu-bugs/
I think "remote" here means "host to TPM chip". Which is still bad, but not on the level of "install a rootkit on a powered-off machine" like some of the Intel ME exploits.
AMD PSP is basically their equivalent to Intel's ME, so this is not surprising... but then it says
> This function is called from TPM2_CreatePrimary with user controlled data - a DER encoded [6] endorsement key (EK) certificate stored in the NV storage.
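For what it's worth, the bug class behind that sentence is a familiar one: firmware parsing attacker-influenced DER data without validating a length field. The PSP code is not public, so the following C fragment is purely a hypothetical illustration of that bug class, not the actual flaw:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical sketch of the bug class: a fixed-size buffer filled from a
     * DER length field that the attacker controls (via the EK certificate in
     * NV storage).  This is NOT the real PSP code, just an illustration.     */
    struct ek_cert_ctx {
        uint8_t subject[64];                /* fixed-size scratch buffer      */
    };

    int parse_ek_subject(const uint8_t *der, size_t der_len,
                         struct ek_cert_ctx *ctx)
    {
        if (der_len < 2)
            return -1;

        /* DER: tag byte, then a (short-form) length byte. */
        size_t field_len = der[1];

        /* BUG: field_len comes straight from attacker-controlled data and is
         * never checked against sizeof(ctx->subject) or der_len, so a large
         * value smashes the buffer -- code execution inside the coprocessor. */
        memcpy(ctx->subject, der + 2, field_len);
        return 0;
    }

    /* The fix is a bounds check before the copy:
     *   if (field_len > der_len - 2 || field_len > sizeof(ctx->subject))
     *       return -1;
     */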
If I understand correctly, this is related to Secure Boot, and to perform such operations on the keys and certificates the user already has to have physical access to the BIOS/UEFI setup, correct?
The PSP is already quite long in the tooth. I think AMD will switch to ARM's recently announced "SecurCore" soon, just like Qualcomm did for the Snapdragon 845:
I think Intel's ME is much more complex than AMD's PSP. Does anyone know if AMD's PSP has a full network stack and the ability to interact with network hardware independent of the main CPU's OS?
TPMs are supposed to be resistant to physical attacks.
With this flaw, someone can just stick a bootable USB stick into your computer, mirror the LUKS/BitLocker-encrypted drive, and get access to the keys in the TPM that protect that drive.
For discrete TPMs the specification explicitly says that they are not required to be resistant to physical attacks (probably because that would require specifying what kinds of attacks it is supposed to be resistant to).
The Platform Security Processor (PSP) is built in on all Family 16h+ systems (basically anything post-2013), and controls the main x86 core startup. PSP firmware is cryptographically signed with a strong key, similar to the Intel ME. If the PSP firmware is not present, or if the AMD signing key is not present, the x86 cores will not be released from reset, rendering the system inoperable.
"Timeline ======== 09-28-17 - Vulnerability reported to AMD Security Team. 12-07-17 - Fix is ready. Vendor works on a rollout to affected partners. 01-03-18 - Public disclosure due to 90 day disclosure deadline."
For a while, I was pretty excited about secure enclaves, as a tool before homomorphic encryption reaches practicality. If remote code execution on the PSP means broken remote attestation, that hope goes down the drain, quickly.
Maybe the keys in the PSP are still protected by secure computing technology, like ARM TrustZone…
Computer security has been ridiculous for quite some time. Your only chance is tons of layers and early detection that something's not OK. I'm really happy that everything that's happening is happening. Sad that things like Cloudbleed got so little attention outside HN-like circles.
I'm happy because it's gonna have to change. Whole stack revisited. Eventually. These things speed it up. In the long run, the thing that holds the most value, in my opinion, is information. Not physical things, not energy - information. Bitcoin is a big step in that direction, but I don't just mean cryptocurrencies. If you can't keep your information secret, its value is destroyed.
I see two paths. One, we do a huge refactoring of how we do computations: super clear assumptions, and provably building simple layers on top of them. I'd like that. The other is that we keep this whole messy legacy, and security becomes based on more and more layers and heuristics - which would eventually turn into a competition between AIs. Brr.
Just some random ponderings, I'm not a security expert.
Maybe with the new, secure stuff, have it implement a padded cell where we can run the old stuff. What's inside the padded cell might become a security disaster, but at least it's kept inside the cell.
We've done more or less that several times in computing: At first the code just ran on the computer and had full access to everything. Soon we got memory protection, privileged instructions, and operating systems. Then we got rings of security, virtual memory, virtual machines, etc.
> I'm happy because it's gonna have to change. Whole stack revisited. Eventually.
I used to believe this kind of thing, but now I think you greatly underestimate human indifference and interest in effort conservation (uncharitably called "laziness").
Look at Intel's response to Spectre/Meltdown. Are they going back and redesigning their microarchitecture with new hardware-enforced safety rings [that actually enforce, lol] and new ways to block timing attacks without sacrificing performance? Seriously doubt it. From LKML it sounds like they're just going to hardware-accelerate IBRS/IBPB to make it faster to shut down branch prediction in risky situations and leave the rest of the shebang as-is.
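For context, IBRS/IBPB are just MSR writes that the kernel issues at trust boundaries; "hardware acceleration" would merely make those writes cheaper. A rough C sketch of the pattern (ring-0 only; the MSR numbers are from Intel's published speculation-control documentation, the function names are mine):

    #include <stdint.h>

    /* MSR numbers per Intel's speculative-execution mitigation docs.
     * These writes only work in ring 0 (i.e. inside the kernel).            */
    #define MSR_IA32_SPEC_CTRL  0x48   /* bit 0: IBRS, bit 1: STIBP          */
    #define MSR_IA32_PRED_CMD   0x49   /* bit 0: IBPB (write-only command)   */

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        uint32_t lo = (uint32_t)val, hi = (uint32_t)(val >> 32);
        __asm__ volatile("wrmsr" : : "c"(msr), "a"(lo), "d"(hi));
    }

    /* On a switch to a less-trusted task, flush indirect branch predictions
     * (IBPB) so the previous task cannot have poisoned them.  This is the
     * expensive write that is reportedly being made faster in hardware.     */
    static void switch_to_untrusted_task(void)
    {
        wrmsr(MSR_IA32_PRED_CMD, 1);            /* IBPB */
    }

    /* Around particularly risky kernel paths, restrict indirect branch
     * speculation (IBRS) and release it again afterwards.                   */
    static void enter_risky_section(void) { wrmsr(MSR_IA32_SPEC_CTRL, 1); }
    static void leave_risky_section(void) { wrmsr(MSR_IA32_SPEC_CTRL, 0); }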
Even when the forecasted apocalyptic events occur, it's amazing how little anyone cares, or how little gets recognized. Surely there are people who've speculated (ha!) about attacks like Spectre/Meltdown, given the knife's edge nature of hardware virtualization on x86, and advised against multi-tenancy. Surely there are people who have paid attention over the last ten years to the dozens of sandbox escape attacks that already exist without exploiting the microarchitecture! Are they getting their due? Is anyone asking why people didn't consider these possibilities or listen to the people who warned them? Nope, because they just don't want to hear that. It's all "Oh gee how could Intel have done this to us?!" when "How could you have acted like this was safe" is at least an equally valid question.
TPMs, again, are another example of exactly the same thing. Major exploits in them are 100% routine by now. Does anyone care? Google is quietly working to remove them from their own machines but it doesn't seem like anyone is going to get any real headway outside of that. Do freedom advocates like RMS get their due? Nope, they just get told "Bugger off with your 'I told you so'."
Have you ever spent months or years warning your bosses about something, only to have that thing happen, and watch them hand-wave it away and get extremely irritable after you mention that they had fair warning? Most semi-aware engineers probably have, because this happens constantly.
Admitting, realizing, and honestly correcting our mistakes is just not a thing that people do, unless they feel substantial direct and personal pain that the brain decides greatly exceeds the forecasted effort expenditure to correct the issue. Such negative force cannot be applied over an industry at large unless there is a very specific and coordinated demand from the handful of people at the tippy-top, as in the case of Spectre/Meltdown, since in the age of cloud computing, those exploits fundamentally jeopardize the profitability of every major tech company.
Let me answer that for you. Over the last 10 years, the world has all but switched to mobile devices - mobile devices that make Windows look like a secure operating system. In theory vendors promise 2 years of "safe" operation, and I am unaware of a single case where they actually shipped phones without major security vulnerabilities (vulnerabilities known to at least some of their development team).
Internationally, iPhones do not matter. They're like 10% of the market, so I'm focusing on Android phones here. And it's not like iPhones don't have exploits for them; it just means a few more years, something more like 4 years, until they're exploitable.
It is regularly reported that 40% of all Android phones are vulnerable to individual vulnerabilities. At least half of all active Android phones do not receive security updates, even in the case of serious vulnerabilities (and that patched "half" technically counts anyone who ever got even a single security update). How many Android phones in total are trivially hackable if you run an app on them? I'm going to say at least 75%, including at least all phones released more than 2 years ago.
So no. Nobody cares. We all know how bad the Wintel situation is, and Android is worse.
We need a global security disaster to happen so totally that regulators intervene and hold these vendors accountable.
If we want to understand why users do what they do, perhaps we should ask the UX people. Alan Cooper in his book "The Inmates are Running the Asylum" says that one of the differences between programmers and ordinary people is that programmers worry a lot about what-if scenarios while ordinary people just hope for the best and then handle surprises as they arise.
If we are to change the behavior of consumers, we have to work with their natural motivation. I think the most realistic plan is to subsidize core infrastructure with enough high-quality open-source software and hardware to drive commercial interest out of all security-sensitive components.
It's like how Wikipedia is subsidized by its editors to provide the common good of education. Open source can be similarly subsidized by developers to provide the common good of security.
"Google designed Titan's hardware logic in-house to reduce the chances of hardware backdoors. The Titan ecosystem ensures that production infrastructure boots securely using authorized and verifiable code."
This is what we need. Authorized and verifiable code, none of this opaque binary blob BS.
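To be fair, "authorized and verifiable code" boils down to a small amount of logic in the root of trust: check a signature over the next boot stage against a key it already trusts, and refuse to run anything else. A minimal sketch of that idea in C (this is not Titan's actual interface; the image layout and ed25519_verify() are assumptions):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    #define SIG_LEN 64
    #define KEY_LEN 32

    struct fw_image {
        const uint8_t *payload;          /* next-stage firmware              */
        size_t         payload_len;
        uint8_t        sig[SIG_LEN];     /* detached signature over payload  */
    };

    /* Public key burned into the boot ROM at manufacturing time (placeholder). */
    static const uint8_t ROM_PUBKEY[KEY_LEN] = { 0 };

    /* Assumed to be provided by the ROM's crypto library. */
    bool ed25519_verify(const uint8_t sig[SIG_LEN],
                        const uint8_t *msg, size_t msg_len,
                        const uint8_t pubkey[KEY_LEN]);

    /* Hand control to the payload only if the signature checks out; on
     * failure the safest thing a root of trust can do is halt.              */
    void boot_next_stage(const struct fw_image *img,
                         void (*jump)(const uint8_t *payload, size_t len))
    {
        if (!ed25519_verify(img->sig, img->payload, img->payload_len, ROM_PUBKEY)) {
            for (;;) { }                 /* refuse to boot unauthorized code */
        }
        jump(img->payload, img->payload_len);
    }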
Does Azure's composable FPGA design offer potential for assembling a higher level of abstraction? My (poor) analogy is WebAssembly reduced to instructions/primitives, but where you control the gate logic that's run - and isn't that logic then totally isolated from any other design's side effects?
If by some circumstance you had a terrifically clean and tidy system in a functional language, wouldn't that offer a higher level of possible "primitive" operations?
If you stay with a system that is as open as possible from the lowest levels of the hardware to the highest level of the software, and if you airgap, and audiogap, and RF-gap the system permanently until it ceases to exist, you are pretty fine.
Also, more practically: take two computers with different ISAs and underlying hardware that compute the exact same high-level semantics, that don't know about each other but transparently share the necessary peripherals (for example a hardware random number generator), and let them talk to the world only through a simple electronic checker that stops the system if the two computers don't emit exactly the same information bit by bit. That is also pretty safe, even if you use backdoored computers. Just make sure the two computers don't contain identical backdoors (which is not that difficult).
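The checker in such a setup doesn't need to be smart: it forwards output only while the two machines agree bit for bit and halts on the first mismatch. A minimal C sketch of that comparator, with the actual I/O channels abstracted away as assumed callbacks:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    /* Sketch of the "simple electronic checker": pass output to the outside
     * world only while both independent machines produce identical bytes.
     * The three callbacks stand in for the real hardware channels.          */
    struct checker {
        size_t (*read_a)(uint8_t *buf, size_t n);     /* output of machine A */
        size_t (*read_b)(uint8_t *buf, size_t n);     /* output of machine B */
        void   (*emit)(const uint8_t *buf, size_t n); /* line to the world   */
    };

    /* Returns false on the first disagreement, which is treated as evidence
     * that one machine is faulty or compromised; true on a clean shutdown.  */
    bool run_checker(struct checker *c)
    {
        uint8_t a[256], b[256];

        for (;;) {
            size_t na = c->read_a(a, sizeof(a));
            size_t nb = c->read_b(b, sizeof(b));

            if (na != nb)
                return false;            /* streams diverged in length       */
            if (na == 0)
                return true;             /* both machines finished cleanly   */

            for (size_t i = 0; i < na; i++)
                if (a[i] != b[i])
                    return false;        /* bitwise mismatch: stop the system */

            c->emit(a, na);              /* identical, safe to pass through  */
        }
    }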
High and sufficient security in computer systems is practically possible. We just don't work at it. Instead we work on JavaScript and WebAssembly and proprietary hardware and software.
The Spectre attack (for example) is an innovation in breaking complex systems. It's not just a hardware bug that can be easily spotted with a more cleverly designed process, or prevented with good security practices. It's a new way to look at a very general and basic concept (speculative execution itself, not any particular implementation!) that was introduced years ago and was considered pretty safe for all these years.
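Concretely, the textbook Spectre variant-1 gadget is nothing but an ordinary bounds check; the leak happens in code that is architecturally never executed. An illustrative C fragment following the pattern from the original Spectre paper (the array names are the paper's, the snippet is just a sketch):

    #include <stdint.h>
    #include <stddef.h>

    /* Classic Spectre v1 (bounds-check bypass) gadget.  Architecturally this
     * code is perfectly correct -- the out-of-bounds read never "happens".  */
    uint8_t array1[16];
    uint8_t array2[256 * 4096];
    size_t  array1_size = 16;

    void victim_function(size_t x)
    {
        if (x < array1_size) {
            /* If the branch predictor has been trained to expect "in bounds",
             * the CPU speculatively executes this line even for out-of-range
             * x.  The secret byte array1[x] then leaves a footprint in the
             * cache via the index into array2, recoverable with a timing
             * probe after the misprediction is rolled back.                 */
            volatile uint8_t tmp = array2[array1[x] * 4096];
            (void)tmp;
        }
    }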
It's the complexity of everything that we do with computers that needs to be addressed, not just the quality of software and hardware testing and exploit mitigation. Mitigation techniques can't stop every unknown exploit, just some of them; in a sufficiently complex system there always will be a way to break the system in an unexpected and conceptually new way. Besides, they are additional layers of complexity on their own, and you can't fight complexity with complexity.
Code that is unreviewed, unaccounted for, and executed automatically. Now it shall be high-performance, too? Does the sandbox work? Does it really work? Are there no side channels? Are you sure? How do you make sure you don't take part in a DDoS attack or mine cryptocurrencies for somebody else? These are just points I can come up with spontaneously.
Besides that, the appification of the web is bad because it ultimately leads to dependency on software that is outside of the user's control.
How does it do that exactly? Anything you can do in wasm you can do in JavaScript, only wasm can do it faster. This freak-out that some people have over wasm is bizarre to me on a technical level. I think it comes down to lots of JavaScript devs being threatened by more difficult languages being useful for web pages.
> Anything you can do in wasm you can do in JavaScript, only wasm can do it faster.
It's faster and more flexible because it can easily be targeted by compilers. That is the problem. This might sound surprising. Allow me to use an analogy to explain it.
Let's assume some new technology was invented to more easily breed cattle for meat production. I completely understand why some people would want that, and develop it. I think breeding and killing cows and bulls just to eat a steak is unethical. So, I would absolutely refuse to work on the technology, and I'd expect the same of everybody that cares about being ethical.
Now, coming back to JavaScript and wasm: they are used to deploy code in a way that takes control of the software away from the users and hands it to the developers. The deployed code is unreviewed, unaccounted for, unsigned, and executed automatically. I consider that unsafe in the computing sense. So, I consider code execution on the web unacceptable. Since wasm makes that easier and more efficient, I'm opposed to it.
On top of that I consider JavaScript a bad language. I'm worried by how much it's pushed as a teaching language.
I can see where you are getting confused. It is actually just faster. Again, there is nothing you can do in wasm that you can't already do in JavaScript.
> The deployed code is unreviewed, unaccounted for, unsigned, and executed automatically. I consider that unsafe in the computing sense. So, I consider code execution on the web unacceptable.
All of these things apply to JavaScript.
> On top of that I consider JavaScript a bad language. I'm worried by how much it's pushed as a teaching language.
This has absolutely nothing to do with anything in this thread. It is pretty clear that you have biases and frustrations that have nothing to do with technical merits.
It's irrelevant if it's just faster, or has other technical merits, too. What those specifically are is irrelevant, as the political and societal consequences of advancing that way of code deployment are bad independently of that. My whole point is solely based on these consequences.
You keep making that assertion, but you haven't really backed it up with anything. If you are talking about obfuscation, JavaScript can be obfuscated just as much as wasm. Again, all I/O must happen in JavaScript anyway, and anyone can look at a text representation of wasm.
Honestly, not really. You'd be surprised how much valuable information people leave laying on their desks. Or loose-leaf in a backpack that is half zipped. Or in their pockets. Or on their car seat. The list goes on.
I am more aware than you might think, but not as worried about what can be stolen in a few hundred pages as what can be stolen in hundreds of thousands of documents.
I'm also not enthusiastic about how it's more secure from governmental snooping to mail a hard drive than it is to send its content over the Internet.
At least the second part, the system of the two computers and the checker, is laughably simple compared to even the very simplest parts of a modern computer.
Or we could just get ME and PSP off of our chips like people have wanted for years. They have been major security and privacy risks ever since their inception.
Luckily, the politicians in Germany and Europe are waking up. They want to build up European chip and hardware facilities to have the full supply chain in Europe. They also plan to demand certification and customer-visible labels. Finally!
This is the same Thomas de Maizière that backed a law that allows German law enforcement agencies to order companies to insert back doors in their products:
"Luckily, the politicians in Germany and Europe wake up. They want to build up European chip and hardware facilities to have the full chain in Europe. Also they plan to demand certification and customer visible labels. Finally!"
So far, similar efforts by the EU have been rather underwhelming - but this one is probably the most important. I believe the EU is the only global actor that can achieve the goal of creating reliable hardware and software. The still-decentralized nature of the EU means that no partner can afford any unilateral action (like backdoors) - and a conspiracy on the level of the whole EU is impossible.
How does geopolitical location make hardware and software secure? Hint: it does not. It is clear there is demand at some level for secure computation, and patching current hardware and software is not going to get the job done. Certification and labels imply we know how to do something well enough to say this is the right way. It is clear that we do not, so certification of hardware is most likely to just ensure every chip has the same exact problems.
Physical security is not necessarily automatic, but it's much more straightforward than computer security. You don't have to worry about someone in Russia getting a hold of your pen and paper while you're sitting there with it in your room.
I think that anyone who has worked professionally understands that it's a miracle we make it through life with the relatively limited quantity of exposures and accidents that we have. Things like Spectre/Meltdown usually don't get the notice of people who care to expose it publicly until they've been privately theorized, discussed, and practiced in some form for many years.
Personally I believe that if Spectre had come out 10 years prior, the likely response from Linus et al would've been "How about instead of crippling useful CPU speed optimizations, we just don't let random people feed instructions to our CPUs." Obviously, with cloud computing underpinning so much critical profit/surveillance-- uh, I mean, infrastructure-- these days, that won't fly. (Meltdown is a different story since the CPU is supposed to be protecting that.)
Computers are very complex systems designed by people. Work with more than 5 people and you quickly learn how much trust is warranted in complex systems designed by people (hint: very little).
I absolutely believe that relying on the security properties of the physical world, particularly "this item cannot exist in more than one place at a time, nor can it be replicated and transmitted across the earth in under one second", is much more reliable than any computer security.
Pen and paper is the only way to go for the truly paranoid.
I would not at all be surprised if Spectre and Meltdown were already known at nation state level, they have a lot of resources to throw at problems like this. The fact that Google provides this service for free is an amazing counterbalance to that kind of power, the bugs don't magically disappear but at least the playing field has been leveled a bit.
It is my impression that analysis of side channels has been done and professionalized in the intelligence community for a long time before it became an important consideration in the general IT community.
Adi Shamir, the S in RSA, has done tons of work on side channel analysis, especially of hardware crypto, for decades. Timing attacks, voltage attacks, EM, you name it.
So it's not unknown. But as a counterpoint I had a shocking moment in the 90's when I learned that Faraday Cages (to prevent TEMPEST attacks) were being designed with a second Faraday cage inside them to protect the light bulbs.
Seems that the interference between a CRT and a fluorescent bulb is sufficient that you can detect information on the power lines leading into the room. So they caged the bulbs to keep them magnetically isolated from the computers.
The big differentiator is how attacks can be scaled. Most people/companies aren't individually a worthy enough target to develop an attack against a reasonably protected system. But with a lot of these types of attacks, one can compromise a large number of systems in a largely automated manner, without risking one's personal physical security.
So it's possible and it's just a matter of optimization. Pens and typewriters leak data acoustically, so let's replace cameras with microphones to reduce costs. Tiny microphones with antennae can be mass produced cheaply and they are easily hidden. Delivery can be automated too, but it's much easier to embed the spy devices in common products people frequently buy.
Pen and paper in a good old fashioned steel cabinet (you can get those with some nice solid wood enclosing as well) require actual physical access to read.
However, side channels exist. If you write classified information on a correspondence pad, then the pad itself becomes a classified item, too. Obviously.
> Pen and paper in a good old fashioned steel cabinet (you can get those with some nice solid wood enclosing as well) require actual physical access to read.
On the other hand, also the bad part is that pen and paper require actual physical access to read ;)
Not sure why somebody would want to use an fTPM over a dTPM, especially since most computers now have a dTPM. (Well, my cheap 80 € mainboard can switch between fTPM and dTPM.)
> And to make matters worse, at least for Intel we have indications that those security holes are the result of a calculated risk taken to afford a higher development velocity: https://danluu.com/cpu-bugs/
This year is going to be fun...