It's absolutely astounding to me that anyone could claim, with a straight face, that the PSP "enhances security".
The fact that one is not even offered the option of disabling the PSP (or the ME for that matter) tells us everything we need to know about the true purposes of these features.
To be fair, the PSP is involved quite intimately in the various features for securing VMs. With the Zen processors, it should in principle be possible to send an encrypted VM to a cloud provider and be certain that this is the VM that runs, and that the cloud provider has no way of reading its RAM (thanks to RAM encryption with a PSP-generated key).
External audits of those security properties are of course basically impossible, but it is true that the PSP enhances security for some use cases. AFAIK those are only data center use cases, though; ordinary users don't benefit.
It's true though that while the PSP as it is today doesn't hurt users, it could be turned into an anti-user artifact in the future. That's not really in AMD's interest, though.
A lot of these CPUs are purchased not by individual consumers but by corporations. Spend ten minutes in a typical IT department and you'd want any assistance you can get in protecting the computers from users.
By all means, make it an upsell. If corporations are that interested in it (which I doubt; I've never seen the interest), it can even be an expensive upsell.
But why is it shoved into everybody, and impossible to turn off?
> But why is it shoved into everybody, and impossible to turn off?
I suspect business reasons? Or lack of demand? Maybe some of the same reasons it's not open source even though there is no reason it shouldn't be? Why can't you turn this (trusted computing) stuff off on iPhones etc?
All current-run Intel processors include ECC support on-die in the memory controller. They go out of their way to disable it on consumer processors so they can extract more money from Xeon, even though it's right there in your Core i5's silicon, going unused.
Business history suggests that if they could boost profits by selling ME only to enterprise customers at a premium, they would do it in a heartbeat.
It's mind-bogglingly expensive to make a die, so of course the i5/i7 and E3 use the same die.
I've not noticed any Xeon premium, though; the Xeon E3-1230, for instance, seems to be on the same price/perf curve as the i7-7700 and i7-7700K.
Xeon E5s are much more expensive, but that's because they have more cores, ECC, dual-socket capability, etc. The E3 is basically the same chip as the i7 with different features enabled (overclocking instead of ECC).
He is saying they already block ECC even though it's present in the silicon. So why isn't the ME similarly blocked and available only on premium CPUs? Why can't we even disable it?
It's because individuals don't protest loud enough, but corporations want this. It's market pressure. Apply market pressure by raising a consumer-level fuss if you think it's important to have chips without this.
Every other feature that corporations want and individuals don't much care about becomes an upsell used to segment the market. But these "security" engines don't.
Yes exactly. Just like servers. There just needs to be a big lawsuit, or law, or something, to make it clear that owners of a device shall have real ownership of that device (i.e. be able to have your own boot keys and capability to disable/enable features like trusted computing).
Isn't it somewhat like a fuse in an electric circuit?
Let's say, for example, that you had perfect overvoltage protection implemented in firmware, and could disconnect the mains quickly enough to prevent damage or a fire. But, the feature is implemented in firmware, which can be modified, so a malicious individual could disable this protection and expose you to risk of damage or fire.
By implementing an unmodifiable security feature (like a physical fuse), you minimize the risk of a malicious individual bypassing the protection or security control.
Oh, I'm absolutely in favor of tamper-resistant hardware security modules.
But I also want the freedom to turn them off. If my threat model prioritizes despotic governments with the ear of AMD/Intel/ARM Licensees over, say, phishing rings, I should have the freedom to roll the dice if I so choose.
Sorry, I don't understand how this prioritizes despotic governments over phishing rings. Could you please clarify that?
I think the goal here is two things:
1. Allow organizations that want a trusted execution environment to refuse to boot if the OS is not signed properly, to prevent risk of unapproved software execution.
2. Prevent attackers with physical access to the device from booting an unapproved OS that might be meant to extract encryption keys, or otherwise extract or tamper with data.
These are very specific attack vectors that large enterprise customers are asking for to protect their data from malicious insiders, or just anyone with physical access to the datacenter and bad intentions.
That it might be used to prevent Linux from loading is simply an unintended consequence.
I believe points 1 and 2 could be addressed by implementing a trust-on-first-use model whereby the customer can set an initial signing public key for the PSP, which is then used to sign specific boot parameters.
That way, as the customer, I can tell the PSP that I'd like to either require PSP-mediated secure boot, or permit boot if the PSP is disabled. OEMs and vendors selling to enterprise customers can pre-sign it for convenience, and tinfoil hats like me can disable it. Everybody wins.
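The trust-on-first-use idea above can be sketched roughly like this. All the names are hypothetical, and a keyed hash stands in for a real asymmetric signature, purely for illustration:

```python
import hashlib

class TofuBootPolicy:
    """Toy model of trust-on-first-use key enrollment for a boot processor.

    The first public key presented is enrolled permanently; afterwards, boot
    images are accepted only if signed under that key. A keyed hash stands in
    for a real signature scheme (RSA/ECDSA) purely for illustration.
    """
    def __init__(self):
        self.enrolled_key = None  # set exactly once, on first use

    def enroll(self, public_key: bytes) -> bool:
        if self.enrolled_key is None:
            self.enrolled_key = public_key
            return True
        return False  # re-enrollment refused: the key is fixed after first use

    def verify_boot(self, image: bytes, signature: bytes) -> bool:
        if self.enrolled_key is None:
            return False  # nothing enrolled yet: refuse to boot
        expected = hashlib.sha256(self.enrolled_key + image).digest()
        return signature == expected

def sign(key: bytes, image: bytes) -> bytes:
    # Stand-in for a real asymmetric signature, for the sake of the sketch
    return hashlib.sha256(key + image).digest()

policy = TofuBootPolicy()
policy.enroll(b"owner-key")
ok = policy.verify_boot(b"my-os", sign(b"owner-key", b"my-os"))
bad = policy.verify_boot(b"evil-os", sign(b"other-key", b"evil-os"))
```

The point of the design is in `enroll`: once the owner's key is set, nobody (including the vendor) can swap it out without the owner noticing.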
One of the things I was taught is that you can't really protect a system once there is physical access. Yes, you can make attacks harder, but that's pretty much it. That's why anyone who is serious has multiple layers of security.
Take Google data centers, for example: you need access to enter the facility and access to whatever you're supposed to work on. There are security guards who will follow you and stay with you while you perform your work.
The mechanisms in the CPUs are there to protect CPUs from their users. Say again that you are Google and are planning to use this technology: why would you trust a third party to decide (by signing) what can and cannot run on the CPU? What if that third party happens to be your competitor? The issue is that the person or company who owns the CPU doesn't have full control over it; they can't load their own certificates and use them for signing. They have to trust a third party with it.
Yes, that's why things like UEFI were invented to secure the hardware.
Those things are not necessarily bad, the problem is with having control over what your computer runs. It's about whether you decide that or a 3rd party that you might not necessarily trust.
If things are modifiable before the machine boots, and can't be modified once the system has booted, then malware should not be able to modify them.
You can google around for secure boot, trusted boot, chain of trust, etc. Breaking secure boot is the biggest vulnerability a device can experience (e.g. Apple will pay you the most money for this type of break).
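The chain-of-trust idea mentioned above boils down to: each stage measures (hashes) the next stage and refuses to hand off execution unless the measurement matches a value it was shipped with. A toy sketch, with hypothetical stage names:

```python
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Each stage carries the expected digest of the stage that follows it.
bootloader = b"bootloader v1"
kernel = b"kernel 4.12"

expected = {
    "bootloader": digest(bootloader),  # e.g. fused into immutable boot ROM
    "kernel": digest(kernel),          # e.g. signed into the bootloader image
}

def boot(stage_name: str, blob: bytes) -> bool:
    """Refuse to execute any stage whose measurement doesn't match."""
    return digest(blob) == expected[stage_name]
```

Breaking any link (say, patching the bootloader) changes its digest, so the stage before it refuses to run it; that's why a secure-boot break is so valuable to attackers.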
This is more like a fuse that can read and modify the memory of the computer it delivers power to, is accessible over the internet, and for which you cannot get the source code let alone run your own code on it.
I don't disagree that these features create legitimate reasons for concern. For me, whether such features are attractive more or less correlates with whether the product is a good fit for my needs. If it's not attractive, then that particular CPU SKU is not the right product; I should decide whether the tradeoff of the PSP for performance is worth making, and if not, choose a different product.
Faced with such a decision, if I'm inclined to suspect nefarious motives, I'll assume it is just as easy to install the ARM cores (or the equivalent) on other chip SKUs and simply not mention it on the spec sheet. Since I have no practical way of determining whether that has been done, I should probably assume it has.
Still waiting for the threadrippers, excited to see the benchmarks and if it will be worth buying. Will be using it for mainly developing elixir applications, which use all cores.
That's why encryption is enabled through page table bits: Of course you keep the pages unencrypted in which you do DMA.
But you're right, a hostile DMA-capable device (e.g. Thunderbolt, FireWire, PCIe) can only read garbage from, or write garbage to, the encrypted pages.
But since they can still write there (even if it's gibberish once decrypted), the IOMMU remains the better solution for protecting memory from hostile devices.
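For reference, the "page table bits" mechanism works by dedicating one bit of each page-table entry to selecting encryption (AMD calls it the C-bit; its actual position is reported by the CPU via CPUID leaf 0x8000001F, bit 47 is just an illustrative assumption here). A toy model:

```python
C_BIT = 47  # illustrative; the real position is CPU-reported via CPUID 0x8000001F

def make_pte(phys_addr: int, encrypted: bool) -> int:
    """Build a toy page-table entry: physical frame plus the encryption bit."""
    pte = phys_addr & ~(1 << C_BIT)
    if encrypted:
        pte |= 1 << C_BIT
    return pte

def is_encrypted(pte: int) -> bool:
    return bool((pte >> C_BIT) & 1)

# Pages used for DMA are mapped with the bit clear, secrets with it set:
dma_page = make_pte(0x1000, encrypted=False)
secret_page = make_pte(0x2000, encrypted=True)
```

This is why the OS can keep DMA buffers in the clear while encrypting everything else: the choice is made per page, at mapping time.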
1. USB controller will have DMA enabled, but you'll get back unencrypted memory from the memory region allocated to the OS that initialized the USB controller (i.e. all your RAM if you only run one OS)
2. USB controller will have DMA enabled, but you'll get back encrypted data from memory (I think this is less likely)
3. DMA must be explicitly enabled by the OS, so until that occurs DMA will remain disabled.
The anandtech article does mention that OS modifications are necessary unless the encryption is operating in Transparent mode (a setting in UEFI). In Transparent mode I would assume any DMA attacks which work today would continue to function, since the behaviour of the platform is identical to previous processors.
> In Transparent mode I would assume any DMA attacks which work today would continue to function
That makes sense, DMA must be possible in transparent mode or otherwise nothing would work. Which also means there must be physical possibility for PCI devices to use the encryption engine.
However, I suspect that the only key ever available to PCI devices is the key of the host OS (if transparent mode is enabled) and encrypted guests can't use DMA to passed-through devices at all. Otherwise, the host could try to program a passed-through device while the guest isn't executing at the moment and mess with the guest's memory using DMA. If that's true, a simple solution to hide from DMA attacks is running inside an encrypted VM, assuming that they really got the design and implementation of this feature right.
IOMMUs are designed to prevent peripherals accessing all of memory, so you don't need to disable DMA.
If a device did manage to escape the confines of the IOMMU somehow, then it would likely just get the encrypted pages, which would be garbage without the keys to decrypt them.
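Conceptually, an IOMMU gives each device its own I/O page table, and any device access outside its mapped pages faults instead of reaching RAM. A rough sketch (all names hypothetical):

```python
class Iommu:
    """Toy IOMMU: per-device mappings from I/O virtual pages to physical pages."""
    def __init__(self):
        self.tables = {}  # device id -> {io_virtual_page: physical_page}

    def map(self, dev: str, io_page: int, phys_page: int) -> None:
        """The OS grants a device access to exactly one physical page."""
        self.tables.setdefault(dev, {})[io_page] = phys_page

    def translate(self, dev: str, io_page: int) -> int:
        """Translate a device's DMA address, or fault if unmapped."""
        mapping = self.tables.get(dev, {})
        if io_page not in mapping:
            raise PermissionError(f"IOMMU fault: {dev} -> page {io_page:#x}")
        return mapping[io_page]

iommu = Iommu()
iommu.map("usb-hc", io_page=0x10, phys_page=0x9000)
```

The device can still do DMA (so drivers keep working); it just can't name physical pages the OS never granted it.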
Linux and macOS use the IOMMU for protection by default. Windows needs a lot of configuration to achieve that; otherwise it only uses it for virtualization.
You don't need encryption to disable USB DMA. IIRC Apple and Microsoft disable DMA in recent versions of their OS when you lock your computer. Don't know about Linux, would appreciate if someone shared the state of affairs there.
OK, I see. For the record, USB devices can't simply request the host controller to read or write whatever they want, as was the case with FireWire and Thunderbolt, which is why people say that "USB doesn't have DMA". The controller will only do DMA to/from OS-specified USB buffers.
This SE post says something about DMA in USB 3.1, but I couldn't quickly find any confirmation that something has changed in version 3.1 in that regard. I didn't follow this standard closely, though.
However, 3.x controllers usually run firmware blobs of unknown quality and if a device manages to pwn this firmware then it may turn out to be able to access other memory. I think that's what the poster was worried about.
I don't think it's possible to "disable USB DMA". At least in the case of 1.x/2.0 controllers, most of the communication between the OS and the controller happens through RAM data structures, and nothing would work without DMA.
> This SE post says something about DMA in USB 3.1, but I couldn't quickly find any confirmation that something has changed in version 3.1 in that regard. I didn't follow this standard closely, though.
Nothing has changed with regard to DMA. What _has_ changed is the introduction of the "Streams" feature which allows the host to establish multiple buffers in advance and have the device choose which of these to write into.
That's useful for asynchronous task delivery, e.g. on storage (which is AFAIK the only major use case of the feature so far): the host requests 10 transfers, prepares a properly sized buffer for each of them, and the device can fulfil them in whatever order is fastest, directly into the buffer that the OS considers the most useful for the job.
In principle, read()s from USB devices could be mapped directly into the reading process's memory space (in practice that won't happen), without the driver having to copy data around (just some signal when the data has arrived and the buffer is valid).
What memory gets written to (and how much of at each address) is still managed by the host controller driver.
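The Streams behaviour described above can be modeled like this: the host pins a fixed set of buffers in advance, keyed by stream ID, and the device only ever selects among them; it never names raw memory addresses. A toy sketch (hypothetical names):

```python
class StreamEndpoint:
    """Toy model of a USB 3 bulk endpoint with Streams: the host pre-allocates
    buffers keyed by stream ID; the device picks which stream (buffer) to
    fill, but cannot address memory outside these host-provided buffers."""
    def __init__(self):
        self.buffers = {}  # stream id -> bytearray pre-allocated by the host

    def host_prime(self, stream_id: int, size: int) -> None:
        """Host establishes a buffer for one outstanding transfer."""
        self.buffers[stream_id] = bytearray(size)

    def device_complete(self, stream_id: int, data: bytes) -> None:
        """Device chooses a stream and fills it, in whatever order it likes."""
        buf = self.buffers[stream_id]  # an unknown stream id simply fails
        if len(data) > len(buf):
            raise ValueError("transfer larger than host-provided buffer")
        buf[:len(data)] = data

ep = StreamEndpoint()
for sid in range(10):  # host requests 10 transfers in advance
    ep.host_prime(sid, 4096)
ep.device_complete(7, b"sector data")  # device fulfils stream 7 first
```

The device gains ordering freedom, but the memory targets are still entirely host-chosen, which is why the feature doesn't amount to device-initiated DMA.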
> What _has_ changed is the introduction of the "Streams" feature which allows the host to establish multiple buffers in advance and have the device choose which of these to write into.
This feature should be safe in principle, although it does seem to increase HC attack surface a bit ;)
One more thing about 3.1 is that it can share physical ports with Thunderbolt. In such case, an innocent-looking USB gadget may turn out to do interesting things.
> This feature should be safe in principle, although it does seem to increase HC attack surface a bit ;)
The HC attack surface is already significantly increased because XHCI does many things on its own that used to be done in the driver (eg. the whole address assigning handshake).
XHCI is the first USB controller variant where a USB device project I worked on was able to lock up the controller so hard that it took a cold reboot of the computer to fix. That was... unexpected.
(and I can't say I'm happy about that type of complexity ending up in places where I can't fix them)
The same is, by the way, true for SATA, SCSI and SAS: the "DMA modes" (UDMA etc.) mean that the host controller is doing DMA transfers, while the drives themselves can neither initiate nor control DMA transfers.
Anyone know where the AES key is stored for the encrypted memory? Is it a fixed, hardcoded key inside the processor, or does the user set it by some means?
Speculation here. Since the PSP is effectively leveraging TrustZone, I'm guessing it's generated and stored inside the TrustZone itself. This article states it's generated by a hardware RNG, so user control is unlikely.
One important factor missing from this article is the AES cipher mode being used. I'm not sure how you'd be able to use an authenticated mode and maintain random access, so maybe XTS, or even ECB?
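For intuition on why a tweakable mode like XTS preserves random access: each 16-byte block is encrypted independently under a tweak derived from its address, so any block can be read or rewritten without touching its neighbours, yet identical plaintexts at different addresses produce different ciphertexts (unlike ECB). A toy sketch with XOR standing in for AES; this is not real cryptography:

```python
import hashlib

def tweak_for(address: int) -> bytes:
    # Real XTS derives the tweak from the encrypted sector number;
    # a hash of the block address stands in here, purely for illustration.
    return hashlib.sha256(address.to_bytes(8, "little")).digest()[:16]

def toy_encrypt_block(key: bytes, address: int, block: bytes) -> bytes:
    """XOR stand-in for a tweakable block cipher: the keystream depends on
    both the key and the block's address, so equal plaintexts at different
    addresses encrypt differently, and any block decrypts independently."""
    tw = tweak_for(address)
    return bytes(b ^ k ^ t for b, k, t in zip(block, key, tw))

key = bytes(range(16))
c1 = toy_encrypt_block(key, 0x1000, b"A" * 16)
c2 = toy_encrypt_block(key, 0x2000, b"A" * 16)
```

An authenticated mode, by contrast, needs somewhere to store a per-block tag and must re-verify on every access, which is why it's awkward for transparent RAM encryption.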
"Secure Encrypted Virtualization (SEV). SEV in many ways resembles SME, but in this case, it enables owners to encrypt virtual machines, isolating them from each other, hypervisors, and hosting software."
Sounds like the keys are managed by the VMs and isolated from the hypervisor/hosting software.
I can't think of any particular issues with the Ryzen desktop platform that messed with Linux compatibility. If anything, I expect Ryzen Mobile to be much more usable on mobile workstations than the Intel solutions I have to work with right now.
I've got a Lenovo W540 with an i7-4700MQ and an NVidia Quadro M1000. The integrated Intel graphics are garbage and I constantly have performance issues running GNOME 3. I'm going to run this thing into the ground until the mobile chips with Vega graphics are available, and then ask my employer for an upgrade, for less problematic graphics alone.
It's usually not the processors themselves that are the issue.
Usually the difficulty in getting things running lies in booting the system, the motherboard chipset, and device drivers. It's also about the stuff that goes along with just "running", beyond CPU features: does power management work well? Does ECC work at all? Is virtualization relatively bug-free?
Since laptops are full computers packed with all hardware inside, you don't want any driver issues on your system and neither do you want power management issues. That's usually what people mean when they express concerns about new CPU platforms and Linux.
From what I've heard Ryzen runs Linux pretty well on desktops at least, so let's hope they can keep up the good work on laptops too.
Ryzen, Ryzen Threadripper and Epyc are all the exact same die; the latter two just use MCMs with 2 and 4 dies respectively.
Epyc being for the datacentre, it needs good Linux compatibility, so AMD have already submitted the necessary kernel patches. I imagine that means Ryzen will work well too, at least for the portion of the chipset inside the CPU.
> From what I've heard Ryzen runs Linux pretty well on desktops
Confirmed. I'm running one here-- 1700 (8c/16t), no-OC, stock cooler, X370 chipset and Samsung-based DDR4-3200 RAM-- on Ubuntu Xenial/Elementary and it's been rock solid and temps <50C doing desktop, development, and VM's.
Same here: 1700, 32GB Corsair Vengeance at 2666, four weeks without a single problem, and I've battered it; I was transcoding 30,000 multi-page TIFFs yesterday (don't ask..).
The thing is a straight out monster for a 300 quid processor.
Um, I have to ask. The only thing I've used multipage TIFFs for is turning processed page scans into a PDF. But not nearly that scale. What was the context?
Exactly that for an enterprise web app that attached all the physical paperwork of an order to the digital record.
The original dev just shoved all 30,000 in the webroot as multi-page TIFFs, average size 25MB-30MB.
End users couldn't easily view the attached scans (multi-page TIFF isn't that prevalent). Converting to PDF with JPEG compression took the first set from 76.8GB to 6GB, with far less traffic on the network and much easier handling (PDF is far better supported).
> TBH I don't know much about incompatibilities CPUs can cause in Linux world.
I remember one, a long time ago. This is from memory so it might not be 100% correct but the main details are true, and it's an interesting story:
CPUs have identifiers on them. GenuineIntel, AuthenticAMD, etc, models, plus a number to indicate what generation.
Pentiums were P1, P2, P3, etc.
Intel released the Pentium 4 (one of their worst chip designs - the next one, Core, was based on Pentium 3 rather than Pentium 4).
Which Linux saw as a P... FIFTEEN?
Some jackass at Intel thought 'IV' - i.e. Roman numerals for four - was a cool thing to put as the CPU ID. So Linux saw 'Pentium 15' and freaked the fuck out while people confirmed that yes, Intel were actually stamping Pentium 4s as '15'.
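For reference, the family field from CPUID leaf 1 decodes like this, and the Pentium 4 (NetBurst) does indeed report family 0xF, i.e. 15, with the extended-family field only kicking in on top of a base family of 0xF:

```python
def decode_family(eax: int) -> int:
    """Decode the display family from CPUID leaf 1 EAX (standard x86 rule:
    add the extended family only when the base family field is 0xF)."""
    base = (eax >> 8) & 0xF
    ext = (eax >> 20) & 0xFF
    return base + ext if base == 0xF else base

# NetBurst (Pentium 4) reports base family 0xF -> "family 15",
# which tools of the day printed as if it were a "Pentium 15".
p4_family = decode_family(0x0F12)
# A family-6 part (the Pentium 3 / Core lineage) for contrast:
p3_family = decode_family(0x0673)
# Zen (family 17h) uses the extended field: 15 + 8 = 23 (the EAX value
# here is illustrative):
zen_family = decode_family(0x00800F11)
```

So modern family numbers like 23 (0x17) for Zen are built the same way, which is why CPU identification code has to handle the extended field.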
Zen has shown excellent efficiency figures for both the CPU itself and entire systems. That does not necessarily translate to good mobile systems, but is a good sign.
Why would you be concerned about power usage and thermal dissipation? Are you concerned the power-saving features don't work as well as they could with Linux (like with Skylake at first)?
Otherwise, the reviews of the AMD Ryzen CPUs put AMD's 8-core parts around Intel's high-end 4-core parts [0][1][2][3][4][5][6][7] in terms of power consumption, which is promising for the announced 4-core mobile APUs with Vega graphics. Also nice for thermal dissipation: AMD uses solder for the heatspreader, which helps.
My only concern would be the idle power consumption, since PCs are idle most of the time; it seems to be ~10 watts higher than Skylake/Kaby Lake due to more power-hungry mainboards. But maybe that can be fixed in notebooks?
[0] especially the R7 1700 which seems to run at a sweet spot with 3 ghz base clock.
> 850 points in Cinebench 15 at 30W is quite telling. Or not telling, but absolutely massive. Zeppelin can reach absolutely monstrous and unseen levels of efficiency, as long as it operates within its ideal frequency range.
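For scale, the quoted figure works out to roughly 28 Cinebench points per watt:

```python
# Efficiency implied by the quoted numbers (850 Cinebench R15 points at 30W)
cinebench_points = 850
package_watts = 30
points_per_watt = cinebench_points / package_watts  # ~28.3 points per watt
```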
It's generally the case that clever software optimisations hit Intel's platform first due to its spread. Here's hoping AMD gathers enough interest for platform-specific optimisations too.
Yes, that's true. In fact I think the software side is what's holding AMD back the most (together with OEMs producing notebooks that are subpar compared to notebooks with Intel CPUs - but maybe that is different in other regions).
History will tell you that AMD and laptops don't play nicely together; AMD CPUs have typically had very high TDPs.
However, I am hopeful that AMD will change their ways from a crude brute-force "moar power" approach (very high wattages/TDPs for performance gains) to a more finely tuned, intelligent, and environmentally friendly approach with higher efficiency (more performance per watt of TDP).
You're out of touch on this topic. Firstly, AMD's newest architecture is competitive power-wise with Intel's newest architecture.
Secondly, TDP does not mean what you think it means. TDP is the maximum power a processor is designed to run at. It only tells you something about peak power, not idle or average power consumption - especially not in modern processors that can idle at 5-10W despite having a 95W TDP. Laptop and desktop processors run well below their TDP specification 99% of the time. It's not a useful metric for determining power consumption. At all.
Actually, TDP is the nominal expected power dissipation of a processor, and is intended as a guideline to design the cooling system. It is the intended power dissipation of the part when running "real (non-synthetic), expected workloads".
A processor's actual power consumption can easily exceed the TDP. (Actually a processor's power consumption can SIGNIFICANTLY exceed the TDP)
Yep and TDP as a measure of cooling design is inexact anyway.
My old bistro laptop was sat at 55C today and 85C two weeks ago; ambient temp today was 16-17C (i.e. British summer) and two weeks ago 30C (not British summer ;) ).
"For years AMD’s processors for business PCs supported additional security technologies (collectively known as AMD Secure Processor and Platform Security Processor before that) enabled by the ARM TrustZone platform with the ARM Cortex-A5 core. AMD’s previous-gen PRO-series APUs included Secure Boot, Content Protection, per-Application security, fTPM 2.0, and support for Microsoft Device Guard, Windows Hello, fingerprint security, data protection and so on."
I honestly don't know who uses them. I've not seen it in finance or health. Government?
TrustZone is not a "management" feature in the sense that Intel AMT is; it is a security feature that can prevent/obscure certain hardware access (basically Ring 0 for peripherals), but does not allow for out-of-band machine access like AMT (although in-band machine access, with the firmware circumventing the user OS is a possibility).
TrustZone is ARM's hardware access control. But I suspect AMD's PSP, which incorporates an on-die ARM core rather than just implementing the TrustZone IP, is doing a lot more. Reportedly it can manage DMA actions itself, independently of the amd64 cores.
Seems like a catch-22 of adoption. Not worth the effort to develop on unless it is ubiquitous enough? I'm sure there will be growing (security) pains as well. Also not sure exactly how AMD allows development for the PSP (specific vendor keys?).
I've seen IME (or its more featured server companion iLO) in both servers and workstations for out-of-band remote management.
It's really nice from a corporate admin view - you can always make sure machines have latest updates installed, reinstall them when users mess stuff up... and you don't depend on the machine being powered on or a working OS!
Exactly. Those boxes were bought and paid for by the corporation. They are the company's property, and they have the right to access them whenever they want.
It's a shame that those backdoors then find their way into consumer products, but it's silly to assume that a corporation doesn't have the right to access their own equipment.
Except legally they do and morally it's debatable.
I have no expectation of privacy on a work machine as the company has to protect its IT systems and assets.
However, I do in my case, since it's a Linux desktop with encrypted disks to which I alone have the password, in a company full of Windows machines with an outsourced IT department that couldn't support Linux if I asked.
That may very well be, but you are the only one talking about AMT, and I still can't get a CPU without an undocumented privileged co-processor thrown in. If I had a choice, I would not get the ME.
But does anyone actually use them for that? In the server space people do not, and instead use management cards from the server manufacturer; but I have never worked with desktop IT in an enterprise setting. My guess is that enterprise workstations are the intended audience for the IME.
Do you guys think these are worth the price premium for home server use? Are they really "better quality" (more margin/tighter silicon screening, etc) or is that part mostly a marketing gimmick?