Understanding L1 Terminal Fault aka Foreshadow: What You Need to Know (redhat.com)
336 points by jterrill on Aug 14, 2018 | 116 comments



This is _bananas_.

- Unlike previous speculative execution attacks against SGX, this extracts memory "in parallel" to SGX, instead of attacking the code running in SGX directly. It always works: it doesn't require the SGX code to run and it doesn't require it to have any particular speculative execution vulnerability. This also means existing mitigations like retpolines don't work.

- It lets you extract the sealing key and the remote attestation key. That's about as bad as it gets. Because SGX is primarily about encrypting RAM, anything that pops L1 cache is game over, and this is a stark reminder of that fact.

- The second attack that fell out of this allows you to read arbitrary L1 cache memory, across kernel-userspace or even VM lines.

The good news here is that the mitigation is somewhat straightforward. It's a pure L1d attack: flush L1d (or prevent things from accessing the same L1d via e.g. core pinning) and you're fine.
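For what it's worth, here is a minimal sketch (my own illustration, not anything from the advisory) of what an explicit L1d flush looks like from privileged software, assuming the L1TF microcode update is loaded (it adds the IA32_FLUSH_CMD MSR, index 0x10B) and the Linux msr driver is available; in practice the kernel/hypervisor issues this on VM entry rather than userspace doing it:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define IA32_FLUSH_CMD 0x10B       /* MSR added by the L1TF microcode update */
    #define L1D_FLUSH      (1ULL << 0) /* bit 0: writeback and invalidate L1D */

    int main(void) {
        /* The msr driver exposes MSRs as a file; the MSR index is the file offset. */
        int fd = open("/dev/cpu/0/msr", O_WRONLY);
        if (fd < 0) { perror("open /dev/cpu/0/msr (needs root and the msr module)"); return 1; }
        uint64_t cmd = L1D_FLUSH;
        if (pwrite(fd, &cmd, sizeof cmd, IA32_FLUSH_CMD) != sizeof cmd) {
            perror("wrmsr IA32_FLUSH_CMD (fails if the microcode lacks the MSR)");
            return 1;
        }
        close(fd);
        return 0;
    }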

If there was any doubt left that speculative execution bugs were an entire new class and not just a one-off gimmick...


>> - It lets you extract the sealing key and the remote attestation key. That's about as bad as it gets.

It could definitely have been worse, with a leak of the fused secrets or a breach of the integrity of the microcode (the two things that together constitute the TCB, which, put simply, is the only part of the system you assume will never be broken).

All in all, assuming a microcode update can counter the attack as Intel claims, sealing and attestation secrets will be rekeyed via the KDF rooted in the fused keys, so that you can start afresh.

Of course, operationally speaking, that is a total pain but it is frankly remarkable to see this kind of deep recovery strategy finally built into consumer devices (and yes, I know DRM is unfortunately the main driver, but there are still some very legitimate use cases).

>> flush L1d (or prevent things from accessing the same L1d via e.g. core pinning) and you're fine

No, you are not fine. As the paper explains, an adversary (which is by definition more privileged than you are) can operate between the moment you use secrets in L1 and the moment you flush them out. Only the CPU (silicon or microcode) can assist you in the flushing of L1 when you exit enclave mode.


> No, you are not fine. As the paper explains, an adversary (which is by definition more privileged than you are) can operate between the moment you use secrets in L1 and the moment you flush them out. Only the CPU (silicon or microcode) can assist you in the flushing of L1 when you exit enclave mode.

It gets worse: if the adversary has root, they can force all of the enclave's data into the L1 cache. This allows them to read all memory pages of an enclave, whether it has executed or not.

There is nothing that can be done against this from the enclave's point of view.


Agreed.

> Only the CPU (silicon or microcode) can assist you in the flushing of L1 when you exit enclave mode.

This seems correct, upon double-checking. The interrupt process within SGX is called Asynchronous Enclave Exit (AEX) and does not give the enclave an opportunity to run any code upon interrupt, though it is possible to run code upon every enclave entry (via code placed at the Asynchronous Entry Pointer). I'm not sure that would help with any speculation-based exploits, however.


There's more going on than just the SGX attack. What I'm not saying is "add this 1 instruction and everything is copacetic" -- what I am saying is that the patches for at least some of the vulnerabilities are somewhat straightforward.


That's fair -- I was talking about a specific SGX enclave instance, not SGX in general. In particular, you're right that you can only leak things from an SGX instance, but you can't break all future SGX enclaves forever on a given CPU.


If helpful, here are a few cloud providers' responses:

Google Cloud

- Google Cloud's protections against this new vulnerability: https://cloud.google.com/blog/products/gcp/protecting-agains...

- GCE related information: https://cloud.google.com/compute/docs/security-bulletins

- GKE related information: https://cloud.google.com/kubernetes-engine/docs/security-bul...

Oracle Cloud

- https://blogs.oracle.com/oraclesecurity/intel-l1tf

Azure

- https://blogs.technet.microsoft.com/virtualization/2018/08/1...


For AWS: https://aws.amazon.com/security/security-bulletins/AWS-2018-...

(Disclaimer: I work at AWS, but I am not linking this in any sort of official capacity. I don't know any more details beyond what is listed in that bulletin, and can't answer any questions related to this, unfortunately.)


Contrast:

"Meanwhile, we suggest using the stronger security and isolation properties of EC2 instances to separate any untrusted workloads."

with:

"Google Compute Engine employs host isolation features which ensure that an individual core is never concurrently shared between distinct virtual machines. This isolation also ensures that, in the case that different virtual machines are scheduled sequentially, the L1 data cache is completely flushed to ensure that no vulnerable state remains."

The former does not inspire confidence. Given the hypervisor on EC2 is opaque to me, I'm not sure how I'm supposed to avoid co-tenanting in a risky fashion.


It does say, "All EC2 host infrastructure has been updated with these new protections, and no customer action is required at the infrastructure level."

"Meanwhile, we suggest using the stronger security and isolation properties of EC2 instances to separate any untrusted workloads." As I read it, it is talking about running code within your instance - if you have untrusted workloads, rather run them in a separate instance, so as not to encounter issues like this cross-process.


Amazon offers "dedicated hosts" and "dedicated instances", which you can specify in the "tenancy" field when launching an EC2 instance. I bet that's the stronger isolation that they're referring to. Costs more, though.

https://aws.amazon.com/ec2/dedicated-hosts/


This guidance is primarily for customers that are running untrusted code within their EC2 instances.

The "strong isolation of EC2 instances" refers to the properties of isolation provided by EC2's virtualization compared to processes within an operating system. It is challenging to safely and securely run untrusted code within sandboxes and processes using general purpose software. However, the hypervisor and hardware based virtualization of EC2 instances is engineered to provide isolation between mutually untrusted instances.

There are several reasons that customers may want to use dedicated instances or dedicated hosts, so we provide those tenancy options as well. The most common reason customers use dedicated hosts is so that they can bring their own software licenses, which are often tied to a physical host.

Disclosure: I work for AWS.


> Given the hypervisor on EC2 is opaque to me, I'm not sure how I'm supposed to avoid co-tenanting in a risky fashion.

In Linux, at least, the Xen hypervisor on EC2 exposes some information about itself at /sys/hypervisor. In particular, I think /sys/hypervisor/uuid would allow you to detect co-tenanting (between two VMs of your own).
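A minimal sketch of reading that node (again, a sketch on my part: it assumes the Xen sysfs interface is present and only prints the UUID; whether comparing the values across your own VMs really tells you anything about placement is the "I think" above):

    #include <stdio.h>

    int main(void) {
        char uuid[64] = {0};
        /* Present on Xen guests that expose the sysfs hypervisor interface. */
        FILE *f = fopen("/sys/hypervisor/uuid", "r");
        if (!f) { perror("/sys/hypervisor/uuid (not Xen, or node absent)"); return 1; }
        if (fgets(uuid, sizeof uuid, f)) printf("%s", uuid);
        fclose(f);
        return 0;
    }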

Not saying that I think you should do that, or that I'd want to — it'd be a PITA to coordinate amongst VMs, and I'm not sure it would matter (what if you're co-tenant with a malicious VM? even if you detect it, how do you get out of it?). That is, inside the VM seems like wholly the wrong place to attempt to deal with this. But I don't think many people realize /sys/hypervisor exists.


How is the latter possible in the case of 1 vCPU VM instances if 1 vCPU == 1 hyperthread? Do they give you a full core in this case?


1 vCPU instances do not simultaneously share cores with other customer instances via SMT (Intel Hyper-Threading Technology). A core can be sequentially time sliced between customer instances when you use a fractional (m3.medium) or burst (T2 instances) CPU instance type.

https://aws.amazon.com/ec2/instance-types/ includes a note that provides some additional information about when vCPUs utilize Intel HT Technology: Each vCPU is a hyperthread of an Intel Xeon core except for T2 and m3.medium. (emphasis added)

https://aws.amazon.com/ec2/virtualcores/ provides a table of the number of cores allocated per instance. Notice that an instance like m5.large has 2 vCPUs and 1 core. An instance like t2.small has 1 vCPU and 1 core.

Disclosure: I work for AWS


Um, no. If I schedule a micro instance in a particular zone and nothing else, for the duration of my timeslice the VM will have to monopolize the core. Optimal use of vCPUs would demand that two VM threads get scheduled to the core to take advantage of HT, which Google says it won’t do. Timeslicing doesn’t solve this problem. At least that’s my reading of what they said.


Re-read what I wrote above. It covered both simultaneous and sequential sharing. Here is the part you are concerned about: instances do not simultaneously share cores with other customer instances via SMT.


Re-read what I wrote. Imagine a machine with 8 cores and 16 hyperthreads. Now imagine you need to optimally use the execution ports in each core. To do that you would need to have 16 VM threads all feeding the ports simultaneously, 2 per core. If all your workloads on that machine are from different customers, you can’t do more than 8. If you timeslice more than 8, then the question is, what is it that you’re selling as a vCPU? If it’s a timesliced core, then it’s not what they say they’re selling (a hyperthread).


You:

> If it’s a timesliced core, then it’s not what they say they’re selling (a hyperthread).

vs.

_msw_:

> A core can be sequentially time sliced between customer instances when you use a fractional (m3.medium) or burst (T2 instances) CPU instance type.

AWS instance types page ( https://aws.amazon.com/ec2/instance-types/ ):

> Each vCPU is a hyperthread of an Intel Xeon core except for T2 and m3.medium.


Not in Google’s case. There vCPU == hyperthread. To quote https://cloud.google.com/compute/docs/machine-types: "For the n1 series of machine types, a vCPU is implemented as a single hardware hyper-thread". One more area where Google is more explicit. Only their partial vCPU instances are timesliced: "Shared-core machine types provide one vCPU that is allowed to run for a portion of the time on a single hardware hyper-thread on the host CPU running your instance".


Except this thread has been about AWS, not Google.

Who cares how Google defines things?


It was about both.


> Optimal use of vCPUs would demand that two VM threads get scheduled to the core to take advantage of HT

It would.

So those cores won't be used optimally. No big deal.


Yeah, and the throughput of HT threads on an already busy core was not that high, last I checked.


timeslicing works fine, you just have to flush the cache when switching


I’m not disputing that. I’m just saying that for the specific case of jobs with 1 vCPU which are all from different users you can only schedule half the billable capacity.


Or map pairs of hyperthreads to each requested vCPU, or disable hyperthreading. Which kills co-tenancy overcommit and makes the infra more expensive for a cloud provider.


Google writes that ‘Google Compute Engine employs host isolation features which ensure that an individual core is never concurrently shared between distinct virtual machines’.

However, GCE does offer shared core machine types (f1-micro and g1-small) with 0.2 and 0.5 vCPUs respectively. This seems to contradict their statement (unless the cores are not shared after all, but that doesn’t make sense from an economical standpoint).

Also, they offer machines with one vCPU, but since a vCPU is only a single hyper-thread and not a full core, this still allows for the core to be shared over multiple VMs. If this means that Google will stop using hyperthreading and instead give everyone a full CPU core per vCPU, that will likely give noticeable performance benefits (but cost more for them).


I think the key word is 'concurrently'. My guess is that micro and small instances indeed share CPUs, just never 'at the same time' and L1 is flushed before transition from one VM to another.

I work for Google Cloud, but not related to security or OS development, so am not aware how it is actually being done.


seems slow. Having to flush on every swap, I mean. I wonder if there won't be a move away from these kinds of offerings: if offering 0.2 of a core means putting in 0.3 of a core's effort, that's a 20-30% drop in how many you can run per core.


512 entries, 12 cycles to refill each entry from L2, at 3GHz that's only 2 microseconds worst case to refill the L1 cache. And that's if you keep the same task in it. If you're switching tasks most of it is going to get evicted anyway.
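Spelling that estimate out (just the arithmetic on the numbers above, nothing measured):

    #include <stdio.h>

    int main(void) {
        const double lines = 512;            /* 32 KiB L1d / 64-byte lines */
        const double cycles_per_fill = 12;   /* rough L2 hit latency */
        const double hz = 3e9;               /* 3 GHz */
        printf("worst-case refill: %.2f us\n", lines * cycles_per_fill / hz * 1e6);
        return 0;                            /* prints ~2.05 us */
    }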

GCE uses KVM, which defaults to the linux scheduler with time slices from 0.75ms to 6ms, so the extra impact should be negligible. It's possible they tuned it weirdly, but I can't think of any reason to do so.

Flushes that occur from hypervisor calls could possibly have an impact, but those will happen whether you share a CPU or not.


> and L1 is flushed before transition from one VM to another.

Is this true?


it's what the Google blog post was indicating


I believe that this is likely true, as they seemed to implement this for the TLBleed flaw.


The key word is "Concurrently".

They can give both hyperthreads to one machine for 50ms then both hyperthreads to another machine for 50ms.

For VMs with less than 2 vCPUs I suspect they give them two virtual threads and just schedule them for half the time they would otherwise.


If hyper-threading should be disabled for maximum security, this is good for AMD CPUs which maximize cores per socket.

Thread from 2 months ago on OpenBSD and hyper-threading: https://news.ycombinator.com/item?id=17350278


Or just keep using AMD CPUs, because yet again they are unaffected https://www.amd.com/en/corporate/security-updates


Never mind that AMD sockets used to last longer (less frequent incompatible socket changes), which meant you could go longer with your motherboard. Not sure if that's still the case, though.



Ryzen Pro has a more stable socket.


this is still the case


I'm seriously thinking about getting an AMD CPU for my desktop when this i7 4770k finally needs replacing.


I upgraded from an i7-2600 to an 1800X over a year ago. Once the memory issues were solved (running at 3200 MHz vs ~2800 MHz), I've been happy with my purchase.


I just went from i5-3750K to Ryzen 2700X and had a similar issue getting to the advertised memory frequency. Looks like it’s par for the course and will get sorted out in due time. I’m happy with the AMD CPU outside of that one problem.


I recall that the big (last) problem was setting the command rate to 2T. I eventually figured out that my ASUS board had a "geardown mode" enabled that always forced 1T, so I turned that off and I've been at full speed since. Not sure if this will help you.


What was the memory problem specifically, and how did you solve it, if you don't mind going into that?


(answered in sister comment)


OTOH it seems like all the major cloud providers are still happy using Intel. They probably have a pretty good reason to do so.


It's something of a self fulfilling prophecy I suspect. People use Intel because people use Intel. "Nobody got fired for buying IBM" etc.

As a sysadmin (who admittedly doesn't deal with hardware much), these issues with Intel chips (the mitigation of which can seriously decrease performance), and the relative ease with which AMD has come through the problems, make me wonder whether we would be better off with AMD.

I'm reminded of this experiment:

https://www.dowellwebtools.com/tools/lp/Bo/psyched/16/Smoke-...

"People respond slower (or not at all) to emergency situations in the presence of passive others."


> and the relative ease with which AMD has come through the problems make me wonder if we would be better with AMD.

I think the only reason this is the case is the difference in market share. There are many more Intel processors out there, so finding an Intel vulnerability makes for a much better academic paper (and is much more lucrative for bad actors).

AMD might have made some inroads into the enthusiast market lately, but with cloud providers they are basically non-existent. These guys buy (or manufacture) servers by the tens of thousands. They hate rebooting servers, even if they have VM live migration worked out (as everybody does these days). The BIOS and RAM problems that people mention here make AMD a complete no-go.

I'm sure that now there are people at Amazon/Google/Microsoft thinking hard about reducing their dependence on Intel, but I doubt we will see any difference for years.


Convenience. Yes they're both x64, but you know your supply chain, vendors, what works and what doesn't, and maybe even it's just for peace of mind.

We introduced some AMD servers in January, and the setup had its ups and downs. First we had to go through multiple BIOS updates for stability reasons, and kernel updates kept improving the overall experience with the CCX architecture... I mean, they're working, but Intel systems have almost always delivered a plug-and-play experience for the last decade or so.


Unfortunately AMD is not doing well in the server market at the moment (last time I looked into it)


To be clear: HyperThreading is relevant but not a panacea here. Disabling HT helps for the hypervisor leak, and while the Intel microcode changes now incorporate HyperThreading status into key generation, we don't know if that prevents the SGX leak. Ensuring untrusted code is on a sibling core and L1 flushes should be a mitigation for the SMM attack (it's identical to the VMM attack in that sense).


We're at a stage where, to be safe on x86, we need multiple layers of microcode and kernel mitigations.

At which point do we agree the performance increases over the last 20 years have been built on sand and move elsewhere?


I think everyone can agree that CPU progress has been largely made from sand, yes.


Hah, splendid pun. Well done!


I'm torn if I should be proud or disappointed that my highest voted comment is a pun. On HN and on a thread that is fairly serious considering it's about modern CPUs catching fire (though I heard Intel was independently working on that catch-fire feature with their newest 28-core release)


Very good :D


I'm pretty sure this attack doesn't apply to AMD, since it's built on the original Meltdown attack, which was Intel-specific. So what'll happen is Intel will change their chips to not do prefetching without also doing a permissions check, like AMD does. Meltdown solved.


It's not built on meltdown. It's not about violating permissions, it's about treating the contents of a page table as valid even if the page is not present or (in the case of EPT, which is worse) also treating the physical address on the guest as a physical address on the host.

However, unlike meltdown it cannot access data that is not already in the L1 cache.


I mean, the V bit, and the other contexts are just PTE permissions. It's literally the same root cause as meltdown, that page table faults occur particularly asynchronously on Intel hardware and speculation occurs past those faults.


The root cause is the same but it's a different kind of page fault, and the effect is that you cannot read data that is not already present in the cache. On the other hand, meltdown doesn't break through the guest-host barrier when EPT is active.

Yes, deep down they happen for the same reason, but then so does Spectre as well.


I mean, no, they're way closer than Spectre is.

They're both around how page faults are asynchronous at a uArch level on Intel, and not any of the other vendors. This and Meltdown don't apply to AMD or ARM.


Yes, it's true they're close. The lazy FPU state restore bug is similar, too. Though, Meltdown did apply to some ARM 64-bit processors.


There is a lot of legacy cruft in x86, but it's the devil we know. After decades of use, we are still discovering vulnerabilities, in a platform thought to be well-understood.

The closest alternative would be ARM. In any case, it's a massive undertaking.


the problem is that the platform is not well-understood. The design is convoluted and often not well-principled.


It is not obvious to me that the design warts of x86_64 (wholeheartedly agreed -- I don't like x86_64 either) are the cause of the security problems we're seeing. Other architectures also have speculative execution and multiple rings. It's a lot easier to avoid vulns that are a consequence of increased performance demands when performance is not the primary reason people pick your platform.


> The closest alternative would be ARM

On the contrary there's SPARC, MIPS, PA-RISC, POWER and a whole heap of others that perhaps were written off prematurely. Need to move quickly tho' while some vestiges of expertise still remain.


SPARC, Tilera and Parallella are unfortunately gone. I had high hopes for the latter two, especially since grid CPUs are perfect for machine learning. Much better than GPUs.


Fujitsu still have SPARC on their roadmap http://www.fujitsu.com/global/products/computing/servers/uni...


Roadmap yes, but Linux support no. https://wiki.debian.org/Sparc64


I'd argue that POWER is still alive and well.


We just bought a brand new POWER 7. The vendor has no plans to move to another architecture (that I could get out of them anyway).


Not all needs are driven by security.


Move where exactly? At this point x86 is the least of all evils that have respectable performance.


And this is why computing monocultures are bad, because the dominant architecture may have painted itself into a corner.

That's why alternatives like Open POWER are important.


No, these sorts of faults can occur in any high performance chip. Intel is bearing the brunt only because they're the fastest. But Meltdown also affected an Apple core if I recall correctly and Spectre nailed them all.

This isn't as simple as "Intel/x86 sucks, let's go use SPARC". The causes run much deeper and the necessary fixes may or may not be architecturally elegant or simple.


RISC-V is far more promising than OpenPOWER ever was.



Last I checked, the Mill is vaporware. They have some interesting patents, a private simulator, parts of a compiler toolchain, and that's it. At least with OpenPOWER and RISC V, you can buy processors and even complete systems.


Are they alive? No news in the last six months…


It's probably time for a new architecture that isn't so convoluted with decades of optimizations and iterative improvements.

It's also time for a computer system with one and only one general purpose processor (no tiny CPUs in storage or "system management" or every other device)

Probably something like a programming language/OS/computer system written new with a CPU based on current GPU designs.


You won't make any CPU of reasonable performance without speculative execution and all the rest. You're limited by data dependencies, and the only way to break them is to "cheat".

Unless you're willing to run on the equivalent of a Cortex-M0, you have to live with it.


I hear delay slots aren't so bad, and compiler optimizations have gotten really good since the Itanium days when they last (half-heartedly) tried VLIW.


Delay slots are merely the pipeline of the uarch peeking through; it's bad practice because you'll probably want to change the pipeline depth at some point. Other uarchs have them but hide them from the public API.

VLIW only removes the logic to detect data dependencies - it doesn't work around the actual need to wait for data to be ready.

None of this has much to do with speculative execution, which is guessing which way a branch will go. You simply can't have what would be considered a modern computer without it.


There is nothing particularly convoluted in either meltdown or this new attack. This optimization could have happened on any other architecture.

The legacy parts have either been disabled in 64-bit mode, or they are implemented in microcode. Other architectures are not simple either, ARM64 has incredibly complicated paging for example.


> no tiny CPUs

That is far more of a constraint than you think. It's probably quite hard to have even a gigabit Ethernet subsystem without one.


Done that, not very hard to have just a dumb ethernet MAC with hardware rx and tx ring buffers. You could map that over PCIe just fine.

Of course no offloading, but you wouldn't notice any performance drops, if the ring buffers are large enough.


Or use the off-brand x86 (AMD) which has had very few of these vulnerabilities be useful on it.


The off-brand that invented x86-64? :)


Getting the SGX attestation key would permanently break SGX-based blockchain (Hyperledger Sawtooth?) mining, if I understand correctly. It's amazing that (if this is correct) this vulnerability has permanently broken a large software project.


SGX Remote Attestation was built specifically to deal with events like this. Intel starts to reject attesting to vulnerable microcode revisions after some period following disclosure. In this case, they even postponed disclosure until patched microcode revisions were available, and those revisions were already required for successful attestation.

If said SGX application wasn't built around this model, then it's probably not a valid use case of SGX.


Isn't this pretty much a no-go for any large public software project, given that microcode updates often depend on the OEMs, which are notoriously bad about supporting devices older than about a year?


I think that is mostly the case for BIOS and platform firmware. CPU microcode can be loaded by the OS (if the OS allows you to, as Linux does - https://www.cyberciti.biz/faq/install-update-intel-microcode...).


BIOS updates are required for most SGX-related microcode updates, as the microcode has to be up-to-date before enabling the SGX feature via a MSR (which is usually done by the BIOS). This is so you can't start an enclave with old microcode, exploit it, upgrade microcode, and still pass remote attestation.

Also, the more major spectre-related microcode updates have to be applied very early (in the BIOS) probably for technical reasons. For this latest microcode update, for example, Intel didn't even include it in their downloadable microcode package as you linked to. On my v6 Xeons, I was able to get to revision 0x84 with the latest OS microcode package, but 0x8e with a BIOS upgrade.
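If you want to see which revision you're actually running, here's a minimal sketch (an illustration of mine, not an official tool) that reads the "microcode" field Linux reports in /proc/cpuinfo; compare the value before and after an OS microcode package update versus a BIOS update:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char line[256];
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f) { perror("/proc/cpuinfo"); return 1; }
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "microcode", 9) == 0) { /* e.g. "microcode : 0x8e" */
                printf("%s", line);
                break;                                /* one line per logical CPU */
            }
        }
        fclose(f);
        return 0;
    }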


It looks like Intel is pushing these security-related patches pretty hard on vendors, as this latest patch was available in late May on one IBM board Softlayer uses, and early June on another Supermicro board they use.


Fuck SGX. Fuck it long and hard.

Breaking anything that enables DRM is a win in my book.


Even if it can steal all your private keys?


I'm tempted to just buy the cheapest 8th gen intel cpu and play with that to extract widevine keys from sgx


Since pirates have most likely broken widevine (to the best of my knowledge - I don't have direct confirmation of this), this was also my first thought. I wonder if they've used something like this. As far as I know, it would constitute a complete break of the Widevine DRM model.


What is the net performance impact of all these Meltdown, Spectre, now Foreshadow mitigations?

-10%? -20%? -30%?

Have we gone back 3 CPU generations?


This is something I'd really like answered.

I'm still running an i7-3770k on my desktop at home. I was considering upgrading when the 9th gen comes out in October, but if the Spectre/Meltdown/Foreshadow fixes have a significant performance impact, it won't be worth spending the money. As it is, I'll already need a new motherboard and RAM, since I'm still on DDR3.


What secrets do typical VM hosts (like cloud service providers) have that must be protected from guests?


Private keys for HTTPS certificates. API keys or different credentials for other systems.


Huh, interesting. VM _hosts_ serve HTTPS sites and hold API keys? Those aren't done by other servers?


I think the idea is that a given VM client can more easily trust that their keys and such will not be captured by other VM clients if they know that SGX is working.


Can anyone "explain like I'm 5" this issue?


Modern processors execute instructions speculatively, that is, without knowing whether the instructions should actually run. If it turns out they weren't supposed to be executed, the processor undoes the effects of the instructions. However, not everything is undone. If the speculatively executed instructions access RAM, they can move data in and out of the cache. By measuring how long it takes to read memory, you can tell what memory the speculatively executed instructions accessed.

Speculative execution is what allows Meltdown to work. You make the processor speculatively execute an access to kernel memory, then access a memory address based on the value of the data read from kernel memory. Intel processors perform the speculative access without first checking whether the memory access is allowed, while AMD processors check before speculating. This is why AMD processors aren't vulnerable to Meltdown.

SGX was thought not to be vulnerable to a speculative execution attack because attempts to access SGX memory without having the necessary access just yield -1 for reads (and writes are ignored), as opposed to causing an exception as accesses to kernel memory do. However, if the SGX memory is marked as not-present, then attempts to read the memory will trigger a page fault exception. The page fault circumvents the normal SGX protection and allows the memory to be read by speculatively executed instructions.
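To make the "measure how long it takes to read memory" part concrete, here is a much-simplified flush-and-reload timing sketch (illustrative only, and my own assumption of how to show it: it just demonstrates that a previously touched probe line reloads measurably faster; a real Meltdown/Foreshadow exploit additionally needs the faulting speculative access, fault suppression, and a lot of care):

    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>

    #define LINES  256
    #define STRIDE 4096                   /* one page per value to dodge the prefetcher */

    static uint8_t probe[LINES * STRIDE];

    static uint64_t time_read(volatile uint8_t *p) {
        unsigned int aux;
        uint64_t t0 = __rdtscp(&aux);     /* timestamp before the load */
        (void)*p;
        uint64_t t1 = __rdtscp(&aux);     /* timestamp after the load */
        return t1 - t0;
    }

    int main(void) {
        uint8_t secret = 42;              /* stand-in for a byte leaked speculatively */

        for (int i = 0; i < LINES; i++)   /* flush every probe line out of the cache */
            _mm_clflush(&probe[i * STRIDE]);
        _mm_mfence();

        /* the "victim" access leaves a footprint in the cache */
        (void)*(volatile uint8_t *)&probe[secret * STRIDE];

        int best = -1;
        uint64_t best_t = UINT64_MAX;
        for (int i = 0; i < LINES; i++) { /* the fastest reload reveals the value */
            uint64_t t = time_read(&probe[i * STRIDE]);
            if (t < best_t) { best_t = t; best = i; }
        }
        printf("fastest line: %d (planted secret was %d)\n", best, secret);
        return 0;
    }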


Interestingly, the upcoming CPUs with built-in resistance to Meltdown (new MSR bit RDCL_NO set to 1) will also be immune to L1TF already.
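If you want to check for that bit yourself, here's a minimal sketch (my illustration) that reads IA32_ARCH_CAPABILITIES (MSR 0x10A, where bit 0 is RDCL_NO) via the Linux msr driver; CPUs that don't enumerate the MSR will simply fail the read:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define IA32_ARCH_CAPABILITIES 0x10A
    #define RDCL_NO (1ULL << 0)   /* CPU not susceptible to rogue data cache load (Meltdown) */

    int main(void) {
        int fd = open("/dev/cpu/0/msr", O_RDONLY); /* needs root and the msr module */
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }
        uint64_t caps = 0;
        if (pread(fd, &caps, sizeof caps, IA32_ARCH_CAPABILITIES) != sizeof caps) {
            perror("rdmsr IA32_ARCH_CAPABILITIES (not enumerated on this CPU)");
            return 1;
        }
        printf("RDCL_NO: %s\n", (caps & RDCL_NO) ? "yes" : "no");
        close(fd);
        return 0;
    }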




AWS bulletin: https://aws.amazon.com/security/security-bulletins/AWS-2018-....

Amazon Linux bulletin: https://alas.aws.amazon.com/ALAS-2018-1058.html

RHEL patches are out. CentOS after delay, presumably. Nothing yet for Debian/Ubuntu.

TL;DR: AWS is patched. Go update your kernel (especially if you run other people's code).



On Ubuntu 14.04 (Trusty), be warned that there is a showstopper bug in the latest kernel update to 3.13.0-155 [1].

[1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1787127


404 not found on the AWS link.




