ETS protocol does not provide per-session forward secrecy (nist.gov)
65 points by jfreax on March 2, 2019 | 37 comments



The real story here is not about security, it's about markets and profit (as always). Currently, there's a huge market in DPI boxes for inspecting TLS traffic; these boxes are often poorly implemented, tied to expensive support contracts, and super flakey.

These boxes can only work with a single static secret, which is shared between the DPI boxes and the actual servers. If the servers are using a forward secret mode, this is no longer enough, you have to share a secret for every session.

This necessitates some kind of software running on each endpoint to transmit these secrets. But wait, the moment you have to have software running on every endpoint, why do you need a special box? Why not do it all in software?
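
A minimal sketch of what that endpoint software amounts to, using the NSS key log format that Python's ssl module supports natively (the file path is illustrative, and a real deployment would need to ship the secrets to the DPI system, which is the genuinely hard part):

    import socket
    import ssl

    ctx = ssl.create_default_context()
    # Every handshake appends its per-session secrets to this file.
    ctx.keylog_filename = "/var/log/tls-secrets.keylog"  # illustrative path

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
            tls.recv(4096)

    # Anything holding the logged secrets (Wireshark, a DPI system) can
    # decrypt exactly these sessions -- one secret per session.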

This represents a huge threat to the DPI market. No box means no lock in, no mandatory upgrades, no support contracts. Sure, software can have these things too, but it's inherently a more open, competitive market where you are vulnerable to open source invasion. Solutions like eTLS are just a last ditch gnashing of teeth from DPI box sellers, trying to prevent a lucrative market from disappearing.

Once you move everything to software: a) competition in general gets better, b) open source starts to take over, and c) security will improve.


> These boxes can only work with a single static secret, which is shared between the DPI boxes and the actual servers. If the servers are using a forward secret mode, this is no longer enough, you have to share a secret for every session.

Actually, the boxes can also MitM the entire SSL connection. The shared-secret approach just happens to be a much more efficient system: it can easily be turned off without affecting the connection, and it doesn't introduce extra latency. Moreover, it allows for post-hoc DPI rather than requiring that inspection happen on-line.

> But wait, the moment you have to have software running on every endpoint, why do you need a special box?

There are reasons beyond 'market dominance' for not wanting to do this on the end-points. End-points are numerous, heterogeneous, and occasionally difficult to access. This makes actually implementing this system on all endpoints very hard, let alone keeping all end-points up-to-date.

In general, which sounds like the nicer approach to take: a "drop-in solution", or a "solution that affects all endpoints and needs to support all endpoints"?

The discussion is a lot more about 'Is PFS an acceptable loss for getting DPI' with a very large side discussion about whether DPI should even be possible.


The passive boxes aren't truly drop-in. You need to extract every single private key that will be used for traffic. This is easier than modifying the software to add logging, but not tremendously easier. Endpoints being numerous, heterogeneous, and difficult to access all apply to existing boxes. And whether the endpoint is up to date doesn't matter to either method.

It's not a big burden to install a MitM box either; most places call it a load balancer.


You can make it less of a hassle by just using the same private key on every endpoint...


> Actually, the boxes can also MitM the entire SSL connection. The shared-secret approach just happens to be a much more efficient system: it can easily be turned off without affecting the connection, and it doesn't introduce extra latency. Moreover, it allows for post-hoc DPI rather than requiring that inspection happen on-line.

Great point! Though as you say, MitM scales much worse and introduces additional points of failure.

> There are reasons beyond 'market dominance' for not wanting to do this on the end-points. End-points are numerous, heterogeneous, and occasionally difficult to access. This makes actually implementing this system on all endpoints very hard, let alone keeping all end-points up-to-date.

Absolutely true, but this does lead to a qualitative advantage for open standard / open source solutions, where you externalise the costs of additional implementations.

> In general, which sounds like the nicer approach to take: a "drop-in solution", or a "solution that affects all endpoints and needs to support all endpoints"?

I don't think this is quite the right distinction, looking at the deployment issues middleboxes have caused for TLS 1.3 and QUIC... I think it might be better phrased as:

"do you want to deploy some static hardware which has to support all endpoint network protocols correctly and upgrade when new protocols come along or do you want to write/use the software for each endpoint you choose to use?"

My point is that software is much cheaper and more flexible (in the long run) than hardware.

> The discussion is a lot more about 'Is PFS an acceptable loss for getting DPI' with a very large side discussion about whether DPI should even be possible.

I agree this is what most of the discussion is about, but I don't think it's the real issue. Here are the NIST comments that were posted a few days ago:

https://csrc.nist.gov/CSRC/media/Publications/sp/800-52/rev-...

Check out the NSA's comments on page 21!

> With respect to TLS it seems better to deprecate all non-forward secure cipher suites, not just RSA key transport

This isn't just "we support PFS in TLS 1.3", this is actually "please take non-PFS TLS 1.2 modes away from people"!


> Absolutely true, but this does lead to a qualitative advantage for open standard / open source solutions, where you externalise the costs of additional implementations.

This subject doesn't seem like the most attractive for open-source solutions. Especially when it comes to supporting legacy enterprise systems. This feels more like a case of a consortium of companies creating a standards body.

> looking at the deployment issues middleboxes have caused for TLS 1.3 and QUIC (snip) My point is that software is much cheaper and more flexible (in the long run) than hardware.

I don't think middle-boxes as ETS intends them need hardware acceleration. As such, they could just as easily be implemented in software. This would give the same software-flexibility as modifying endpoints, with the advantage of only needing to support a few systems in your network rather than every single one.

I'd expect the same ossification and bad behavior in software middleboxes as we have had so far. But honestly, I see the same thing happening by supporting this on the end-points.

I'd summarize my position as follows:

If we want to support inspection of traffic by network owners, I see real advantages to selectively breaking forward secrecy for them. But that is a big if. We might be better off just telling those network owners to suck it up and MitM everything.


From a security perspective, it is better to have the endpoints just share the session secret with a DPI box, instead of running the DPI software on the endpoint.

If the endpoint is compromised, in the first scenario, the most the attacker can do is not share the session secret. This is easily detectable.

In the second scenario, the attacker can pretend that the endpoint-local DPI software is still being run, while completely going around it.


Sorry if my point wasn't clear. I do mean that there should be DPI software running somewhere external, the point is just that you don't need dedicated hardware to do it. I completely agree doing everything on the endpoint isn't going to end well.


Had to duckgo it: DPI = deep packet inspection.


DPI = Deep Packet Inspection


Just remember, if it has the extra word "Enterprise" in it, it's probably an insecure, convoluted, undocumented, slow, etc. version of the original...


I don't get the complaints. As far as I understood (and the IETF appears to agree), eTLS is not a protocol, it is a (server-side) implementation variant of TLS.

And it is a universal construction: for any cryptographic protocol, one party can replace its random number generator with a deterministic CSPRNG and store or leak the seed. This is undetectable from outside. There you go, a backdoor for later reversal of forward secrecy: forward secrecy is obtained the moment you erase the internal state of your CSPRNG from memory, and the server can simply not do that, without violating any protocol assumptions.
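
A minimal sketch of the construction, in Python with the pyca/cryptography X25519 API (the seed and the HMAC-counter derivation are invented for illustration):

    import hashlib
    import hmac

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    SEED = b"operator-held secret, deliberately never erased"  # illustrative

    def ephemeral_key(session_counter: int) -> X25519PrivateKey:
        # Deterministic CSPRNG standing in for fresh randomness:
        # HMAC-SHA256(seed, counter) yields the 32-byte private key.
        raw = hmac.new(SEED, session_counter.to_bytes(8, "big"),
                       hashlib.sha256).digest()
        return X25519PrivateKey.from_private_bytes(raw)

    def raw_private(key: X25519PrivateKey) -> bytes:
        return key.private_bytes(serialization.Encoding.Raw,
                                 serialization.PrivateFormat.Raw,
                                 serialization.NoEncryption())

    # Indistinguishable from honest randomness on the wire, but session 42's
    # "ephemeral" key can be re-derived whenever the seed holder likes:
    assert raw_private(ephemeral_key(42)) == raw_private(ephemeral_key(42))

Erase the seed and you have forward secrecy from that moment on; keep it and every past session stays recoverable.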

Specifying how to implement this in practice is worthwhile; it is not a weakening or violation of TLS, instead it is an interesting description of inherent properties of TLS.

The naming (eTLS) might be unfortunate. Better to just make it an RFC on "Cryptographic backdoors for TLS".


As far as I understand, this garbage protocol is designed to be compatible with TLS 1.3 clients.

Can clients detect the use of this, and if detected refuse to connect with a scary warning? That should kill this abomination fairly effectively.


Afaik the protocol is merely TLS 1.3 with fixed DH parameters. In that case it's pretty easy to detect: keep a client-side list of DH parameters used by servers (hashed, limited to the last n connections), and terminate any connection that shows reuse.
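
A toy sketch of that check in Python (the history size and the choice to hash the raw key-share bytes are arbitrary):

    import hashlib
    from collections import deque

    class KeyShareWatch:
        """Flags reuse of a server's supposedly ephemeral DH key share."""

        def __init__(self, n: int = 1000):
            self.seen = deque(maxlen=n)  # hashes of the last n key shares

        def reused(self, server_key_share: bytes) -> bool:
            digest = hashlib.sha256(server_key_share).hexdigest()
            hit = digest in self.seen
            self.seen.append(digest)
            return hit  # True -> terminate the connection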


You're essentially losing PFS if you do this, since those keys are now available. This would work, though it would probably have to be at the application level.


> (hashed, limited to the last n connections)

This is hilarious for the sheer irony of a fix to one compliance issue creating a different compliance issue.


LOL they changed the name again. This is eTLS (aka not actually TLS, but let's jump on the name).


Hilarious :)


Why do people hate eTLS so much? What do you care what enterprises do within their own networks? They have their requirements and they’ll have to implement them one way or the other.


From what I can see this protocol is compatible with TLS 1.3 clients. It makes clients believe perfect forward secrecy is in effect while in fact it isn’t.

The risk isn’t much about internal networks, it’s when this starts leaking onto the open internet.

Also, the fact that they call it "eTLS" to trade on TLS's reputation, when it's actually a deliberately degraded version of TLS.


There’s no way to guarantee forward secrecy to a client. If regular TLS promises that, the committee is lying and they know it.


There are constructions that provide forward secrecy when both the client and server follow them. This is what TLS aims to provide.

If the server doesn't faithfully implement the protocol, of course it will not provide the expected security guarantees. But then it isn't the committee that is lying; it's the server, by claiming to implement TLS and then not doing so.


Forward secrecy is mostly a myth anyway when the "ephemeral" keys used to generate the session are kept in memory for weeks, months, or years already (e.g. HAProxy).


Sure, if you want to pretend that an easily-fixed bug makes security a myth.


It doesn't matter how easy the bug is to fix, if 90 out of 100 sites don't fix it. In this case it's less of a bug than it is a thorn, because rotating the keys requires knowing when they can actually expire, which requires state that the process holding the keys usually doesn't carry.
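
The mechanics of rotation itself are small; the part that needs state is knowing a lifetime after which old keys are safe to drop. A sketch of the mechanical part in Python (the one-hour lifetime and the two-key window are arbitrary):

    import os
    import time
    from collections import deque

    class RotatingTicketKeys:
        """Keeps current + previous key; older keys fall off and get erased."""

        def __init__(self, lifetime_s: int = 3600):
            self.lifetime = lifetime_s
            self.keys = deque([os.urandom(48)], maxlen=2)
            self.rotated_at = time.monotonic()

        def current(self) -> bytes:
            if time.monotonic() - self.rotated_at > self.lifetime:
                self.keys.appendleft(os.urandom(48))  # oldest key drops out
                self.rotated_at = time.monotonic()
            return self.keys[0]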

But my point was more along the lines that PFS was never a guaranteed contract with the client, only a possibility offered by certain key exchange protocols, and even then, easy enough to get wrong that most people did.


Why wouldn't they? That group is trying to standardize a protocol that effectively negates a whole lot of progress and even tried to piggy back on the TLS name. Their stated requirements boil down to snake oil and laziness. If companies or groups thereof want to use security measures that aren't on par with the state of the art and intentionally ignore recent learnings, they of course still have that capability but I don't see why they should be given an opportunity to hide that fact behind a known bad standard. That'd only lead others to be forced to use a broken protocol for reasons like compliance.


Because companies in some sectors are required by law to inspect all traffic. While TLS 1.3 doesn't prevent that in principle, it makes it unfeasible to do so in practice, given the number of sessions created in a large organization.

I work for such an organization, which actually took a fairly reasonable stance and told BOA to piss off when they asked us to join them in petitioning the IETF to make exemptions to PFS in TLS 1.3.

Our current stance is that we disallow it internally until the vendors that provide us with DPI and web traffic inspection solutions have full, scalable support for TLS 1.3, or until the regulation changes in a way that no longer requires us to capture, store, and be able to decrypt all user traffic within the network.


This was never about TLS. Only a stupid person would go "right, we have to decrypt traffic, we control the clients, let's break the crypto".

Surely your IT department already updates the software on client computers. Time to put on their big boy tech pants and decrypt data where the secrets are, on the clients. Then your industry can stop harassing everyone else for bad crypto.


Decrypting traffic on the client isn’t always possible due to how modern browsers operate.

Decrypting traffic on clients is also much harder due to the multiple types of clients you have and the fact that there is no easy way to MITM every connection on the client.

The security threat model by definition treats clients as untrustworthy, hence relying on them for decryption is a flawed approach.

If you are going to be cocky and disrespectful, at least be right.


You control the client. There are companies making many, many millions patching Excel to do fancier charts; I'm sure whatever vendor you have now, desperately trying to steer the consortiums, can instead figure out how to hook the crypto library in the one browser you install on clients.

Yeah, it's a hard problem. If you don't know half the things your clients are doing, it's much easier to pretend all the security-conscious stuff goes through TLS, so you only have to break that. It's also obviously wrong, as we all learned when they started filling USB ports with glue.

The boxes already rely on the client, unless someone signed another CA=yes certificate.


Again, you do not trust your clients in this threat model, because you can't.

It’s simple a client makes an external TCP connection if that connection uses TLS the its MITMed on the network level and captured this happens to all connections if the client does not accept the handshake because for example the CA for the MITM box isn’t trusted or the client uses certificate pinning the client can simply refuse to proceed with the connection.

If the connection cannot be captured and inspected for any reason, it's simply terminated and the attempt is logged for future investigation.
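
A hypothetical sketch of that decision logic in Python (Flow and the action names are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class Flow:
        is_tls: bool
        client_accepted_handshake: bool  # False under pinning / untrusted CA

    def handle(flow: Flow) -> str:
        if not flow.is_tls:
            return "capture"           # plaintext: capture directly
        if flow.client_accepted_handshake:
            return "mitm-and-capture"  # re-signed by the inspection CA
        return "terminate-and-log"     # client refused; log for investigation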

There is no reason to break TLS on the client or compromise the browser; it's worse in every way and cannot be trusted.


If this was back in the days when everyone was running IE, maybe. But now there is less control over clients and their browsers. MITM is much easier: you just install a certificate on the browser. Going client-side means you need to change and modify every browser and piece of software with internet access, or install some slow crappy firewall-type thing and try to monitor things locally...


Cool, doesn't really sound like we're disagreeing on the question "Why do people hate eTLS?", right? I sincerely hope regulation for your sector will change to reflect changing technical circumstances (though I realize how long a process that'd be). Your steps sound like a sane way to handle it; I get that you're currently forced to not use it, and I appreciate that you're pushing back against weakening the protocol for the rest of the world. (Thanks for that, btw.)


> Their stated requirements boil down to snake oil and laziness.

Without knowing the internal structure of these particular organizations at all, that's quite a bold claim. If a company has a half million employees and their technology supports billions/trillions of dollars of transactions, it's quite likely that "laziness" has nothing to do with upgrading the entirety of the IPS & DLP products they support, to say nothing of solutions on the client or server side. They can't just edit some config and make all their technology magically support a new protocol that is explicitly designed to stymie their efforts.


Because I'd like to know that my private information (banking information for example) doesn't travel over insecure networks at my bank or other enterprises.

You could try to assure me that it doesn't, but I do not trust uninformed assurances.



