Hacker News
Enforcing TLS protocol invariants by rolling versions every six weeks (ietf.org)
278 points by fanf2 on June 12, 2018 | 81 comments



This is part of Chrome's scheme to avoid protocol ossification -- where you realize you can't deploy any enhancements because of widespread non-standards-compliant (but likely, for the original implementer, "good enough") behavior.

The idea is to catch them early by simulating future versions now, which they believe is best done by continually sending traffic that contains bullshit versions on the wire. It's fairly clever, in that any implementer who doesn't want to break significant portions of web traffic now must code their implementation in a way that anticipates Chrome's future versions, which is most sanely done by coding it in a way that happens to match what's in the spec. Meanwhile, noncompliant implementations will likely break for a huge number of users, causing widespread uproar and, presumably, pressure to fix the problem.

Google's clients (e.g. Chrome, Android) and servers originate and receive a high enough portion of traffic that they could pull this off on their own without anyone else's buy-in, and still achieve the intended effect, but they're reaching out to the community to build consensus. It's a hacky idea, but one that backs noncompliant implementations into a corner now, instead of them sticking around forever and causing incompatible changes later down the road.


Meanwhile, those who designed their stack properly enjoy running their software for decades across multiple platforms without modification.

I guess we just have to relearn all the lessons from the '80s.


So if someone deployed SSLv3 when it was the secure version and left that running for decades, you would describe this as proper and an ideal scenario that we should strive for?

I sure hope that's not a lesson we learn in any decade.


just because the current mess forces network encryption to be handled within the application, at its layer, doesn't mean that's the ideal solution.


Er, one of the lessons from the '80s was the end-to-end principle. Applications are the right place to handle encryption, and network encryption solutions like IPsec are full of layering violations.


One of the other lessons from the '80s, in high-assurance secure systems, was that you can get great ROI from throwing everything you can in terms of assurance at a security-focused kernel and VPN solution that everything else uses. Apps using secure tunnels, mainstream or high-assurance, got hacked way less on the crypto side, since rolling their own crypto led to more screwups than using solutions by crypto/security-focused developers. You can put the tunnel in the network layer, app layer, or whatever, so long as apps have a black box with good defaults and documentation on using it.

The first was BLACKER VPN. NSA has their Type 1 Link Encryptors. Mikro-SINA put it in on Intels with more open components. Now, I'm pushing for Wireguard variants in both app and network deployments. There's people laying groundwork for expanded use of it right now. Once that's done, I know some people who might put it in some low-privilege architectures.


Don't you think it ought to be handled by the kernel though? The same way TCP is not handled in userland nowadays. It's a shame that every app has to reinvent the wheel when it comes to crypto (even if most of them de-facto standardize around a few library implementations). I wish SSL/TLS was just one setsockopt away. You'd have one central point to handle certificate authorities, exceptions, debugging, etc.


Absolutely not - there's a ton of complexity in parsing certificates, checking Certificate Transparency signatures, contacting OCSP responders, chasing Authority Information Access references to intermediate certs (the latter two immediately require having a full HTTP client), parsing name constraints, etc. There are already too many ASN.1 decoders in my kernel and I don't want another one. And I can upgrade my userspace libraries much more easily than I can upgrade my kernel.

Importantly, the big reason for TLS to be at the endpoint and not use IPsec/tcpcrypt/etc. is because the important thing to verify in TLS is the hostname, not the IP address, and DNS resolution is in userspace everywhere (as far as I know). You could imagine a design which pushed both DNS resolution and TLS verification into the kernel, but it would be too monolithic of a kernel for even UNIX's tastes.

TCP is a relatively simple protocol (in the sense that it doesn't have too many moving parts, not in the sense that it's uninteresting) and fits fine in the kernel. But even for reliable transport, there are plenty of apps using their own thing over UDP (RTP, Mosh's SSP, QUIC, etc.) that are never going to get into a kernel. And even for TCP alone there's a bunch of potential complexity you can put into the kernel (DCTCP, New Reno, fq_codel, etc.) that have been hard to deploy precisely because they require a kernel patch.


what does certificate management have to do with communication encryption?

if you keep conflating responsibilities, you'll keep ending up with the same, flawed, "solutions".

besides, everyone and their dog now has a "certificate" from the likes of let's encrypt, so even their role as identity trust is basically moot.


> what does certificate management have to do with communication encryption?

What's the point of encrypting something if you don't know who you're encrypting to?

> besides, everyone and their dog now has a "certificate" from the likes of let's encrypt, so even their role as identity trust is basically moot.

Yes, everyone should have a certificate, and I'm not sure why you think this means "identity trust" isn't working. Certificates mean that you probably are who you say you are. They don't mean that you're a good person.


Userland TCP/IP stacks are in fact a thing once again, for the sake of performance and scalability in high end systems. It's not common to most systems, but to say it isn't done anymore isn't true. As we move beyond 40Gbps and 100Gbps Ethernet it's only going to become more common.

https://datatracker.ietf.org/meeting/87/materials/slides-87-...

https://shader.kaist.edu/mtcp/

https://github.com/google/netstack

https://openfastpath.org/

https://github.com/libos-nuse/net-next-nuse

http://www.f-stack.org/


Well, layering quic on top of udp means we're going back to userland networking too.


Never expect old code to handle new protocol elements. If it's not exercised regularly, it's not going to work.

Very few developers (outside of Google) are going to write randomized fuzzers to test for compatibility with theoretical future extensions. They're going to test that it works with existing stuff and then ship it.

So, the fuzzing has got to become part of the public ecosystem.


This isn't really the situation. The old code does not have to handle any new elements. It just has to compare (1,3) to (1,2) and send (1,2) the same as before. It has to look in an explicit list of optional items, and skip over all the optional items it does not know, and include only the ones it does know. This is all trivial and many implementations get it right. You don't need fuzzing for this part. Any remotely sane code would work just as well for (1,4) or (2,0).

There were probably some problems with odd dumb custom-coded gadgets. But mostly, the problem is with "value-add extra-security" boxes in the middle that don't _do_ the protocol at all. They are not a TLS client or a TLS server. They are a product which promises to spy on the traffic and "stop the hackers". They really can't do anything useful. They just see that the handshake packet had (1,3) instead of the familiar (1,2) and drop it. For good measure, they also block packets between those endpoints for 24 hours (so fallback retries don't work). Also they didn't add (1,3) to their recognition system a couple years ago when tls1.3 was in the works, because anyone buying or selling these things is just not very on top of things. So, all the middleboxes which are popular enough to matter, need to be tricked with obfuscation, so they're _really_ not doing anything.


Can't tell whether ploxiln knew this and was just using it as an example, but the TLS version indicator is dead anyway. The TLS 1.3 specification (passed last call but not yet published) says to always set this field as though you're TLS 1.2 and marks it "legacy_version"; the real version number is hidden inside a new extension, which they aim to prevent from rusting in place.

This was done because the version code ploxiln describes is so thoroughly rusted shut in middleboxes that TLS 1.3 would be undeployable in practice without such a change.

Also, "fallback retries" aren't a thing. The reason is downgrade protection. If I have a new protocol (TLS 1.3) with better security, but I'm happy to retry with an older protocol (say TLS 1.2) if that doesn't work, obviously bad guys on the path will just ensure my TLS 1.3 connections all fail so that they can attack the weaker TLS 1.2

TLS 1.3 guards against an attacker who tries to downgrade a TLS 1.3 connection: if both sides know TLS 1.3 and yet somehow the packets say TLS 1.2 when they arrive from the client, the Hello "random" value sent from the server will have the message "DOWNGRD" scribbled across part of it, and the client sees this and aborts because somebody is tampering with the connection. If the middlebox tries overwriting the bytes with "DOWNGRD" written in them, the random data doesn't match up and the connection fails.


What I don't get is why Google really bothers with middleboxes. If a business deploys them and then gets shut off from Google services, the admins won't take long to find and disable that middlebox. And if they don't, they go bankrupt; whatever happens, the Internet wins.


"Last change gets the blame".

Raymond Chen wrote some articles about how this impacted Windows 95. It doesn't matter that program X completely ignored the documentation in Windows 3.x, so it technically "makes sense" that the program crashes on Win95; the user experience is that Windows 95 broke Program X, and they just bought Windows 95, so they will demand a refund and moan to all their friends.

There is a limited tolerance for Chrome versions that don't work, because the lesson customers get is "Don't run Chrome" not "My middlebox is garbage". The tolerance is increased for security problems, so Google is more willing to lose say 0.1% of users because they enabled TLS 1.3 (the remaining 99.9% of users get improved security) than to lose 0.1% of users because they added a cool 3D logo and it crashes on a specific model of video card or version of Windows due to a driver bug. But losing 10% of your users is a disaster, and that was the ballpark for TLS 1.3 in earlier drafts (before it was taught to sidestep more middleboxes).


And how about putting an informative tooltip on Google's pages? They could use Javascript to query a test server, which responds with the correct headers. If the answer doesn't get through, print a message.

Hopefully someone will notice and bring it up with the IT or ISP, especially if the message says to do so, and that Google/internet could stop working in the future.

Actually, get more popular sites to do it, like Facebook, Twitter, Apple and Microsoft (heck, they could do it as a part of the operating system): if a lot of websites say it, users will tend to think that something is wrong on their end.

This was surely already brought up as a solution, so I wonder what the catch was, if any?

Since this would be browser-independent, the browser wouldn't get blamed, and if it's only a mild inconvenience, it shouldn't bother people that much (they have been using the web despite more invasive cookie notices). I would expect it to allow a critical number of non-conformant devices to be quickly disabled as a result.


>They could use Javascript to query a test server, which responds with the correct headers.

As far as I know, Javascript doesn't have low-level access to connections, not enough to run even a stub TLS 1.3 implementation.


I was more thinking about querying something like testtls.google.com, that would analyze the results server-side (this might require an extra round-trip), and return them to be displayed.


Presumably these middleboxes have some critical density.

If google deploys 1.3 and 50% of corporate users can no longer reach google's services, that would be seen as google's mistake. Moreover, that would hurt google quite a lot.

The issue is that these boxes are already deployed, and it wasn't noticed just how shitty they were until we tried to deploy 1.3. The system has essentially rusted shut.


I wish Google, Apple and Microsoft just worked as an alliance here. They are all working on TLS and if together they deployed TLS 1.3 no middlebox on earth would stand a chance.


Sometimes I wonder the same. Maybe they think it's hard to change something in an enterprise? Or they are afraid of losing the enterprise customer (because Microsoft works here!). Or just don't want to use their influence to evolve the protocol?


Yup, good clarifications.

As you say, "fallback retries" are bad. But that is what browsers did, some years ago, when tls1.2 was less common ... and they added the TLS_FALLBACK_SCSV "not-a-cipher-suite" to try to detect attacker-caused downgrades in a dumb-server-compatible way ...


Someone further down mentioned a Raymond Chen quote, so I'll point out here that it might be useful to have a value like the one described here:

https://blogs.msdn.microsoft.com/oldnewthing/20040211-00/?p=...

Summary: a video driver cheated by having its implementation of the "do you support this DirectX feature" API always return true no matter what feature was asked for (and it didn't support everything, obviously). This made things crash and led to Microsoft (who didn't write the driver) getting complaints.

The solution, since DirectX features used GUIDs as their identifiers, was to take the MAC address of a new network card, use it to generate one GUID, then smash the card. Since they knew that MAC would never generate another GUID, they put in a check that would ask video drivers "do you support the <GUID from smashed network card> feature?" And if the driver claimed to support that feature, DirectX would know not to trust the driver's claims of feature support.


> because anyone buying or selling these things is just not very on top of things

Sometimes these things are required by law and/or industry standards. IIRC banking sector companies are required by law to record all communication metadata of their employees... which only works with said middleboxes.


You say that as if there were no alternative. They can just install the necessary monitoring on the end-points or buy middleboxes that do proper TLS MITM by completely repackaging the transport instead of trying to mess with the headers.


I guess this is what's meant by "antifragile" -- not merely durable in the sense of "resistant to stresses", but actively improved by exposure to stress.

Clever! I like it.


Why can't you code your implementation to safely ignore new protocol elements? They aren't asking you to support the new elements, just ignore them.


They tried this. This effort is explicitly because current implementations failed to correctly ignore new features. Since the previous approach was shown not to work in practice they are trying this approach which is intended to make incorrect implementations more obvious.


Some middleboxes treat new protocol elements as a sign of a hack attempt, and drop the connection. It's not a coding error, they're doing it on purpose. Which is why the current push to have as much as possible in the encrypted portion of protocols: so these crazy middleboxes can't prevent protocol evolution.


Treating unknown packets as a "hack" is stupid.

Didn't we already learn that lesson from ICMP blocking breaking path MTU discovery?


Actually, they "learned" from the "ping of death" that unusual packets are an attack. They have to unlearn that "lesson".


You can. But who is going to make sure you don't skip this part and make more profit than your competitor?


Crashing on unknown extensions doesn't seem like a profitable differentiator.


Who's gonna notice your mishandling of not-yet-used-in-the-wild parts of the protocol? If the answer is no one, then it doesn't matter for your profits, and not caring about this case when coding increases your profits. Google is now trying to make the answer: Chrome users.


Again, the development costs of "goto drop" and "/* ignore */" are quite similar, so it's unclear where these profits are coming from.


They come from the cost savings made by a pointy-hair telling that smart aleck code monkey that bothered actually reading the spec that his concerns aren't going to delay the ship date or go into the budget for developer time -- but you know maybe in six months (every six months, it'll be six months out) we can totally revisit that.

Then it ends up in production in too many customer sites with IT that avoid updating stuff (because the updates frequently break things, which is a pain) that it's too late to fix it now. The concerned forward-thinking developer moves on to (slightly) greener pastures to rinse and repeat and the project gets handed to an off-shore team who don't comprehend the difference between current behavior and specified behavior of a product - yet another bold cost-saving measure for pointy-hair to use to argue for his bonus.


Not spending time thinking about handling those, not debugging anything related to them, etc. The code might look almost the same, but there's always a cost to getting there. And the code in these middleboxes often smells.


If there are no unknown extensions in the wild for years, it can be.


Google, apparently


So Google is the gatekeeper for tech industry profits now?


If those companies can't achieve basic protocol detection, I'm all for someone shoving it down their throats. Why not Google?


Anybody that doesn't want to introduce crashing/fatal bugs that disrupt productivity? Skipping checks[1] and making assumptions about input[2] is an irresponsible disregard for basic security.

This is about basic programmer competence, not a time-consuming feature that might impact your development costs relative to your competitor. You are not going to make more profit by leaving out the "default:" case in your switch/case statements that skips parsing for unrecognized elements.

[1] https://archive.org/details/The_Science_of_Insecurity_

[2] https://media.ccc.de/v/31c3_-_5930_-_en_-_saal_6_-_201412291...


You can trust in basic programmer competence when there is a certification the programmer has to lose if he displays incompetence, as is done for other engineers and also doctors and lawyers and many more.

Until then, you have to make the financial incentives in the short and long term such that they lead to desirable behavior, e.g., producing non-barfing middleware in this case.


The problem isn't leaving out the 'default' case. The problem is coding it as

  default:
      return DROP_CONNECTION_AND_BAIL;


Yes, which is why I really like the idea of proactive enforcement with random expected-to-be-ignored tags/parameters. I'm arguing against the idea that leaving out the last part of this

    for (item = params->head; item; item = item->next) {
        switch (item->type) {
        case KNOWN_PARAM_TYPE_FOO:
            // do normal stuff
            break;

        /* ... etc ... */

        case KNOWN_PARAM_TYPE_BAR:
            // fall through - BAR explicitly uses default handling
        default:
            continue;  // skip unknown parameters
        }
    }
is evidence of incompetence, not a strategy that will "make more profit than your competitor".

Also, as the BAR constant suggests, you probably already have code that skips unrelated fields. While the difference in programmer time is almost always trivially small, sometimes it might be zero.


Unfortunately incompetence can sometimes lead to increased profits in the short term. Why else is there so much horrendous software out there?


"Basic programmer competence" is not something you can consistently expect from people in the industry. Be it a bad day, general carelessness, or business pressures - there are many reasons to cut corners.


If the standard says a message can be up to 4096 bytes long, but for 15 years the message in practice is always under 250 bytes, there are going to be some implementations that just use a fixed size buffer smaller than 4096 bytes.


I think that middleware boxes will still find some way to make our lives miserable next time we try to upgrade— this seems like another instance of "build an idiot-proof system, and the universe will just supply a better idiot"— but this is still a great idea. This is so good I want to start doing it with my company's internal stuff.


Company internal stuff generally isn't facing as hostile an environment as HTTPS.

The famous "Alice and Bob After Dinner Speech" mentions

"Now most people in Alice's position would give up. Not Alice. She has courage which can only be described as awesome. Against all odds, over a noisy telephone line, tapped by the tax authorities and the secret police, Alice will happily attempt, with someone she doesn't trust, whom she cannot hear clearly, and who is probably someone else, to fiddle her tax returns and to organize a coup d'etat, while at the same time minimizing the cost of the phone call.

A coding theorist is someone who doesn't think Alice is crazy."

HTTPS is like Alice. In cryptography a theoretical attacker is often given seemingly outrageous abilities, like they can send you huge numbers of arbitrary messages to see what happens, they can time everything, they can see messages you were sending and try sending other messages that are just a tiny bit different, they can collect your messages and re-send them later, and so on. In many systems a real attacker would struggle to pull these things off, but in HTTPS thanks to things like cookies and Javascript it's actually not difficult at all.

Your internal stuff almost certainly doesn't have arbitrary clients running code from arbitrary other participants like the Web does. It also almost certainly doesn't have a dedicated reliability team who can go change everything every six weeks to keep up. If you do such changes every six weeks for a few months, then get bored and stop, the last set rusts shut and you've gained nothing. Google is essentially promising their teams would undertake to carry on indefinitely.

Google essentially proposes an artificial Red Queen's Race, with the goal being to tire out middlebox vendors and/or their customers and have them choose to exit the race.


I hadn't heard of Red Queen's Race before, and your use of it in a somewhat novel way was awesome. That was 15 minutes well wasted.


A fun example (apparently inspired by a related TLS proposal):

https://community.letsencrypt.org/t/adding-random-entries-to...

(Let's Encrypt creates random protocol extensions on every connection in order to ensure that clients are tolerant of protocol extensions that they don't understand. Breaking changes to the protocol, on the other hand, will be served from a separate API endpoint.)


They also insert random entries to returned JSON, this is documented in the ACME spec.


You have the same kind of idea in the JDK: to be sure that user code does not rely on the iteration order of an immutable Set, the JDK uses a SALT [1] so the iteration order of the Set is different each time you start a new VM, allowing the JDK's code to be changed in the future.

[1] http://hg.openjdk.java.net/jdk/jdk/file/f36d08a3e700/src/jav...


The salt is mostly there to prevent attackers that control program input from exploiting the hashing algorithm to construct pathological worst-case datasets (e.g. ensuring every input hashes to the same bucket).


go maps work like that too.


Context:

1. So called "Security" companies (middlebox vendors) advise customers to write what are effectively firewall rules that bake ossification into their systems using, I kid you not, regular expressions

https://www.fidelissecurity.com/threatgeek/2018/02/exposing-...

This type of nonsense is why one of the optional TLS 1.3 features is certificate compression. It seems like a no-brainer to offer this for earlier versions, but it turns out that middleboxes snoop the certificate and make decisions about it so compressing it causes them to freak out. Why can we (hopefully) fix that with an optional extension in TLS 1.3? Because in TLS 1.3 now the certificate is encrypted, so the middleboxes can't see it in the first place.

2. In about 2016 when it originally looked like TLS 1.3 was almost finished, the middlebox vendors finally noticed something was happening and began yelling about how their products had "legitimate" ‡ uses that would be blocked by these improvements and it all needed re-thinking.

Fortunately (or perhaps inevitably) the middlebox vendors have no idea how the IETF works, so they spent a lot of effort on trying to "win votes" which should have anybody who was somehow unaware of this drama but involved with the IETF smiling since there are no votes and you can't win. They also, like typical business people, figured they could fly in to meetings in say, Singapore or London, and spin up at the meeting, but of course those meetings are just a temporary physical incarnation of the IETF, it exists all the time as mailing lists, so if you aren't following those lists you're basically always a kid who wandered into a room where people are having grown-up discussions you can't understand.

3. After a surprisingly long time the middlebox vendors got the hint and went to ETSI to make their own alternative. ETSI is a traditional SDO which is perfectly happy to work on a standard without any messy ethical considerations. It is also, like most traditional SDOs, closed door, so we have only limited visibility of what they're up to. So far it looks like TLS 1.2 (so, bad) but with the ability for an arbitrary number of middleboxes to interact with all the packets on path (so, worse).

It's OK though because under their ETSI proposal the user will "Consent" to this. If you've spent the last month mindlessly clicking through GDPR-inspired boxes on US web sites you regularly visit, or indeed if you're the new hire who has just this moment realised why the big boss decided he needed her with him on this trip, and is now calculating whether to say "No" and risk losing everything or to close her eyes and pretend this is happening to somebody else you have a very good idea what "Consent" means in this context.

‡ "Legitimate" is a word you use when it's important for people to accept that what you're doing is OK, without them thinking about what you were actually doing because they might throw up. "Bribe officials to ignore flagrant safety violations" makes you angry but "Legitimate facilitation payment" sounds very respectable.


Personally, I understood when the ClientHello was changed to deal with TLS version intolerant servers, but disliked the middlebox changes in the ServerHello.


[flagged]


This isn't "the standard mirroring whatever Google feels".


"In short, we plan to regularly mint new TLS versions (and likely other sensitive parameters such as extensions), roughly every six weeks matching Chrome’s release cycle."


Those are internal TLS versions used only by Chrome; they're not being promoted as a "standard".

I don't know if this is an instance of it, but there's an annoying phenomenon where people on message boards like to pretend that standards have near-statutory authority; that once a standard is issued, every vendor has to adhere to it no matter what their purpose.

Were that to be your argument here, you'd in effect be arguing that Google can't deviate from the TLS 1.3 standard even to help TLS standardization efforts.


I was trying to point out how middleware vendors' implementations of standards will always monkey with somebody else's client/server model, and that the idea of "well let's just add some features to this standard and expect things to keep working" is a fool's errand. They can monkey with the standards all they like, but the way they're doing it is doomed to failure.

Separate from that I was just kind of chuckling at how their proposal is literally "we're shipping out changes with Chrome so you guys can either get on the bandwagon or figure something else out".


Nobody else needs to get on the bandwagon. The whole point of the experiment is that standards-compliant software won't care about what Google is doing; the only things that should be impacted are middleboxes that violate the standard.


Interesting idea, but why inflict this churn on all Chrome users? Just make a standalone client that runs through all possible variations so that server developers can test against it.


Companies like Symantec implementing devices like Bluecoat proxies are notorious for not doing their due diligence on their network stack. This ensures that it’s actually tested against devices and has the support of users to be fixed rather than a few developers who won’t be able to move the needle. At least that’s how I read this


I believe the point is to make it sufficiently widespread that an ignorant server developer can't ignore it.

(edit: or at least will start getting bug reports as soon as a Chrome/Chromium user enters the population)


We had tools for years that supported TLS 1.1 and 1.2 while large amounts of the web broke with them; merely having such tools doesn't make server developers care.


It's not about tooling or clients, it's about creating widespread usage by real users so that proper support can no longer be avoided or mishandled.


I'm surprised that nobody is concerned that Google (or whoever) is spamming the Internet to make everyone do what Google / TLS devs want them to do.

First, it's coercive. Second, it's in principle spam, even if the cost is affordable in this case. Third, it sets three bad precedents:

* Google can do whatever they want

* Hijacking user computers for ulterior purposes (by utilizing Chrome) is ok

* Spam and coercion are ok. And if you argue it's ok in this instance, you aren't thinking past your nose.

Imagine if Symantec or Microsoft tried it; how would people react? Well you don't have to wonder, because they will.


You have to really want an anti-Google story to find it in this, the bit where Google is using its market position on the server and clientside to help make sure vendors don't break the TLS standard.


What about the second precedent in the GP, and what about my point, if you argue it's ok in this instance ...?


>what about my point, if you argue it's ok in this instance

in this instance, they're doing a good thing. are you arguing that google shouldn't ever do anything, no matter how good it is, because if google is allowed to do things then eventually they might do bad things? that seems absurdly defeatist.


Would you argue against Chrome sending out random LLMNR requests at boot to detect attackers on the network? They aren't "hijacking" computers of users who voluntarily installed chrome - in part due to the promise of a safer browsing experience, which this is all about.


> Would you argue against Chrome sending out random LLMNR requests at boot to detect attackers on the network?

Good question.

No, I wouldn't, because it serves that user directly. Google could argue that the TLS 'GREASE' benefits users, but that benefit, and let's assume it exists, is very indirect: That user won't see any benefit that day, that week, and maybe never; Google is using their users' computers to advocate something that Google thinks is a good idea. To demonstrate GREASE can be taken too far, imagine the extreme case where Google has Chrome send messages to U.S. Congresspeople advocating against some policy - Google could argue that they believe it's in the users' interests, but that would be highly disingenuous. Again, that's extreme and not going to happen.

More realistically, imagine Microsoft had Windows 10 insert something in network traffic to compel compatibility with some proprietary technology of theirs. Imagine vendors disagreed about some standard, and different products used GREASE to compete for different outcomes.

Where do you draw the line? I think GREASE violates end-user control or autonomy. Users become pawns in a standards competition. It's very arrogant of Google, and support of it betrays an arrogant perspective here at HN: users are pawns, and their computers are ours to use as we see fit. (It's also intentionally bad engineering, which makes me very uncomfortable.)


I disagree with the idea that users won't see benefits - the benefit is that upgrades to a critical protocol can be rolled out more smoothly in the face of non-compliant middleboxes. This benefit will be significant for end users.

Sending messages to Congress would be very different - that's squarely outside of the purview of a browser. TLS is not.

As for Microsoft having Windows 10 insert things to support a proprietary technology, again, this is a flawed analogy - it's simply not what's happening here.

Why draw the line if we know that we haven't crossed it? You have put forth two cases that seem obviously on the other side. This case seems obviously on the 'we good' side.

Would you say A/B testing means 'users are pawns'? I really disagree with that. Again, this is all in the interest of end users who have voluntarily installed this software, at least in part because of the secure reputation.


> That user won't see any benefit that day, that week, and maybe never;

I will benefit from this even though I don't use Chrome (or other Google product/services). Reducing traffic manipulation by middleboxes and eliminating passive traffic snooping are important goals, and preserving forward compatibility should increase interoperability with future versions. I benefit from the internet moving back towards the end-to-end principle (smart hosts connected to a dumb network that only routes packets).

The additional bandwidth cost is trivial and by definition will not affect anything that follows the spec.
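To make "trivial bandwidth cost" concrete, here is a sketch of the GREASE mechanism as specified in RFC 8701: a client mixes one of sixteen reserved codepoints into its advertised cipher suites, costing two bytes on the wire. The `add_grease` helper and the example suite list are illustrative, not any real TLS library's API.

```python
import random

# The 16 reserved GREASE values from RFC 8701: both bytes are equal,
# and the low nibble of each byte is 0xA (0x0A0A, 0x1A1A, ..., 0xFAFA).
GREASE_VALUES = [(0x10 * i + 0x0A) * 0x101 for i in range(16)]

def add_grease(cipher_suites):
    """Prepend one random GREASE codepoint to a list of cipher suites.

    A spec-compliant server must ignore the unknown value and negotiate
    with one of the real suites; a broken implementation fails today,
    visibly, instead of failing later when a real codepoint is assigned.
    """
    return [random.choice(GREASE_VALUES)] + list(cipher_suites)

# Each suite is 2 bytes on the wire, so the total overhead is 2 bytes.
hello_suites = add_grease([0x1301, 0x1302, 0x1303])  # the TLS 1.3 suites
```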

> imagine the extreme case ... Again, that's extreme and not going to happen.

Speculating about hypothetical problems you admit are not going to happen is a distraction and waste of time. It's possible to make up extreme hypothetical problems afflicting any plan.

> imagine Microsoft had Windows 10 insert something in network traffic to compel compatibility

I spent a lot of time in the late-90s/early-00s fighting against Microsoft (and others) trying to embrace, extend, and extinguish[1] open standards. That strategy relies on extending an existing protocol with features that are not supported by existing implementations. The goal is to build a wall around a public garden by creating interoperability problems.

The only people that might be affected are middleboxes that want to manipulate traffic (good; that was the goal) and incompetent implementations of the spec. The latter probably need to be fixed (or replaced) anyway, because failing to follow a security-critical spec is a sign that the software probably has other serious bugs and vulnerabilities.

I have spent decades fighting for Free Software and against proprietary control of data formats, which are usually a form of rent seeking. Encouraging implementations to follow the spec and ignore unknown elements is unrelated to those concerns.

> different products used GREASE to compete for different outcomes.

How, precisely, is that supposed to work when (according to the RFC[2]), "Servers MUST correctly ignore unknown values in a ClientHello and attempt to negotiate with one of the remaining parameters."
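The quoted requirement leaves a compliant server no room to act on a GREASE value: it can only skip it. A minimal sketch of that rule, using a hypothetical codepoint table (`SUPPORTED` and the suite names are assumptions for illustration, not a real implementation):

```python
# Suites this hypothetical server knows how to speak.
SUPPORTED = {
    0x1301: "TLS_AES_128_GCM_SHA256",
    0x1302: "TLS_AES_256_GCM_SHA384",
}

def negotiate(client_suites):
    """Ignore unknown values; pick the first mutually supported suite.

    GREASE codepoints, and any genuinely new future codepoints, fall
    through the `if` and are simply skipped, per the MUST in the draft.
    """
    for suite in client_suites:
        if suite in SUPPORTED:
            return SUPPORTED[suite]
    raise ValueError("no common cipher suite")

# A GREASE value (0x5A5A) mixed in must not break negotiation:
result = negotiate([0x5A5A, 0x1301, 0x1302])
```

Because the server's only lawful move is to ignore what it doesn't recognize, there is no channel through which competing products could use GREASE to steer an outcome.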

> It's also intentionally bad engineering

This is very good engineering. The software industry has been negligent in learning good engineering practices like the importance of designing in tolerance[3] and failing safely.

[1] https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...

[2] https://tools.ietf.org/html/draft-ietf-tls-grease-01#section...

[3] https://www.youtube.com/watch?v=9AYuXNz_0Fc


It doesn't set the precedent that Google can do whatever they want. Either they already could because of their size and that's already a problem the world should solve even if they haven't used the power yet, or they can only do this because people think it's a reasonable idea and don't object.


Products putting random data into a few fields is neither coercion nor spam.


I'm very concerned by it.

The day has come when a programmer can insert additional data as "optional fields" in TLS that can't be inspected without breaking spec. Why can't they be inspected? Because they're "unsupported".

Sounds like a great plan. Definitely a great plan. What could go wrong with clients writing out more data than they need to? What could go wrong with clients writing out "garbage data" specifically to make sure that random unsupported data is supported and "ignored"? Nothing wrong at all! /s



