
What's the reason for using a firewall?

Assuming that services which shouldn't be accessible from the outside only listen on localhost rather than on the network (e.g. MySQL on a LAMP stack), isn't that sufficient?

(Honest question, I don't have much experience with sysadmin work.)




Ideally, you do both: bind your services to the correct interfaces and ports, and set firewall rules as a safety net. This prevents users (or exploits that spawn their own processes) from listening on other ports (well, they can listen, but it will be pointless), and if a package update comes along that unexpectedly changes a service's listener configuration, you'll still be protected. It also protects you from buggy or broken services that offer bind/port options in their configuration but end up listening on all interfaces or on random ports anyway.
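
A minimal sketch of that belt-and-braces setup; paths and ports assume a stock Debian-ish LAMP box, so adjust to taste:

    # /etc/mysql/mysql.conf.d/mysqld.cnf -- MySQL listens on loopback only
    [mysqld]
    bind-address = 127.0.0.1

    # /etc/nftables.conf -- safety net: default-drop inbound, allow only what the box serves
    table inet filter {
      chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept
        ct state established,related accept
        tcp dport { 22, 80, 443 } accept    # SSH plus the web server
      }
    }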


So there are a couple of reasons to add a firewall.

1. If an attacker gets unprivileged access, a properly configured firewall can slow them down in getting new tools onto the system or adding a shell.

2. If a configuration error results in a service accidentally being started on a network-accessible interface, the firewall gives you a bit of defence-in-depth protection against unauthorised connections to that service.

3. You can also use it for logging activity to feed into other systems; a rough sketch of this is below.
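
A minimal nftables illustration of 2 and 3, assuming an inet filter table with a default-drop input chain like the one sketched earlier in the thread; the prefix string and rate limit are arbitrary choices:

    # log (rate-limited) whatever is about to fall through to the default drop
    nft add rule inet filter input limit rate 10/minute counter log prefix '"input-drop: "'

    # the log lines land in the kernel log, where other tooling can pick them up
    journalctl -k --grep 'input-drop:'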


>1. If an attacker gets unprivileged access, a properly configured firewall can slow them down in getting new tools onto the system or adding a shell.

That only works if you have an outbound firewall, which is very onerous: you'd either have to whitelist destinations (package repos, but what if you want to validate arbitrary certificates' CRLs?) or whitelist applications (while being careful not to allow wget and the like).
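
For concreteness, a sketch of what that could look like with nftables; the 192.0.2.x addresses are placeholders and _apt is Debian's package-manager user, so treat this purely as an illustration of the maintenance burden:

    # hypothetical addresses for the mirrors / CRL endpoints you decide to allow
    define allowed_dst = { 192.0.2.10, 192.0.2.11 }

    table inet egress {
      chain output {
        type filter hook output priority 0; policy drop;
        oif "lo" accept
        ct state established,related accept
        udp dport 53 accept                                 # DNS
        ip daddr $allowed_dst tcp dport { 80, 443 } accept  # whitelisted destinations
        meta skuid "_apt" tcp dport { 80, 443 } accept      # "whitelist applications", approximated per-uid
      }
    }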


1) An attacker getting access will only be slowed down/detected if your firewall filters outgoing traffic, which practically no one does because of the inconvenience and maintenance costs. You also need to lock down outgoing traffic on ports 80/443, which is how many intrusions download their payloads and call home for instructions. If you do accept the cost and use an outgoing filter, however, it's quite effective at detecting and stopping attacks, and it is something I recommend for defending assets with high security demands or high risk.

2) As for configuration errors, it depends on what kind of practices you use as a sysadmin. Do you download and run random scripts found on blogs, use experimental versions, and skip reading the manuals? Or are you someone who only runs Debian stable, has verbose settings in aptitude and reads patch notes? It's been a long time (almost 20 years) since I last saw a program that allowed vulnerable interfaces to be accessible on the network without significant warnings in the manual, comments in the config file and the readme. Projects and package maintainers have stepped up their security practices to the point that, by the time something reaches stable, it should be mature enough that shooting yourself in the foot by accident is difficult.


Many services listen on all available interfaces in their default configuration. Many also auto-start right after installation. So an additional layer of protection won't hurt.
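
Worth checking after every install; something like this shows what actually ended up listening:

    # list listening TCP/UDP sockets with the owning process;
    # anything bound to 0.0.0.0 or [::] is reachable from the network
    ss -tulpn

    # roughly the same on older boxes without ss
    netstat -tulpn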


Firewalls are required for compliance in regulated IT environments, which are common in the Fortune 500. Sometimes an IDS is necessary to supplement them.


Suppose someone gets access to the box. They shouldn't be able to curlbash http://evilscript.sh into the system.

So you really want to lock down all outgoing and all incoming traffic except for very specific channels and protocols to controlled endpoints.
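
Roughly, at the shell, that could look like the sketch below; 192.0.2.10 stands in for whatever controlled endpoint you actually allow, and a matching default-drop input chain handles the incoming side:

    nft add table inet filter
    nft add chain inet filter output '{ type filter hook output priority 0; policy drop; }'
    nft add rule inet filter output oif lo accept
    nft add rule inet filter output ct state established,related accept
    nft add rule inet filter output ip daddr 192.0.2.10 tcp dport 443 accept   # one controlled endpoint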


Layered security. You always add redundant security so that if one layer fails you have a fallback. It's the better-safe-than-sorry version of infosec.


Honest question, why does nftables get so little love vs iptables?
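
For anyone who hasn't compared them, here's the same allow rule in both syntaxes (the nft version assumes an inet filter table with an input chain already exists):

    # iptables: one rule per address family
    iptables  -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
    ip6tables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT

    # nftables: one rule in an inet table covers both
    nft add rule inet filter input tcp dport 443 ct state new accept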




