It includes the equivalent of fail2ban, manages iptables, watches log files, can email you about failed logins and processes gone mad, and does a few other things, all through a very simple config file.
It's that simplicity of the config file that I like: it means I can hand it to someone and know that they're unlikely to break something due to misconfiguration or failure to actually apply something.
This is on top of obvious things like running ssh on a non-default port, turning off password auth and restricting ssh to named users only (never root).
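For anyone who hasn't done the ssh side of that before, a minimal sshd_config sketch (port number and usernames are just placeholders):

    # /etc/ssh/sshd_config
    Port 2222
    PermitRootLogin no
    PasswordAuthentication no
    AllowUsers alice bob

Reload sshd afterwards (service ssh reload or systemctl reload sshd, depending on distro) and keep an existing session open while you test a new connection, so you don't lock yourself out.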
The install script is easy enough: http://www.configserver.com/free/csf/install.txt. So whatever work I'm doing for someone (hello Wordpress), this can be my simple security solution, which at least means I'm sure that I haven't left every window open.
To reduce noise in the logs so that I can focus on the more determined attackers rather than script kiddies just hitting the default ports on every IP.
It does nothing for security, but does help to reduce noise which in turn helps to reduce the time it takes to manage this stuff.
Always run SSH on a privileged port. An attacker could potentially run a rogue SSH instance on a non-privileged port and capture logins for other accounts.
If it's running on a privileged port (below 1024), only root could have bound it, so it can be trusted at root level.
I know. I'm taking the risk with it for a few more days because I was already in the process of moving those apps to fresh boxes (the other app is http://mondrian.io). I'd rather not take them down because they get considerable traffic. At this point I'm just keeping the server alive until I can fully replace it - not using it to do anything new.
I did immediately change www's password again and de-privilege the ssh key that box had been equipped with. In fact I've recently started always ssh-ing into servers with the -A flag so I don't have to give them keys at all.
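For anyone who hasn't used it, the -A workflow is roughly this (host name and key path are made up):

    # locally: load your key into an agent once
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_rsa

    # -A forwards the agent, so the server never needs a private key of its own
    ssh -A deploy@app01.example.com
    # from app01 you can ssh or git-pull onward, authenticating via your local agent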
So far I don't see any more mysterious activity. Thanks for the word of caution.
Another word of caution: if someone can gain root access on a machine you log into with the -A flag, they can "borrow" your authentication agent and use your keys. It's only borrowing because the connection to the agent disappears once you log off. But by then they could already have added new keys for themselves on your other servers.
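Concretely, the "borrowing" is just root on the compromised box pointing at your forwarded agent socket, something like this (socket path is illustrative):

    # as root on the box you forwarded your agent to:
    ls /tmp/ssh-*/agent.*
    # reuse the victim's agent to authenticate to their other machines
    SSH_AUTH_SOCK=/tmp/ssh-AbCdEf/agent.12345 ssh victim@other-server.example.com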
In this case, if I read the article correctly, only a single account was compromised. I'm assuming the www user doesn't have sudo privileges (though I might be wrong as it shouldn't have a password either and it clearly did..).
Sure, there might also have been a local privilege escalation vulnerability, so rootkits are definitely something to check for. Depending on the situation, re-installing might be less work, so then it'd make sense.
But especially if you're not the only one using the machine, having to reinstall the OS every time a single user fucks up and has 'password' for a password is going to get really tedious really quickly.
> But especially if you're not the only one using the machine, having to reinstall the OS every time a single user fucks up and has 'password' for a password is going to get really tedious really quickly.
This is what automated installs are for. Kill the box, fire up a new one, and run the provisioning scripts. As a bonus you've got actual documentation showing how the box should be set up.
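A minimal sketch of that, assuming a Debian-ish box and plain shell as the provisioning tool (packages, user name and paths are just examples; Ansible, Puppet, cloud-init etc. all amount to the same idea):

    #!/bin/sh
    # provision.sh -- everything needed to rebuild this box from a clean image
    set -eu

    apt-get update
    apt-get install -y nginx fail2ban

    # lock down sshd: keys only, no root logins
    sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
    service ssh reload

    # recreate the deploy user and its authorized key
    id deploy >/dev/null 2>&1 || adduser --disabled-password --gecos "" deploy
    install -d -m 700 -o deploy -g deploy /home/deploy/.ssh
    install -m 600 -o deploy -g deploy deploy_key.pub /home/deploy/.ssh/authorized_keys

Kill the box, run this against a fresh image, and you're back where you should be.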
If an attacker can get elevated privileges through a user account, the users themselves can do so as well.
And you shouldn't trust them either, so you need to assume they have if they could have. That means definitely reinstall when you find out you've been vulnerable to e.g. a local root exploit, regardless of whether any user accounts have been hacked.
I made sure. www has no privileges. It can't read root's home directory or touch any of its processes. If it could, I'm sure I would be busy cleaning up a much larger mess instead of blogging about it.
I'm not trying to imply you didn't check thoroughly - I don't know the whole story or your background. But for me personally I wouldn't take the risk - even if the compromised account was 'nobody'.
I certainly don't know enough about computer forensics to make sure that a system hasn't been altered (and I dissect malware for recreational purposes). So unless it's a system that hosts something completely unimportant, I'd take a free weekend to wipe + reinstall the stuff.
Just recently a Linux local privilege escalation was discovered[0]. So who knows what the hacked www user really had access to.
For anyone wondering how to properly troubleshoot in this manner without breaking things:
Run 'sudo su www -s /bin/bash'. Using '-s /bin/bash' will override the usual nologin shell, and running 'su' as root means a passwordless account can be su'd to.
This will allow you to try accessing files and directories as if you had the user 'www's privileges without having to make the 'www' account regularly usable.
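For example, a quick check as www might look like this (purely illustrative):

    sudo su www -s /bin/bash
    # now running with www's privileges; these should all be denied
    ls -la /root
    cat /etc/shadow
    # and see whether the attacker left anything scheduled
    crontab -l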
You should never set a real shell or password for any accounts that a real user will not be using.
Probably an administrator troubleshooting. The www user should also have a nologin shell by default, which can't be changed by a normal user via an uploaded script.
Maybe default settings are production settings after all!
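To check or enforce that (nologin lives at /usr/sbin/nologin on Debian/Ubuntu, /sbin/nologin on some other distros):

    # the shell field should be nologin or /bin/false
    grep '^www:' /etc/passwd
    # lock it down if it isn't
    sudo usermod -s /usr/sbin/nologin www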
Also, heads up: don't ever run Elasticsearch open to the world in iptables. This is actually very common because users want to use Kibana (http://www.elasticsearch.org/overview/kibana/). Dynamic scripting is allowed by default (< version 1.2.0) and can be exploited easily. It is especially nasty if you run Elasticsearch as root (also very common).
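If you do need Kibana, bind Elasticsearch to localhost and firewall the port instead; roughly (the whitelisted IP is a placeholder, and script.disable_dynamic applies to the pre-1.2 versions mentioned above):

    # /etc/elasticsearch/elasticsearch.yml
    network.host: 127.0.0.1
    script.disable_dynamic: true

    # or, if another host genuinely needs to reach it, whitelist it and drop the rest
    iptables -A INPUT -p tcp --dport 9200 -s 10.0.0.5 -j ACCEPT
    iptables -A INPUT -p tcp --dport 9200 -j DROP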
yeah, "special" people at my work did this. Along with installing / automatically starting elasticsearch on every ami so that every box could be rooted, not just the search cluster. And putting the fucking admin pam on the boxes instead of learning about agent forwarding. And not securing processes in any way -- why would you use chroot or docker, that would throw up obstacles in front of the nice chinese people who want to use your ec2 boxes.
I once had a box "bot-netted" by some automatic scanner after I set up postgres on it, being in a hurry after another box crashed unexpectedly. It is stunningly idiotic to me that SSH seems to be enabled by default for all users (at least on Ubuntu server).
Or more importantly, once in they insert a public key into ~/.ssh/authorized_keys, so if you naively just disable password access they'll still be able to get back in. Change the password AND remove any keys in ~/.ssh. Also, a debsums or rpm -qV check is a good idea. A complete re-install is my preferred method though.
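Roughly, the cleanup pass looks like this (debsums is Debian/Ubuntu, rpm -Va is the Red Hat equivalent; adjust for the account in question):

    passwd www                # or lock it outright: passwd -l www
    ls -la ~www/.ssh/         # look for authorized_keys you didn't put there
    debsums -c                # list installed files that no longer match their packages
    rpm -Va                   # same idea on RPM-based systems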
I've been considering moving an app off of Heroku and onto Digital Ocean so I could learn the ropes of administering a Linux server, but I feel there's so much I don't know that I should know if I wanted to effectively manage my own server.
If I got hacked with m64.pl, I'd get as far as running top and killing the process. His immediate assumption was that someone got that file onto his server, and so he went on to check the logs and found all the failed login attempts. I would have wondered what the file was, googled around and sat around confused.
Luckily, deploying a new instance on Digital Ocean sounds simple enough, so I would probably just do that, but that approach would leave me with a sense that I don't know what I'm doing at all. There was a time when my solution to problems on Windows machines was to restart and/or format; I don't do that anymore, but I still meet people who do. I'd rather not start from square one when I make the leap to Linux.
What are the things one should know to legitimately say they can manage their own Linux server? I could try to just wing it and solve every problem as I encounter it, but for right now I'd just like to identify the gaps in my knowledge.
The m64.pl executable seems to be a vanilla version of cpuminer (minerd). By default it connects to http://127.0.0.1:9332 and mines using the scrypt algorithm. It could have been started with different parameters, or the port was tunneled to a pool. Would be interesting to know the IP.
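For reference, a stock cpuminer invocation looks something like this (pool URL and worker name are made up), which is why the -o/-u parameters it was actually started with would be the interesting bit:

    ./minerd -a scrypt -o http://pool.example.com:3333 -u worker.1 -p x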
Off-topic: does anyone know which font he uses in the terminal? Looks awesome.
Assuming you're referring to the font of the blog's text...
Firefox: Right click, inspect element, click the "fonts" tab on the right.
Chrome: Right click, inspect element, look at "Computed" styles on the right, find Rendered Fonts.
Ironically, the answer is that he's not using a font. It shows up as Times New Roman in my Chrome and DejaVu Serif in my Firefox because those are my default system fonts, not because he specified them. If you look at the source of the webpage, you'll see he has zero stylesheet links; all his styles are inline in <style> blocks. So searching for "font" in that one source page is enough to see that he only styles code blocks; the rest is default.
If you're referring to some other font, I have no clue, sorry!
While someone could set the password on the www user manually, I get the impression that the author is the only admin and would remember that. Since the only authenticated user is www, it's likely someone just exploited his application / webserver and set the password on www for later ssh logins.
The author will soon know that if the situation repeats without the ssh access.
The problem with fail2ban is that you have another potentially vulnerable program that an external user can feed information to (although in a very restricted way).
I have always been surprised that fail2ban is so popular, since iptables can do rate limiting, etc. So, it's easy to block most attacks with the in-kernel firewall:
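Something along these lines, using the recent module (thresholds are arbitrary; you still need your normal ACCEPT rule for port 22 after these):

    # count new connections to port 22 per source IP, drop anyone over 4 in 60 seconds
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --rcheck --seconds 60 --hitcount 4 --name SSH -j DROP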
My understanding of fail2ban is that it watches logs and then creates iptables rules to reject IPs, rather than dropping them itself. Since it blocks after a number of attempts it does rate limiting of a sort, and automates the process. I agree that using the firewall without an intermediary is more powerful, and Linux admins should know how to use it.
I know perfectly well what it does, and it does not change my point: extra parsing is extra scope for vulnerabilities. You also don't address my main point: you can use iptables rate limiting to block brute force SSH attempts. I've used this for years and it works perfectly fine. Connect too often within a certain timespan and you get DROP'ed.
But you probably thought that I was suggesting to block IPs by hand. I wasn't.
No, fail2ban watches logs for repeatedly failed login attempts, and then blackholes given IPs using firewall rules. I prefer denyhosts, which is more ssh-specific.
You should also run logwatch - it will show you all the attempts that keep happening on any public servers running ssh.
I once had to look at a compromised box that was then being used to brute-force other servers; it contained a file with 100-200 passwords for other hosts it had brute-forced.
A couple of funny things: the intruder changing the www password, and running a mining script that is going to generate basically zero satoshi before getting shut down.
I guess change the password so someone else can't break in after you?
I can't explain the mining script. Maybe they want to alert you the box is owned, as a means to find only targets who don't wipe their system after being infected? What's the point...?
Seeing a bruteforce in the logs isn't at all uncommon. I see it every time I have a new server. Turn off password authentication and disable root login via SSH. That way, you're golden!
I have an ssh-access group, then the ssh server config is limited to that group. It makes it very easy to keep track of, and also works pretty well for maintenance with ansible.
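In config terms that's just something like (group and user names are whatever you use):

    groupadd ssh-access
    usermod -aG ssh-access alice

    # /etc/ssh/sshd_config
    AllowGroups ssh-access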
Other options: Run it on a different port, use certificates or YubiKey for authentication, firewall it to your home computer, use Fail2ban or Denyhosts.
Running it on a different port is only protection against random attacks (sweeping IP addresses looking for common passwords). This seems much more like a targeted attack: someone took the time and effort to try to get in. A small port scan isn't hard, and will get around changing the port. Of course, the other things you've listed are solutions for this.
I was just posting a single option to complement the other comments.