M64.pl – how I learned that the default settings are not production settings (artur.co)
80 points by artursapek on June 9, 2014 | 63 comments


On the cheap Linode instances I set up for others, I always install this: http://www.configserver.com/cp/csf.html

It includes the equivalent of fail2ban, manages iptables, watches log files, can email you about failed logins and runaway processes, and handles a few other things through a very simple config file.

It's the simplicity of that config file that I like: it means I can hand it to someone and know that they're unlikely to break something through misconfiguration or failure to actually apply a change.

This is on top of obvious things like running ssh on a non-default port, turning off password auth and restricting ssh to named users only (never root).
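
For anyone who hasn't done this before, all three of those are plain sshd_config directives. A rough sketch (the port number and usernames are just placeholders):

    # /etc/ssh/sshd_config
    Port 2222
    PermitRootLogin no
    PasswordAuthentication no
    AllowUsers alice bob

Reload sshd afterwards and keep an existing session open while you test a new connection, so you can't lock yourself out.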

The install script is easy enough: http://www.configserver.com/free/csf/install.txt so it's really the case that I can do whatever work I'm doing for someone (hello Wordpress) and this can be my simple security solution which at least means I'm sure that I haven't left every window open.


If you disable logging in with the root user, ...do you allow sudo for another user? Then, what's the point?


Because in order to do that, you would need to provide a password.


Why do you run ssh on a non-default port?


To reduce noise in the logs so that I can focus on the more determined attackers rather than script kiddies just hitting the default ports on every IP.

It does nothing for security, but does help to reduce noise which in turn helps to reduce the time it takes to manage this stuff.


Brute-forcers only hit the default port. If there is no response on the default port then they just move on to greener servers.

Moving off the default port is a good way to avoid your logs getting filled up with failed login attempt messages.


Though something like fwknop would be a nicer way of resolving that.


Why not? You can always create an entry in .ssh/config for the host with the port if needed.

    Host example.com
      IdentityFile ~/.ssh/id_rsa_example
      User foobar
      Hostname 127.0.0.1
      Port 1234


Always run SSH on a privileged port. An attacker could potentially run a rogue SSH instance on a non-privileged port and capture logins for other accounts.

If you run it on a privileged port (below 1024), only a root-level process could have bound it, so it can be trusted to root level.


I hope you have killed and reinstalled the box because once a system has been compromised you can't ever trust it again.


I'm always amazed at the number of people who simply re-use servers that have been owned. It explains a lot.


I know. I'm taking the risk with it for a few more days because I was already in the process of moving those apps to fresh boxes (the other app is http://mondrian.io). I'd rather not take them down because they get considerable traffic. At this point I'm just keeping the server alive until I can fully replace it - not using it to do anything new.

I did immediately change www's password again and de-privilege the ssh key that box had been equipped with. In fact, I've recently started always ssh-ing into servers with the -A flag so I don't have to give them keys at all.
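
(For anyone unfamiliar with it, -A is agent forwarding; the equivalent in ~/.ssh/config would be something like the following, where the host name is just an example:)

    Host app.example.com
      ForwardAgent yes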

So far I don't see any more mysterious activity. Thanks for the word of caution.


Another word of caution: if someone can gain root access on a machine you log into with the -A flag, they can "borrow" your authentication agent and use your keys. It's only borrowing because the connection to the agent disappears once you log off. But by then they could already have added new keys for themselves on your other servers.


In this case, if I read the article correctly, only a single account was compromised. I'm assuming the www user doesn't have sudo privileges (though I might be wrong, as it shouldn't have a password either and it clearly did...).

Sure there might have also been a local privilege escalation vulnerability, so rootkits are definitely something to check for. Depending on the situation, re-installing might be less work, so then it'd make sense.

But especially if you're not the only one using the machine, having to reinstall the OS every time a single user fucks up and has 'password' for a password is going to get really tedious really quickly.


> But especially if you're not the only one using the machine, having to reinstall the OS every time a single user fucks up and has 'password' for a password is going to get really tedious really quickly.

This is what automated installs are for. Kill the box, fire up a new one, and run the provisioning scripts. As a bonus you've got actual documentation showing how the box should be set up.


If an attacker can get elevated privileges through a user account, the users themselves can do so as well.

And you shouldn't trust them either, so you need to assume they have if they could have. That means you should definitely reinstall when you find out you've been vulnerable to e.g. a local root exploit, regardless of which user accounts have been hacked.


I made sure. www has no privileges. It can't read root's home directory or touch any of its processes. If it could, I'm sure I would be busy cleaning up a much larger mess instead of blogging about it.


I'm not trying to imply you didn't check thoroughly because I don't know the whole story and your background. But for me personally I wouldn't take the risk - even if the compromised account was 'nobody'.

I certainly don't know enough about computer forensics to make sure that a system hasn't been altered (and I dissect malware for recreational purposes). So unless it's a system that hosts something completely unimportant I'd take the weekend free to wipe + reinstall the stuff.

Just recently, a Linux local privilege escalation was discovered [0]. So who knows what the hacked www user really had access to.

[0] http://seclists.org/oss-sec/2014/q2/467


I don't think the www user has a password in any distro, by default.

Meaning one of his administrators must have set one (likely for temporary troubleshooting), kicking off the whole issue...

edit: if SSH access was indeed the first cause. Running any upload-and-run script as `www` would let them set the password themselves, I think.


For anyone wondering how to properly troubleshoot in this manner without breaking things:

Run 'sudo su www -s /bin/bash'. Using '-s /bin/bash' will override the usual nologin shell, and running 'su' as root means even a passwordless account can be su'd to.

This will allow you to try accessing files and directories with the 'www' user's privileges, without having to make the 'www' account regularly usable.
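
A quick sketch of such a session (the paths here are just examples):

    # become www with a usable shell, even though the account has
    # no password and a nologin shell
    sudo su www -s /bin/bash
    # now poke around with www's privileges, e.g.
    ls -l /var/www/
    cat /var/www/app/config.php
    exit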

You should never set a real shell or password for any accounts that a real user will not be using.


Probably an administrator troubleshooting, www user should also have a nologin shell by default which cannot be changed by a normal user via an uploaded script.

Maybe default settings are production settings after all!


Disabling password logins is good, but there are other steps you could/should take. These are all pretty simple: https://library.linode.com/securing-your-server

Also, I would guess the attack is fully automated.


I'm scared noobs are going to blindly turn off passwords in their SSH configs without correctly setting up keys.


Linode has "console" access through the admin panel. But yes, good point. One should follow the linked document's steps in order.


That's a legitimate fear. I added a few words pointing that out in the takeaway section.


Yeah, probably. But mistakes are how we learn.


Also, heads up: don't ever run Elasticsearch open to the world in iptables. This is actually very common because users want to use Kibana (http://www.elasticsearch.org/overview/kibana/). Dynamic scripting is allowed by default (< version 1.2.0) and can be exploited easily. It is especially nasty if you run Elasticsearch as root (also very common).
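
If you do have to run a pre-1.2 Elasticsearch, a couple of lines in elasticsearch.yml go a long way (a sketch; check your version's docs):

    # elasticsearch.yml
    script.disable_dynamic: true   # turn off dynamic scripting
    network.host: 127.0.0.1        # listen on localhost only

Then put an authenticated reverse proxy in front of port 9200 for Kibana instead of exposing Elasticsearch directly, and run the process as an unprivileged user.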


yeah, "special" people at my work did this. Along with installing / automatically starting elasticsearch on every ami so that every box could be rooted, not just the search cluster. And putting the fucking admin pam on the boxes instead of learning about agent forwarding. And not securing processes in any way -- why would you use chroot or docker, that would throw up obstacles in front of the nice chinese people who want to use your ec2 boxes.

it was a very long week.


I once had a box "bot-netted" by some automatic scanner after I set up postgres on it, being in a hurry after another box crashed unexpectedly. It is stunningly idiotic to me that SSH seems to be enabled by default for all users (at least on Ubuntu server).


All users with a password set. So 'setting the password' has perhaps an unclear side effect.

I guess if you disable password logins, someone would need to get a pubkey into a user's ~/.ssh directory.


Or more importantly: once in, they insert a public key in there, so if you naively disable password access they'll still be able to get back in. Change the password AND remove any keys in ~/.ssh. Also, a debsums or rpm -qV check is a good idea. A complete re-install is my preferred method though.
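
For reference, those verification checks look roughly like this, depending on your distro (a sketch, not a complete forensic checklist):

    debsums -c     # Debian/Ubuntu: list installed files whose checksums changed
    rpm -Va        # RPM-based: verify all installed packages against the rpm db

Neither will catch a kernel-level rootkit, which is another reason the full re-install is the safer call.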


I've been considering moving an app off of Heroku and onto Digital Ocean so I could learn the ropes of administering a Linux server, but I feel there's so much I don't know that I should know if I want to effectively manage my own server.

If I got hacked with m64.pl, I'd get as far as running top and killing the process. His immediate assumption was that someone got that file onto his server, so he went on to check the logs and found all the failed login attempts. I would have wondered what the file was, googled around and sat around confused.

Luckily, deploying a new instance on Digital Ocean sounds simple enough, so I would probably just do that, but that approach would leave me with a sense that I don't know what I'm doing at all. There was a time when my solution to problems on Windows machines was to restart and/or format; I don't do that anymore, but I still meet people who do. I'd rather not start from square one when I make the leap to Linux.

What are the things one should know to legitimately say they can manage their own Linux server? I could try to just wing it and solve every problem as I encounter it, but for right now I'd just like to identify the gaps in my knowledge.


Beware of this: http://unix.stackexchange.com/a/8886/11172 To fully lock down SSH you have to check PAM config as well.
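
In other words (as I read that answer), 'PasswordAuthentication no' alone isn't enough if sshd can still hand authentication off to PAM via keyboard-interactive. Something like this in sshd_config closes that hole:

    PasswordAuthentication no
    ChallengeResponseAuthentication no
    # UsePAM yes can stay for account/session handling,
    # as long as the two lines above are set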

My 2 cents about defending against scripters:

* SSH: non-standard port, no passwords, special user-group who is allowed to login.

* WWW: redirect everything to https. For some reason bot attacks dropped from hundreds to none overnight. :D


The m64.pl executable seems to be a vanilla version of cpuminerd. By default it connects to http://127.0.0.1:9332 and mines using the scrypt algorithm. It could have been started with different parameters, or the port was tunneled to a pool. Would be interesting to know the IP.

Off-topic: does anyone know which font he uses in the terminal? Looks awesome.


Assuming you're referring to the font of the blog's text...

Firefox: Right click, inspect element, click the "fonts" tab on the right.

Chrome: Right click, inspect element, look at "Computed" styles on the right, find Rendered Fonts.

Ironically, the answer is that he's not specifying a font. It uses Times New Roman in my Chrome and DejaVu Serif in my Firefox because those are my default system fonts, not because he specified them. If you look at the source of the webpage, you'll see he has zero stylesheet links; all his styles are inline in <style> blocks. So searching "font" in that one source page is sufficient to find that he only styles code blocks; the rest is default.

If you're referring to some other font, I have no clue, sorry!


Thanks, but I was referring to the screenshot :)


It's http://www.fontsquirrel.com/fonts/M-1m. I don't think I'll ever switch from it. Very narrow and efficient font.


While someone could set the password on the www user manually, I get the impression that the author is the only admin and would remember that. Since the only authenticated user is www, it's likely someone just exploited his application / webserver and set the password on www for later ssh logins.

The author will soon know that if the situation repeats without the ssh access.


Using a tool like fail2ban is also favourable. Most of the bots will give up and look for other targets even if you only ban them for an hour.
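
The ban length is just a couple of lines in jail.local; a sketch (the jail name and log path vary by distro and fail2ban version):

    # /etc/fail2ban/jail.local
    [ssh]
    enabled  = true
    filter   = sshd
    logpath  = /var/log/auth.log
    maxretry = 5
    findtime = 600
    # ban offenders for an hour, as suggested above
    bantime  = 3600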


The problem with fail2ban is that you have another possibly vulnerable program that an external user can feed information to (although a very restricted one).

I have always been surprised that fail2ban is so popular, since iptables can do rate limiting, etc. So, it's easy to block most attacks with the in-kernel firewall:

http://www.debian-administration.org/articles/187
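
For anyone who doesn't want to click through, the in-kernel approach is typically a pair of rules using netfilter's 'recent' match; a rough sketch (the port and thresholds are just examples):

    iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
      -m recent --set --name SSH
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
      -m recent --update --seconds 60 --hitcount 5 --name SSH -j DROP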


My understanding of fail2ban is that it watches logs and then creates iptables rules to reject IPs, rather than dropping them itself. Since it blocks after a number of attempts, it does rate limiting of sorts, and automates the process. I agree that using the firewall without an intermediary is more powerful, and Linux admins should know how to use it.


And traditionally anything like fail2ban or denyhosts has had issues of its own. For example, mis-parsing malicious login attempts like this:

    ssh -l "root@ 1.2.4.5" ssh.example.com
allowing the attacker to lock out the specified IP.


Here is the site: http://fail2ban.org

Since clearly you are confused what it actually does.


I know perfectly well what it does, and it does not change my point: extra parsing is extra scope for vulnerabilities. You also don't address my main point: you can use iptables rate limiting to block brute-force SSH attempts. I used this for years and it works perfectly fine. Connect too often within a certain timespan and you get DROP'ed.

But you probably thought that I was suggesting to block IPs by hand. I wasn't.


I actually did find fail2ban in my Googling. Seems like a decent tool; I'll have to do more research before I decide whether to use it from now on.


fail2ban is a log prettifier; it doesn't actually add anything to security.


No, fail2ban watches logs for repeatedly failed login attempts, and then blackholes given IPs using firewall rules. I prefer denyhosts, which is more ssh-specific.


Could you elaborate on that? fail2ban seems quite popular for blocking malicious IPs.


from the user's profile https://news.ycombinator.com/user?id=kbar13

"I work for Linode, but everything I say is me being an idiot." so..


...so what exactly is it you're trying to read into his self-deprecation? Because it's not there.


Given Linode's security record being an idiot or not seems irrelevant to me.


You should also run logwatch - it will show you all the attempts that keep happening on any public server running ssh. I once had to look at a compromised box, which was then used to brute-force other servers, and it contained a file with 100-200 passwords for other hosts it had brute-forced.


I was hoping for a more detailed account of how he managed to log in as www.

That looked like a pretty amateur attempt, yet he managed to log in as www.

Was there no password for www?

Why was it even available to SSH into?

This leaves a lot of questions unanswered.


A couple of funny things: the intruder changing the www password, and running a mining script which is going to generate basically zero satoshi before getting shut down.

I guess change the password so someone else can't break in after you?

I can't explain the mining script. Maybe they want to alert you the box is owned, as a means to find only targets who don't wipe their system after being infected? What's the point...?


Even if the script only manages to mine very little, it would still be worthwhile, considering this is probably 100% automated.


The script appears to be at least in part minerd, a program for bitcoin/litecoin mining.

Details of minerd at https://bitcointalk.org/index.php?topic=55038.0


Seeing a bruteforce in the logs isn't at all uncommon. I see it every time I have a new server. Turn off password authentication and disable root login via SSH. That way, you're golden!



And if you do want to use password authentication, simply whitelist the allowed logins. This assumes you are using a strong password.


I have an ssh-access group, and the ssh server config is limited to that group. It makes it very easy to keep track of, and also works pretty well for maintenance with Ansible.
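
For anyone wanting to replicate that, it's only a few commands plus one sshd_config line (the group and user names here are just examples):

    groupadd ssh-access
    usermod -a -G ssh-access alice
    # then in /etc/ssh/sshd_config:
    AllowGroups ssh-access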


Other options: Run it on a different port, use certificates or YubiKey for authentication, firewall it to your home computer, use Fail2ban or Denyhosts.
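
The "firewall it to your home computer" option is a single rule pair if your home IP is static (the address below is a placeholder):

    iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP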


Running it on a different port is only protection against random attacks (sweeping IP addresses looking for common passwords). This seems much more like a targeted attack; someone took the time and effort to try to get in. A small port scan isn't hard, and will get around changing the port. Of course, the other things you've listed are solutions for this.

I was just posting a single option to complement the other comments.


Duh. Always lock SSH down.



