Ask HN: Anonymous person sent proof of SSH access to our production server
450 points by throwaway0727 on July 27, 2016 | 231 comments
An anonymous person (under a nickname) sent a screenshot as proof that they managed to gain SSH access to our production server. The screenshot is legit; the information displayed in it could not be faked without actual access.

Just the proof, not a ransom request or anything like that.

What would be a smart next step? Other than checking if there are any security updates for all the software in our stack.

We are a small company and don't have any security experts, etc.

Thanks!




In addition to the wonderful technical advice already here for how to deal with the server, there is the question of how to deal with the anonymous person. If the proof contained the method of exploit I suggest something along the lines of:

"Thank you for bringing this problem to our attention! We are taking steps to resolve the problem now, but would like to reward you for your work. If you let us know how you would like to receive if, we would be happy to donate $X to your efforts."

Where $X is something you think you can easily part with: $100, $250, $500, $1000? This both primes the sender to be more generous, if they were on the fence as to whether to do something nefarious, and establishes some small trail to them (depending on the method) in case of major problems with them later.

If the proof was not included in the email, I think it's much more likely you just received the opening email in a blackmail campaign. It's highly unlikely that server is even the entry point in that case, so cleaning it will not resolve the problem. It's just the sacrificial lamb for them to prove they've got leverage and let you stew, and they can contact you again after you think you've cleaned out the problem to let you know they still have access, and the only way to be rid of them is to pay them.


If you're going to offer a reward, I suggest making it conditional on revealing how they got in.

(This also assumes you're able to contact the person who sent the e-mail.)


> making it conditional on revealing how they got in

Do you mean obligatory? If you really mean conditional, could you elaborate why?


You agree with your parent comment. Making A conditional on B means B is required (though, depending on exact semantics, not necessarily sufficient) for A. A requires at least B.

So: B would be obligatory, given A.


I mean don't pay someone for breaking into your system and not telling you how they did it. Pay them only if they provide you with some useful information that can help you fix the problem.


In this case, the payment is conditional on revealing the method: that is, unless the method is revealed (condition met) the payment will not be made.


Didn't you read the LastPass thread? If the hacker can take down your company, you have to offer them a reward equal to the value of your company.


Apparently not, since I'm not sure what you are referring to (which means I also can't tell if it's sarcasm, which it looks like it might be?).

That said, I only mentioned payment in the case where the anonymous sender seemed to be helpful, in that they provided the way the server was infiltrated (presumably in an effort to allow it to be fixed). I didn't mention how much money to offer, or what to do at all in the case where it appears to be the beginning of a shakedown, other than to note that it's naive to assume you can rid yourself of them by just dealing with the one server referenced.

Edit: Ah, found the LastPass submission. Definitely sarcasm. ;)


You are forgetting some externalities re: value of being legitimate vs. criminal (e.g. contracts vs. ransom, legit money vs. tainted money, morality, fame, etc).


Not really. The hacker wants to get the most money from you possible, but also an amount that you are realistically able to provide otherwise he/she gets nothing. The maximum amount a company can realistically pay is probably much less than the total value of the company.


Unfortunately, it's the latter - no details of the exploit, just the proof.

If this turns into a ransom demand, rather than an unethical/inexperienced gray-hat thing, are there any good steps to take? Or is hiring an expert consultancy the only good option here?


I can't comment on the correct approach in that case; I'm underqualified. I would urge you to make sure you have good backups in a location that can't be compromised (as in, you won't wake up tomorrow to find them all deleted). If your system already supports this, all the better. Keep in mind the worst-case scenario here is that every production server is wiped, which is essentially the same as a natural disaster at the site where they are housed. If you don't have a plan for how to deal with a situation like this (disaster recovery/business continuity plans), such as redeploying to the cloud, to a different cloud, or to a different datacenter, then that's a thought for the future (and the present, if you have time).

I assume a professional computer security firm could help, but I don't know enough about the incentives at play to know whether that's good in practice (if they often deal with situations like this and not just hardening/forensics, I assume they would have good advice). I have no idea what that costs, and whether your business can afford it.


Total data loss isn't the worst-case scenario, in my opinion. Worse would be someone quietly interacting with your site, contacting your customers to abuse their trust in you, etc.


I would recommend taking your oldest backup offline and storing it indefinitely, in case later backups are corrupted. Make sure you turn on verbose firewall logging as well.
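
If it helps, one rough way to get verbose-ish firewall logging with iptables (a sketch only, rate-limited so the logs don't flood; the prefix string is arbitrary):

    # log new inbound connections, rate-limited, ahead of your other rules
    iptables -I INPUT 1 -m state --state NEW -m limit --limit 10/min \
        -j LOG --log-prefix "NEW-CONN: " --log-level 4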


Actually it's reasonable that this person hasn't given you the details. If he disclosed the specific way he got in, you'd probably patch it and carry on. Then he'd probably find another way in, disclose it too, you'd patch it, and it could turn into a full-time (low/un)paid job for him. Not to mention that any of the holes he found could have been exploited earlier by someone else who left something on your server.

By sending just the proof he forces you to reconsider your approach to security and start from a clean state.


Does the e-mail (appear to) give you a way to contact the person who sent it?

I'm not saying you should or shouldn't, but several comments have suggested contacting this person; it's not clear that that's even possible.


Please listen to this advice. Having the proof, or maybe only some hints, is very important. That might sound far-fetched, but this anonymous person could very well be an insider employee trying to blackmail your company for personal reasons.


"or, if you prefer to remain anonymous, to your chosen 501(c)3 charity."


If it's blackmail then I doubt paying them would effectively deal with the problem. Hire an expert / contact authorities depending on the circumstances.


Key here is that you need to figure out how they got in. Then negotiate terms to have them back off. Either way, they're probably being nice about it if they haven't simply 'rm -rf /'d you.


> if they haven't simply 'rm -rf /'d you.

Or dd if=/dev/urandom of=/dev/sda1 bs=8M


Side topic: Have you ever done this just for fun on an old system or vm? It actually stops pretty early on once it starts into /dev - removing everything actually takes a little more work.


This has been covered elsewhere (like on serverfault: http://serverfault.com/a/107346/2557 )

But it comes down to:

- Take existing server down immediately. I'm assuming it is not on an isolated network -- so this should really be a priority.

- Prep a new patched server (with a smaller attack surface and updated security credentials)

- Postmortem the old box on an isolated network. Try to understand how the attacker got in. If necessary, get security professionals involved.


100% agree. Disclaimer I own a data center and have dealt with breaches of customers' colocated equipment. In addition to the above steps (a few are sketched as commands at the end of this list):

-> disable root logins in your sshd_config file. Make sure PermitRootLogin is set to no.

-> rename the root account too so if they are using an exploit based on user authentication then perhaps they won't be able to elevate to root.

-> disable password-based logins and go to cert-based auth. This will shut down brute-force attacks.

-> lower MaxAuthTries in sshd_config to something like 1 or 2 to help slow down attackers.

-> change the port you listen on. A little security through obscurity; while not very effective, it might slow future casual port scanners that are testing a single port. In practice I've seen this really eliminate a lot of reconnaissance and farming-type activity.

-> Make sure your openssl and openssh are at the latest stable releases.

-> If the perp is coming from the same IP, you can use an iptables rule to block the netblock. Again, not perfect but may help slow things down.

-> grab shell history of all the user accounts on the box. Example ~/.bash_history. This is more reconnaissance but may be helpful if they are sloppy - you might see what they were doing on the box.

-> look for any modified files or new ones. Obviously logs will show up in the list but you're looking for things that should not be there. Example: find / -mtime -60 -print, where 60 is how many days back you want to go.

-> look at the cron files (user crontabs, /etc/cron.*) to see if any time-delayed bombs exist.

-> look at ps -aux output for any running processes that don't make sense

-> look at iptraf for any suspicious traffic to IPs you can't reconcile.
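
A few of the items above as rough command sketches (adjust paths, the day count, and the netblock; 203.0.113.0/24 is just a placeholder range):

    # recently modified files (last 60 days)
    find / -mtime -60 -print
    # shell history for every account
    cat /root/.bash_history /home/*/.bash_history 2>/dev/null
    # scheduled jobs that could be time bombs
    crontab -l; ls -la /etc/cron.* /var/spool/cron* 2>/dev/null
    # block an attacker's netblock (placeholder range)
    iptables -A INPUT -s 203.0.113.0/24 -j DROP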

Good luck!


Just a note: intelligent exploiters hide their files among yours, so the -mtime check is useless in many cases; they will set the mtime of their upload to match the rest of the folder they hide in.

command history is also easy to alter if you know what you are doing.


So the only way to detect exploits is to scan the server regularly and log the mtime and the file size, and look for changed files that shouldn't have changed?

Re the command history, is there any way around them wiping it - e.g. piping all .bash_history entries to an append-only store?


RootKit Hunter [0] works pretty well on most distros to check hashes on files and other potential problems.

You can also use a global bash configuration that would log all commands [1] entered into any bash shell to a central log, which could be shipped off-server simultaneously.

[0]: http://rkhunter.sourceforge.net

[1]: http://askubuntu.com/questions/93566/how-to-log-all-bash-com...
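
A minimal sketch along the lines of [1], assuming a Debian-style /etc/bash.bashrc; note that an attacker with a shell can simply unset it, so it's more an audit trail for legitimate users than a hard control:

    # /etc/bash.bashrc (or /etc/profile.d/logcmd.sh): send each interactive command to syslog
    export PROMPT_COMMAND='RETRN_VAL=$?; logger -p local6.debug "$(whoami) [$$]: $(history 1)"'
    # pair this with syslog forwarding so the entries leave the box immediately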


Does RKHunter scan userspace for mtimes? I haven't used it in years, so I'm honestly curious. Back when I did, it was customary to install it side-by-side with Tripwire, which essentially does only that: scan userspace and categorically log changed files based on a configurable severity depending on location (e.g., /root/ is high, /var/log/messages is low).


They really serve two different functions. Tripwire and similar tools like aide (I use aide now rather than tripwire, but that's my personal preference; I'm not an infosec domain expert) are file integrity checkers that check files for property changes (including mtime). Tuning out false alarms can result in an admin just turning off the reporting functions of the tools; that's the downside as I understand it.

However, rkhunter has additional logic to specifically seek out rootkits and malware-like behavior, and is more specifically targeted to system file modifications. Combining it with unhide (to compare actual processes running with those visible from userspace) provides a reasonable assurance that nothing nefarious is going on.

They're all part of a spectrum, however. I use scanners like aide alongside rkhunter as well, but I'm the sort of guy that will spend a day tuning the config of aide to avoid constant false alarms.
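
For anyone unfamiliar with aide, a rough workflow sketch (database and config paths vary by distro):

    aide --init                                         # build the baseline database
    cp /var/lib/aide/aide.db.new /var/lib/aide/aide.db  # promote it to the active database
    aide --check                                        # later runs report files changed since baseline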


yes, excellent point, agree. I run the scan regardless in case I'm dealing with the non-intelligent exploiter type ;)


good point as well. might as well run every bit of security you can, helps stop the "script kiddies" and lower quality automated stuff.


> Disclaimer I own a data center

I'm curious why you think owning a DC makes you less qualified to respond? Presumably because someone that senior is less in touch with day-to-day security operations?


The "disclaimer" prologue is often humble bragging. It's often less about flagging a conflict of interest and more about claiming to have authority or status.


That would be a "disclosure".


I think they meant 'disclosure'


Possibly, but that would suggest a conflict of interest, which isn't apparent here. More of an appeal to authority, perhaps?


...it's a Windows box.

(Just kidding.)


:-)


Also, you thank the reporter profusely for doing the right thing.


Ask for a BTC address and send a tip.


Alternatively, ask them for their preferred payment method and don't force your FOTMcoin on them.


I assume he suggested Bitcoin because it's still a good way to handle money anonymously. If the person that breached the server wants to remain anonymous but get paid, that's pretty much the only way to go.


Except for the fact that the transaction becomes a permanently public record.

You could also get looped into tax evasion if the person you paid doesn't report the bug bounty income. You can still maintain anonymity by not announcing the person who found it, but it's not a good idea to give money to anonymous people for work.


I assume he suggested asking them because Bitcoin isn't the only cryptocurrency.


I'm pretty sure the hacker won't want dogecoin or forked coin.


Oh I don't know, depending on the person they might want some fringe cryptocurrency that they swear is the best (for non-economic reasons).


Bitcoin is flavor of the month now? Do we live on the same planet?


If only the planet weren't so big.


If you're sending an even slightly significant amount of money, there are potential tax implications too. You might need to do a 1099.


Haha! I don't know how you intended your comment, but I just had a funny thought of filling out a 1099 for blackmail.


If the box is on AWS, make sure to create an AMI snapshot of it for later postmortem. Also create a security group just for boxes built from that AMI that allows only the minimum amount of access required to do the postmortem.
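
A hedged sketch with the AWS CLI (the instance ID and names are placeholders); --no-reboot leaves the box running untouched, at the cost of a possibly slightly inconsistent image:

    aws ec2 create-image --instance-id i-0123456789abcdef0 \
        --name "compromised-prod-forensics" --description "pre-wipe image" --no-reboot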


Also remember to do the same on every server that could be reached with the same credentials or that has the same vulnerability once it's discovered, and at the end of the process trigger a password change for every user, if applicable.


Also, wipe the existing server clean after the postmortem. Even if you're fairly sure that the server can be cleaned, it's never safe to re-use a box that was compromised.



"You mean like, with a cloth or something?"


No, using a towel could lead to contamination. Use a disposable paper towel. Also, Security Experts (TM) have not proven that Windex helps, but I'd use it just in case.


Great advice. I'd also add a couple more things about checking the desktop environment, because it's much more likely that a Windows or OS X box was compromised and keys stolen or keystrokes captured than that someone brute-forced sshd or found some sshd zero-day.

The few times we've been in this position, it's always been someone's desktop that was compromised. If you don't have a strong desktop security policy, a UTM, traffic inspection, a key encryption policy, etc., and you store keys or passwords locally, then this becomes a risk.


You still don't know how they got in, and since they're not doing anything nefarious (that OP knows of) it's probably easier to pay them off to inform OP how they got in.


Step 1) Image the system. Don't log into it, don't pull anything off of it. Take a snapshot of it. If your hosting provider doesn't provide you a direct way to do that, contact them and get them to do it. You want a clean image of it for investigating later.

Step 2) Hire a security expert / forensics company. Give them the image, ask them how to proceed.

Things to keep in mind:

- You don't know how much you can trust the person who has contacted you. It's possible they think they're a good samaritan, though logging into a system as a proof of concept is pretty far into grey-hat territory.

- Anything you say to them may one day be public record, attached to your company forever.

- It's possible they've compromised far more than this, and they just haven't said so

- If they've gotten in, it's possible that they aren't the only ones, so even if they cooperate and help you close the hole, you still want to do steps 1 and 2 above.


In terms of hardening against SSH attacks, the principles are quite simple. Your business case might mean that some of the following cannot be applied, but there are plenty of measures below that you can use to harden SSH.

1. firewall - only allow SSH connections from trusted static IPs

2. Use SSH keys, then disable password logins. There are lots of guides online for creating keys, so I'll just cover the second point: as root (or with sudo), edit /etc/ssh/sshd_config

    PasswordAuthentication no
    ChallengeResponseAuthentication no
    # restart sshd
(edit: make sure the SSH keys have passphrases. That way you have an extra bit of security in case any workstations get compromised)

3. Disable root access. edit /etc/ssh/sshd_config

    PermitRootLogin no
    # restart sshd
4. Limit SSH access to specific user accounts. This prevents users from creating their own keys (in the case of mountable home directories) or other machine accounts with passwords (if you've not done #2):

    groupadd sshaccess
    # add all users to the sshaccess group. lots of different ways to do this. The following will work on some flavours of Linux but not all:
    usermod -a -G sshaccess $USER
    # now edit /etc/ssh/sshd_config and add the following in (it won't already exist)
    AllowGroups sshaccess
    # restart sshd
5. Install auto-firewalling for failed SSH logins. I personally favour fail2ban as that covers other scenarios too, but I've also used denyhosts and that's worked well for SSH. (A minimal fail2ban example is sketched at the end of this comment.)

6. Lastly, and by far the best option, don't enable SSH on any internet facing IPs.

If you need SFTP enabled, then let me know and I'll post some details on how to harden SFTP so attackers cannot gain an SSH shell.
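
Regarding point 5, a minimal fail2ban sketch, assuming a version where the jail is called [sshd] (older releases call it [ssh]); put it in jail.local rather than editing jail.conf:

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    port     = ssh
    maxretry = 3
    bantime  = 3600
    # then restart fail2ban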


Keep in mind, though, that while securing SSH is a good approach, SSH itself is very unlikely to be the route of compromise unless an extremely insecure account with weak password auth was present.

It's far more likely that the attacker got legit credentials via other means: a web application vulnerability, social engineering, a malware attack on company machines, etc. I'd look at the less common applications you run, particularly anything that doesn't look like it was designed to run facing the internet. For example, at one point the Elasticsearch guys decided it would be a great idea to allow anyone who could access it to run Java code on the server...


I was wondering...

If someone in the datacenter can image the VM, mount it someplace on their own machine, reset the SSH RSA key, etc.

Is that good enough to "produce" the proof of the hack?

If so, then no amount of "clean up" can fix the issue, right?


Indeed, however some of my points still secure against that:

1. firewalling to only the sysadmin's IPs,

2. SSH keys + disabling password logins

6. and disabling SSH on internet facing IPs altogether (if possible).


You assume the breach happened over SSH. This is valuable information to securing SSH, but it's entirely possible the original breach happened over some other service, and there were some other steps involved in the breach before the SSH screenshot was taken.


True, but I'm working from the angle that if the breach happened via some other means, then they'd need some way to remotely execute code to enable SSH, create valid login credentials, and disable the firewall; in which case they already have more convenient shell access, so gaining access to SSH becomes redundant.

However it's possible that the attacker's screenshot was of a remote shell initiated via some other means and the OP assumed it was via SSH.

Edit: why was this downvoted? If there's an error then I need to be educated. I've spent enough years of my professional life hardening servers to have some idea what I'm talking about, but I'd be an idiot if I didn't listen to the expertise of others. So please correct me rather than downvote me :)


Best not to ask why downvoted. Those people's responses will rarely teach you anything. The kind that would will usually reply instead of downvote. Plus, a few already explained to me it's common for a post to get hit with a few negative votes followed by corrective action as other, open-minded people show up. Happens all the time with mine.


True. I've scratched my head over why some of your posts I've seen have been down voted.

Probably doesn't help I've been working long hours this week so a little on edge to begin with.


Would this be something like the ability to execute arbitrary code and obtain a passwd file or something like that?

I'm curious how to go from obtaining X info/access to an SSH session.


Consider setting up 2FA for SSH: https://wiki.mozilla.org/Security/Guidelines/OpenSSH#Multi-F...

The rest of that article is very helpful as well!
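
One common way to do this on Linux (not necessarily the exact setup the Mozilla guide prescribes) is the Google Authenticator PAM module; a rough sketch for Debian/Ubuntu:

    apt-get install libpam-google-authenticator
    google-authenticator                       # run as each user to generate their secret
    # /etc/pam.d/sshd - add:
    auth required pam_google_authenticator.so
    # /etc/ssh/sshd_config - require key AND one-time code (OpenSSH 6.2+):
    ChallengeResponseAuthentication yes
    AuthenticationMethods publickey,keyboard-interactive
    # then restart sshd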


We set up yubikey. Cheap, fast, easy.

Foxpass also offers a nice audited, limited duration 2fa ssh auth service.


IMHO you can skip most of this except #2. Key-only login is very important; blocking access with fail2ban, not so much. But you definitely need to send your logs to another box so you have a copy (one that is unreachable for the attacker) of the logs that this guy definitely removed from the hacked system.
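
A minimal sketch of shipping logs off-box with rsyslog (assumes rsyslog is installed; the log host name is a placeholder):

    # /etc/rsyslog.d/90-forward.conf
    *.* @@logs.example.com:514    # @@ = TCP, a single @ = UDP
    # then restart rsyslog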


I do agree but a lot of it depends on the business case. Sometimes the business requirements dictate you have password authentication enabled. :(

You're right that key-only logins are the easiest quick win, but it's also worth remembering that they can still be vulnerable if the box needs passphraseless keys (e.g. lazy automation) or if you cannot guarantee the security of those keys (e.g. they're generated by third parties and/or stored on third-party systems). In those situations it would pay to firewall SSH and specifically whitelist the third party's IP addresses.

Good suggestion about the logs too. That's an often forgotten step yet invaluable when it comes to forensics.


One extra tip (if you aren't confident in the above process, and are performing it over SSH, and don't have physical access to the server):

Test the new access before closing the current session! If you made a mistake in any of the steps you will still be able to fix it. If you close the SSH session before testing, and then find that you made a mistake (e.g. forgot to `chmod 600` the private SSH key or something similar) you're stuck!


Also, to get rid of 60% of the script kiddies, change the port to something high, 1000+ (or even better 10000+). It's not a great option, but it does lower the number of automated attacks that you get.


I'd be very cautious before moving SSH to a non-privileged port (over 1024). Any user on the server might start their own SSH server on the port assuming the real SSH server is dead. While this is hard to exploit (needs access to normal user, needs to kill real SSH server, need to get around SSH server key checking), it still is at least a theoretical reduction in security.


Besides local iptables you can forward the port at the firewall level. Many people (myself included) have observed failed ssh logins going from many thousands daily to on average zero just by changing the port on a netfacing server. Of course a determined hacker who is after you can trivially portscan. But why not block all that noise and a huge percentage of shotgun attacks? If someone is out just to find servers to root with a new zero day they're liable to spray the net searching on the target port rather than portscanning all IPv4 space. Why not buy yourself some free time?

It is basically zero inconvenience to add an extra argument or shell setting.


A user can always start their own SSH server. Just because you've decided to move it to a different port doesn't really encourage them. I suppose you could make this a bit more difficult for them by removing compilers (no really, you don't need compilers everywhere) and making sshd owned by root, mode 700...

However, proper ingress filtering or local iptables/pf rules would stop any unwanted inbound traffic from reaching your server, and you should definitely be using ingress and egress filtering on your network.


Removing compilers has effectively no benefit; a non-root user can trivially download and run an arbitrary distro or package manager (e.g. nix from NixOS, portage from a Gentoo prefix, etc.) and effectively do a chroot + package management without root.


The point was that you could more easily replace the SSH server with a malicious one and e.g. hijack your agent when you connect to it.


I'm not a fan of that option because it causes an inconvenience for legitimate usage without offering any real security benefits. If you've hardened sshd then the biggest benefit you get from changing the port number is reducing the size of your log files. If you've not hardened sshd then you're just one nmap (et al) scan away from being in the same boat as you were previously.

If you really want to hide SSH from script kiddies then you're better off setting up firewall rules to whitelist trusted IPs. Heck, even port knocking[1] would be a better option than changing the port number.

[1] https://en.wikipedia.org/wiki/Port_knocking
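
For the whitelisting approach, a bare-bones iptables sketch (203.0.113.10 stands in for your admin's static IP; add one ACCEPT line per trusted address):

    iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP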


How is port knocking not, I dunno, at least 10x more inconvenient for users?

FWIW, I've never seen a brute force attack on SSH other than on port 22. Most likely, a targeted attacker would realise if you change the port, you're probably not going to have a trivial password.


I agree, I set up port knocking and it's a huge pain in the butt (so much so that I never moved it out of testing). It also lacks cryptographic verification. It's a lot of trouble for a not quite ideal solution.


The Single Packet Authorization variant of port knocking has built-in authentication.


Port knocking is more inconvenient but it's also more secure. I'm not advocating it, though; it's just an example of how low down the scale I consider non-standard ports as a sensible recommendation.

Random IPs are unlikely to be port scanned, but any reasonably popular site will be.


Better yet, don't expose SSH to untrusted networks. Set up a VPN/jump box and only allow connectivity from that jump box.


Do you have a way to reply to the person? I don't see any harm in thanking them and asking for more details.

But in the meantime I'd have to assume everything is compromised: save a copy or an image of the server for analysis, but take it offline and build a new one. Rotate all passwords and credentials. Assuming you're not doing something strange with SSH, they probably got legitimate credentials from a compromise somewhere else or password reuse or a compromised development machine, etc. There are guides online for doing this: https://support.rackspace.com/how-to/recovering-from-and-dea...

It sucks. Sorry.


I had something similar happen several years ago when I was a grad student. A couple of friends and I were putting together some Facebook apps (when they were a new thing), and one guy didn't escape user input correctly. Some teenager from an Eastern European country (I can't remember which anymore) ran a script and figured out that we were vulnerable to SQL injection. He was a nice enough guy and didn't want anything besides some experience "hacking". We patched up the code and told him thank you.

I understand you're running a business which makes it that much more scary. If he's not asking for ransom, you might ask him how he'd fix it. I know it might seem like blackmail, but you might even offer him a "consulting fee". He's probably just someone looking to try new things out and not malevolent.


This is a fair point. Maybe the hacker is evil and hell bent on destroying your server / company.

But if we treat every hacker like that by default then what kind of world do we create? Certainly take prudent safety action, but then practice what many here claim to value: knowledge sharing among curious individuals.


So once he had access to your box he lulled you into a false sense of security so you didn't go looking for his proxy? Nice.


Do not trust random HN or serverfault answers.

Cleanly shutting down the server can trigger rootkits that might wipe evidence: talk to a professional. Pulling the plug can still remove the ability to observe the behavior of the attacker: talk to a professional. Touching the disks can expose you to the risk of being accused of tampering with evidence: talk to a professional.


The irony here is that your comment is a random HN comment from an account less than 10 days old :D

In reality, I think random answers on HN (or any answer/advice anywhere) shouldn't be trusted, but rather taken with a grain of salt; think about whether the answer really helps you.

>Touching the disks can expose you to the risk of being accused of tampering with evidence

I don't understand this. What do you mean by touching the disks? Physical touch, or logging in and looking at the logs? I don't think either of those can be attributed to tampering with evidence - criminal tampering, that is, since you use the word 'accused'.

On a lighter note, do you always end your sentences with 'talk to a professional'?


> I don't understand this. What do you mean by touching the disks? Physical touch, or logging in and looking at the logs?

Is this sarcasm?

If not, hypothetical situation for you:

OP works at a company that processes card information of customers. A hacker demonstrated gaining unauthorized access to production servers. Hacker pulls a db dump as well as any keys used in encryption of data (some bad practices here, but this is common). Hacker does not tell OP of his additional actions, only demonstrates unauthorized SSH entry.

OP does the logically correct thing of wiping his db servers, and "cleaning" the machine, because, well, mitigation of future damage.

The hacker, pissed that he/she was not given a reward for demonstrating the vulnerability, uses this production data for ill. A third-party audit (which will happen) finds that OP has done a full wipe of the server - the logs showing who pulled the vulnerable information are now gone.

With no finger to point (the hacker contacted him "anonymously", remember) OP is then implicated.


>Is this sarcasm?

Yep. I was pointing out that he obviously logged in, so he 'touched' it, and that the OP is not so stupid as to wipe the whole drive when ~90% of the comments on this post say not to. And I really doubt that the 'logical' decision of anyone who manages to post to HN for advice would be to wipe the drive without getting a snapshot. And since it is a prod server, the same server has to be used unless they use AWS or some other cloud service.


Thanks for clarifying.

I believe the point the parent was getting at is that there could be other unintended consequences to taking relatively good advice.

Honestly, I'd even argue there is some better advice on Server Fault/HN than from some professionals - but the difference is that hiring a professional gives you a paper trail; you can't just say "well, some DBA on Server Fault told me!"


> The irony here is that your comment is a random HN comment from an account less than 10 days old :D

10 days old and a throwaway. No irony here: I'm recommending the reader not to trust random HN comments including my own.


> What do you mean by touching the disks? Physical touch, or logging in and looking at the logs?

There is a reason why forensics data capture devices are so expensive and certified never to touch a bit.

> I don't think either of those can be attributed to tampering with evidence - criminal tampering, that is, since you use the word 'accused'

There have been various cases of people accused of destruction of evidence for wiping (allegedly) compromised hosts.


Rely on a single answer? That sounds like a bad idea. Considering advice from a bunch of accounts that are active, not 3 days old, and many of them overlap or say the same thing? Or an answer with many upvotes and no odd comments on serverfault/security.stackexchange.com? I think that's as good advice as you're going to get from any security firm.


OP here. Thanks for all the responses.

I took action and updated the firewall settings (which were too loose), ensured that offsite backups are in place if worse comes to worst, rotated all API keys, etc., and am meanwhile trying to contact the anonymous person. Will rebuild the servers ASAP as well; super glad that we have properly maintained Ansible scripts.

Also, I will try my best to convince the CEO to allocate some money for a professional audit/consultancy, since we are no experts in security, to reduce the chances of future incidents.

We're trying to do our best and avoid things like SQL injection, XSS, etc., but no one is fully secure, after all.


This incident isn't over. You must fully scope the incident, and the only way to do that is to hire outside help. If the breach leaks and it's found that you haven't properly responded, it will destroy trust with clients and possibly expose you to legal risks.

Hire an incident response firm. They can usually react next day if you can sign a contract today.


None of these things you've done will remove access for this person.


Contact me if you need a recommendation. I can point you to good security consultants probably within your budget.


Thanks, will keep that in mind, as it's not up to me to allocate funds for consulting, etc.

Any idea how much such a service can cost, assuming a web application with a very common stack (such as Ruby on Rails + PostgreSQL)? Is it something like $5k, $10k, or $20k+? Or does it really depend? Sorry if it's a very amateur question; I have no experience dealing with such companies, so I have no clue how much it can cost.


It really depends; a common stack doesn't mean that much. It depends on what the app is doing, how many dependencies it has, which external services it talks to, etc. Also, $5k, $10k - that would be the price per day, and then you hire the guy/company by the day. It depends on the app, but it might take some time.

Security is not cheap.

Disclaimer: I worked for such a company.


From your post here, you clearly have only rudimentary security knowledge at absolute best.

You need to bring an expert and/or firm on-site.


Hi,

If you need a security professional, I can help you. Is the box on AWS? If so, that would be a perfect use case for us. You won't have to worry too much about costs: since we're just starting up, we're willing to work with your budget if you provide a testimonial for our website.

Send me an email: contact@cloudhawk.io and we'll get started quickly.


I noticed IR services are not listed on your website.

I hate to be this guy, but you don't want to offer IR/forensic services if you don't have experience doing exactly that.

Your client can sue you if you get it wrong. (http://arstechnica.com/security/2016/01/security-firm-sued-f...)


We do but it's not part of our MVP ;)


1) Isolate that server. Treat this server as your ground zero, but assume that other systems might be compromised (including servers and devices of employees etc).

2) Rotate / delete SSH keys on all other servers that have the same keys installed as the compromised server (a quick audit sketch is at the end of this comment). Private SSH keys may have been compromised in your company. Inform all employees who had a public key on that server that their access is revoked and that their private keys may have been compromised.

3) Log all (established) SSH connections on these servers. If there are unexpected connections, handle these servers the same as the compromised one. (better: inspect SSH connections through network devices). Interrupt these connections.

4) If you cannot trust that the isolated server is the only compromised one, you should isolate the network and start investigating for more breaches. The person coming forward may have dug deeper to prevent you from shutting him out.

If there are signs of more breaches, you probably should bring experts in. Or, if you can afford it, rebuild your infrastructure on a green field. The latter is the safest option. If you don't have complex systems and you use cloud services and automation tools, this should be doable.

I hope you survive this one.
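
For point 2, a quick-and-dirty sketch to list which public keys are currently authorized on a box (run it on each server; adjust the home directory locations to your layout):

    for d in /root /home/*; do
        f="$d/.ssh/authorized_keys"
        [ -f "$f" ] && echo "== $f ==" && cat "$f"
    done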


One addition in regard to the perpetrator: Do not trust this person - or these persons (always assume it's more than one person). Don't commit to anything. Since they cannot be trusted, don't waste time and money on them.


Give the person a small reward for pointing out the vulnerability and offer another small reward for suggesting how to fix it.


Maybe hire that person?


Hire someone because they exploited a single vulnerability?


God forbid. No hire if they cannot reverse binary tree on the whiteboard using angular react e6.


The guy who couldn't "reverse a binary tree" came from a team that turned out to be incompetent (in the operations field; they didn't see anything wrong with distributing all their software packages over HTTP, without SSL/TLS or any cryptographic signature).


Are we talking about Homebrew guy? Was this a flaw in Homebrew or something else he worked on?


Yes, that one. It was (is? I don't know, I don't use Apple products) a flaw in how Homebrew and its homepage operate.


Didn't they fix it?


As I remember, they argued that this is perfectly fine and there's totally no problem with that.


To be fair, some major Linux distros also distribute using http e.g. go to https://www.debian.org/ and check the Download link at the top of the page: it is http.


Yes, but there are some important differences:

- Debian provides SHA-512 checksums of those ISO images

- the checksums are signed cryptographically

- web server is not the only distribution point

- Debian packages are distributed signed, so once you have your OS installer somehow verified, you're much safer than with Homebrew

Granted, the SHA-512 checksums and their GPG signatures are not exposed very prominently on Debian's homepage. You need to go to the directory listing with the ISOs instead of clicking the direct "download ISO" link.


Maybe offering a reward is a good way to reduce the anonymity of the person, since rewards can be tracked.

Speaking only for myself... If I were to go to the trouble of anonymously tipping somebody off about a security vulnerability, it would be because I cared that they were secure and did not wish to be identified. I would neither expect nor accept a reward.


Yes but a white hat like you would probably give at least some hints on how to fix it or what vulnerability you have discovered, not just a screenshot showing that you have/had access.

I think just the fact that they didn't do this points to malevolence. I would expect the ransom demands to follow shortly.


I've heard of much worse hiring practices than basing a hire on a single display of competence.


In the late nineties/early aughts, I worked for a company that hired a pair of guys after those guys showed up holding basically all the data from one of the company's services. The company started using these guys' custom software instead.

The server running this software was in our main server room, although it was technically not our department. So one day, when a fairly serious bug was found, I had to chaperone the non-technical person into the server room. She was on the phone with these guys. For political reasons within the company, all I could do was watch. They instructed her over the phone, symbol by symbol and letter by letter, what to add to live production code. What did they add? A giant "if false" block around the buggy code.


True. Then again, I've heard of many better hiring practices as well.

Even odds it's a current employee, anyway.


Why small? The reward should be in accordance with the criticity, IMHO.


So a startup with $100k total funding gets a message about a major company-crushing bug. What should they offer for such a big find - $50k? Half their runway?

It's not that rewards should be small for big finds, but if you are legit poor, you have limits on what you can do.


Rewards should always be in accordance with made-up words IMHO.


Step 1: Try to get in contact with the person and see if he/she is willing to share details on how they got into your systems. Thank this person and see if you can provide a reward.

Step 2: Next, set up new systems and start from scratch. Install the systems, starting with basic system hardening and up-to-date software packages. Use https://github.com/CISOfy/lynis to validate your configuration (basic usage is sketched at the end of this comment).

Do not have any interaction or data exchange with the old (compromised) systems.

Step 3: Save all running systems to learn from the event. See if you can find the main cause why this happened.

Step 4: Learn about security, hire someone on your team with security knowledge.

Step 5: Do regular (technical) audits.
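
If it helps, basic Lynis usage as a rough sketch (run from a clone of the repository linked in step 2; the log typically ends up in /var/log/lynis.log):

    git clone https://github.com/CISOfy/lynis
    cd lynis
    ./lynis audit system     # review the warnings and suggestions it prints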


"Thank this person and see if you can provide a reward"

This should be:

"Thank this person and provide a reward"

Looking at all the other steps you'll have to go through to remedy the situation, this is the least of your costs. (Provided they cooperate and are not malicious)


+1 for link to lynis. I had never heard of that before!


You are replying to the author of lynis, might as well thank him for writing it :)


You are welcome (even without thanking) :)


In addition to all the really great advice already submitted, I really like using Lynis[0] for scanning my servers to get an idea of obvious vulnerabilities and a baseline for hardening ssh. It's absolutely not a substitute for a security expert, but in about 10 minutes of setup you can get an idea of what action items you need to add to the top of your queue.

[0] https://cisofy.com/lynis/


Many people tell you to hire an "expert". Be careful: many such security experts are experts in FUD and in taking the money of clueless, frightened people. For a lot of money they'll run their attack bot on your servers and send you a twenty-page auto-generated report which you will need another expert to read and understand. And security is not a one-off task; you need to either not care at all, or make sure everybody cares all the time.

While you handle the emergency backups, log evacuation, and password flips yourself, I would suggest putting your two best hackers on the task of quickly learning the basics and making sure there's no obvious hole in the wall.


[Edit: not-OP]

As the 2 comments so far have suggested getting security experts, where would be a good place to source them? I'm envisioning 2 kinds:

* Consultant, working for a fee (with retainer?);

* Independent - maybe a consultant, but could also be someone currently looking for a new permanent role who would bring welcome diversity/expertise to a small team - a potentially illiquid / poorly matched hiring market that could be nice for many smaller companies to tap? Working remotely could work too - no borders/boundaries.


> where would be a good place to source security experts?

There's no universal good answer for this.

I spend a lot of time on ##crypto (irc.freenode.net), and a lot of smart folks hang out there. Some are very well connected to other security experts in their own isolated communities.

However, there are undoubtedly silos of security expertise that remain untapped if you rely on just IRC.

You could also find folks who talk about security here on HN and follow their Twitter accounts (if public).

You could try "[development stack] security expert" in a Google search, as a last resort. (My company's currently at the bottom of the first page for PHP, although that's likely only true because of our filter bubble.)

A diverse approach is probably most likely to succeed here.


It's the same concept as with auditors: there's the Big 4 that you've probably heard of, and a ton of other, smaller firms of varying quality.

You could go with a known firm like iSec Partners, Matasano (now NCC) or Mitnick Security. They won't be cheap - at worst they may be able to refer you to some other reputable firm if your budget is limited.


> or Mitnick Security

Please no. Not Mitnick.

I'd rather funnel clients towards my competitors than Kevin Mitnick.

He's a skilled social engineer, and his greatest social engineering success was manipulating the media into believing he speaks for hackers in general.

He is not a programmer, his opinions on cryptography aren't insightful, etc. His only skill is deception.


A nice example is from when I used to point out OpenVMS's security benefits. Better architecture, a higher quality focus, and few things running by default led to fewer vulnerabilities and more containment than competing systems. At least one person always had a recurring counterpoint: "but didn't Mitnick hack all kinds of VMS systems?"

If by hack, you mean use conned or password-guessed credentials to get in... Not my definition of the word.


Regardless of your opinion of The Man Himself, the company employs people that are good at things beyond social engineering. I've seen two separate engagements with them (one as a 3rd party and one as the technical contact), and both found significant, non-trivial vulnerabilities that needed to be patched.


I can't speak towards the efficacy of his staff, but that testimonial is generally true of any engagement with most security teams.


To be fair, are we assuming op's server wasn't compromised through deception? Smart play is usually the easiest path to a goal...


Addressed some of this in my main response. A lot of companies in this space will have cyber coverage that includes: breach coaches, privacy/security lawyers, forensic specialists, security operations consultants, and companies to handle mandatory breach notifications.

I tend to work mainly with small businesses and professionals in healthcare, law, etc to avoid and prepare for incident response, but I also work with clients in the middle of a breach.

In lieu of someone like me or a dedicated breach coach/resolution company, there are a handful of good attorneys who understand tech/startup culture and even enough of the technical intricacies to provide a good response.

Our new website will include resources, but it hasn't launched yet. I'm often available to do a short call at no cost to discuss level of effort and get a company or startup moving down the right path. If you want to take me up on that, hit me up on https://www.linkedin.com/in/clintonjcampbell now (or quirktree.com starting next Monday).


Enable two-factor authentication on your servers, or else enable two-factor on a bastion server and disable external SSH other than from your bastions on all other servers (a minimal client-side bastion config is sketched at the end of this comment).

I strongly recommend YubiKey; it is convenient, cheap, and extremely secure.

With SSH it is very easy for someone to create a key that does not have a passphrase. With that, it is possible to log into the server with just the key file and nothing else. At that point all it takes is for someone to lose a laptop or leave a computer unlocked and unattended, and someone else can get access to your machines.
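
For the bastion part, a minimal client-side sketch (host names and user are placeholders; ProxyCommand -W needs OpenSSH 5.4+, and newer versions also offer ProxyJump):

    # ~/.ssh/config
    Host bastion
        HostName bastion.example.com
        User admin
    Host prod-*
        ProxyCommand ssh -W %h:%p bastion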


Came here to say +1 to this, definitely employ a bastion host and make sure that's the only way to SSH to your servers. This can be a little tricky to do correctly if you don't have someone on your team, but it's a valuable way to reduce your surface area to monitor.

Installing fail2ban is also a very basic / smart way to discourage brute force SSH attacks on your boxes. Also you could try piping your SSH logs into something like papertrail / slack, so you have clear visibility into who's logging into your servers, etc.


On fail2ban, I have had more success in being able to stop attacks quickly by using SSHGuard. Quicker easier setup, easier to understand, etc. Is there a significant reason to use fail2ban over sshguard?


fail2ban still doesn't have IPv6 support.

If you use IPv6 (and you should, if possible), it's better to use an alternative that supports it (e.g. SSHGuard).


To people focusing on securing SSH: just because the person has SSH access doesn't mean that they got it through SSH. It's possible that they brute-forced the password or whatever, but there's a ton of attack surface on a website and many ways they could have gotten access. If they got in through, for example, an XSS attack and obtained the SSH password/keys, securing SSH doesn't stop them from doing the same thing again.


I was one of the people offering advice on hardening SSH. I mentioned firewalling sshd to a subset of trusted IPs - which would still secure you against the above attack. I also suggested SSH keys should have passphrases, which would also mitigate against this attack (providing the passphrases are complex enough). And if anyone has root permissions to disable the firewall or change user login credentials, then they don't need to enable SSH (much easier ways to gain interactive shells).

I also love how you can take genuinely helpful posts - after all, it's better to harden SSH regardless of whether this specific attack initially came directly from SSH - and somehow turn those contributions into something negative. God bless internet messageboards.


> I mentioned firewalling sshd to a subset of trusted IPs - which would still secure you against the above attack.

No, it doesn't secure you against the above attack. If they can use an XSS to get full access to the server and get the SSH key, then hardening SSH does pretty much nothing, they just whitelist their IP and continue accessing the server. We don't know that they got SSH access via SSH. Until we understand how they gained access to the server, blocking off other means of access that might have nothing to do with how they gained access does nothing.

> I also love how you can take genuinely helpful posts - after all, it's better to harden SSH regardless of whether this specific attack initially came directly from SSH - and somehow turn those contributions into something negative. God bless internet messageboards.

Sure, hardening SSH is always a good idea, but until we actually understand how the SSH access was obtained, we don't know that it fixes the immediate problem. It absolutely is negative to give people information that persuades them their problem has been solved when it hasn't been solved.

I don't see any reason for you to take my post personally. It's not an attack on you, it's just pointing out that we need more understanding of the problem to actually solve it. Don't shoot the messenger.


> No, it doesn't secure you against the above attack. If they can use an XSS to get full access to the server and get the SSH key, then hardening SSH does pretty much nothing, they just whitelist their IP and continue accessing the server.

Someone else suggested that, and my reply was that if they already have access to remotely execute code as root then they can easily gain root shell access with much less effort than having to do the above workarounds to enable the system's default OpenSSH server (and there are plenty of other ways to execute remote shells without needing SSH).

> Sure, hardening SSH is always a good idea, but until we actually understand how the SSH access was obtained, we don't know that it fixes the immediate problem.

Since you agree that hardening SSH is a good idea, then it doesn't matter how the attacker gained access to SSH, you'd recommend they review the security of their SSH server regardless. So your latter statement becomes irrelevant to the former statement.

> It absolutely is negative to give people information that persuades them their problem has been solved when it hasn't been solved.

I never once suggested this would fix their problems. In fact my language was very clear that my advice would harden against SSH attacks, specifically. However they have asked for next steps, and while other people have rightfully focused on the forensics side of the investigation, I have complemented their advice with tips on hardening SSH. One recommendation doesn't have to override another :)

> I don't see any reason for you to take my post personally. It's not an attack on you, it's just pointing out that we need more understanding of the problem to actually solve it. Don't shoot the messenger.

The server is compromised thus it's already too late to "solve". However that doesn't mean people cannot offer advice on hardening against potential future attacks on new or existing infrastructure in conjunction with analyzing the point of attack on the compromised (and hopefully now isolated) equipment.

The snarky tone of my replies is because you have not offered tips that take precedence over my own recommendations; if you had, I would have taken your criticisms seriously. But as it stands you're currently just disagreeing for the sake of disagreeing. Which is something I see far too often online, often from people who want to seem knowledgeable without imparting any actual knowledge, thus mitigating the risk of looking stupid themselves. Which is also why so many experienced individuals tire of contributing to public forums.

You say my advice doesn't solve the OP's issues; well, neither do your posts. So what was the point in posting them? I just see it as an odd kind of circular logic.


Okay sure. They can figure out how the attacker got access and fix that, AND harden SSH. They should also audit their logs, install updates, use a linter on their JavaScript, use version control, use a library to sanitize SQL inputs, force HTTPS, do code reviews, and 100 other good development practices that have nothing to do with the problem at hand. I'm glad we agree on that.

Given that the OP doesn't know how to address their immediate problem, however, posting a bunch of random good practices is probably not very helpful.

I contributed something constructive: I recommended figuring out what the vulnerability is and fixing that over fixing random things and hoping you fix the problem by chance.

> I never once suggested this would fix their problems. In fact my language was very clear that my advice would harden against SSH attacks, specifically. However they have asked for next steps and while other people have rightfully focused on the forensics side of the investigation, I have complimented their advice with tips on hardening SSH. One recommendation doesn't have to override another :)

You said, "I was one of the people offering advice on hardening SSH. I mentioned firewalling sshd to a subset of trusted IPs - which would still secure you against the above attack."

If you want to claim you didn't say anything wrong and you were intending to suggest your solution in addition to the solution that actually solves the problem, that's your prerogative, but people can read the post history which shows that isn't true, so it would be more dignified to just admit you made a mistake. Nobody cares that you made a mistake--I'm not attacking you for that. I'm just trying to put up the correct information, since you didn't. It's not about you, so there's not much reason to take it personally, and you're not making yourself look good by claiming you didn't make mistakes that everyone can read.


My point was correct for the specific attack you broadly described. Your example required a web server attack that allowed arbitrary code execution and privilege elevation, which is a hugely specific attack, and it's pretty fair to say it's unlikely (in the case of gaining root access and then choosing to enable SSH, which I'll get to). Furthermore, and at risk of sounding like a broken record, if an attacker can remotely execute code as root then they have absolutely no need to enable SSH, for they already have far easier methods of firing off a remote shell. (To be honest they don't even need root to accomplish this.)

This is not a mistake; this is something I've done in practice when auditing security at work. (Preventative pen testing rather than post-breach forensics such as this situation calls for. But as I said before, that doesn't mean you cannot take lessons from the former in conjunction with the latter.)

Anyhow, we really are just arguing about arguing now, which is an utterly pointless waste of both our time


> My point was correct to the specific attack you broadly described.

But you didn't correct it, you proposed a solution that didn't address the attack I described.

> Your example required a web server attack that allowed arbitrary code execution and privilege escalation, which is a hugely specific attack, and it's pretty fair to say it's unlikely (in the case of gaining root access and then choosing to enable SSH, which I'll get to).

The XSS attack is just an example of a vulnerability that wouldn't be addressed by hardening SSH. There are plenty of other vulnerabilities that wouldn't be addressed by hardening SSH.

> Furthermore, and at risk of sounding like a broken record, if an attacker can remotely execute code as root then they have absolutely no need to enable SSH, for they already have far easier methods of firing off a remote shell. (To be honest they don't even need root to accomplish this.)

There's a good reason to demonstrate SSH access even if they have root access: they might want to show their capabilities without exposing how they gained those capabilities (because knowing how they gained those capabilities would allow the OP to fix the problem).

> This is not a mistake, this is something I've done in practice when auditing security at work.

Just because you've made mistakes in practice when auditing security at work doesn't mean they aren't mistakes.

This isn't even time for a security audit. OP first really needs to do some forensics. A security audit should happen, but it can wait until the vulnerability has been found and fixed.


> The XSS attack is just an example of a vulnerability that wouldn't be addressed by hardening SSH. There are plenty of other vulnerabilities that wouldn't be addressed by hardening SSH.

Well durr! We could be here all night listing things that SSH hardening wouldn't secure against. I never suggested it was a silver bullet to fix all security needs (which seems to be the faux argument you're accusing me of).

I'm loving all the personal attacks too. You cannot compete on an intellectual level so you make baseless accusations about my professional capabilities instead. God bless Internet message boards....


> Well durr! We could be here all night listing things that SSH hardening wouldn't secure against. I never suggested it was a silver bullet to fix all security needs (which seems to be the faux argument you're accusing me of).

That's not what I accused you of--people can read our previous discussion and see both what I actually accused you of saying, and also that you said what I accused you of saying.

> You cannot compete on an intellectual level so you make baseless accusations about my professional capabilities instead.

I did call your course of action a mistake, but that's not an attack on your professional capabilities. Everyone makes mistakes. I'm sure you're reasonably skilled at your job.

I think you'd enjoy this conversation a lot more if you didn't take my disagreement with your strategy as a personal attack. But if that's what you want to do I can't stop you.


Given that no threat or demand was made, this sounds like they were just notifying you of a vulnerability.

Does your company / product have an official responsible disclosure policy?


I did this a few times in the late '90s when I was a script kiddie eager to learn. I was naive enough to believe that I couldn't get in trouble that way. (I actually once got a job offer as a result.) From a security point of view the internet is a slightly more scary place these days though, with criminal gangs and governments being major threats.

If it's just a nice kid you got lucky, and I would hire someone with a clue on security to thoroughly check your infrastructure at least once. But if it turns out to be a hook for something more nasty, you better get some serious help. Don't trust anything on that server anymore, and if you have other, similar servers running, suspect them to be compromised too.

Also consider if you have any private (i.e. customer) data on that server that might get you in trouble if a third party has/had access to it. You can (re)install servers in seconds, but data is out there forever once leaked.

PS: Did they obtain any SSH keys, or did you have password authentication enabled in the SSH configuration? (Don't do that!)
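If password authentication does turn out to be enabled, a minimal sshd_config change to turn it off might look like the sketch below (assuming OpenSSH; test a key-based login in a second session before closing your current one, and note that the reload command differs by distro):

  # /etc/ssh/sshd_config
  PasswordAuthentication no
  ChallengeResponseAuthentication no
  PubkeyAuthentication yes

  # then reload the daemon, e.g.:
  #   sudo systemctl reload sshd    (or: sudo service ssh reload)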


If you and the people who work there don't know what to do, you most likely won't solve your problem by asking this generic question here or elsewhere. I'd suggest making an honest assessment of the value of your business and information, and depending on the numbers, considering the possibility of hiring a professional who can help you secure your systems.


A lot of the comments here mention what to do with the server, but it's often more probable that a workstation controlled by a user that happened to have access to that server was compromised.


I think this is what happens when your service does not have a responsible disclosure and reward policy. Every reasonably important service should have a security page, and you should think in advance about what you would reward for each scenario, because it will happen, and you'd better give people a good reason to stay ethical.


Can't you just ask them how they did it and patch it?


Ask your employees which one of them is trolling you. Maybe you will get a giggle and find out it was a prank.


This was my thought too.


Why would you have a production system exposing SSH to the public?

If you must, at least do these steps:

- Disable password SSH login.
- Install a rootkit scanner, like rkhunter, and check whether your networked systems are infected. S/he might have gained access to other instances in your infrastructure.
- Port-scan all your instances and check whether any suspicious RPC port is open that you are not familiar with.
- Enable unattended security upgrades.
- Check the vulnerability listings for your internet-facing services, like nginx, Apache, HAProxy, etc.
- Forward all your syslog logs to a remote system so the attacker can't clean up his/her traces after establishing the attack.
- Enable automatic blockers like fail2ban.
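A minimal sketch of a few of these steps on a Debian/Ubuntu-style server (package names, paths, and the log host address are assumptions; adjust for your distro and environment):

  # Install and run a rootkit scanner
  sudo apt-get install rkhunter
  sudo rkhunter --update && sudo rkhunter --check --sk

  # Enable unattended security upgrades
  sudo apt-get install unattended-upgrades
  sudo dpkg-reconfigure --priority=low unattended-upgrades

  # Forward syslog to a remote log host (rsyslog; replace the address with your own log server)
  echo '*.* @@logs.example.internal:514' | sudo tee /etc/rsyslog.d/90-remote.conf
  sudo systemctl restart rsyslog

  # Install fail2ban (ships with a default sshd jail on most distros)
  sudo apt-get install fail2ban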


There's nothing inherently wrong with exposing SSH on your production servers to the Internet. It is one of the most secure services that can run on any given host. Surely it's more secure than your web server or application daemon(s) which handle the other publicly-facing functions of your production host.

If you have the infrastructure and capability to put it on a different network by all means make it inaccessible but for most businesses there's really no other option anyway.

Simpler (better, IMHO) advice would be to make key-based authentication mandatory for your production servers. That way a brute force attack is unlikely to ever succeed. It also rules out stolen passwords, since the attacker would need to obtain the entire SSH key before they could log in.

Having said that, we don't know how the attacker got in. They could have created an account for themselves or changed the root password/system configuration via a vulnerability. If that's the case they could modify sshd_config so that it listens on the public IP which would make "don't expose it to the public" moot (firewalls notwithstanding).


There's nothing in the OP's post suggesting SSH was exposed to the public, or that the breach happened over SSH. So it's important to secure that, but it's also important to think holistically about the attack surface.


I am a forensic investigator and security consultant working for a well established organisation.

I'd recommend engaging a forensic consultant from a reputable company. The mish-mash of advice here is mildly useful if you know what you're doing, but since you don't the only way to be somewhat confident that you're no longer compromised is to perform proper scoping and investigation.

If you've already wiped/rolled/overwritten logs then this instantly becomes more difficult. I would want to see a saved copy of your firewall configuration too, for analysis in Nipper or similar.

I'd recommend the free Redline forensic tool from Mandiant if you're unwilling to hire a consultant.


>Anonymous person (under a nickname) sent screenshot as a proof that they managed to gain SSH access to our production server. The screenshot is legit, information displayed in it could not be faked without actual access.

Could you upload a redacted version of the email? Get rid of anything that identifies you or the company - the community here might be able to help.

I agree with the majority of the advice here - get the server down immediately after preparing a new, security enhanced version.

Assuming the following:

  * You use a key for access and not a typed password.
  * You haven't done anything to your distro to compromise security.
There are probably two likely routes of thinking here:

  * The key leak was internal to the company and you need to figure out who it was.
  * The software your company wrote introduced a security vulnerability.
For the first one, you need to devise a method of catching them out - you at least want to know whether it was an internal or external source. Assuming this person is smart, they'll probably be using a VPN or Tor, so tracking their IP will be useless. Try the following:

  * You could set SSH to only accept incoming connections from your company (and your home IP as backup). Allowing SSH connections from arbitrary places is generally not a good idea.
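As a sketch, with ufw this might look like the following (the 203.0.113.x addresses are placeholders for your office and home IPs; plain iptables or your hosting provider's firewall works just as well):

  # Allow SSH only from the office and one home/backup address, deny everyone else
  sudo ufw allow from 203.0.113.5 to any port 22 proto tcp
  sudo ufw allow from 203.0.113.77 to any port 22 proto tcp
  sudo ufw deny 22/tcp
  sudo ufw enable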
For the second one, it would be good to know what server software/libraries you are using, as well as versions. `sudo nmap -sA <IP>` your own server so we can see what you have running and possible entry points.

Also:

  * Patch SSH to send you all commands typed - if you've been compromised you at least want to know to what extent.
  * Assume whatever data was on the server is now compromised - databases, passwords, usernames, emails, bank accounts, etc. You need to inform your customer base if their details are leaked. Internal or external they've already copied everything (I would have).
These are just some initial ideas - no doubt somebody will shoot them down, but hopefully you can find some use in them.


I think your best bet is to follow the advice here to offer a bounty, but also start setting up honeypot servers for each of your public network-facing services. This way you can do surveillance as this hacker (or others) gains access to your system. Good honeypot monitoring software should tell you where they got in from, what directories they accessed, and what keystrokes were used.


So, to summarise the thread -

1) talk to them and reward them

2) don't talk to them or reward them

3) you're an idiot

4) hire somebody expensive

5) don't hire somebody, they're a rip off

Thanks, HN, that clears everything up.


Call an Incident Response company like Mandiant.


Many comments say "find & hire an expert" which is kind of vague and would take a few days probably. This comment is actually really good I think, way too low in the thread. An incident response team (of which there are plenty, and kudos for naming one, that makes it even quicker and more concrete) will know what step number one is to do right now (e.g. "make an image and bring it down"), before even officially hiring them to do further incident response and analysis.


You should ensure that (1) password authentication is disabled, and (2) existing SSH keys are rotated (if only password authentication was used before, generate keys now [2]).

Subsequently, refer to "Essential Security for Linux Servers" [0] and "7 Security Measures to Protect Your Servers" [1].

[0] https://plusbryan.com/my-first-5-minutes-on-a-server-or-esse...

[1] https://www.digitalocean.com/community/tutorials/7-security-...

[2] https://www.digitalocean.com/community/tutorials/how-to-set-...
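A quick sketch of rotating to fresh keys (the user name, host, and key comment are placeholders; see [2] for a full walkthrough):

  # On your workstation: generate a new key pair
  ssh-keygen -t ed25519 -C "you@yourcompany" -f ~/.ssh/id_ed25519_prod

  # Install the new public key on the (rebuilt) server
  ssh-copy-id -i ~/.ssh/id_ed25519_prod.pub deploy@your.server.example

  # Remove any old or unknown keys from ~/.ssh/authorized_keys on the server,
  # then disable password authentication in /etc/ssh/sshd_config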


Since this is most likely a script kiddie (assuming nobody cares enough about you to deliberately target you), if it were my company I would 1) lawyer up, 2) ask him how he did it (most script kiddies would love to tell you how awesome they are), 3) fix the problem, and 4) give him a couple of BTC on the condition that he blog about it anonymously.


I agree with the individuals who say that you need to rebuild your server. You really have no idea what they've done while they were logged in or how long they had access. So take the time to be sure a replacement build is secure and then cut over.

By the way, unauthorized access to a computer is a crime in every jurisdiction that I'm familiar with. It is not advisable to "test" someone else's computer and then provide them with proof that you accessed it in an unauthorized way.

If you communicate with the person and think that they deserve some sort of reward, that's your business. It is a nice thing to tell a neighbor that their door is unlocked -- but it's illegal to step inside the door and take a look around.


That's true, but it's also a reason people who have done so are wise to send an anonymous proof rather than "Hi there, my name is Dale McGuyver and I hacked your server..."


My suggestions: if your server has confidential or money-related info, take it down. If not, wait, because even if you bring up a new server, how do you know he won't crack it again?

- Check /var/log/messages, audit.log, and the sshd logs (/var/log/auth.log or /var/log/secure)

- Check lastb & last command outputs

- Take dump of network connections. (netstat)

- Find out his 'tty' and spy on him with something like sysdig or strace (http://serverfault.com/a/423666)

Most importantly, do these after turning off bash history, so that the attacker won't see that you are gathering information.

I assume you are running a Linux server.
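A rough sketch of collecting that information (these are standard Linux tools, but ideally write the output to external media or ship it off the box rather than trusting the compromised disk; the /tmp paths here are just placeholders):

  # Keep this session out of bash history
  unset HISTFILE

  # Login history, failed logins, and who is on right now
  last -i > /tmp/evidence_last.txt
  sudo lastb -i > /tmp/evidence_lastb.txt
  w > /tmp/evidence_w.txt

  # Network connections, listening sockets, and the process tree
  sudo netstat -plunt > /tmp/evidence_netstat.txt
  ps auxf > /tmp/evidence_ps.txt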


> - Check lastb & last command outputs

I wasn't aware of lastb, thanks.

I'm impressed that my home server has logged over 650,000 failed logins since 1 July, and a couple of machines I administer at a university over 300,000 each. That's one every three to four seconds for the home server.

That's quite a lot of bandwidth, worldwide.
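If you want the same rough numbers for your own machines, something like this works (assuming lastb is available and /var/log/btmp hasn't been rotated away):

  # Rough total of failed logins recorded in btmp
  sudo lastb | wc -l

  # Most frequent source addresses
  sudo lastb -i | awk '{print $3}' | sort | uniq -c | sort -rn | head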


Some kind of bot is attacking; fine-tune the firewall settings :)
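For instance, a minimal fail2ban jail for sshd (a sketch only; the jail name and defaults vary between fail2ban versions and distros, and older Debian releases call the jail [ssh]):

  # /etc/fail2ban/jail.local
  [sshd]
  enabled  = true
  maxretry = 5
  findtime = 600
  bantime  = 3600

  # restart fail2ban after editing, e.g. sudo service fail2ban restart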


You should consider what information was available on that server. Did the code contain any passwords for other systems (such as an internal DB, or another production machine?). If so, those systems should be considered compromised now too.

On the other hand, if you are just getting started it may be that production doesn't have much on it yet, and you can just nuke the thing and start over.

I understand recommending security experts is easier, so that if you're wrong, you can just blame the expert, but you may be able to make the decision yourself if you are aware of everything that is on that machine.


Talk to him, find out his motivation. You may find out that he's not out to get you and breached the server just for fun and recognition, in which case it'd be easier to sort out if you just talk.


I have had to clean up something similar in a previous job, helping to track down a larger intrusion. Call the police and ask for advice.

Here is some server advice, given that the server does not have ransomware.

  1) If possible, save the memory of the machine (/dev/mem) to disk; it may contain proof of how the hacker got in (see the sketch at the end of this comment).
  2) Save the process list of the machine.
  3) Save the netstat -plunt TCP/UDP output to a file; you want to see which TCP/UDP connections the machine has before it is turned off.
  4) Check for any strange processes in the ps list.
  5) If you see any strange process, visit /proc/PID and check its current working directory (cwd) and start command.
  6) List kernel modules with lsmod and dump the output to a file.
  7) Power off the machine with the power button; the reason is that you do not want to run any of the normal shutdown scripts.
  8) Take a system image of the disk to an external hard disk for proof. It is important that you do not tamper with file access, modification, and change times.
  9) Pull out the network cable
  10) Power on the server again, backup all data files to external media
  11) Read log files
  12) Wipe the hard disk and reinstall the operating system with latest version
  13) Change root and account passwords
  14) Use an SSH key
  15) Lock down ssh port 22 to known IP numbers
  16) Apply all security patches, operating system and applications, make sure applications are running latest patched release
  17) Restore data backup
  18) Deploy host intrusion detection system, HIDS
  19) Send logs to an external machine
Optional steps

  20) If it seems advisable, try to contact the person and ask how he/she got in.
  21) Make a copy of the system image from step 8
  22) Examine the system image and logs with something like Autopsy from The Sleuth Kit; you can check last access times and read web and ssh logs.
  23) Ask upstream ISP providers if they have ssh connection logs. Ask if you can get them.
Additional resources:

  Mozilla, how to secure SSH: https://wiki.mozilla.org/Security/Guidelines/OpenSSH
  Debian 5.1, how to secure SSH: https://www.debian.org/doc/manuals/securing-debian-howto/ch-...
  SANS top twenty security controls: https://www.sans.org/media/critical-security-controls/critic...
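A hedged sketch of steps 1, 3, 6, and 8. On modern kernels direct reads of /dev/mem are usually restricted, so a dedicated memory-acquisition tool such as LiME may be needed for a full dump; /mnt/external, /dev/sda, and the file names are placeholders for your own external drive, source disk, and naming scheme.

  # Volatile state first (steps 2, 3, 6)
  ps auxf > /mnt/external/ps.txt
  sudo netstat -plunt > /mnt/external/netstat.txt
  lsmod > /mnt/external/lsmod.txt

  # Step 1: memory, if the kernel allows reading /dev/mem at all
  sudo dd if=/dev/mem of=/mnt/external/mem.img bs=1M || echo "direct /dev/mem dump blocked; use LiME or similar"

  # Step 8: after powering off, boot from rescue media and image the disk read-only
  sudo dd if=/dev/sda of=/mnt/external/disk.img bs=4M conv=noerror,sync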


Thank the person. It wouldn't hurt to ask for their suggestions. Enlist an expert on your team. Not every software developer is a system administrator, and not every manager is an expert developer. YouTube videos don't make one an expert after a few hours.

I am certain you will receive very good advice here in the comments.

Technically speaking, you should give a little more data on your setup. A high-level view would suffice. That way folks here can narrow their suggestions down.


They got SSH access? Steps I would take are:

1. Isolate the machine.

2. Rotate your keys.

3. Set up ssh via ssh keys and remove ssh passwords.

4. Now do whatever you want with the isolated machine.

If you already are using ssh via ssh keys then either one of your employees has been hacked or "the call is coming from inside the house".

I would really suggest, as others have, trying to get the mysterious individual to tell you how they did what they did.


Others have covered possible ways you can tackle the situation; hopefully you will get through this.

Since you have stated that you don't have a dedicated sysadmin, it's better to use a platform like Heroku to host your app.

I am not suggesting that it will make your website/app inherently secure, but at least it removes a lot of pain points.


Consider hiring a Security Engineer for future issues. As for the current issue, are you a target for any known reason? Is there anything of value on the server?

If not, it's highly possible someone in your group who already has access could be involved.

Otherwise, maybe the Security Engineer serendipitously found you.


More likely than not they are reading this thread... their nickname might even be on HN, ha!


Get professional help.


Don't forget the legal obligations you may have to your customers, depending on what kind of data transits through or is stored on your servers. Different states have different requirements, so you'll want professional legal advice.


Whoever sent it has ego issues (common among hackers). Play on that. Act impressed with their skills, and concerned with how you may have exposed the company to attack, and ask for their advice. Email them as the sysadmin, not the CEO.


More and more, I get the feeling this is a result of the giant 'serverless fairy tale: ops skills and competent infrastructure people are no longer needed' illusion that a huge part of the IT market tries to live in...


Proof shows it's not included to html5. But including to something. I have show proof about who owned HTML 5 company and logo. It's William who owns it! The other one!


There is some missing information here - the most important thing is the intentions (and possibly identity) of this person. Did he leave any clues as to what those might be?


I wonder if, had that person just emailed them the vulnerability, it would have been patched within a day and he would have gotten only $1000.


You sure you have no disgruntled employees or any whose individual endpoints may have been compromised (and had keys)?


Try to "hide" your servers. Simple things such as vpn's and changuing ips


Re-install the server

On new re-installed server:

1. Change the SSH service port to a non-default one.

2. Do not allow the root user to connect remotely (change the sshd config).

3. Create a new user that you will use for administration and escalate to root via sudo/su, instead of logging in as root directly.

4. If possible, restrict which IP addresses are allowed to connect via SSH using a firewall.
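A sketch of the corresponding sshd_config directives ('adminuser' and the port number are placeholders; keep an existing session open while testing so you don't lock yourself out):

  # /etc/ssh/sshd_config
  Port 2222
  PermitRootLogin no
  AllowUsers adminuser

  # reload sshd afterwards, e.g. sudo systemctl reload sshd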


I would also add to this: block any password authentication and use SSH keys.


I would worry less about this, and more about where he found the root password and IP. Change other passwords too, like email.


Came across your question moments after facilitating a security incident dry run for a SaaS with a 6 or 7 person core team (and a lot of sensitive data). Since many of the responses are focusing on the technical and reward/ransom aspect of this, I figured I could offer you some parallel thoughts on how to handle this from the business and liability standpoint.

There is quite a bit in this response, so please don't hesitate to ask if you want to talk this through directly. I can offer 30 minutes to provide some connections and make sure you feel equipped to handle it with your current team. No cost, of course, and no nagging to get you on as a paying customer!

1. Insurance - Assuming you have a policy geared toward technology companies, you may have coverage for "cyber" incidents. Look at your policy or call your broker to find out how to initiate coverage.

a) You probably need to tell them sooner rather than later that you are investigating an incident. b) Ask for referrals to the following specialities: tech/privacy law, forensics or security operations, and breach resolution vendors. c) Find out whether the policy requires you to obtain a referral to use paneled providers (most do from my experience). d) Find out any other requirements that might determine your ability for coverage down the line.

2. Legal (assuming American or similar legal system) - Given what I know about your situation, I'd suggest that you retain a lawyer experienced with security and breach response. This doesn't have to break the bank. The key point is to establish privilege and get help from someone who can navigate the statutory/liability landscape (it's a shit show). I can recommend one or two that are sensitive to the needs of small business clients.

3. Forensics - If you have insurance coverage that will apply to this scenario, there is little reason not to contact a forensic or security specialist to validate your cleanup and ensure that the infrastructure is totally buttoned down going forward. Definitely follow the rest of this thread and digest the recommendations you're getting.

4. Customers/ Stakeholders - Your customers and stakeholders will appreciate clear communication, whether during the investigation/response or after the matter. The lawyer I mentioned above will be able to help you think about communication with these parties as well as timing. If the incident turns into something more than it is right now, you'll be thankful for thinking this dimension through carefully.

5. Intruder - Don't react carelessly. You don't know their motives or further plans. Is this a gray hat who just wants you to shore up your systems? I'd be inclined to work with them and possibly even offer a reward, but I wouldn't go there without consulting legal counsel. Or is this more about taunting or warming you up for a ransom request? Then get some folks with experience in your corner from the start. And be aware, your insurance might even cover a ransom request.

6. Law Enforcement - Most of us prefer not to go here, but keep it as an option. Talk to your attorney about it.

7. Documentation - Start keeping detailed notes. What are the steps you're taking to assess the situation? Who are you consulting? How are you measuring the risk? How are you preventing further damage? A breach coach can help you ask the right questions and record the information most needed to protect you and your customers.

8. Notification - If you're in the US or EU, you very likely have breach notification requirements. Unless you are certain the intruder did not take or view any protected data (pretty much anything personal or payment related), I would treat the incident as a breach. In the US, you have 47 different state laws dictating notification requirements plus a few federal. Breach resolution services to the rescue if you have insurance. Breach coaches and lawyers may handle the little cases themselves.

9. Follow up - a) Summarize the process in a short report and keep it in case you discover related damage down the line (or get sued). b) Close the incident out with insurance and provide any documentation they need to process the claim. c) Put a basic, written, practical plan together. d) Build a relationship with a knowledgeable insurance broker (I can recommend a few). e) Review, or update your insurance policy (or buy one). f) If you can afford it, keep a tech attorney on retainer. g) Set up a bug bounty through an established program (and if this intruder is well-intentioned, encourage them to use it in the future)


FYI, if you do want to chat: our new website will include resources, but it hasn't launched yet. Hit me up for now on https://www.linkedin.com/in/clintonjcampbell. Our twitter account, @quirktree, is operational though not launched yet. And our website, quirktree.com, should be ready by next Monday with online contact options.


Could this anonymous person be an employee with access to the server?


Thank him and reinstall the server from scratch.


Don't be a cheap ass and hire a security guy.


They would not go buy all of this in proof.


What account did they log into?


Hide it. Look for ways to "hide" your server; simple things such as using a VPN or changing IPs.


Treat it as the worst-case scenario. Even if you think the guy who got in was a "good guy", someone else might have also visited your box with other intentions.

0) Check your own policies (if you do not have an incident response policy, it's a good time to start one), compliance, and regulation/legislation, specifically around due diligence and notification/disclosure requirements. Consult with a legal adviser, especially about dealing with whoever informed you of the breach; I would not advise you to deal with the notifying party without doing so, and do not provide them with any compensation until you have spoken to a lawyer.

1) Revoke all credentials tied to that system, revoke the credentials of all users and services that have accessed that system, and revoke all secrets that were stored on that system or that the system had access to, like certificates and encryption keys.

2) Do not turn off the system unless you can preserve memory; do a full snapshot/image but keep it running. Keeping the memory intact is important for forensics, and a reboot can erase a lot of evidence.

2.1) Isolate the system from your main network with minimal interruption. If you can prevent the NIC link from going down, do so; if it is at all possible to mirror the port on your switch, do it and enable a pcap capture for at least the next 24-48 hours (see the sketch at the end of this comment).

2.2) Identify any additional logs from other systems (load balancers, routers, firewall etc.) that could potentially have additional information regarding the breach and preserve them.

3) Do a full integrity analysis and inventory on one of the clone images/snapshots and compare it to your build policy/template and identify any discrepancies.

3.1) Check your current build template/configuration against your own policies (if they do not exist it's a good time to start making them) and best practices and identify any gaps.

3.2) If you received a detailed explanation of how the hack was done check if and what in your policies could or should stop it, if there was nothing implement a new control and add it to your build/config template.

3.3) If you have successfully identified how they got in review any other systems that can be accessible via the same vector.

3.4) Attempt to verify if any other systems were or could be compromised during the breach / vulnerability window and based on your risk assessment make a call to do a full review/rebuild of those system too.

3.5) If this was a common vulnerability get a vulnerability scanning tool (e.g OpenVAS) and scan all your systems.

3.6) If this was more of a social-engineering / "I found your SSH creds on GitHub" situation, then policies and awareness training should not be taken lightly.

3.7) If this was some sort of super duper NSA grade 0-day (unlikely ;)) notify the maintainer of the SSH software you are using about the breach.

4) Based on the outcome of step 3, rebuild any machines that were compromised; if the vulnerability cannot be fixed immediately, implement mitigating measures (restrict SSH access, implement 2FA, implement a jump box, perform active logging on all connections to the vulnerable machines).

4.1) If you are required to, or wish to, perform a full forensic analysis of the entire incident.

Depending on the outcome of step 0, you may be required to have a qualified 3rd party perform the incident response and forensic investigation, especially if you do not have in-house "certified" people to do it; some EU countries, like Germany, are especially strict about this. You also might be required to notify your customers regarding the breach; even if you are not, I suggest you do it, otherwise you might see "how I hacked XYZ" on HN in a few weeks and realize they are talking about you.
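For point 2.1, a minimal capture sketch on the box receiving the mirrored traffic (the interface name and output directory are placeholders):

  sudo mkdir -p /var/captures
  # Full-size packets, rotating the capture file every hour
  sudo tcpdump -i eth0 -s 0 -G 3600 -w /var/captures/mirror-%Y%m%d-%H%M%S.pcap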


And ask him how he did it. Reward accordingly.


Meta:

Why is this post flagged, and why is the usual "vouch" option missing?


It looks like the [flagged] label was incorrectly displayed—users haven't flagged it.


Strangest thing I've seen on HN in a while. Anyways, it's gone now.


It's a bug. The post was originally killed by a spam filter. We turned that off and marked it legit, but some traces of spam-filter disapproval remained. Should be fixed now.


Just saw it as well. Gone now, though.


The part they left out is that they most likely have full access to one of your or your employees' laptops. They probably just used your own keys to access the server, which means they probably have access to all your personal info.


If your company got compromised, and your next step is to ask HN for advice, maybe your company should not be in business.


>We are a small company and don't have any security experts, etc.

Find one.


Or contact the person if you can and ask them how they got in; maybe even offer them a financial reward.

Since he contacted you anonymously and is not trying to extort you, he's probably just trying to point the issue out, so there's no point in overreacting.


Sure, getting information on the particular vulnerability and its fix is useful.

However, doing anything less than a clean reinstall of the tainted system and implementing the fix there would be underreacting. Verifying whether that system was backdoored takes ten (if not a hundred) times more effort than nuking it from orbit and configuring a new one.


How much would you trust that person? Enough to potentially risk your business on them?


It's not about trust, it's about information gathering. The goal is twofold:

1. figure out the intentions of the individual

2. quickly find and fix the affected system

To be clear, it doesn't matter whether the information is true or false, because if it's true you can find evidence on the system to confirm it, and if it's false it could still prove useful.

You can nuke it from orbit later; that could take hours or even days depending on how much stuff you have on it. Plus, if the entry point was through the new app you just created, nuking it won't fix the issue.

The moment you put the app on the new server you opened yourself up to get hacked again.

We all know a constantly updated system with nothing happening on it is incredibly hard to hack compared to a system that has a lot going on; the more things you're doing on a server, the larger the attack surface. Plus, it's a small company we're talking about here, so they probably want to keep costs down by doing everything on as few servers as possible.


You are already trusting that person enough to risk your business on them, if that server is still up and running after finding the security hole.


It's not just about that server though, who knows if the other servers were compromised?


This can be a tricky proposition for small companies given the breadth of experience that is needed to adequately address this sort of scenario.

The practical solution for most is to outsource, but it can be challenging to find high-quality service at a reasonable price.

I'm getting ready to launch in this space. With few exceptions, the services I trust are focused on the glut of opportunities with deep pocketed clients. I can count on my fingers (and maybe toes) the number of colleagues and companies who are willing to pass that gold rush up to deal with the needs of the smaller clients in a meaningful way.


Can't stress this enough. Find one. Stat.


Well, to be fair, one has found them. In parallel with doing damage control as mentioned before, I would certainly contact the party and invite them, in a friendly way, to share more information in exchange for a token of good faith (a discount or a gadget). If you feel good about it, offer them a job, with a reward for fixing it.

If you're unsure how much the reward should be, keep in mind how expensive it could get without one -- often a costly chain reaction of disasters.


This isn't the most insightful or actionable comment, but it is the correct solution for both the short term problem and many long term problems.


There's not much insightful advice to be given; chances are this guy could've easily traversed to every box on their network and set up a plethora of backdoors.

Without a "security expert" there really is nothing they can do.


> There's not much insightful advice to be given

Except some semblance of an idea of where to find a security expert. Even if it's a slight remix of "where to find ryanlol", that's plenty more insight and guidance. :)

(It probably seems obvious to you, but most people don't know where to find one.)



