Hacker News
Ask HN: How do you securely self-host a server?
76 points by QuikAccount on March 9, 2022 | hide | past | favorite | 58 comments
Every few weeks or so there is a post on HN pleading with people to consider self hosting their own services. As enticing as that sounds, I'm sure I'm not the only one that has no idea how to secure said services. Spinning up a server is no problem, keeping it secure on the other hand is a feat I have no idea how to accomplish.



I prefer to run Ubuntu machines, and for provisioning a new secure server I built an Ansible playbook I called 'ANU' (as in A New Ubuntu). I'd expand to other distros, but then I'd have to change the name!

https://github.com/MitchellCash/ansible-anu

It is based on the DevSec OS/SSH hardening playbooks, but I lean closer towards ease-of-use over security where I think it makes sense. For example, I disable forced password rotation and I keep the default umask value of '022' instead of the more secure '027'.

When I come across something the upstream playbooks change that "gets in my way", I will disable it if the security trade off makes sense for me. I'm not running highly sensitive systems, so these trade-offs make sense for me, and maybe they will for you as well!

In terms of ongoing security upkeep, I run the usual `apt update && apt dist-upgrade` when I can, but I’ll be keeping my eye on this thread for additional advice.


Although you didn't state your OS, I'm going to assume we're talking about Linux. As others mentioned, reading hardening guides is a first step. Those will tell you how to avoid the most common configuration footguns and reduce attack surface.

Obviously a good firewall (ufw suffices) is a must, as is a reverse proxy in front of your web apps (I prefer nginx, but Caddy is another good one). In your reverse proxy, also set up web application firewall rules to flag suspicious things (for example, anyone requesting a URL with `../` or `/etc/passwd` is clearly up to no good, as is a user agent from a known scraping tool). In particular, I like using nginx's non-standard 444 response code, which closes the connection without sending anything, so I can instantly cut off the worst offenders.
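A sketch of what that can look like in an nginx server block (the backend address and the patterns are illustrative, not a complete WAF):

```nginx
server {
    listen 80;
    server_name app.example.com;

    # Path traversal probes in the raw request line
    if ($request_uri ~* "(\.\./|/etc/passwd)") {
        return 444;   # nginx-specific: close the connection, send nothing
    }

    # Known scanner user agents (illustrative list)
    if ($http_user_agent ~* "(sqlmap|nikto|masscan)") {
        return 444;
    }

    # Everything else goes to the backend app (assumed on port 8080)
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Matching on `$request_uri` rather than in a `location` block matters here, because nginx normalizes the URI (resolving `../` segments) before location matching.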

Then, use fail2ban to blacklist hosts that are up to no good. If you can write a regex for something in a logfile, you can automatically ban almost anything. Be warned that fail2ban is a very powerful footgun: misconfigure it and you can easily ban yourself from your own server. Still, it's probably the best hardening tool, since it greatly reduces the number of tries an attacker gets to exploit something and severely slows them down.

Finally, regular monitoring and patching. I swear by check_mk for monitoring, so I can see every suspicious query that's coming through, and instantly identify most ongoing attacks. Fail2ban takes care of 99% of the work if it's configured correctly, but the worst attacks are the ones that you don't know are happening.

I've been self-hosting most of the webapps I depend on for almost 10 years now, and I can say that self-hosting is extremely fun, but does require a decent time commitment to maintain your infrastructure. However, if you are willing to invest time in automation with some basic shell scripting, you can get this down to less than an hour per week, which is mainly just checking your monitoring console and scheduled jobs.

If this sounds too daunting but you still want to go this route, check out yunohost: https://yunohost.org/#/


Try to use a firewall outside the server if possible. You can easily mess up the firewall configuration on Linux. Or your users can.

For example Docker will ignore the firewall by default when you expose ports.
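For example (the nginx image purely as an illustration): with the plain `-p` form, Docker inserts its own iptables rules ahead of ufw's, so the port is reachable from outside regardless of your ufw policy. Binding the published port to loopback avoids that:

```shell
# Exposed on all interfaces, bypassing ufw's rules:
docker run -d -p 8080:80 nginx

# Bound to loopback only; reach it via a reverse proxy or SSH tunnel:
docker run -d -p 127.0.0.1:8080:80 nginx
```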


It might be slightly easier to use sshguard instead of fail2ban for protecting against ssh attacks.

Using passwordless (key only) login is a given.

As soon as your server is provisioned, log in with the password, set up ssh keys first, and then disable password login.
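That ordering can be as simple as the following (the account and hostname are placeholders):

```shell
# From your workstation, while password login still works:
ssh-copy-id admin@server.example

# Verify key login works in a second terminal BEFORE disabling passwords:
ssh admin@server.example

# Then set "PasswordAuthentication no" in /etc/ssh/sshd_config and reload
# (service name is "ssh" on Ubuntu/Debian, "sshd" on many other distros):
sudo systemctl reload ssh
```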

You can use fail2ban jails for different services (like nginx). You need to decide how strict you need to be.
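A minimal jail.local sketch: the sshd and nginx-http-auth filters ship with fail2ban, and the ignoreip line is the usual guard against locking yourself out (paths and times are illustrative):

```ini
# /etc/fail2ban/jail.local
[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5
ignoreip = 127.0.0.1/8 ::1    # add your own addresses here

[sshd]
enabled = true

[nginx-http-auth]
enabled = true
logpath = /var/log/nginx/error.log
```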

On FreeBSD, using blacklistd might also be a better idea than using fail2ban.

To quote from the internet - "fail2ban and sshguard are both log scrapers. Log scrapers are gross hacks. blacklistd as an integrated solution is what should have happened many years ago."

FreeBSD jails also provide excellent protection. It can be a good idea to run each service in its own jail, e.g. a separate jail for nginx, one for your application server, another for your database servers. This way you can also limit the resources allocated/dedicated to each jail.

Also, while running pf (or whatever firewall you have), you can limit the number of requests (rate limiting) to somewhat protect yourself.
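A pf.conf sketch of that kind of rate limiting ("em0" and the thresholds are placeholders to adapt):

```conf
# /etc/pf.conf
ext_if = "em0"
table <bruteforce> persist
block in quick from <bruteforce>

# Allow web traffic, but ban sources that open too many connections
# (here: more than 50 concurrent, or more than 15 new ones per 5 seconds)
pass in on $ext_if proto tcp to port { 80 443 } keep state \
    (max-src-conn 50, max-src-conn-rate 15/5, \
     overload <bruteforce> flush global)
```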

Using Cloudflare or something else on the front can help against ddos. Also, some providers like OVH and Hetzner have ddos protection built in for free. Some like Vultr have it as a paid service, iirc.


SSHGuard looks cool, wasn't aware that existed. Haven't messed with FreeBSD jails, but I use unprivileged lxc containers on Linux, iirc that's the closest Linux equivalent? Those help me sleep better at night.


lxc was rather new when I started looking into this stuff. Since jails were much older, I just went with FreeBSD. I was also a little biased against Linux because Ubuntu and Fedora had occasionally crashed on my personal computers.

Now I think lxd is supposed to be a better user experience than lxc: same backend, better frontend. Much like ezjail or iocage make jail management on FreeBSD easier than doing it all directly.


> Obviously a good firewall (ufw suffices) is a must

I keep hearing this, but I don't get why. I make sure my services just bind to localhost if they should not be public and I make sure to not run services I don't need. I portscan my server after everything is setup to make sure I didn't miss anything.
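That verification step can be as simple as (assuming `ss` from iproute2 on the server and an external box to scan from):

```shell
# On the server: list TCP/UDP listeners with their bind addresses;
# 127.0.0.1/[::1] entries are local-only, 0.0.0.0/[::] are exposed
ss -tulnp

# From a different machine: scan all TCP ports to confirm
nmap -p- your.server.example
```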

Does a firewall really buy me anything in that case?


It gives you a little more control over what happens when someone hits a closed port (do you want to drop or reject the packet? This affects how different nmap scans report the port as open/closed/filtered). Also, it's a failsafe in case you don't configure something correctly and something is listening on more than just the loopback (localhost) interface you intended. Validating your config with portscanning is great.

Most importantly, you can rate limit and log.

For example, when I have a ufw default deny rule, I get these in my kern.log, and then I can apply a fail2ban rule if someone hits too many closed ports in a certain period (to detect someone portscanning):

kern.log:

    [  494.229628] [UFW BLOCK] IN=tun0 OUT= MAC= SRC=w.x.y.z DST=w.x.y.z LEN=1064 TOS=0x00 PREC=0x00 TTL=59 ID=62111 DF PROTO=TCP SPT=443 DPT=53202 WINDOW=72 RES=0x00 ACK URGP=0 
    [  498.140717] [UFW BLOCK] IN=tun0 OUT= MAC= SRC=w.x.y.z DST=w.x.y.z LEN=135 TOS=0x00 PREC=0x00 TTL=48 ID=8474 DF PROTO=TCP SPT=443 DPT=50851 WINDOW=16 RES=0x00 ACK PSH URGP=0 
    [  500.091643] [UFW BLOCK] IN=tun0 OUT= MAC= SRC=w.x.y.z DST=w.x.y.z LEN=88 TOS=0x00 PREC=0x00 TTL=246 ID=32348 PROTO=TCP SPT=443 DPT=53209 WINDOW=136 RES=0x00 ACK URGP=0 
    [  507.055805] [UFW BLOCK] IN=tun0 OUT= MAC= SRC=w.x.y.z DST=w.x.y.z LEN=135 TOS=0x00 PREC=0x00 TTL=48 ID=8475 DF PROTO=TCP SPT=443 DPT=50851 WINDOW=16 RES=0x00 ACK PSH URGP=0 
    [  524.907273] [UFW BLOCK] IN=tun0 OUT= MAC= SRC=w.x.y.z DST=w.x.y.z LEN=135 TOS=0x00 PREC=0x00 TTL=48 ID=8476 DF PROTO=TCP SPT=443 DPT=50851 WINDOW=16 RES=0x00 ACK PSH URGP=0 
    [  560.619488] [UFW BLOCK] IN=tun0 OUT= MAC= SRC=w.x.y.z DST=w.x.y.z LEN=135 TOS=0x00 PREC=0x00 TTL=48 ID=8477 DF PROTO=TCP SPT=443 DPT=50851 WINDOW=16 RES=0x00 ACK PSH URGP=0 
    [  595.045967] [UFW BLOCK] IN=tun0 OUT= MAC= SRC=w.x.y.z DST=w.x.y.z LEN=79 TOS=0x00 PREC=0x00 TTL=59 ID=14064 DF PROTO=TCP SPT=443 DPT=50131 WINDOW=73 RES=0x00 ACK PSH URGP=0 
    [  595.045985] [UFW BLOCK] IN=tun0 OUT= MAC= SRC=w.x.y.z DST=w.x.y.z LEN=64 TOS=0x00 PREC=0x00 TTL=59 ID=14065 DF PROTO=TCP SPT=443 DPT=50131 WINDOW=73 RES=0x00 ACK PSH URGP=0
/etc/fail2ban/filter.d/portscan.conf

    [Definition]
    # Option: failregex
    # Notes: Looks for attempts on ports not open in your firewall. With ufw,
    # the default deny policy already logs blocked packets to kern.log with
    # the "[UFW BLOCK]" prefix, which the regex below matches. With plain
    # iptables, add your own LOG rule as the last item before you DROP or
    # REJECT, e.g.:
    # -A <chain_name> -j LOG --log-prefix "PORT DENIED: " --log-level 5
    failregex = \[UFW BLOCK\] .* SRC=<HOST>

    ignoreregex =
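A quick way to sanity-check the pattern against a sample log line before deploying the jail (fail2ban also ships `fail2ban-regex` for this; plain grep approximates the match; the address is a placeholder):

```shell
echo '[  494.2] [UFW BLOCK] IN=tun0 OUT= SRC=203.0.113.5 DST=198.51.100.7' \
  | grep -qE '\[UFW BLOCK\] .* SRC=' && echo matched
# prints: matched
```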
Then if you want to rate limit something (to help prevent DOS), you can do an iptables rule like this (not sure what the equivalent ufw config is):

    -A INPUT -m limit --limit 1/sec -j LOG --log-prefix "INPUT:REJECT:"
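For the ufw side, the closest built-in equivalent I know of is `ufw limit`, which by default denies a source that opens six or more connections to the port within 30 seconds:

```shell
# Rate limit new SSH connections (deny after 6 hits in 30 seconds)
ufw limit 22/tcp

# Make sure blocked packets are logged (the [UFW BLOCK] lines above)
ufw logging on
```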


Is there some point to blocking portscans? If nothing is listening on the ports then there is little harm to just letting them scan, right?

What I'm getting at is basically: if I have good rate limiting for expensive APIs (which I need anyway), I configure all my services the right way (which I want to do anyway, since some software like Docker likes to punch through ufw), and I don't care about somebody portscanning a bunch of ports that aren't open anyway, is a firewall still a must in your opinion?


I would say so; anything you can do to prevent an attacker from gaining useful information is a good deterrent (just like the comment that mentioned hiding version numbers). The more deterrents you have, the more you frustrate an attacker and dissuade them from investing time in exploiting you. If you block or limit port scanning, you dramatically increase the time it takes them to figure out what ports are open, and possibly force them to rotate IP addresses as they get banned.

I also don't like wasting CPU or bandwidth, however minuscule, on illegitimate traffic. Unless they are collecting data for a research project or you've hired someone to pentest you, someone port scanning you almost always indicates malicious intent.


If it shows up as blocked in a portscan, the attacker will just try again from a different IP, right? Think of it as a 403 vs a 404 in HTTP: a 404 says "there is actually nothing here", whereas a 403 says "you don't get to know if there is anything here". I'd rather just tell them that my open ports are 22, 80 and 443. Have at it; it's cheaper for both of us to let whoever know immediately, and everyone who wants to know will find out anyway.

Also, on the CPU/bandwidth part, I don't think sending traffic to a non-listened port is any more expensive than sending it to a firewall. Both are stopped at the kernel, no?

Is there any point to blocking traffic to a non-listening port, beyond what you already get by simply not listening on the port?


Changing IPs is more expensive and requires more effort than being able to scan all ports from one IP, so yes they can certainly do that if they are determined, but the more expensive you make it for them to do that (resources, time, energy), the better a position you are in. Attackers can try anything, but it's defeatist to just let them instead of creating an obstacle.

You have a point that showing you're hiding something could pique someone's interest, but showing that you have few defenses and obstacles also makes you more appealing to an attacker, since it makes you more worth their time. I don't think HTTP status codes are a good analogy, since they are more informative than a port being open/closed. If a port scan shows a port as closed, you don't have any information about whether a service is actually running there, just that you can't access it from a given IP address. If a scan shows a port as filtered, you can infer that something is there, but only accepts requests from a different IP. Also, running SSH on port 22 without any rate limiting is pretty much asking for brute forcing and makes you an appealing target; even if attackers have no chance of succeeding, they're still wasting your resources. I like putting SSH on a high-numbered port if possible, so someone has to port scan (with limited attempts) to even find it.

Depending on how aggressive and thorough you get with nmap (using the -T 5 option, OS/service version detection, and scripting that can actually run active exploits versus just reconnaissance), you can absolutely use up a good amount of CPU/bandwidth. This is a frequent problem for us at work when we are getting pentested, it can be very noisy and disruptive.

Though that resource usage comes more from the volume of traffic versus how it's rejected. Reduce the volume of ports they can try with one IP or in a given time frame, and reduce the resources you spend defending.

However, my understanding is that a firewall would help prevent nmap from doing complete TCP handshake simulations, depending on what type of nmap scan is being done and how the firewall is blocking traffic. So yes, each individual request is almost nothing if that's happening in kernel, but that could easily add up if someone's scanning all 65535 ports and doing an invasive scan. And I haven't even talked about scanning for UDP services, which take more energy on both sides since you don't have a TCP handshake to help you infer information easily.

I think it would help for you to use nmap on your own infrastructure from the position of an attacker: https://www.tryhackme.com/room/furthernmap


I’m building a product that tries to make this easy at https://pibox.io - but “secure” is a vague and tall goal post - although we cover things like service updates, firewalls, and abuse monitoring. Planning on a proper HN launch post soon!


That looks like a neat product, but it appears to be a NAS not a web host—or am I missing something?



How do three links to FreeBSD help OP? They didn't even mention their OS.


On the one hand, security guidelines are fairly OS independent, and they apply whether you self-host or not.

On the other hand, I guessed most people would post Linux stuff, so I added FreeBSD content for those who want an alternative or a heterogeneous infrastructure.


Because if you want a secure server, you use something BSD based, not Ubuntu or Debian.


What about CentOS/RHEL or SUSE?


Why is that?


https://www.quora.com/Is-FreeBSD-more-secure-than-Linux

2nd result for "why is free bsd more secure" on Google. You should see #s 4 and 5.


Don’t plug in the ethernet cable ever and encase in 12ft concrete + faraday cage. You can install a window for viewing your files securely


Look at previous hacks: unpatched packages and services, bad passwords, unexpected privileges.

Only allow the service port in, to minimize your attack surface. Automatic security updates. A WAF if you can; Cloudflare has a free tier with WAF. Maybe deploy a SIEM so you can alert on unexpected behavior.


Pretty much anything is vulnerable to a zero day exploit of some form or another. If you have data you wish to publish, or data that you wish to ingest in one direction only, a data diode might be helpful.

In this case, you manage the outside server, hardening it as best you can, and set up monitoring of bandwidth use and the like through a network switch.

The data diode will only pass data in your preferred direction, and can't leak it in the reverse direction.

In the case of inbound data only, you are protected against any data egress.

In the case of output data only, you are protected against ingress of control. You still have to watch for exploits pushing data out through the diode, but you can be sure any hack won't have been sourced from the internet.


I've been at this for a quarter century, give or take...and have never seen the term "data diode" before

Yet I knew what you meant before I finished reading your post


Same



Your security objectives are important if you are hoping to make any meaningful progress.

Are you concerned about DDOS? If yes, then don't self-host or use CF Tunnels.

Are you concerned about someone hacking your site with a buffer overflow? If yes, then make sure you patch frequently.

Are you concerned about someone hacking your site with any variant of a 0-day? If yes, then you need to air-gap and/or not use a computer at all.

Are you concerned about crappy business logic letting a bad session in? If yes, then you are in the wrong rabbit hole. You should be fixing your software until it provides the necessary degree of confidence. This has nothing to do with the server or hosting.

I don't think any of the above are modulated by self-hosting vs AWS hosting.


Most of the people doing this are on a LAN with a commodity firewall (modem/router combo)

I think their security posture mainly stems from being isolated in this way.

I've seen enough questionable suggestions like disabling the host firewall/SELinux that I doubt there are many layers to their security onion.

To be fair, I do rely on the network gateway too. It's pretty much all on my LAN, on a separate VLAN.

Additionally... most of my services are on a 'mesh' style VPN called Nebula. This lets the things I really need to access outside of home work while not being quite as exposed.


It requires lots of knowledge and elbow grease. Your security concerns also greatly depend on what services you plan to make publicly available.

The most important basic groundwork is proper firewall setup and network segmentation. Your personal LAN should not be directly routable from your public services. Also ensure that security updates are applied ASAP. Start learning about hypervisors and proper VM orchestration.

The best way to get started IMO is via a hybrid approach. Use cloud resources where appropriate to supplement your local infrastructure.


- Apply the principle of least privilege everywhere

- Apply security patches regularly

- Set up automated backups of important data

Following these three points puts you ahead of 90% of servers out there.


Install only software that you use, and prefer minimal installations (Debian has good defaults). Then you can install the unattended-upgrades package, and for ssh don't set up a password but use ssh keys. Also enable ufw, then use nmap to scan for open tcp and udp ports. (It's also possible to filter outgoing connections; by default only the package servers need to be accessed.)
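On Debian/Ubuntu, the unattended-upgrades setup mentioned above is roughly:

```shell
apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades   # enables the periodic runs

# Fine-tuning lives in /etc/apt/apt.conf.d/50unattended-upgrades, e.g.
# uncommenting:  Unattended-Upgrade::Automatic-Reboot "true";
```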

If you install services like email or http prefer software with a small footprint and with few CVEs in the past. Since http services are quite common, it's advisable to put a reverse proxy in front. Separate users for separate services is also a good idea.

Never had any issues with that kind of setup. Of course it's possible to add additional hardening like using an ssh jump host or a VPN to access ssh. (Probably advisable if you plan to put private data on it) Also using SELinux is an option, Fedora has it by default.


Does anyone have a link to a good server hardening guide that they can share here? Specifically and especially when running servers locally, or on things like AWS or DigitalOcean.

So many guides online, and the uninitiated can rarely differentiate between a good one and an outdated one. I’ve used a good one in the past but can’t think of it right now.


1) Minimize your attack surface: only service ports allowed, use a load balancer if available, and your security groups can be restricted to your IP address. Better yet, deploy in a private subnet and use Tailscale or a bastion host.

2) Automatic security updates (unattended-upgrades on Debian-based, yum-cron on RHEL-based).

3) Use ssh keys and named accounts. Disable root and default logins (on AWS the default account is ec2-user on Amazon Linux and ubuntu for Ubuntu images).

4) On AWS, Trusted Advisor recommends some good account hardening steps.
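A sketch of the security-group restriction in point 1 with the AWS CLI (the group ID and address are placeholders):

```shell
# Allow SSH ingress only from a single /32 (your own IP)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 198.51.100.7/32
```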


As far as DDOS protection goes, I'd like some tips there. Also, a question - if I have a 1Gbps home connection am I strictly screwed if someone is sending a little more than 1Gbps to me UDP-wise? It's the kind of question that seems simple but I've never been able to make my own simple answer.


Your only option is to block the IP addresses, but if it's a large botnet or a reflected attack that may not be possible.


Blocking the IP at your router won't help if your router's upstream is saturated...


Why are you saturating your own link? You control your upstream unless you've already been breached or there is a vulnerability in your router firmware


I don't think you're using "upstream" the way parent does (and the way everyone else I know uses it)

You only control what you're uploading

You have no control of what is upstream of the router (ie your carrier and beyond), since you're downstream of that


If someone is sending me 1Gbps at layer 3, doesn't that just saturate my link to my ISP (and therefore the open internet) anyway? Regardless of whether I block it with my router?


Are you opposed to using Cloudflare or other DDOS providers? I’m not sure what can be done on a small budget


I meant more for the use case of having a public endpoint. I don't like the idea that my only option is to hide my endpoint as the origin of a CDN in order to prevent it from being saturated with traffic by a DigitalOcean droplet.


Theoretical max upload is always just below the other side's max download.


What do you mean? Plenty of people are on 30Mbps connections and I have more than triple that in upload.


Reading this topic has made me realize I don't even have a very solid image of what a "server" would be in this case. Is it a machine running an OS? A specific program listening on a port?

Does anyone have any recommendations for "Setting up a server 101" that could help shed some light on this?

Thanks!


Yes. A server is a dedicated machine running a service on a specific port.

Choose an OS.

- FreeBSD - Throws you in the deep end, but is well documented for learning. Knowing FreeBSD will also enable you to manage Linux. Guides for BSD tend to be vague, though.

- Linux - "Everything runs on Linux", easy GUI installs, lots of guides ("How do I x on y distro"). Available in many different flavours, dog-food friendly.

Now choose a service(s) you wish to serve.

- Basic Web Server?

Easy Setup - Low Security

- Game Server?

Easy/Advance Setup - Low Security

- DNS Server?

Advance Setup - Medium Security

- Email Server?

Advance Setup - Medium-High Security

Then start with the following beginner steps:

- User Accounts and User Groups

sudo, disabling root login via SSH, et cetera

- Moving SSH to another port other than 22

- Configuring SSH keys and password-less authentication
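Those beginner steps mostly live in one file; a sketch of the relevant sshd_config lines (the port and account name are placeholders):

```conf
# /etc/ssh/sshd_config
Port 49222                   # any unused high port instead of 22
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers deploy            # placeholder: your named account
```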

Now the firewall, pick one.

- Linux has IPTables and UFW

- FreeBSD has PF and IPFW

Once you've chosen your firewall, learn the basics of TCP / UDP

> TCP - "Hello, Hi, This is me, Cool. this is you, This is my data, Thanks for the data, good bye, bye"

> UDP - "Hi, so like here is all my data, bye"

then learn how to do the following rules:

- Block all inbound

- Block ICMP

- Block the inbound SSH port and only allow specific IP addresses

- Opening the port of your chosen service.

WebServer - 80/443

GameServer - 27960 for Quake3 Arena

DNS Server - 53
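With ufw, those rules might look like this (203.0.113.10 standing in for your own address; note that ufw's ICMP handling lives in /etc/ufw/before.rules rather than a simple rule):

```shell
ufw default deny incoming     # block all inbound
ufw default allow outgoing
ufw allow from 203.0.113.10 to any port 22 proto tcp   # SSH from your IP only
ufw allow 80/tcp              # web server
ufw allow 443/tcp
ufw enable
```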

Once you've managed all of that, your server is good to sit on the public internet.

Then learn about backups.


You understand perfectly.

OP does not bother to explain what type of server or service they want to run, so this becomes a more difficult question to provide advice for.

Even if it's just a web server they don't say who the audience is or what technologies they will use to run the website.

e.g. - WordPress would be more difficult to secure than static content.

There are many details left out which leaves room for only generic answers.


Depending on what you're interested in, try this out!

https://discuss.pomerium.com/t/this-little-nas-that-could/80


I appreciate everyone dropping in with comments and advice on how to do this. My basic takeaway from this is that it is absolutely not worth doing for the average person. I might give it a go though just to see if I enjoy it as a hobby.


Follow the common security practices (ssh only login, firewall, etc.), only install trusted programs and don't worry if you get "hacked"; make sure you monitor your server and have backups if it happens.


My approach so far is to just run Tailscale (or some other VPN) and not serve stuff on the public internet.

Your response is typical of what happens when regular people ask this question: they see an insane number of suggestions and best practices and promptly give up. The amount you invest in security should be proportional to the consequences you incur in case of a breach. For someone just starting out: don't put anything too sensitive on the server, do the simplest security steps you know how to do, and get started. You can scale up the security to insanely paranoid nation-state levels if/when the consequence of a breach is bad enough to warrant it.


don't host anything crypto related, set up ssh with public key auth using a strong passphrase, restrict access via VPN / only expose the required ports via firewall, ensure your services don't report version numbers to avoid fingerprinting, and be cognizant of the software running on the box so if/when the next log4j happens, you're able to react accordingly.


All great advice. I would just add that I think it's OK to host crypto things, as long as they are not public facing or are only accessible via a VPN. For example, it's possible to host your own private Bitcoin node and ElectrumX server to privately manage a Bitcoin wallet, and the only connections to it besides your own clients would be fetching Bitcoin blocks from other nodes. But that assumes you don't keep any Bitcoin wallets on the server itself, and that you configure it not to advertise itself on the network.

I would also add that containers are great too: if something does get exploited, they limit the damage a single compromised app can cause.


crypto requires a mindset, both in securing your server and protecting your online identity. i'd avoid it for someone trying to learn the basics.


Ask yourself: Does it need to be in the cloud, always available?

If not, then your problem may be eased via solutions that can give you better peace of mind.


Run the CIS benchmark tool.


You have to start with: What is secure?


I think "What is secure" is a bit too philosophical of a question. Perhaps a more practical approach would be to ask "What is your threat model?" and to frame the questions around the impact of a breach.



