As sad as it makes me, blocking large parts of the world that you don't expect to connect from via a list of CIDR blocks is an incredibly effective way to secure anything and reduce logspam.
I personally use nft blackhole [1], which I can recommend for its ease of use.
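For anyone who wants the idea without pulling in a whole project, the nftables side boils down to a set of CIDR blocks and a drop rule. A rough sketch, assuming you already have an inet "filter" table with an "input" chain (the addresses here are just placeholders):

    # drop everything whose source address falls inside the blocked ranges
    nft add set inet filter geoblock '{ type ipv4_addr; flags interval; }'
    nft add element inet filter geoblock '{ 198.51.100.0/24, 203.0.113.0/24 }'
    nft add rule inet filter input ip saddr @geoblock drop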
1) Many attackers "hide" behind cloud services. So whatcha gonna do when your primary attack vector is AWS EC2 instances? Block AWS CIDR ranges?
2) With IPv4 exhaustion, increasing numbers of people will be using IPv4 allocations across geographic boundaries.
My logs and firewall greatly disagree. Please avoid comments like this on HN that box solutions into a "my way or the highway" framing, when it has been demonstrated to you here that it's not "Absolutely pointless". Security is the layering of multiple solutions and defenses to protect your assets; there is no magic bullet or single solution.
"My logs and firewall are less cluttered" is not at all the correct metric to measure the security of your box.
IP address spoofing is a thing. Blocking CIDR ranges might protect you from low-effort, drive-by botnets that constantly scan the entire internet (which all should be completely mitigated by using certificate based auth anyway), but blocking based on IP address is absolutely not an effective control against a determined hacker.
You must consider your threat model. For your personal instance that you host hobby things on, you probably won't be targeted via IP spoofing. For any type of company, you should not be relying on CIDR blocking as part of your security layers. CIDR blocking is only effective at reducing the clutter of your logs, which is a convenience, not a security control. The real security control is using proper auth methods, which are so easy to do at this point that it's ridiculous for even a hobbyist to not do them.
my understanding is that spoofing only works for sessionless protocols or situations -- eg a single udp packet or a series of packets that do not rely on any kind of response, since the response (like a tcp ack, or a dh handshake) is routed to the spoofed address. this would not apply to ssh. what contexts are you thinking of?
Why are you assuming that a determined attacker doesn't control your L4 stack? MITMs are a threat, your network could be compromised, routers (especially consumer routers) are rife with vulnerabilities. This is the entire reason "zero trust" is pushed.
In any serious security design, "the attacker probably won't do that" would and should be shot down immediately. If your security strategy is hoping that an attacker will be kind enough to not exploit your open vulnerability, you've already failed at threat modeling and at security.
If an attacker can do it, you must assume they will do it. Because they will. That should be the starting point for any threat model.
that's cool man, i'm still going to block the 99.9999% of attackers that don't own my isp. you are conflating "bad idea in extremely exotic scenario" with "counterproductive"; ever heard of defense in depth?
So I know this is all probabilistic and there's probably some benefit, but are there really a meaningful number of attackers who could attack a server which only allows public key authentication and who will be slowed by country-granularity IP blocking? If you're counting blocked attempts in logs, you're counting attempts that were never going to succeed.
(Note: This works only if 1. Your IP blocks are very coarse, and 2. You disallow password authentication)
This isn't about deflecting a targeted attack, it's about reducing the amount of noise in your logs so that it becomes easier to detect a targeted attack.
For the same reason, assuming you must expose SSHD to the world, it makes sense not to expose it on port 22.
If you set up a machine on AWS EC2 and give it a public IP address, you can watch in near-realtime as the "admin/admin" login attempts come in (last time I tried this on a whim it took less than a minute).
> If you're counting blocked attempts in logs, you're counting attempts that were never going to succeed.
By your own logic, if the problematic traffic shares the same geographic origin and you only happen to spot failed attempts, blocking all traffic from the same geographic origin would also block potentially successful attempts.
And I agree with the OP: there's an awful lot of malicious traffic sharing a common geographical origin. If you ever felt curious, just launch a cheap VM and setup a web server to listen to traffic and log any connection attempt. You don't even need to host a site. In no time you'll start to see your VM being hit by all sorts of web scrapers and security vulnerability scanners.
I do, actually. There is zero reason for anyone to use EC2 instances to contact my personal public machines.
> With IPv4 exhaustion increasing numbers of people will be using IPv4 allocations across geographic boundaries.
And there will continue to be tracking of who is using which blocks, so there should be no reason for error rates to creep up too much.
You need to think about the specifics of your situation, not just blindly follow "best practices". For my personal public machines, I do know, with a high degree of specificity, who should be doing what, and from where. I exploit that to place limits on who can talk to them, and it provides a lot of benefit.
If you're running a large public service, you're starting from a very different stance. For my $dayjob, there are good reasons why customers might contact us from a country where we don't yet offer service, so we can't wall them off. But we do occasionally use geoblocking more selectively when we're under attack.
If you use certificate authentication and disable password authentication then you've killed off 100% of unauthorised attempts right off the bat. There really isn't much need to do more than that; it's the world's easiest security fix.
Sure, I don't think we actually disagree too much.
Leaving password auth on is simply negligent.
That said, blocking all countries that you don't expect to talk to is the world's second easiest security fix, and protects other processes you might have running, and other unknown vectors that might be worse, such as heartbleed.
> That said, blocking all countries that you don't expect to talk to
All fun and games until you find yourself travelling in Asia, need to connect back home, and discover you forgot you blocked half the world.
As for "other processes", we're talking about SSH on this thread. If you've got other processes then that falls into on-host/upstream filtering via firewall area of security. Regarding "unknown vectors", as I said, patching, no amount of IP blocking will help you with that.
If you really want to talk about "world's second easiest security fix" for SSH, that would be running a super-hardened bastion host and using SSH ProxyJump.
> All fun and games until you find yourself on travelling in Asia
Leave a cheap droplet running somewhere, whitelist its IP, and SSH through it. I'm in Asia and I do exactly that with my U.S.-based servers. Actually, I've blocked all of the rest of the world as well, except the proxy droplet. It's like a global bastion host. There's no legitimate reason for anyone else on this planet to try and SSH into those boxes.
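Mechanically that's just ProxyJump in ~/.ssh/config; something like this (hostnames are placeholders):

    # ~/.ssh/config
    Host bastion
        HostName droplet.example.com
        User me

    Host homebox
        HostName home.example.com
        User me
        ProxyJump bastion

Then `ssh homebox` transparently hops through the droplet, and the servers only ever see the droplet's whitelisted address.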
I know this is an oft-repeated trope, but I disagree. If you are whitelisting users for ssh and use secure passwords, you're really quite safe. Whereas if you lose access to your device with ssh keys, you're locked out with no hope of getting back in.
In what sense is it "negligent"? I feel like this is just an example of people constantly repeating popular advice without really considering it, like happened with bad password expiration policies. Like, I get that an ssh key probably has more entropy, but there is such a thing as good enough. My ssh password for my server is mixed case with numbers, and 20+ characters. Good luck cracking it.
I’ve seen a compromised machine whose sshd had been replaced with a trojan that saved all passwords entered into it. Some investigation revealed that sshd modification to be part of a standard script kiddie toolkit.
Are you 100% sure that every machine you log into remotely with your very strong password isn’t logging that password?
Perhaps I'm missing something? I don't understand the issue. If the machine you're logging into (running sshd) is already compromised, it's... already compromised. I don't recycle passwords, so what is the risk?
And if the device on which you're running your ssh client is already compromised, it doesn't matter whether you use a key or password; it's the same thing.
That’s good. I think you’re probably in the minority though.
There are other ways unique passwords can be compromised. I see passwords accidentally entered into IRC windows about once a month. And even if you have perfect discipline at using unique passwords, that’s not something you can enforce on anyone else logging into your machine.
Maybe someone will chime in with strategies to run ssh from a wrapper that loads a unique password from a password manager with no risk of reuse or entry into the wrong window, or something. But at that point, your complaint about being locked out without the necessary files would apply—might as well use a key, which is simpler and provides strong security with no rigamarole.
> And if the device on which you're running your ssh client is already compromised, it doesn't matter whether you use a key or password, its the same thing.
Whoa there sunshine.
Put your SSH keys on a USB HSM (Yubikey or Nitrokey) and nobody is ever going to be able to extract the private key.
Added bonus, put it on a USB HSM with touch auth (e.g. Yubikey) and nobody will ever be able to use the key without you knowing it (because you have to physically touch it).
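With a reasonably recent OpenSSH (8.2 or newer) and a FIDO2-capable token, the key-on-token setup can be a one-liner. A sketch (the file name is arbitrary, and this is the FIDO route rather than PIV/GPG):

    # The private half lives on the token; touch (user presence) is required for
    # every signature by default. "-O resident" keeps the key retrievable on other
    # machines via "ssh-keygen -K".
    ssh-keygen -t ed25519-sk -O resident -f ~/.ssh/id_ed25519_sk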
>Put your SSH keys on a USB HSM (Yubikey or Nitrokey) and nobody is ever going to be able to extract the private key.
Except you. To run through a compromised machine... Perhaps I don't quite understand how it works, but I don't see how this setup negates that issue. Once you plug it into the compromised machine and allow access to it with whatever touch-authentication or w/e, I can't imagine you could keep it secret from the attacker on the compromised machine. But maybe it's encrypting the key on the device?
> Whereas if you lose access to your device with ssh keys, you're locked out with no hope of getting back in.
And if you lose access to your password manager, you're equally locked out. If you're not using a password manager, you're either 1. Dealing with a tiny number of servers (possible and legitimate), 2. Reusing passwords, or 3. Using insecure passwords. ... Okay, fine, 4. Or a world class memory/mnemonic device expert. Just back up your keys.
> you're locked out with no hope of getting back in.
Fortunately, most VPS and dedicated server hosts have a side channel that allows you to regain access when needed. It might be an automated dashboard feature to reset the root password, or you could open a support ticket. With colo, you can actually drive to the DC and reboot into single-user mode. In any case, you won't be locked out permanently. :)
Attackers, of course, can also social-engineer those side channels to gain access if they really tried. Much easier than cracking long passwords or 2048+ bit private keys.
> Fortunately, most VPS and dedicated server hosts have a side channel that allows you to regain access when needed.
Fortunately, but of course it means you now need to consider this side channel as well. Maybe you have strong ssh keys all across, but your cloud service has a web admin UI that can bypass them and someone has a 8 character password on it.
Yeah, the hosting company is usually the weakest link. I use 2FA on any web admin UI that supports it, but who knows how well it will hold up against a determined social engineering attack on the CS department?
> I know this is an oft-repeated trope, but I disagree.
Agreed. Sometimes in these discussions it is forgotten that password and keys are both instances of a shared secret N bits long.
Now, yes, passwords tend to be shorter and have less entropy per byte if a human generated them, and keys don't have these limitations. So in general it is nearly always wise to remove access via passwords, certainly wherever general users might be creating those passwords, since it is guaranteed some will be weak.
But any threat modeling exercise needs to consider availability as well. Using the STRIDE model, the D is for Denial of service. One case of that is not being able to access something important.
For my infrastructure there is (only) one ssh entry point which can be accessed via password. Limited only to very few select userids and the passwords have >=128 bits of entropy. Nobody will be brute-forcing those in the lifetime of the universe. It's a bit of a pain to memorize them, but it is possible. It has saved me a few times when I'm traveling and have access to nothing other than myself and my memory and need to get in.
On the downside, definitely need to be careful about operational security. If you are traveling, where are you entering this password? Can it be captured? Be wise. But there is a use case.
Private keys aren’t shared though. You never have to worry about leaking a secret when you authenticate with a key because the private key never leaves your machine.
When this topic (always) comes up, somebody points out that moving the ports isn't security...but it kinda is, because if you suddenly see an uptick in logging, you know someone cares enough to FIND your port, and then POINT SOMETHING AT the port, and in the meantime it reduces heat and power and disk wear.
Moving the port reduces security. Port 22 is a privileged port. Standard users can’t listen on port 22. If you move the port to 2222 or wherever, then if an attacker with local access can get sshd to crash, they can run their own sshd instead. If you left the port as the default, they wouldn’t be able to do that without chaining it with a privilege escalation attack. But because you changed the port, you disabled this security feature and it could become a privilege escalation attack.
A firewall config can block listening? What would that firewall config be? The firewall can block packets by owner uid, but I am not sure who the owner is in the legitimate sshd case: root, or the logged-in user?
Yes, but that port needs to remain "open" for the legitimate sshd traffic. Can you see a difference in ownership as the firewall sees it between sshd and some user daemon? Sshd drops root partially when login succeeds.
Sure, the listening socket remains owned by root. But the connected one? If you limit packets from/to e.g. port 2222 to uid 0, will legitimate ssh traffic still work? I'm not saying it won't; I'm genuinely unsure. Haven't tried, and today is a holiday. Maybe tomorrow :)
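If anyone does get around to testing it, nftables can at least count outbound packets by socket owner. A rough sketch (the table and chain names are made up, and whether post-privsep session traffic still shows up as uid 0 is exactly the open question):

    # count outbound packets with source port 2222 that are NOT owned by root
    nft add table inet porttest
    nft add chain inet porttest out '{ type filter hook output priority 0; }'
    nft add rule inet porttest out tcp sport 2222 meta skuid != 0 counter

After a few sessions, `nft list table inet porttest` shows whether anything non-root ever sent from that port.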
In practice, people won’t do that. Case in point: the article doesn’t mention this mitigation at all. It introduces an additional attack vector and tells you it’s safer.
I am not sure it reduces security. You have a valid point: it adds an attack vector. However, "only" if you already have the attacker in your system.
Having all the log spam from failed attempts on 22 might make the admin negligent about carefully following their logs at all. Having it pretty silent on a higher port usually makes you notice occasional scanning. At least I have noticed it, and it made me tighten the firewall a bit.
> and in the meantime it reduces heat and power and disk wear.
If drive-by SSH attempts - even a large number of them - are enough to have a noticeable impact on heat/power/wear, then you should probably consider, you know, not putting hardware from 1997 on the Internet. Really doesn't take that much energy on hardware built during the current century to reject an authentication attempt and log it.
When will people realise that removing login attempts is meaningless security theatre? It's like when some CIO or politician says "we've been attacked 29 million times today".
This is why companies pay for https://www.greynoise.io/ so they don't need to worry about meaningless stuff.
It's not about stopping brainless botnets from actually logging in with root:toor 9 million times, it's about removing clutter from your logs so you can more easily tell when something actually dangerous is happening.
This is the wrong number to look at. The relevant number is the login attempts that would have otherwise been successful. And if you’ve disabled passwords, that will be 0.
If failed login attempts go from 0 to 1000 in a single day, that’s very interesting and likely means you are being specifically targeted. You’d notice that if your ssh port is, say, 7731.
With exactly the same targeting, on port 22, you will see a rise from 100,000 to 101,000, which you will likely not notice, despite it being just as dangerous.
Changing the port does not make a targeted attack any more or less likely, or any more or less successful. But it does make it much more visible - and that’s useful for security as well.
A fault in your reasoning is that it assumes you know exactly what the attacker is doing - a credential attack; indeed, this is the most common. However, maybe they are trying to exploit a zero-day timing attack requiring multiple attempts? Or some reconnaissance that lets them figure out valid usernames?
The different port won’t stop these of course, but will make the attempts stand out - which may allow you to stop them if noticed in time, or at least understand them in retrospect.
Yep. It has been a long time since I ever needed to provide a wide-open SSH service. By far the normal case is to allow login from one or a handful of IP addresses, or maybe a /24. Do this and your SSH log noise drops to zero.
It’s amusing that the people who howl loudly that blocking foreign IP ranges is an absolute failure have zero problems with blocking all IPs but a certain few.
If public key authentication is used with secret key in a hardware key/TPM/secure enclave, most other suggestions made don’t help further.
Fail2ban is certainly not needed (unless there is a chance that some users may use very weak passwords, which password policy shouldn't permit, or you prefer cleaner logs).
Firewalls, public key authentication (verify host keys, also rotate), hardware keys, using SSH over Wireguard, and a secure bastion host provide real security. Preventing SSH agent and X11 forwarding is good too.
A bit late, but I feel it's important to clarify that it exposes something like an "authentication capability", not the actual secrets, and it's temporally bounded.
The article says to use two-factor-auth (obviously a good idea) but says nothing about HOW you add 2FA to SSH. Does anybody have pointers? I'd love to add 2FA to my bastion hosts, but don't want to put a ton of effort into doing so.
> a password-protected SSH key is also two factors
Not really. By default SSH keys are kept in a private subdirectory of $HOME. An attacker who has access to it is very likely able to modify the user’s $PATH, trojan the passphrase prompt, and so on. Thus in many real‐world situations it collapses to one factor.
Contrast that to an SSH key tied to a WebAuthn token with "ssh-keygen -t ed25519-sk": even a fully trojaned machine would not be able to freely initiate sessions with the compromised key.
I set up something like this [1] in the past. It relies on pam letting in the user if and only if that user provides the generated number. Do note that messing up pam is easy and may lock you out, so have a backup/snapshot, rescue shell or an open session when you are configuring it.
It's possible to configure sshd to require both password and key based authentication. That is, you have to have access to the private key and know the password.
You can set this up via the RequiredAuthentications2 setting if you're just using protocol v2.
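On stock modern OpenSSH the equivalent, as far as I know, is AuthenticationMethods:

    # /etc/ssh/sshd_config -- require BOTH a valid key and the account password
    PubkeyAuthentication yes
    PasswordAuthentication yes
    AuthenticationMethods publickey,password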
It depends what you mean by 2FA. Older definitions allow password protected keyfiles. If you mean active 2FA, you need a PAM plugin and usually a provider, but you can host your own. Look at e.g. Yubikey.
I would take password-protected keyfiles - I would love it in fact. But AFAIK there's no way for the server to know if the keyfile was password protected or not. Is there? It seems like it wouldn't be reliable - you kinda have to trust the client to tell you the truth about whether or not it decrypted the keyfile before using it.
Why does that matter? There's no way to ensure your employees aren't writing down their passwords. If you don't trust someone to follow procedure then don't employ them.
Security always exists in layers. Policy is a layer - e.g. asking people to do something. Another layer is to have a system to remind/enforce that policy. Writing down passwords is (generally) an obviously stupid thing to do, perhaps worthy of termination. But forgetting to put a passphrase on one SSH key when you regularly use a dozen of them for different purposes - this is absolutely not an offense worthy of termination.
If that's what you need it's possible to generate and issue keys from a centralized host just like with TLS. You can even set up login with SSH/TLS certs, which can be password protected.
What happened to single packet authentication? As someone who has casually run hosts, I've been disabling password and setting up keys for most of the 10+ years. Always been curious about SPA though, it seems like a decent way to protect a service, better than firewalling IP ranges and changing ports, no?
Do this experiment: Start a vm on any cloud with an open port 22, and watch the logs of sshd service. You will be amazed at the number of requests with bad credentials that will hit your machine within minutes.
I watched logs for different ports, and ssh wins first place.
There _is_ harm. At one of the places where I worked previously, we used a few dedicated servers in different cities, and periodically synchronized data from the central one to all others using rsync-over-ssh.
Sometimes we got a warning from rsync that the connection was unexpectedly closed. We traced this warning to SSH credential bruteforcers (yes, completely futile) that exhausted MaxStartups. So, we installed fail2ban.
I've run into this myself. I couldn't ssh into my server until I disconnected the router (home server, and I was in the LAN). It turned out to be an extreme case of ssh bruteforce attempts that maxed out the connection count or MaxStartups or something like that. I don't remember exactly which resource was the bottleneck.
Not mikesabbagh, but depending on your sshd_config's MaxStartups you can easily be locked out of SSH access if the probers have too many active unauthenticated sessions holding up your own from connecting.
Having a backup sshd behind something like Wireguard is an inexpensive insurance against this kind of DoS.
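For reference, the knobs involved live in sshd_config and look like this (the first value is just the upstream default, shown for clarity; tune to taste):

    # start:rate:full -- refuse 30% of new unauthenticated connections once 10
    # are pending, and refuse all of them once 100 are pending
    MaxStartups 10:30:100
    # drop unauthenticated connections faster than the default 120 seconds
    LoginGraceTime 30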
I usually set up a bastion host that has Tailscale installed on it, with my private key stored on a yubikey.
That way you need to be on the Tailscale network, and have my Yubikey/PIN - makes it nice and easy for me to get on from pretty much anywhere if I need to.
Around the time it was released I tried to interest people in Kuhn's https://www.cl.cam.ac.uk/~mgk25/otpw.html for those travelling and wanting SSH access to the site from potentially-dodgy systems. Still, without your own device you may not even be able to use hardware keys. If the client is untrustworthy, you might still worry about the channel being open from it to the other end.
The article fails to recommend turning off "KbdInteractiveAuthentication", which used to be called "ChallengeResponseAuthentication", which is another password protocol.
That's because most two-factor authentication methods rely on keyboard-interactive auth to prompt the user. After using a different auth method (such as public-key), the SSH daemon then presents the need for additional authentication, with keyboard-interactive as the available option.
This works by specifying `keyboard-interactive:pam` as the authentication method, and modifying SSH's PAM stack to call whatever PAM module handles the second factor. Notably, any PAM module which asks for a password (for example, "pam_unix") is removed from SSH's PAM stack.
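Concretely, the sshd side ends up looking something like this; pam_google_authenticator is just one example of a second-factor module, and the exact password line you remove from the PAM stack differs per distro:

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    KbdInteractiveAuthentication yes
    UsePAM yes
    AuthenticationMethods publickey,keyboard-interactive:pam

    # /etc/pam.d/sshd -- drop the password-asking modules
    # (e.g. comment out "@include common-auth" on Debian/Ubuntu), then:
    auth required pam_google_authenticator.so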
If you can, you should do this via FIDO Security Keys instead. You do need a modern SSH (client and server) and of course money to buy keys for whoever is authenticated, but for employee systems in particular that's a very small expenditure and is re-usable for other problems (e.g. web site authentication with WebAuthn).
With Security Keys you can get a physical device you've decided you trust to authenticate that its authorised user is present remotely.
Although the keys aren't very bright, they do understand a handful of bitflags they're signing, and two of those bitflags are "User Present" and "User Verified", by requiring "User Verified" the physical device you trust must have verified the authorised user (e.g. local PIN, fingerprint sensor) before signing the message.
This approach is more robust, because it's all happening in the SSH public key authentication layer, not in ad-hoc PAM code, and it's simpler because there is no "second factor" data living on your SSH servers, the second factor is a problem for the authenticator only, yet it also re-uses an authenticator your employees can use to e.g. authenticate to your local gitlab install, or your Google docs, or even to decrypt their laptops on startup.
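If I remember right, OpenSSH 8.4 and later can also enforce those flags on the server side, so a signature made without user verification is simply rejected:

    # /etc/ssh/sshd_config -- refuse sk-key signatures that lack the UV flag
    PubkeyAuthOptions verify-required

    # client side: mint the key so the token insists on PIN/biometric per signature
    ssh-keygen -t ed25519-sk -O verify-required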
I wrote a simple bash script which fetches my current IPv4 address, and then uses the aws cli to add that IP address to my whitelist for the SSH port on all my instances.
I have a cron job which autoclears all the whitelisted ip addresses at the end of the day.
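The shape of it is roughly this; a sketch rather than the actual script (the security-group ID is a placeholder, and the nightly cleanup calls revoke-security-group-ingress with the same arguments):

    #!/usr/bin/env bash
    set -euo pipefail

    SG_ID="sg-0123456789abcdef0"                     # placeholder security group
    MY_IP="$(curl -s https://checkip.amazonaws.com)"

    # open the SSH port to my current address only
    aws ec2 authorize-security-group-ingress \
      --group-id "$SG_ID" \
      --protocol tcp --port 22 \
      --cidr "${MY_IP}/32"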
If you're a team, you can always make a similar script and share it with everyone. Since the aws cli is configured with your team members' IAM access, you can be assured that they can only whitelist themselves on instances they have access to via IAM.
If you don't use AWS, just expose an API on your server, protect the endpoint with an API key, and use that endpoint to send the whitelisted IP to update iptables (or whatever firewall you're using).
If all of this sounds really complicated to you, you can always just set up WireGuard on one of your machines, have all your team members connect to that VPN, and only whitelist the IP address of that machine across all your instances. That way only people who can authenticate with your VPN can even reach your SSH ports.
SSH is blocked by the firewall for all connections except those coming in from the VPN's IP address. The VPN server is mine and lives on a separate server.
This has proven to be a working solution for years. Any monstrous "security" constructions that keep it open or partially open will backfire once an attack vector is discovered.
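The firewall side of that is two rules; an nftables sketch (assumes an existing inet filter table with an input chain, and 203.0.113.10 stands in for the VPN server's address):

    # accept SSH only from the VPN endpoint, drop everything else on that port
    nft add rule inet filter input ip saddr 203.0.113.10 tcp dport 22 accept
    nft add rule inet filter input tcp dport 22 drop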
At work I have an access tool for writing automation on groups of servers. Basically orchestration without a server.
It used to be SSH-only, but the framework is built around simply delivering CLI commands and enabling file transfer.
So I abstracted the command request/response and now I can do it over AWS-SSM, or docker run, kubectl, salt daemon, teleport, or even AWS-SSM to a "bastion" and then ssh from there.
AWS-SSM is basically a polling mechanism, you can easily roll one of your own.
What I don't like is two-factor authentication that requires manual steps. Then you can't automate anything.
Another option is not actually expose SSH at all and proxy shells through a web server via WebSockets, fronted by xterm.js or hterm.js. There are some limitations here (like ctrl-W will get captured by the browser rather than the shell) but it is relatively easy to implement and fits a lot of use cases without the nightmare of fitting Linux PAM to your organization's evolving IAM needs.
Won't work for everyone, but definitely something to consider if you are offering "shell-as-a-service" internally or externally.
It's not about trusting the security of SSH vs TLS, but rather the ergonomics of deploying either technology. All the Big Cloud providers offer both, last time I checked.
Point is, you can offer a secure & functional remote shell without touching Linux PAM or the Linux authx stack beyond `useradd` and a locked-down `sshd_config`. Orgs that already offer web services over HTTPS may find this route desirable.
If you are exposing SSH to the world or your customers, you have to do a lot more than "touch" this stuff to achieve reasonable security. That's the point.
I have done ssh-over-websocket before (so that is TLS + SSH). I did it to get around a restrictive work proxy. Don't tell IT. I even patched socat to support websocket so it was easy to use with ssh's ProxyCommand. I plan to use this scheme instead of having a bastion server to access containers.
I think that if you follow the recommendation to disable password-based authentication, then fail2ban is downgraded from a near-requirement to a defense-in-depth tactic. It's not nearly as important to restrict the number of retries if asymmetric key-based authentication is used, because there is a much larger keyspace to search through than if passwords were used - assuming that the cryptography works as intended.
As others noted, probably because once you disable password authentication you're cutting out the ability for most bruteforce attempts to work.
However, I still think it's valuable for the following reasons:
1) It can slow stupid attackers down, e.g.: those that don't abort a bruteforce attempt when they see password authentication is disabled. Judging by my logs there's lots of those still out there.
2) It keeps the SSH logs less cluttered.
3) You can use it to build a set of hosts to consider blocking permanently.
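For anyone who keeps it around for those reasons, a minimal jail is only a few lines (values are illustrative, not recommendations):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    maxretry = 5
    findtime = 10m
    bantime  = 1h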
I'm a fan of pam_shield, because it's tied right to the auth part versus tailing log files. And you can trigger whatever action you want. I believe the default is just null routing.
1) The very first configuration change anybody should be making to an SSH server should be disabling password-based authentication. Once you've done that you've rendered fail2ban obsolete, because the only real way in for attackers from that point in is via software security vulnerabilities, and fail2ban can't help you with that, that's your job to keep yourself patched up.
2) In an IPv6 world, fail2ban is pointless. The ranges are so vast.
I've tried it, and the number of failed logins is still significant, and it can only have gotten worse now, given the ease of scanning the entire IPv4 range.
If you only have {2fa,key,certificate} auth the number of alerts you should have from SSHD itself is (almost) zero, failed logins are (almost) _all_ _noise_. Higher level systems that monitor origin/destination/heuristics of successful logins are where its at.
I like passwords because I can remember them, so I don't have to put my key on someone else's computer (cloud). So on my server I have one user (non-root) with a long password, so that if I don't have access to my keys I can still log in.
This advice is not very good. Probably the best thing in this list is using the “AllowUsers” directive (which makes limiting root access a moot point) and using a good, strong key. There’s not much benefit to using a cert over a password-protected ssh private key.
They really push SSH certificate auth over key auth. Each has its tradeoffs. If you're going to go through implementing a PKI for SSH, I'd also look into something like Kerberos.
I switched to SSH certificates on all my personal machines nearly four years ago. Compared to plain keys, there are three main differences I’ve noticed in practice.
First, I started giving certificates expiration dates, so a compromised key would only be valid for a short time even if I were unable to revoke it manually right away. It provides some confidence that my keys weren’t exfiltrated once and subsequently used behind my back for years and years.
Second, when generating a new client keypair, I only have to copy the public key to my certificate signing machine (one copy) rather than every host I plan to log into (many copies). This more than makes up for the minor certificate configuration necessary on new clients.
Third, host authenticity warnings are now a thing of the past. Since I switched to certificates, they only appear when something’s actually misconfigured, never simply because I connected to a new host. As a result, I’ve lost the habit of blindly accepting the fingerprint (and in fact I’ve now turned on strict host key checking in the config file so accepting it isn’t possible). Of course, you don’t need certificates for that… if you’re diligent at checking fingerprints. I tried to be, but sometimes I fell short. Not anymore.
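For anyone curious what the mechanics look like, it's all ssh-keygen; a sketch with placeholder names and paths:

    # sign a user key, valid for 4 weeks
    ssh-keygen -s user_ca -I alice@laptop -n alice -V +4w ~/.ssh/id_ed25519.pub

    # sign a host key, and teach clients to trust the CA instead of per-host keys
    ssh-keygen -s host_ca -h -I web1 -n web1.example.com /etc/ssh/ssh_host_ed25519_key.pub
    # in ~/.ssh/known_hosts:
    #   @cert-authority *.example.com ssh-ed25519 AAAA...hostCApubkey

    # on servers, trust user certificates signed by the user CA
    # in /etc/ssh/sshd_config:
    #   TrustedUserCAKeys /etc/ssh/user_ca.pub
    #   HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub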
Why? Putting SSH behind a WireGuard VPN provides stronger security guarantees than port knocking and is built into the OS on both my server and my clients.
[1] https://github.com/tomasz-c/nft-blackhole