OpenSSH introduces options to penalize undesirable behavior (undeadly.org)
401 points by zdw 7 months ago | 287 comments



Having written an SSH server that is used in a few larger places, I find the prospect of enabling these features on a per-address basis by default in the future troubling. First, with IPv4 this will have the potential to increasingly penalize innocent bystanders as CGNs are deployed. Worst case, this will give bad actors the option to lock the original owner out of their own server if they have a botnet host in the same network. With IPv6 on the other hand, it is trivially easy to get a new IP, so the protection method described here will be completely ineffective.

From my experiments with several honeypots over a long period of time, most of these attacks are dumb dictionary attacks. Unless you are using default everything (user, port, password), these attacks don't represent a significant threat, and more targeted attacks won't be caught by this. (Please use SSH keys.)

I have seen experienced sysadmins create the test user with the password of "test" on a live server on port 22 because they were having an "autopilot moment". It got hacked within 20 minutes of going online, and these mechanisms wouldn't have saved it; the attacker got in on the second or third try.

If you want to have a read about unsolved problems around SSH that should be addressed, Tatu Ylonen (the inventor of SSH) wrote a paper about them in 2019: https://helda.helsinki.fi/server/api/core/bitstreams/471f0ff...


> With IPv6 on the other hand, it is trivially easy to get a new IP

OpenSSH already seems to take that into account by allowing you to penalize not just a single IP, but also an entire subnet. Enable that to penalize an entire /64 for IPv6, and you're in pretty much the same scenario as "single IPv4 address".

I think there's some limited value in it. It could be a neat alternative to allowlisting your own IP, one which doesn't completely block you from accessing the server from other locations. Block larger subnets at once if you don't care about access from residential connections, and it would act as a very basic filter to make annoying attacks stop. It wouldn't provide any real security, but at least you're not spending any CPU cycles on them.

On the other hand, I can definitely see CGNAT resulting in accidental or intentional lockouts for the real owner. Enabling it by default on all installations probably isn't the best choice.
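For anyone wanting to poke at it, a minimal sshd_config sketch along these lines (option names are taken from the announcement and the existing sshd_config; the exact sub-option syntax may differ between OpenSSH versions, so treat this as illustrative only):

  # turn on the new penalty behaviour (illustrative values, not the defaults)
  PerSourcePenalties authfail:5s noauth:1s max:10m

  # group per-source tracking by /32 for IPv4 and by /64 for IPv6
  PerSourceNetBlockSize 32:64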


IPv6 has the potential to be even worse. You could be knocking an entire provider offline. At any rate, this behavior should not become default.


FYI it's pretty common to get a /48 or a /56 from a data center, or /60 from Comcast.


I can never remember whether /x means "the last x bits are 0" or "the first x bits are 1"

People should write 80/48 or 48/80 to be clear


It's not about how many bits are 1 - it's about how many bits are important. And the first bits are always most important. So it's the first x bits.

If you have a /48 then 48 bits are used to determine the address is yours. Any address which matches in the first 48 bits is yours. If you have a /64, any address which matches in the first 64 bits is yours.


It's about how many bits are 1, in the subnet mask.


The number of bits that are important is the number of 1 bits in the "which bits are important" mask, yes. I thought you couldn't remember how that mask worked.


/48 is a netmask of ffff:ffff:ffff:0:0:0:0:0. `sipcalc` can help with this.

  $ sipcalc ::/48
  -[ipv6 : ::/48] - 0
  
  [IPV6 INFO]
  Expanded Address - 0000:0000:0000:0000:0000:0000:0000:0000
  Compressed address - ::
  Subnet prefix (masked) - 0:0:0:0:0:0:0:0/48
  Address ID (masked) - 0:0:0:0:0:0:0:0/48
  Prefix address  - ffff:ffff:ffff:0:0:0:0:0
  Prefix length  - 48
  Address type  - Reserved
  Comment   - Unspecified
  Network range  - 0000:0000:0000:0000:0000:0000:0000:0000 -
       0000:0000:0000:ffff:ffff:ffff:ffff:ffff

I remember how this works because of the IPv4 examples that I have baked into my head, e.g. 10.0.0.0/8 or 192.168.1.0/24. Clearly the first 24 bits must be 1 for that last one to make any sense.

I recently found a case where an "inverted" netmask makes sense - when you want to allow access through a firewall to a given IPv6 host (with auto-config address) regardless of the network that your provider has assigned.


> I can never remember whether /x means "the last x bits are 0" or "the first x bits are 1"

> People should write 80/48 or 48/80 to be clear

The clarity is implied in your preferred example.

- "80/" would mean "80 bits before"

- "/48" would mean "48 bits after"


... and this is the opposite of the other 2 responses


/x is almost always the number of network bits (so the leading bits). There are some Cisco IOS commands that are the opposite, but those are by far the minority.

99/100 it means the first bits.


Maybe the only equivalent is to penalize a /32, since there are roughly as many of those as there are ipv4 addresses.


That may be true mathematically, but there are no guarantees that a small provider won't end up having only a single /64, which would likely be the default unit of range-based blocking. Yes, it "shouldn't" happen.


You cannot reasonably build an ISP network with a single /64. RIPE assigns /32s to LIRs and LIRs are supposed to assign /48s downstream (which is somewhat wasteful for most kinds of mass-market customers, so you get things like /56s and /60s).


As I said, "should". In some places there will be enough people in the chain that won't be bothered to go to the LIR directly. Think small rural ISPs in small countries.


What if it uses NAT v6 :D


i cannot tell if facetious or business genius.


Well seriously, I remember AT&T cellular giving me an ipv6 behind a cgnat (and also an ipv4). Don't quote me on that though.


That’s what Azure does. They also only allow a maximum of 16(!) IPv6 addresses per Host because of that.


Right. It's analogous to how blocking an ipv4 is unfair to smaller providers using cgnat. But if someone wants to connect to your server, you might want them to have skin in the game.


The provider doesn't care, the owner of the server who needs to log in from their home internet at 2AM in an emergency cares. Bad actors have access to botnets, the server admin doesn't.


Unfortunately the only answer is "pay to play." If you're a server admin needing emergency access, you or your employer should pay for an ISP that isn't using cgnat (and has reliable connectivity). Same as how you probably have a real phone sim instead of a cheap voip number that's banned in tons of places.

Or better yet, a corp VPN with good security practices so you don't need this fail2ban-type setup. It's also weird to connect from home using password-based SSH in the first place.


> you or your employer should pay for an ISP that isn't using cgnat

That may not be an option at all, especially with working from home or while traveling.

For example, at my home all the ISPs I have available use CGNAT.


> That may not be an option at all, especially with working from home or while traveling.

Your work doesn't provide a VPN?

> For example, at my home all the ISPs I have available use CGNAT.

Doubtful - you probably just need to pay for a business line. Sometimes you can also just ask nicely for a non-NATed IP, but I imagine this will get rarer as IP prices increase.


The better answer is to just ignore dull password guessing attempts which will never get in because you're using strong passwords or public key authentication (right?).

Sometimes it's not a matter of price. If you're traveling your only option for a network connection could be whatever dreck the hotel deigns to provide.


Even with strong passwords, maybe you don't want someone attempting to authenticate so quickly. Could be DoS or trying to exploit sshd. If you're traveling, cellular and VPN are both options. VPN could have a similar auth dilemma, but there's defense in depth.

Also it's unlikely that your hotel's IP address is spamming the particular SSH server you need to connect to.


> Even with strong passwords, maybe you don't want someone attempting to authenticate so quickly. Could be DoS or trying to exploit sshd.

DoS in this context is generally pretty boring. Your CPU would end up at 100% and the service would be slower to respond, but it still would. Also, responding to a DoS attempt by blocking access is a DoS vector for anyone who can share or spoof your IP address, so that seems like a bad idea.

If someone is trying to exploit sshd, they'll typically do it on the first attempt and this does nothing.

> Also it's unlikely that your hotel's IP address is spamming the particular SSH server you need to connect to.

It is when the hotel is using the cheapest available ISP with CGNAT.


Good point on the DoS. Exploit on first attempt, maybe, I wouldn't count on that. Can't say how likely a timing exploit is.

If the hotel is using such a dirty shared IP that it's also being used to spam random SSH servers, that connection is probably impractical for several other reasons, e.g. flagged on Cloudflare. At that point I'd go straight to a VPN or hotspot.


Novel timing attacks like that are pretty unlikely, basically someone with a 0-day, because otherwise they quickly get patched. If the adversary is someone with access to 0-day vulnerabilities, you're pretty screwed in general and it isn't worth a lot of inconvenience to try to prevent something inevitable.

And there is no guarantee you can use another network connection. Hotspots only work if there's coverage.

Plus, "just use a hotspot or a VPN" assumes you were expecting the problem. This change is going to catch a lot of people out because the first time they realize it exists is during the emergency when they try to remote in.


I already expect unreliable internet, especially while traveling. I'm not going to have to explain why I missed a page while oncall.


Well, allocating anything smaller than a /64 to a customer breaks SLAAC, so even a really small provider wouldn't do that as it would completely bork their customers' networks. Yes, DHCPv6 technically exists as an alternative to SLAAC, but some operating systems (most notably Android) don't support it at all.


There are plenty of ISPs that assign /64s and even smaller subnets to their customers. There are even ISPs that assign a single /128, IPv4 style.


We should not bend over backwards for people not following the standard.

Build tools that follow the standard/best practices by default, maybe build in an exception list/mechanism.

IPv6 space is plentiful and easy to obtain, people who are allocating it incorrectly should feel the pain of that decision.


I can't imagine why any ISP would do such absurd things when in my experience you're given sufficient resources on your first allocation. My small ISP received a /36 of IPv6 space, I couldn't imagine giving less than a /64 to a customer.


My ISP has a /28 block, so if they chose to penalize my /32 for some reason, that would include 1/16th of the customers of my ISP. Just guessing based on population and situation, that might include on the order of 50000 people.



> With IPv6 on the other hand, it is trivially easy to get a new IP, so the protection method described here will be completely ineffective.

I’m sure this will be fixed by just telling everyone to disable IPv6, par for the course.


The alternative to ipv6 is ipv4 over cgnat, which arguably has the same problem.


Serious question: why doesn't OpenSSH declare, with about a year's notice ahead of time, the intent to cut a new major release that drops support for password-based authentication?


There are very legit reasons to use passwords, for example in conjunction with a second factor. Authentication methods can also be chained.


Password authentication is still entirely necessary. I don't want to have to set up keys just to ssh into a VM I just set up, as one very minor example.


By the time it gets into distros' package managers, is it not often that long (or more) anyway?


> I have seen experienced sysadmins create the test user with the password of "test" on a live server on port 22 because they were having an "autopilot moment". It got hacked within 20 minutes of going online, and these mechanisms wouldn't have saved it; the attacker got in on the second or third try.

Is it possible to create some kind of reverse proxy for SSH which blocks password-based authentication, and furthermore only allows authentication by a known list of public keys?

The idea would be SSH to the reverse proxy, if you authenticate with an authorised public key (or certificate or whatever) it forwards your connection to the backend SSH server; all attempts to authenticate with a password are automatically rejected and never reach the backend.

In some ways what I'm describing here is a "bastion" or "jumphost", but in implementations of that idea I've seen, you SSH to the bastion/jumphost, get a shell, and then SSH again to the backend SSH – whereas I am talking about a proxy which automatically connects to the backend SSH using the same credentials once you have authenticated to it.

Furthermore, using a generic Linux box as a bastion/jumphost, you run the same risk that someone might create a weak password account–you can disable password authentication in the sshd config but what if someone turns it on? With this "intercepting proxy" idea, the proxy wouldn't even have any code to support password authentication, so you couldn't ever turn it on.


Passwords are not the issue you think they are. With something like fail2ban in place, someone compromising a strong password isn't more likely than someone finding a 0day that can exploit an sshd set up to only accept keys.


> what if someone turns [password authentication back] on

sshd_config requires root to modify, so you've got bigger problems than weak passwords at this point.


It is a lot more likely for some random admin to inappropriately change a single boolean config setting as root, than for them to replace an entire software package which (by design) doesn't have code for a certain feature with one that does.


Check out the ProxyJump and ProxyCommand options in ssh config. They let you skip the intermediate shell.
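For illustration, a hypothetical ~/.ssh/config entry (host names and user are placeholders):

  # reach "backend" by tunnelling through the bastion; no shell is started on the bastion
  Host backend
      HostName backend.internal.example.com
      User deploy
      ProxyJump bastion.example.com

The one-off equivalent on the command line is: ssh -J bastion.example.com deploy@backend.internal.example.com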


Wait, how often do you connect to a ssh remote that isn't controlled by you or say, your workplace? Genuinely asking, I have not seen a use case for something like that in recent years so I'm curious!


GitHub is an example of a service that would want to disable this option. They get lots of legit ssh connections from all over the world including people who may be behind large NATs.


I somehow didn't think about that, even if I used that feature just a few hours ago! Now I'm curious about how GitHub handles the ssh infra at that scale...


GitHub, as I've read[1], uses a different implementation of SSH which is tailored for their use case.

The benefit is that it is probably much lighter weight than OpenSSH (which supports a lot of different things just because it is so general[3]) and can more easily integrate with their services, while also providing the benefit of not having to spin up a shell and deal with the potential security risks that contains.

And even if somehow a major flaw is found in OpenSSH, GitHub (at least their public servers) wouldn't be affected in this case since there's no shell to escape to.

[1]: I read it on HN somewhere that I don't remember now, however you can kinda confirm this yourself if you open up a raw TCP connection to github.com on port 22, where the connection string says

SSH-2.0-babeld-9102804c

According to an HN user[2], they were using libssh in 2015.

[2]: https://news.ycombinator.com/item?id=39978089

[3]: This isn't a value judgement on OpenSSH, I think it is downright amazing. However, GitHub has a much more narrow and specific use case, especially for an intentionally public SSH server.


Even the amount of SSH authorized_keys they would need to process is a little mind boggling, they probably have some super custom stuff.


Perhaps at a university where all students in the same class need to SSH to the same place, possibly from the same set of lab machines. A poorly configured sshd could allow some students to DoS other students.

This might be similar to the workplace scenario that you have in mind, but some students are more bold in trying dodgy things with their class accounts, because they know they probably won't get in big trouble at a university.


One of my clients has a setup for their clients - some of which connect from arbitrary locations, and others of which need to be able to run scripted, automated uploads - to connect via sftp to upload files.

Nobody is ever getting in, because they require ed25519 keys, but it is pounded nonstop all day long with brute force attempts. It wastes log space and IDS resources.

This is a case that could benefit from something like the new OpenSSH feature (which seems less hinky than fail2ban).

Another common case would be university students, so long as it's not applied to campus and local ISP IPs.


I sometimes use this: https://pico.sh/


Git over SSH


> First, with IPv4 this will have the potential to increasingly penalize innocent bystanders... Worst case, this will give bad actors the option to lock the original owner out of their own server if they have a botnet host in the same network.

So instead of looking, like the author of these new options, for ways to make life for the bad guys harder we do nothing?

Your concerns are addressed in TFA:

> ... and to shield specific clients from penalty

> A PerSourcePenaltyExemptList option allows certain address ranges to be exempt from all penalties.

It's easy for the original owner to add the IP blocks of the three or four ISPs he'd legitimately be connecting from to that exemption list.
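Per the linked article, something along these lines in sshd_config should keep known-good networks out of the penalty system (the addresses here are documentation placeholders; check the exact list syntax for your OpenSSH version):

  # never penalize the ISPs / jump hosts you legitimately connect from
  PerSourcePenaltyExemptList 203.0.113.0/24,198.51.100.0/24,2001:db8::/32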

I don't buy your argument nor all the variations on the same theme: "There's a minuscule risk of X, so we do absolutely nothing but say there's nothing to do and we let the bad guys roam free!".

There's nothing more depressing than that approach.

Kudos to the author of that new functionality: there may be issues, it may not be the panacea, but at least he's trying.


> So instead of looking, like the author of these new options, for ways to make life for the bad guys harder we do nothing?

Random brute force attempts against SSH are already a 100% solved problem, so doing nothing beyond maintaining the status quo seems pretty reasonable IMO.

> I don't buy your argument nor all the variations on the same theme: "There's a minuscule risk of X, so we do absolutely nothing but say there's nothing to do and we let the bad guys roam free!".

Setting this up by default (as is being proposed) would definitely break a lot of existing use cases. The only risk that is minuscule here is the risk from not making this change.

I don't see any particular reason to applaud making software worse just because someone is "trying".


> So instead of looking, like the author of these new options, for ways to make life for the bad guys harder we do nothing?

The thing is, we have tools to implement this without changing sshd's behavior. `fail2ban` et al. exist for a reason.


Sure, but if I only used fail2ban for sshd, why should I install two separate pieces of software to handle a problem the actual software I want to run has built in?


Turning every piece of software into a kitchen sink increases its security exposure in other ways.


Normally I would agree with you, but fail2ban is a Python routine which forks processes based on outcomes from log parsing via regex. There are so many ways that can go wrong… and it has gone wrong, in one or two experiences I've had in the past.

This is exactly the sort of thing that should be part of the server. In exactly the same way that some protocol clients have waits between retries to avoid artificial rate limiting from the server.


> There’s so many ways that can go wrong

There are a lot of ways a builtin facility of one service can go wrong, especially if it ends up being active by default on a distro.

`fail2ban` is common, well known, battle-tested. And it's also [not without alternatives][1].

[1]: https://alternativeto.net/software/fail2ban/


As I’ve already posted, I’ve run into bugs with fail2ban too.

Also adding firewalling to SSH is hardly “kitchen sinking” (as another commenter described it). You’re literally just adding another layer of security into something that’s literally meant to be used as an out of the box solution for creating secure connections.

If you want to take issue with the “kitchen sink” mentality of SSH then complain about its file transfer features or SOCKS support. They are arguably better examples of feature creep than literally just having the server own what connections it should allow.


> Also adding firewalling to SSH is hardly “kitchen sinking”

sshd is a service. It may be one among dozens of other services running on a host.

Now imagine for a moment, if EVERY service on the host took that approach. Every backend service, every network-facing daemon, every database, every webserver, voip servers, networked logging engines, userspace network file systems, fileservers...they all now take security into their own hands.

Every single one of them has its own fail2ban-ish mechanism, blocklists it manages, rules for what to block and how long, what triggers a block, if and when a block will be lifted...

Oh, and of course, there is still also a firewall and other centralized systems in place, on top of all that.

How fun would such a system be to administer do you think? As someone with sysadmin experience, I can confidently say that I would rather join an arctic expedition than take care of that mess.

There is a REASON why we have things like WAFs and IDS, instead of building outward-facing-security directly into every single webservice.


If you’ve been a sysadmin as long as I have then you’ll remember when services didn’t even manage their own listener and instead relied on a system-wide daemon that launched and managed those services (inetd). Whereas now you have to manage each listener individually.

That was additional initial effort but the change made sense and we sysadmins coped fine.

Likewise, there was a time when server side website code had to be invoked via a httpd plugin or CGI, now every programming language will have several different web frameworks, each with their own HTTP listener and each needing to be configured in its own unique way.

Like with inetd, the change made sense and we managed just fine.

Tech evolves — it’s your job as a sysadmin to deal with it.

Plus, if you’re operating at an enterprise level where you need a holistic view of traffic and firewalling across different distinct services then you’d disable this. It’s not a requirement to have it enabled. A point you keep ignoring.


> Likewise, there was a time when server side website code had to be invoked via a httpd plugin or CGI, now every programming language will have several different web frameworks, each with their own HTTP listener and each needing to be configured in its own unique way.

And still we keep all those webservices, be they in Java, Go, node, C# or Python, behind dedicated webservers like nginx or apache.

Why? Because we trust them, and they provide a Single-Point-Of-Entry.

> Tech evolves — it’s your job as a sysadmin to deal with it.

Single-Point-Of-Entry is still preferred over having to deal with a bag of cats of different services each having their own ideas about how security should be managed. And when a single point of entry exists, it makes sense to focus security there as well.

This has nothing to do with evolving tech, this is simple architectural logic.

And the first of these points that every server has, is the kernels packet filter. Which is exactly what tools like fail2ban manage.

> A point you keep ignoring.

Not really. Of course an admin should deactivate svc-individual security in such a scenario, and I never stated otherwise.

The point is: That's one more thing that can go wrong.


> And still we keep all those webservices, be they in Java, Go, node, C# or Python, behind dedicated webservers like nginx or apache.

Not really no. They might sit behind a load balancer but that's to support a different feature entirely. Some services might still be invoked via nginx or apache (though the latter has fallen out of fashion in recent years) if nginx has a better threading model. But even there, that's the exception rather than the norm. Quite often those services will be stand alone and any reverse proxying is just to support orchestration (eg K8s) or load balancing.

> Single-Point-Of-Entry is still prefered over having to deal with a bag of cats of different services each having their own ideas about how security should be managed.

Actually no. What you're describing is the castle-and-moat architecture and that's the old way of managing internal services. These days it's all about zero-trust.

https://www.cloudflare.com/en-gb/learning/security/glossary/...

But again, we're talking enterprise level hardening there and I suspect this openssh change is more aimed at hobbyists running things like Linux VPS

> > A point you keep ignoring.

> Not really. Of course an admin should deactivate svc-individual security in such a scenario, and I never stated otherwise. The point is: That's one more thing that can go wrong.

The fact that you keep saying that _is_ missing the point. This is one more thing that can harden the default security of openssh.

In security, it's not about all or nothing. It's a percentages game. You choose a security posture based on the level of risk you're willing to accept. For enterprise, that will be using an IDP to manage auth (including but not specific to SSH). A good IDP can be configured to only accept requests from non-blacklisted IPs (e.g. blocking IPs from countries where employees are known not to work), and even to only accept logins from managed devices like corporate laptops. But someone running a VPS for their own Minecraft server, or something less wholesome like Bit-Torrent, isn't usually the type to invest in a plethora of security tools. They might not even have heard of fail2ban, denyhosts, and so on. So having openssh support auto-blacklisting on those servers is a good thing. Not just for the VPS owners but for us too, because it reduces the number of spam and bot servers.

If your only concern is that professional / enterprise users might forget to disable it, as seems to be your argument here, then it's an extremely weak argument to make given you get paid to know this stuff and hobbyists don't.


still better to try to improve fail2ban than to add (yet another) kitchen sink to sshd


fail2ban has been around for so long, people get impatient at some point


Impatient about what exactly? fail2ban is battle tested for well over a decade. It is also an active project with regular updates: https://github.com/fail2ban/fail2ban/commits/master/


What hnlmorg said a few comments up


A system where sshd outputs to a log file, then someone else picks it up and pokes at iptables, seems much more hacky than having sshd support that natively, imo. sshd is already tracking connection status; having it set the status to deny seems like less of a kitchen sink and more just about security. The S in SSH is for secure, and this is just improving that.


fail2ban has a lot of moving parts, I don't think that's necessarily more secure.

I would trust the OpenSSH developers to do a better job with the much simpler requirements associated with handling it within their own software.


> why should I install two separate pieces of software to handle the problem

https://alanj.medium.com/do-one-thing-and-do-it-well-a-unix-...


Generally I agree with this principle, but fail2ban is kind of a hacky pos.


> but fail2ban is kind of a hacky pos.

It's battle-tested for well over a decade, has accumulated 10.8k stars and 1.2k forks on github, so it seems to do something right no?

Not to mention that even if it were otherwise, that's not a reason to ignore UNIX philosophies that have served the FOSS world well for over half a century at this point.

Last but not least, there are any number of alternative solutions.


Just because it's 'battle tested' and has stars and is useful does not preclude it from being a hacky pos. Reading logs using regexps and then twiddling iptables is not the cleanest method of achieving this result. I would much prefer if this functionality were either handled in the daemon itself (as sshd is now doing) or if there was some kind of standardized messaging (dbus?) that was more purposeful and didn't rely on regex.

It's useful because you can hook it up to anything that produces logs; it's hacky because that means you are using regexps. If the log format changes, you're likely fucked, not to mention that regexps are notoriously hard to make 'air tight' and often screwed up by newbies. Add to that, in a case where your regexes start missing, fail2ban will stop doing its job silently... not great, my friend.

It's been a useful hack for a very long time, but I'd like to see us move on from it.


The issue is that the log parsing things like fail2ban work asynchronously. It is probably of only theoretical importance, but on the other hand the meaningful threat actors are usually surprisingly fast.


Yeah, they exist because nothing better was available at that time.

It doesn’t hurt to have this functionality in openssh too. If you still need to use fail2ban, denyhosts, or whatever, then don’t enable the openssh behaviour feature. It’s really that simple.


How is baking this into sshd "better"?

UNIX Philosophy: "Do one thing, and do it well". An encrypted remote shell protocol server should not be responsible for fending off attackers. That's the job of IDS and IPS daemons.

Password-based ssh is an anachronism anyway. For an internet-facing server, people should REALLY use ssh keys instead (and preferably use a non-standard port, and maybe even port knocking).


It’s better if you want an out of the box secure experience. This might be quite a nice default for some VPSs.

If you have an IDS and IPS set up then you're already enterprise enough that you want your logs shipped and managed by a single pane of glass. This new SSH feature isn't intended to solve enterprise-level problems.

Plus if you want to argue about "unix philosophy" with regards to SSH then why aren't you kicking off about SOCKS, file transfer, port forwarding, and the countless other features SSH has that aren't related to the "shell" part of SSH? The change you're moaning about has more relevance than most of the other extended features people love SSH for.


> This new SSH feature isn’t intended to solve enterprise-level problems.

But service level security features have the potential to cause enterprise-level problems.

Sure, in an ideal world, all admins would always make zero mistakes. And so would the admins of all of our clients, and their interns, and their automated deployment scripts. Also in that perfect world, service level security features would never be on by default, have the same default configuration across all distros, and be easy to configure.

But, alas, we don't live in a perfect world. And so I have seen more than one service-level security feature, implemented with the best of intentions, causing a production system to grind to a halt.


> But service level security features have the potential to cause enterprise-level problems.

Only if you don’t know what you’re doing. Which you should given you’re paid to work on enterprise systems.

Whereas not having this causes problems for users who are not paid to learn this technology.

So it seems completely reasonable to tailor some features to lesser experienced owners given the wide spectrum of users that run openssh.


It would be frustrating to be denied access to your own servers because you are traveling and are on a bad IP for some reason.

Picture the number of captchas you're already getting from a legitimate Chrome instance, but instead of by-passable annoying captchas, you are just locked out.


I have fail2ban configured on one of my servers for port 22 (a hidden port does not have any such protections on it) and I regularly lock out my remote address because I fat finger the password. I would not suggest doing this for a management interface unless you have secondary access
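For reference, that kind of setup is roughly the following jail (a hypothetical jail.local sketch, not a recommendation; newer fail2ban versions accept time suffixes like 72h, older ones want plain seconds):

  [sshd]
  enabled  = true
  port     = ssh
  maxretry = 2
  findtime = 10m
  bantime  = 72h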


Why would you use password based auth instead of priv/pub key auth? You'd avoid this and many other security risks.


What do you do if you get mugged and your laptop and phone and keys are taken or stolen from you? Or lost?

After this party, this guy needed help: he had lost his wallet and his phone. His sister had also gone to the party and given him a ride there, but had left. He didn't know her number to call her, and she'd locked down her socials so we couldn't use my phone to contact her. We were lucky that his socials weren't super locked down and managed to find someone that way, but priv keys are only good so long as you have them.


> what do you if you get mugged and you laptop and phone and keys are taken or stolen from you? or lost?

My ssh keys are encrypted. They need a password, or they are worthless.

Sure, I can mistype that password as well, but doing so has no effect on the remote system, as the ssh client already fails locally.


You can and you should back up your keys. There isn't a 100% safe, secure and easy method that shields you from everything that can possibly happen, but there are enough safe, secure and easy ones to cover the vast majority of cases short of a sheer catastrophe, which is good enough reason not to use outdated and security-prone mechanisms like passwords on a network-exposed service.


I use a yubikey. You need a password to use the key. It has its own brute force management that is far less punishing than a remote SSH server deciding to not talk to me anymore.


but what do you do if you don't have the key? unless it's implanted (which, https://dangerousthings.com/), I don't know that I won't lose it somehow.


My keyboard has a built in USB hub and ports. The key lives there. The keyboard travels with me. It's hard to lose.

I have a backup key in storage. I have escrow mechanisms. These would be inconvenient, but, it's been 40 years since I've lost any keys or my wallet, so I feel pretty good about my odds.

Which is what the game here is. The odds. Famously humans do poorly when it comes to this.


If I present an incorrect key, fail2ban locks me out as well. Two incorrect auth attempts locks out a device for 72 hours. The idea is for regular services which depend on ssh (on port 22) to work regularly (because of key auth) but to block anyone attempting to brute force or otherwise maliciously scan the system.

Doesn’t change the advice, if this is your only management interface, don’t enable it :)

Also you know you can have MFA even with pw authentication right? :)


What's the alternative? If you get onto a bad IP today, you're essentially blocked from the entire Internet. Combined with geolocks and national firewalls, we're already well past the point where you need a home VPN if you want reliable connectivity while traveling abroad.


What happens when your home VPN is inaccessible from your crappy network connection? There are plenty of badly administered networks that block arbitrary VPN/UDP traffic but not ssh. Common case is the admin starts with default deny and creates exceptions for HTTP and whatever they use themselves, which includes ssh but not necessarily whatever VPN you use.


Same as when a crappy network blocks SSH, you get better internet. Or if SSH is allowed, use a VPN over TCP port 22.


Better internet isn't always available. A VPN on the ssh port isn't going to do you much good if someone sharing your IP address is doing brute force attempts against the ssh port on every IP address and your system uses that as a signal to block the IP address.

Unless you're only blocking connection attempts to ssh and not the VPN, but what good is that? There is no reason to expect the VPN to be any more secure than OpenSSH.


If you're using an IP address that's being used to brute force the entire Internet, it's likely that lots of websites are blocking it. If that doesn't matter to you and all you need is to get into a particular SSH server, and also the network blocks VPNs, you're still fine if the SSH is on port 9022 and VPN is port 22. If it's not your own SSH server and it's port 22, then you're still fine if your own VPN is port 22 (on a different host).

Hacking into the VPN doesn't get the attacker into the SSH server too, so there's defense in depth, if your concern is that sshd might have a vulnerability that can be exploited with repeated attempts. If your concern is that your keys might be stolen, this feature doesn't make sense to begin with.


> If you're using an IP address that's being used to brute force the entire Internet, it's likely that lots of websites are blocking it.

Websites usually don't care about ssh brute force attempts because they don't listen on ssh. But the issue isn't websites anyway. The problem is that your server is blocking you, regardless of what websites are doing.

> If that doesn't matter to you and all you need is to get into a particular SSH server, and also the network blocks VPNs, you're still fine if the SSH is on port 9022 and VPN is port 22. If it's not your own SSH server and it's port 22, then you're still fine if your own VPN is port 22 (on a different host).

Then you have a VPN exposed to the internet in addition to SSH, and if you're not rate limiting connections to that then you should be just as concerned that the VPN "might have a vulnerability that can be exploited with repeated attempts." Whereas if the SSH server is only accessible via the VPN then having the SSH server rate limiting anything is only going to give you the opportunity to lock yourself out through fat fingering or a misconfigured script, since nobody else can access it.

Also notably, the most sensible way to run a VPN over TCP port 22 is generally to use the VPN which is built into OpenSSH. But now this change would have you getting locked out of the VPN too.


The situation is the SSH server is exposed everywhere, and you also have an unrelated VPN, maybe even via a paid service you don't manage. The VPN just provides you with an alternative IP address and privacy when traveling. It matters a lot more if someone hacks the SSH server.


It would also be very rare. The penalties described here start at 30s, I don't know the max, but presumably whatever is issuing the bad behavior from that IP range will give up at some point when the sshd stops responding rather than continuing to brute force at 1 attempt per some amount of hours.

And that's still assuming you end up in a range that is actively attacking your sshd. It's definitely possible but really doesn't seem like a bad tradeoff


lol. Depending on where you travel, the whole continent is already blanket banned anyway. But that only happens because nobody travels there, so it is never a problem.


There is nothing wrong with this approach if enabled as an informed decision. It's the part where they want to enable this by default I have a problem with.

Things that could be done is making password auth harder to configure to encourage key use instead, or invest time into making SSH CAs less of a pain to use. (See the linked paper, it's not a long read.)


> So instead of looking, like the author of these new options, for ways to make life for the bad guys harder we do nothing?

Yes, because as soon as the security clowns find out about these features, we have to start turning it on to check their clown boxes.


[flagged]


Don't use fail2ban

Use keys


Alternatively: Use both


According to the commit message, the motivation is also to detect certain kinds of attacks against sshd itself, not just bruteforced login attempts.


It's not quite fair, but if you want the best service, you have to pay for your own ipv4 or, in theory, a larger ipv6 block. The only alternative is for the ISP deploying the CGN to penalize users for suspicious behavior. Wikipedia, a classic ip-based abuse fighter, banned T-Mobile USA's entire ipv6 range: https://news.ycombinator.com/item?id=32038215 where someone said they will typically block a /64, and Wikipedia says they'll block up to a /19.

Unfortunately there's no other way. Security always goes back to economics; you must make the abuse cost more than it's worth. Phone-based 2FA is also an anti-spam measure, cause clean phone numbers cost $. When trying to purchase sketchy proxies or VPNs, it basically costs more to have a cleaner ip.


I like being able to log into my server from anywhere without having to scrounge for my key file, so I end up enabling both methods. Never quite saw how a password you save on your disk and call a key is so much more secure than another password.


This is definitely a common fallacy. While passwords and keys function similarly via the SSH protocol, there are two key differences. 1. Your password is likely to have much lower entropy as a cryptographic secret (i.e. you're shooting for 128 bits of entropy, which takes a pretty gnarly-sized password to replicate), and 2. SSH keys introduce a second layer of trust by virtue of you needing to add your key ID to the system before you even begin the authentication challenge.

Password authentication, which only uses your password to establish you are authentically you, does not establish the same level of cryptographic trust, and also does not allow the SSH server to bail out as quickly, instead needing to perform more crypto operations to discover that an unauthorized authentication attempt is being made.

To your point, you are storing the secret on your filesystem, and you should treat it accordingly. This is why folks generally advocate for the use of SSH Agents with password or other systems protecting your SSH key from being simply lifted. Even with requiring a password to unlock your key though, there's a pretty significant difference between key based and password based auth.
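To put a rough number on the entropy point: a truly random password drawn from ~62 characters carries about log2(62) ≈ 5.95 bits per character, so matching the ~128-bit security level of a modern key takes roughly 22 random characters, and typical human-chosen passwords fall far short of that.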


I’ve seen lots of passwords accidentally typed into an IRC window. Never seen that happen with an SSH key.


I heard that if you type your password in HN it will automatically get replaced by all stars.

My password is **********

See: it works! Try it!


So if I type hunter2 you see ****?


A few more things:

An SSH key can be freely reused to log in to multiple SSH servers without compromise. Passwords should never be reused between multiple servers, because the other end could log it.

An SSH key can be stored in an agent, which provides some minor security benefits, and more importantly, adds a whole lot of convenience.

An SSH key can be tied to a Yubikey out of the box, providing strong 2FA.


Putting aside everything else. How long is your password vs how long is your key?


It's this, plus the potential that you've reused your password, or that it's been keylogged.


It's more secure because it's resistant to MITM attacks or a compromised host. Because the password is sent, the private key isn't.


My home IP doesn’t change much so I just open ssh port only to my own IP. If I travel I’ll add another IP if I need to ssh in. I don’t get locked out because I use VPS or cloud provider firewall that can be changed through console after auth/MFA. This way SSH is never exposed to the wider internet.


Another option is putting SSH on an IP on the wireguard only subnet.


I've recently done this for all my boxes, but with tailscale rather than barebones wireguard. So fucking awesome. I just run tailscale at all times on all my boxes, and all my dns, regardless of what network I'm on, goes to my internal server that upstreams over tls. It's great, and tailscale is a snap to set up.


Use TOTP (keyboard-interactive) and password away!


And even with IPv4, botnets are a common attack source, so hitting from many endpoints isn't that hard.

I'd say "well, it might catch the lowest effort attacks", but when SSH keys exist and solve many more problems in a much better way, it really does feel pointless.

Maybe in an era where USB keys weren't so trivial, I'd buy the argument of "what if I need to access from another machine", but if you really worry about that, put your (password protected) keys on a USB stick and shove it in your wallet or on your keyring or whatever. (Are there security concerns there? Of course, but no more than typing your password in on some random machine.)


You can use SSH certificate authorities (not x509) with OpenSSH to authorize a new key without needing to deploy a new key on the server. Also, Yubikeys are useful for this.


Just a warning for people who are planning on doing this: it works amazingly well but if you're using it in a shared environment where you may end up wanting to revoke a key (e.g. terminating an employee) the key revocation problem can be a hassle. In one environment I worked in we solved it by issuing short-term pseudo-ephemeral keys (e.g. someone could get a prod key for an hour) and side-stepped the problem.

The problem is that you can issue keys without having to deploy them to a fleet of servers (you sign the user's pubkey using your SSH CA key), but you have no way of revoking them without pushing an updated revocation list to the whole fleet. We did have a few long-term keys that were issued, generally for build machines and dev environments, and had a procedure in place to push CRLs if necessary, but luckily we didn't ever end up in a situation where we had to use it.
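For anyone who hasn't used it, the signing side looks roughly like this (file names, the identity and the one-hour validity are placeholders):

  # one-time: create the CA keypair
  ssh-keygen -t ed25519 -f ssh_user_ca

  # per user/session: sign their public key, valid for one hour
  ssh-keygen -s ssh_user_ca -I alice -n alice -V +1h id_alice.pub

  # on each server, trust certificates signed by that CA (sshd_config):
  #   TrustedUserCAKeys /etc/ssh/ssh_user_ca.pub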


Setting up regular publishing of CRLs is just part of setting up a CA. Is there some extra complexity with ssh here, or are you (rightfully) just complaining about what a mess CRLs are?

Fun fact: it was just a few months ago that Heimdal Kerberos started respecting CRLs at all, that was a crazy bug to discover


There's extra complexity with ssh, it has its own file of revoked keys in RevokedKeys and you'll have to update that everywhere.

see https://man.openbsd.org/ssh-keygen.1#KEY_REVOCATION_LISTS for more info

And unlike some other sshd directives that have a 'Command' alternative to specify a command to run instead of reading a file, this one doesn't, so you can't just DIY distribution by having it curl a shared revocation list.
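For reference, the KRL workflow is roughly this (paths and file names are placeholders):

  # create a key revocation list containing the revoked key or cert (-u updates an existing one)
  ssh-keygen -k -f /etc/ssh/revoked_keys departed_user.pub

  # then point every server at it in sshd_config:
  #   RevokedKeys /etc/ssh/revoked_keys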


The hard part is making sure every one of your servers got the CRL update. Since last I checked OpenSSH doesn't have a mechanism to remotely check CRLs (like OCSP), nor does SSH have anything akin to OCSP stapling, it's a little bit of a footgun waiting to happen.


Oh wow... That's pretty nuts. I guess the reason is to make it harder for people to lock themselves out of all their servers if OCSP or whatever is being used to distribute the CRL is down.


Not necessarily. There is a fork of OpenSSH that supports x509, but I remember reading somewhere that it's too complex and that's why it doesn't make it into mainline.


You might want to check out my project OpenPubkey[0] which uses OIDC ID Tokens inside SSH certs. For instance this lets you SSH with your gmail account. The ID token in the SSH certificate expires after a few hours which makes the SSH certificate expire. You can also do something similar with SSH3 [1].

[0] OpenPubkey - https://github.com/openpubkey/openpubkey/

[1] SSH3 - https://github.com/francoismichel/ssh3


Why not just make the certificate short-lived instead of having a certificate with shorter-lived claims inside?


You can definitely do that, but it has the downside that the certificate automatically expires when you hit the set time and then you have to reauth again. With OpenPubkey you can be much more flexible. The certificate expires at a set time, but you can use your OIDC refresh token to extend certificate expiration.

With a fixed expiration, if you choose a 2 hour expiry, the user has to reauth every 2 hours each time they start a new SSH session.

With a refreshable expiration, if you choose a 2 hour expiry, the user can refresh the certificate if they are still logged in.

This lets you set shorter expiry times because the refresh token can be used in the background.


With normal keys you have a similar issue of removing the key from all servers. If you can do this, you can also deploy a revocation list.


My point is that, at first glance, this appears to be a solution that doesn't require you to do an operation on all N servers when you add a new key. Just warning people that you DO still need to have that infrastructure in place to push updated CRLs, although you'll hopefully need to use it a lot less than if you were manually pushing updated authorized_keys files to everything.


Easier to test if Jenkins can SSH in than to test a former employee cannot. Especially if you don't have the unencrypted private key.


Monkeysphere lets you do this with tsigs on gpg keys. I find the web of trust marginally less painful than X.509


> I have seen experienced sysadmins create the test user with the password of "test" on a live server on port 22 because they were having an "autopilot moment".

pam_pwnd[1], testing passwords against the Pwned Passwords database, is a(n unfortunately abandoned but credibly feature complete) thing. (It uses the HTTP service, though, not a local dump.)

[1] https://github.com/skx/pam_pwnd


meh. Enabling any of the (fully local) complexity rules has pretty much the same practical effect as checking against a leak.

If the password has decent entropy, it won't be in the top 1000 of the leaks, so it won't be used in blind brute force like this.


I'd love to penalize any attempt at password auth. Not the IP addresses, just if you're dumb enough to try sending a password to my ssh server, you're going to wait a good long time for the failure response.

Actually I might even want to let them into a "shell" that really screws with them, but that's far outside of ssh's scope.


I certainly don't want to expose any more surface area than necessary to potential exploits by an attacker who hasn't authenticated successfully.


Yeah you're right, the screw-with-them-shell would have to be strictly a honeypot thing, with a custom-compiled ssh and all the usual guard rails around a honeypot. The password tarpit could stay, though script kiddie tools probably scale well enough now that it's not costing them much of anything.


I had a similar experience with a Postgres database once. It only mirrored some publicly available statistical data, and it was still in early development, so I didn't give security of the database any attention. My intention was anyway to only expose it to localhost.

Then I started noticing that the database was randomly "getting stuck" on the test system. This went on for a few times until I noticed that I exposed the database to the internet with postgres/postgres as credentials.

It might have been even some "friendly" attackers that changed the password when they were able to log in, to protect the server, maybe even the hosting provider. I should totally try that again once and observe what commands the attackers actually run. A bad actor probably wouldn't change the password, to stay unnoticed.


How did you accidentally expose it to the Internet, was your host DMZ?


I saw a Postgres story like this one. Badly managed AWS org with way too wide permissions, a data scientist sort of person set it up and promptly reconfigured the security group to be open to the entire internet because they needed to access it from home. And this was a rather large IT company.


Yeah on some cloud provider, the virtual networks can be all too confusing. But this story sounded like a home machine.


DMZ setting on a router makes this pretty easy.

I once pointed the DMZ at an IP on DHCP. Later, when the host changed, I noticed traffic from the internet getting blocked on the new host and realized my mistake.


docker compose, I accidentally committed the port mappings I set up during local development.


Interesting paper from Tatu Ylonen. He seems quick to throw out the idea of certificates only because there is no hardened CA available today. Wouldn't it be better to solve that problem, rather than going in circles and making up new novel ways of using keys? Call it what you want, reduced to their bare essentials, in the end you either have delegated trust through a CA or a key administration problem. Whichever path you choose, it must be backed by a robust and widely adopted implementation to be successful.


As far as OpenSSH is concerned, I believe the main problem is that there is no centralized revocation functionality. You have to distribute your revocation lists via an external mechanism and ensure that all your servers are up to date. There is no built-in mechanism like OCSP, or better yet, OCSP stapling in SSH. You could use Kerberos, but it's a royal pain to set up and OpenSSH is pretty much the defacto standard when it comes to SSH servers.


> Worst case, this will give bad actors the option to lock the original owner out of their own server if they have a botnet host in the same network.

According to the article, you can exempt IPs from being blocked. So it won't impact those coming from known IPs (statics, jump hosts, etc).


most places barely even have the essential monthly email with essential services' IPs in case of a DNS outage.

nobody cares about ips.


Agreed. In addition to the problems you mentioned, this could also cause people to drop usage of SSH keys and go with a password instead, since it's now a "protected" authentication vector.


> innocent bystanders as CGNs are deployed

SSH is not HTTPS, a resource meant for the everyday consumer. If you know that you're behind a CGN, as a developer, an admin or a tool, you can solve this by using IPv6 or a VPN.

> Worst case, this will give bad actors the option to lock the original owner out of their own server

Which is kind of good? Should you access your own server if you are compromised and don't know it? Plus you get the benefit of noticing that you have a problem in your intranet.

I understand the POV that accessing it via CGN can lead to undesirable effects, but the benefit is worth it.

Then again, what benefit does it offer over fail2ban?


Yes, I agree. This seems a naive fix.

Just silencing all the failed attempts may be better. So much noise in these logs anyway.


Fail2ban can help with that


Just throw away that document and switch to kerberos.

All the problems in this document are solved immediately.


This is great, and helps solve several problems at once.

I would like to remind everyone that an internet facing SSH with a password is very unwise. I would argue you need to be able to articulate the justification for it; using keys is actually more convenient and significantly more secure.

Aside from initial boot, I cannot think of the last time I used a password for SSH instead of a key even on a LAN. Support for keys is universal and has been for most of my lifespan.


Some might argue SSH certificates are even better: https://smallstep.com/blog/use-ssh-certificates/


That's a high bar for most organizations. Leveraging certificates is excellent if the supporting and engineering actors are all in agreement on how to manage them and how to train the users and workforce to use them (think root authorities, and revoking issued certificates from an authority).

I've seen a few attempts to leverage certificates, or GPG; and keys are nearly always an 'easier' process with less of a burden to teach (which smart(er) people at times hate to do).


You can store your regular keys in gpg, it's a nice middle ground especially if you store them on a yubikey with openpgp.

Of course OpenSSH also supports fido2 now but it's pretty new and many embedded servers don't support it. So I'm ignoring it for now. I need an openpgp setup for my password manager anyway.


I use both PKCS#11 and OpenPGP SSH keys and in my opinion, PKCS#11 is a better user experience if you don't also require PGP functionality. Especially if you're supporting macOS clients as you can just use Secretive[0]. As you say, FIDO is even better but comes with limitations on both client and server, which makes life tough.

[0] https://github.com/maxgoedjen/secretive


Oh yeah I don't really use macOS anymore. And I do really need PGP functionality for my password manager.

I used pkcs11 before with openct and opensc (on OpenHSM PIV cards) and the problem I had with it was that I always needed to runtime-link a library to the SSH binary to make it work which was often causing problems on different platforms.

The nice thing about using PGP/GPG is that it can simulate an SSH agent so none of this is necessary; it will just communicate with the agent over a local socket.
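
If anyone wants to try it, the setup is roughly this (standard GnuPG paths; adjust for your shell and distro):

  # ~/.gnupg/gpg-agent.conf
  enable-ssh-support

  # in your shell profile: point ssh clients at gpg-agent's ssh socket
  export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
  gpg-connect-agent updatestartuptty /bye >/dev/null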


> And I do really need PGP functionality for my password manager.

Just curious: is it https://www.passwordstore.org/?


Yes it is! It's great!


By the way, to elaborate, I love it because it's really secure when used with yubikeys, it's fully self hosted, it works on all the platforms I use including android and it's very flexible. There's no master password to guess, which is always a bit of an Achilles heel with traditional PW managers: because you have to use it so often, you don't really want to make it too long or complex. This solves that while keeping it very secure.

The one thing I miss a bit is that it doesn't do passkeys. But well.


I use it as well (with a Yubikey) and I love it! On Android I use Android-Password-Store [1], which is nice too. There is just this issue with OpenKeychain that concerns me a bit, I am not sure if Android-Password-Store will still support hardware keys when moving to v2... but other than that it's great!

[1]: https://github.com/android-password-store/Android-Password-S...


SSH Certificates are vastly different from the certificates you are referencing.

SSH Certificates are actually just an SSH Key attested by another SSH Key. There's no revocation system in place, nor anything more advanced than "I trust key X and so any keys signed by X I will trust".
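
For anyone who hasn't seen one, the whole mechanism is roughly this (the key names and one-year validity below are made up for the example):

  # the CA signs alice's public key; the output is id_ed25519-cert.pub
  ssh-keygen -s user_ca -I alice@example.com -n alice -V +52w id_ed25519.pub

  # inspect what the certificate actually contains (key ID, principals, validity)
  ssh-keygen -L -f id_ed25519-cert.pub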


There is a revocation system in place (the RevokedKeys directive in the sshd configuration file, which seems to be system-wide rather than configured at the user-level. At least, that’s the only way I’ve used it)

I agree with the sentiment though, it is far less extensive than traditional X.509 certificate infrastructure.
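
For reference, roughly how it gets wired up (paths are illustrative):

  # build a key revocation list from the keys/certs to revoke
  ssh-keygen -k -f /etc/ssh/revoked_keys departed_user.pub

  # /etc/ssh/sshd_config
  RevokedKeys /etc/ssh/revoked_keys

RevokedKeys also accepts a plain one-key-per-line file if you don't want the binary KRL format.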


When I said revocation system, I intended to convey something similar to the Online Certificate Status Protocol, rather than a hardcoded list that needs to be synchronized between all the physical servers.

You are correct though, you can keep a list and deploy it to all the nodes for revocation purposes.

It's unfortunate that there's no RevokedKeysCommand to support building something like OCSP.


I am not familiar with SSH certificates either. But if there is no revocation system in place, how can I be sure access from a person can be revoked?

At our org we simply distribute SSH public keys via Puppet. So if someone leaves, switches to a team without access to our servers, or their key must be renewed, we simply update a line in a config file and call it a day.

That way we also have full control over what types of keys are supported and older, broken kex and signature algorithms are disabled.


The certificates have a validity window that sshd also checks. So the CA can sign a certificate for a short window (hours), after which the user has to request a new one.


One department in my company does this: you authenticate once with your standard company-wide OIDC integration (which has instant JML), and you get a key for 20 hours (enough for even the longest shift but not enough that you don’t need to reauth the next day).


> SSH Certificates are vastly different from the certificates you are referencing.

And the SSH maintainers will refuse offers of X.509 support, with a justification.


I like SSH certificates, and I use them on my own servers, but for organizations there's a nasty downside: SSH certificates lack good revocation logic. OCSP/CRL checks and certificate transparency protect browsers from this, but SSH doesn't have that good a solution for that.

Unless you regenerate them every day or have some kind of elaborate synchronisation process set up on the server side, a malicious ex-employee could abuse the old credentials post-termination.

This could be worked around by leveraging TPMs, which would allow storing the keys themselves on hardware that can be confiscated, but standard user-based auth has a lot more (user-friendly) tooling and integration options.


It seems to me like short-lived certificates are the way to go, which would require tooling. I am actually a little surprised to hear that you're using long-lived certificates on your own servers (I'm imagining a homelab setup). What benefit does that provide you over distributing keys? Who's the CA?


I'm my own CA; SSH certificates don't usually use X509 certificate chains. I dump a public key and a config file in /etc/ssh/sshd_config.d/ to trust the CA, which I find easier to automate than installing a list of keys in /home/user/.ssh/authorized_keys.
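
Concretely, the drop-in is about two lines (filename is whatever you like; this assumes the main sshd_config already Includes the directory, which most current distros do):

  # /etc/ssh/sshd_config.d/50-user-ca.conf
  TrustedUserCAKeys /etc/ssh/user_ca.pub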

I started using this when I got a new laptop and kept running into VMs and containers that I couldn't log into (I have password auth disabled). Same for some quick SSH sessions from my phone. Now, every time I need to log in from a new key/profile/device, I enroll one certificate (which is really just an id_ecdsa-cert.pub file next to id_ecdsa.pub) and instantly get access to all of my servers.

I also have a small VM with a long-lasting certificate that's configured to require username+password+TOTP, in case I ever lose access to all of my key files for some reason.


Some would argue that in an organization where you'd consider SSH certificates, it's best to use Kerberos and have general SSO. (Some of the GSSAPI functionality is patched in by most distributions, and isn't in vanilla OpenSSH.)


I set up a test smallstep instance recently, and it works really well. Setup is... complicated though, and the CLI has a few quirks.


The more complicated something is, the higher chance I screw it up.


Holy shit. I wondered if this was possible a few weeks ago and couldn't find anything on it. Thanks for the link!


The number of expect scripts I find in production that are used to automate ssh password authentication is ridiculous.


I resent that every application needs its own special snowflake auth method. One uses a certain 2fa app. Another uses another 2fa app. Another uses emailed code. Another uses text code. Another uses special ssh keys. Another opens a prompt in the browser where I have to confirm. Another uses special scoped tokens.

Yes there are good reasons. But it is quite a hassle to manage too.


> internet facing SSH with a password is very unwise

If your password is strong, it's not.


Don't forget to use a different strong password on each server! https://security.stackexchange.com/a/152132


A strong username also helps! Most SSH brute force attempts are for root, admin, or ubnt.


Nope, still unwise. Easy to steal, easy to clone, hard to script. Storing keys in hardware is simple and easy on most platforms these days. Yubikeys or the Mac SEP are ideal.


Technically it's easier to steal a private key off of disk than it is to steal a password from inside a person's head or to plant a keylogger. If a keylogger is in place, someone can likely already also access your disk and the password used to protect the private key (or your password manager).


I was recommending the use of secure processor hardware (Mac SEP or Yubikey) that does not allow such malware shenanigans.


It depends on your use case. I have a personal server only I use. In this use case, being able to access it from anywhere without any device trumps other considerations. The password is ideal.

In a corporate setting, things are of course different.


My use case is the same as yours. Malware can steal your credentials, it cannot steal mine. I also don't need fail2ban or to configure any of these new OpenSSH features. Users added to the server can't get compromised due to use of weak passwords.

Passwords are obsolete in 2024, and using them is very nearly universally bad.


> Passwords are obsolete in 2024, and using them is very nearly universally bad.

The first claim is obviously nowhere near being true, and the second seems very subjective.

As the other user is saying, strong passwords with proper security have minimal risk. More than certs or keys yes, but they offer sufficient security and the balance with convenience is currently unbeatable.

Besides, even if someone gets access to your server they should be limited and unable to do any real damage anyway. Defense in depth and all that.


> I also don't need fail2ban or to configure any of these new OpenSSH features

Me neither. If your password has sufficient entropy, you don't need any of this.

> Malware can steal your credentials, it cannot steal mine

The only solution around this is a hardware key or MFA. I find the convenience of not needing anything with me to be superior to the low risk of malware. I understand your opinion may differ here.


> ... using keys is actually more convenient and significantly more secure.

And for those for whom it's an option, using U2F keys (like Yubikeys) is now easily doable with SSH.

So unless the attacker can hack the HSM inside your Yubikey, he's simply not getting your private SSH keys.


> I would like to remind everyone that an internet facing SSH with a password is very unwise.

Bullshit. You can have a terrible password and your system will still be nearly impossible to get into. Also, these attackers are usually looking for already exploited systems that have backdoor account/password combos, unless they are specifically attacking your organization.

Repeat after me: dictionary attack concerns have nothing to do with remote access authentication concerns.

Let's say my password is two common-use English words (from a dictionary of 100k-200k). That's at least ten billion possibilities. Assume you hit on my password half-way through. That would be roughly fifteen years of continuous, 24x7x365 testing at 10 password attempts per second... and then there's the small matter of you not knowing what my username is, or even whether you've got the right username, unless the ssh server leaks valid usernames through a timing side channel.

The only argument for putting this functionality in the daemon itself is that by locating it in the daemon, it can offer advanced application-layer capabilities, such as failing auth attempts no matter what after the limit is tripped so that brute-forcing becomes more pointless - unless you get it right within the first few attempts, you could hit the right password and never know it. If they intend to implement features like that in the future, great - but if it's just going to do what fail2ban does, then...just run fail2ban.

Fail2ban has a higher-level overview of auth on the system, is completely disconnected from the ssh daemon in terms of monitoring and blocking, and the blocking it does happens at the kernel level in the networking stack, instead of in userspace with much more overhead, in a bespoke mechanism specific to SSH.

As a sysadmin, this is 'yet another place you have to look' to see why something isn't working.


> You can have a terrible password and your system will still be nearly impossible to get into.

Ok, let's try an example of a terrible password for the user "root": "password". Is that nearly impossible to get into? Or does that not qualify as a "terrible password" per your definition?


Another good option is making SSH only accessible over Tailscale or a VPN.


The two aren't exclusive of one another. We've also witnessed situations, with major companies, wherein SSH "leaks" outside the VPN due to network misconfiguration or misconfigured interfaces on the server.

As I said above, keys are actually more convenient than passwords. Only reason people still use passwords is because they believe keys are difficult to use or manage.


This, with key pairs, is the best blend of security and convenience. I use ZeroTier and UFW on the server and it’s really very simple and extremely reliable. On the very rare occasion that ZeroTier encounters a problem, or my login fails, I still have IPMI access through Proxmox/VMWare and/or my server provider.


How do you protect the access to the VPN/Tailscale? I suppose you are not using a password?


SSO and MFA, with a Microsoft account.


That's an easy solution. But there is a lot of hidden complexity and it also makes you reliant on third parties https://www.microsoft.com/en-us/security/blog/2024/01/25/mid...


Indeed, but it depends on your threat model. Personally, it's a tradeoff I'm happy to make, but an alternative would be setting up your own VPN, e.g. Wireguard.


Any time you access an SSH connection from a different computer, you basically need the password.


This is not true. SSH keys are a viable alternative.


If I can be charitable, I think they mean a different computer than one you usually use (that doesn’t have the SSH key already in authorized_keys). Spouses computer, etc.


Why would you ever do that? How do you know it is not compromised?

Carry your phone (many people already do this on a daily or near-daily basis in 2024) and use that in an emergency.


> If I can be charitable, I think they mean a different computer than one you usually use

If I can be charitable ....

What the hell are you doing storing your SSH keys on-disk anyway ? :)

Put your keys on a Yubikey, take your keys with you.


Right, much easier than a password! And so easy to back up!

I'm not arguing it isn't more secure. The point of this subthread is that SSH keys are not as easy to do ad-hoc as passwords, especially when moving workstations.


> Right, much easier than a password! And so easy to backup!

Extremely easy to recover from when the device you rely on to authenticate for everything gets lost or stolen too!


Exactly.

If I can't use TOTP with backup codes, I'm not using MFA.


Does that work with macOS? I’m currently using 1Password as my ssh key agent.


It indeed works on Mac OS. I have been using SoloKeys with ed25519-sk keys for about three years now. It should be sufficient to run

  ssh-keygen -t ed25519-sk
while a FIDO2 key is connected. You may need to touch the key to confirm user presence. (At least SoloKeys do).

If I recall correctly, the SSH binaries provided by Apple don't have built-in support for security keys (the -sk key types), but if you install OpenSSH from Nix, MacPorts, etc., then you don't have to worry about this.

Another thing to be mindful of is that some programs have a very low timeout for waiting on SSH authentication, particularly git. SSH itself will wait quite a long time for user presence when using a security key, whereas Git requires me to confirm presence within about 5 seconds or else operations fail with a timeout.


It's just a (usually bigger) password.


If it's in the cloud, you pass the public key when creating the vm. If it's a real machine, ask the data center person to do it.


This seems like something I wouldn't want. I already use fail2ban, which does exactly the same thing, in a more generic manner. sshd is a security-sensitive piece of software, so ideally I want less code running in that process, not more.


The security sensitive parts of SSH run in a separate process. I would assume that most of the new code would be in the unprivileged part.


There is also endlessh, a very fun project to deploy


I've read the commit message in the post, and read it again, but I did not understand how it would be configured. The penalty system seems complex but only 2 parameters are mentioned.

From the documentation, one of these parameters is in fact a group of 8 parameters. I guess the separator is space, so one could write:

    PerSourcePenalties authfail:1m noauth:5m grace-exceeded:5m min:2m
See https://man.openbsd.org/sshd_config.5#PerSourcePenalties

Unfortunately, the default values are undocumented. So `PerSourcePenalties yes` (which will be the default value, according to the blog post) will apply some penalties. I did attempt to read the source code, but I'm reluctant to install a CVS client, two decades after dropping that versioning system.


The OpenBSD project provides a CVSWeb interface[1] and a GitHub mirror[2]. The portable OpenSSH project[3] that most of the world gets their OpenSSH from uses a Git repo[4] that also has a Web interface (at the same address) and a GitHub mirror[5]. Per the code there[6], the default configuration seems to be

  PerSourcePenalties crash:90 authfail:5 noauth:1 grace-exceeded:20 max:600 min:15 max-sources:65536 overflow:permissive
[1] https://cvsweb.openbsd.org/

[2] https://github.com/openbsd/src

[3] https://www.openssh.com/portable.html

[4] https://anongit.mindrot.org/openssh.git

[5] https://github.com/openssh/openssh-portable

[6] https://anongit.mindrot.org/openssh.git/tree/servconf.c?id=0...



I've seen MaxAuthTries used for similar reasons, and of course fail2ban, but this seems like a nice improvement and it's built in which is probably a win in this case.


I've used fail2ban in production for many years but eventually removed it due to it causing very large iptables rule sets, leading to high memory use and ultimately becoming a source of instability for other services (i.e. it turns into a DDoS vulnerability for the whole server). I know the usual advice then is to reduce ban time and just not have permabanning, but that seems to kind of defeat the purpose.

I wonder how this SSH feature differs since it's implemented at the SSH level.

So long as the SSH and/or PAM config requires more than a password (I use hardware keys), the main concerns to me are log noise (making it hard to identify targeted security threats) and SSH DDoS. I know tarpits and alternative ports are another way of dealing with that, but when SSH is used for many things, having to change the port is kind of annoying.

I think I'm probably just going to end up layering it like everyone else and stick everything behind a wireguard gateway, although that makes me slightly anxious about having a single point of access failure.


> I've used fail2ban in production for many years but eventually removed it due to it causing very large iptables rule sets, leading to high memory use and ultimately becoming a source of instability for other services (i.e. it turns into a DDoS vulnerability for the whole server). I know the usual advice then is to reduce ban time and just not have permabanning, but that seems to kind of defeat the purpose.

Didn't the advice switch to using ipset a while back, precisely in the name of efficiency?
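
If memory serves, it's roughly a one-line change in the jail config; a sketch (the banaction name comes from fail2ban's stock action.d directory, so double-check what yours ships):

  # /etc/fail2ban/jail.local
  [sshd]
  enabled = true
  banaction = iptables-ipset-proto6
  bantime = 1h
  maxretry = 5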


Interesting thanks, I hadn't heard of that option.


> causing very large iptables rule sets, leading to high memory use

> I know the usual advice then is to reduce ban time and just not have permabanning, but that seems to kind of defeat the purpose.

The purpose is to make random password attempts even more impractical. With even fairly lax fail2ban rules, it'll take multiple lifetimes to find a password made up of just two common use english words.

However, that's not really their goal. I think these SSH probes are mostly intended to find systems that have already been compromised and have common backdoor passwords, and they use networks of zombie machines to do it.

That's where stuff like Crowdsec and IP ban lists come in, with the side benefit of your IP addresses becoming less 'visible'


>> I know the usual advice then is to reduce ban time and just not have permabanning, but that seems to kind of defeat the purpose.

> The purpose is to make random password attempts even more impractical. With even fairly lax fail2ban rules, it'll take multiple lifetimes to find a password made up of just two common use english words.

True, but the other reason to use such a measure is layered security. For instance I systematically disable all password only access, which kind of makes fail2ban seem a little pointless, but if there were to be some obscure bug in PAM, or SSH, or more likely a misconfiguration, then there is another layer that makes it more difficult.


Sounds like you were running SSH on the default port tcp/22? I would expect attacks to exponentially drop off as soon as you move to a custom port.


It does seem to be very similar in spirit and implementation:

> PerSourceNetBlockSize
>
> Specifies the number of bits of source address that are grouped together for the purposes of applying PerSourceMaxStartups limits. Values for IPv4 and optionally IPv6 may be specified, separated by a colon. The default is 32:128, which means each address is considered individually.

Just like fail2ban, this seems like it can be equal parts helpful, a false sense of security, and a giant footgun.

For example, allowing n invalid login attempts per time interval and (v4) /24 is not a big problem for botnet-based brute force attacks, while it's very easy for somebody to get unintentionally locked out when connecting from behind a CG-NAT.


ufw/iptables and other firewalls can also throttle repeated connection attempts, which is almost always fine but could be something you don't want if you have a legitimate need to support many rapid ssh connections from the same source (CM tools, maybe?)
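
For example, a common iptables recipe with the recent match (thresholds are arbitrary; adapt them, or use the ufw equivalent):

  # record every new connection to port 22; sources that open 4+ new
  # connections within 60 seconds get dropped
  iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --name ssh --set
  iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --name ssh --update --seconds 60 --hitcount 4 -j DROP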


> if you have a legitmate need to support many rapid ssh connections from the same source (CM tools, maybe?)

If you're doing that, I strongly suggest using ControlMaster to reuse the connections; it makes security tools like this less grumpy, but it's also a nice performance win.
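
e.g. in ~/.ssh/config (the host pattern and persist time are just examples):

  Host build-*.internal
      ControlMaster auto
      ControlPath ~/.ssh/cm-%r@%h:%p
      ControlPersist 10m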


Just remember that only first connection, the one creating ControlMaster socket, is being authenticated, subsequent ones are not.


It's easy to do per source IP address, and reasonably easy to add a source IP address to a whitelist automatically after successful auth.
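
A rough sketch of the auto-whitelist half using pam_exec and ipset (the set name and script path are invented, and the set has to be created beforehand with something like "ipset create ssh-whitelist hash:ip"):

  # /etc/pam.d/sshd -- run a hook when a session opens successfully
  session optional pam_exec.so /usr/local/sbin/ssh-whitelist.sh

  # /usr/local/sbin/ssh-whitelist.sh
  #!/bin/sh
  # pam_exec exports the client address (or hostname) as PAM_RHOST
  [ -n "$PAM_RHOST" ] && ipset add ssh-whitelist "$PAM_RHOST" -exist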


I managed one machine with such a mechanism, and I had to remove it, because it basically took all resources. I can't remember which daemon it was, but now the machine is only accessible from a limited set of ip addresses.


Will it really help today, when anyone with any serious intent doesn't launch their attacks from one or two standalone hosts, but buys botnet capacity?


I don't think this attempts to address botnet attacks, but to be fair, there are very few tools that you can just run on a single physical or VPS host that can effectively defend against a botnet. Frankly, most things that aren't Cloudflare (or in the same ballpark) will be ineffective against well-crafted botnet attacks.

This is useful in a defence-in-depth scenario, same as fail2ban. You might be able to defeat the odd hacker or researcher doing recon on your host, and sometimes that's good enough for you.

If you need botnet protection, you shop around for botnet protection providers, and you get a botnet protection solution. Easy as.


I'm not a fan of this feature. First, I don't think it's going to help all that much for the reasons other people have stated (it's easy to obtain a new IP, and using ssh key-only remote login nullifies most attacks anyway).

More important, though, is that it is difficult to debug why you can't log in to a remote system, unless you've been diligent enough to set up remote logging and some kind of backdoor you can use in a pinch. I imagine many companies have some unimportant script running in the background that logs into a remote system over ssh, and the person who set it up left the company years ago. One password change/key rotation later, and suddenly 25% of employees cannot log in to that remote system because the script got one of the office's 4 public IPv4 addresses blocked on the remote server.

It's very easy to say "you should manage your systems better, you should separate your networks better", and so on. But in my line of work (customer support), I only hear about the issue after people are already locked out. And I've been on many phone calls where users locked themselves out of their server that had fail2ban set up (Ubuntu set it up by default in one of its releases).


People keep mentioning fail2ban. I claim that both this new behavior in sshd, and fail2ban, are unprincipled approaches to security. Now, I know fail2ban is a crowd favorite, so let me explain what I mean by unprincipled.

This is the problem fail2ban (and now sshd) try to solve: I want a few people to log into my computer, so I open my computer to billions of other computers around the world and allow anyone to make a login attempt, and then I want to stop all the illegitimate attempts, after they were already able to access port 22.

It's simple Bayesian probability that any attempt to head off all those illegitimate accesses will routinely result in situations where legitimate users are blocked just due to random mistakes rather than malicious intent. Meanwhile, illegitimate attempts continue to come en masse thanks to botnets, allowing anyone with an ssh exploit the chance to try their luck against your server.

A more principled approach to security is to not roll out the welcome mat in the first place. Instead of opening up sshd to the world, allowing anyone to try, and then blocking them, instead don't open up sshd to the world in the first place.

1. If possible, only permit logins from known and (relatively) trusted networks, or at least networks where you have some recourse if someone on the same network tries to attack you.

2. If access is needed from an untrusted network, use wireguard or similar, so sshd only needs to trust the wireguard connection. Any attempt at illegitimate access needs to crack both wireguard and ssh.

With those one or two simple measures in place, have another look at your sshd auth logs and marvel at the silence of no one trying to attack you a million times per day, while also having confidence that you will never accidentally lock yourself out.
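
A minimal sketch of what (1) and (2) can look like on the sshd side (the 10.8.0.0/24 addresses stand in for a wireguard subnet, and sshd has to start after the wg interface is up for ListenAddress to bind):

  # /etc/ssh/sshd_config
  ListenAddress 10.8.0.1
  PasswordAuthentication no
  AllowUsers admin@10.8.0.*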


1. Sure, there may be cases where you already know the source IP or network block. But there are many scenarios where your legitimate users may be traveling, or using a mobile provider that won't guarantee much about the source IP. If you open your firewall too wide, a sophisticated attacker can find some box they can proxy through.

2. Doesn't wireguard then have the same challenge as SSH? Isn't that just pushing the problem around?

Another way to cut down on the log spam is by configuring sshd to listen on a nonstandard port.


You must have missed the part where I said "Any attempt at illegitimate access needs to crack both wireguard and ssh."

It doesn't push the problem to wireguard. It requires wireguard to be broken as a pre-requisite for trying their hand at your sshd, and then they also need to break your sshd.


> Doesn't wireguard then have the same challenge as SSH? Isn't that just pushing the problem around?

Yeah, it's actually weird how frequently in those discussions people say some version of "just use vpn". I guess they really mean "just make someone else responsible".


If these people are that desperate for something to do next, they should do something useful, like learn git, instead of implementing fail2ban-style features that nobody needs or wants in the software itself.

People who want this sort of thing and already have a single solution that handles multiple services have to complicate their setup in order to integrate this. They keep their existing solution for monitoring their web and mail server logs or whatever and then have this separate config to deal with for OpenSSH.

What if you don't want to refuse connections that exhibit "undesirable behavior" but do something else, like become a black hole to that IP address, and perhaps others in the IP range?

You want the flexibility to script arbitrary actions when arbitrary events are observed.

In my log monitoring system (home grown), the rules are sensitive to whether the account being targeted is the superuser or not.


> What if you don't want to refuse connections that exhibit "undesirable behavior"

Then you disable the behavior by turning it off in /etc/ssh/sshd_config.
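
i.e. something like (per the sshd_config(5) page linked elsewhere in the thread):

  PerSourcePenalties no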


Do the people who are going on about fail2ban know whether that's even ported to, and included in, OpenBSD? I suspect not.


Nobody seems to have noticed that fail2ban is GPL, either.


pam-script with xt_recent works just fine.

Every time authentication fails, you add the IP address to the xt_recent list in /proc, and in iptables you just check via --hitcount and --seconds and then reject the next connection attempt.
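
Roughly like this (the list name is arbitrary; the /proc entry only exists once an iptables rule references that list):

  # from the pam-script failure hook: record the client address
  echo "+$PAM_RHOST" > /proc/net/xt_recent/sshfail

  # firewall side: reject anything seen 3+ times within 10 minutes
  iptables -A INPUT -p tcp --dport 22 -m recent --name sshfail --rcheck --seconds 600 --hitcount 3 -j REJECT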


I would like to see support for blocking by client identifier, though if it were a default all the bots would recompile libssh.

Until then this has been a great differentiator for Bitvise SSH.


Is it something that can replace fail2ban or sshguard?


A “SSHal credit score” tied to a pooled resource, yes, that will work out well! Kind of like how a used car purchase should come with all its tickets!

EDIT: To this feature’s credit, it’s not federated centrally, so a DDOS to nuke IP reputation would have its blast radius limited to the server(s) under attack.


This is interesting, but something I feel like I'd disable on most of my SSH servers, as they are only exposed through a shared jump host, and I don't want users that have too many keys in their agent to cause the jump host IP to be penalized.

On the jump host itself it makes sense though


IP addresses are kinda meaningless these days, and address-based accounting and penalization can penalize legitimate users. (bitcoind has a banscore system, it's kinda cute but these kinds of things tend to be bandaidy)

it's a hard problem. wireguard has a pretty good attempt at it built into its handshaking protocol, but like all of these things, it's not perfect.

could maybe do something interesting with hashcash stamps for client identity assertion (with some kind of temporal validity window). so a client creates a hashcash stamped cookie that identifies itself for 30 minutes, and servers can do ban accounting based on said cookie.


Also here, didn't gain much notice for some reason.

https://news.ycombinator.com/item?id=40605449


Can you trigger a command when they are in the "penalty box"? It would be nice to firewall those sources so they stop consuming sshd CPU.


This reminds me of Zed Shaw's Utu protocol from back in the day:

https://weblog.masukomi.org/2018/03/25/zed-shaws-utu-saving-...

I am not a crypto guy, but my understanding is that users can downvote each other, and the more downvotes a user gets, the harder the proof-of-work problem they have to solve before they can post. If you received enough hate, your CPU would spike for a couple of minutes each time you tried to post, thus disincentivizing bad behavior.

I see on github the project is officially dead now:

https://github.com/zedshaw/utu


This seems like a bad fix to the problem of insisting that ssh continue to only use TCP.

Wireguard only responds to incoming UDP that presents a valid key (as I understand it). Probe resistant.

I get the legacy argument here but it seems like almost two decades of "this tcp thing has some downsides for this job"?


If a client "causes" sshd to crash, isn't that a server error?


Don't penalize; just forward the tty to a honeypot and waste the attacker's time. Any login would pass, but the attacker has to figure out whether the shell is real.


This looks like an easy DDoS exploit waiting to happen.


Love it. Now I don’t even need to change the default port numbers to stop all of those damn log entries.

Wonder if this is related to why Fail2Ban wasn’t in the pkg repos when I last tried to install it on OpenBSD?

There is only one thing on my wish list from the OpenBSD devs out there - that you’ll figure out Nvidia drivers.


So fail2ban?


Chocker :D


So, like a crude fail2ban service?


It’s just fail2ban, should have been in core years ago


Did you forget to submit a patch for it?


Sounds like almost as much fun as commenting on hacker news


Why are we building this into SSH itself? Isn't this what things like fail2ban are for?


The OpenBSD approach to security always seems to be adding things rather than removing them. For example, this feature is intended to make it harder for attackers to break in by guessing the password. So why not remove password authentication, or enforce a minimum password complexity? Password auth for SSH is almost always a bad idea anyway - good, secure software should nudge people towards using it securely, not give them the option to configure it with a gaping security hole.

It's the same with the OpenBSD operating system. There's so much extremely obscure, complex code that attempts to address the same problems we have been dealing with for 30+ years. What if we started removing code and reducing the attack surface instead of trying to patch over them, or we came up with an entirely new approach?

A good example of how code should be stripped down like this is WireGuard vs the old VPNs. WireGuard came along with fresh cryptography, took all of the bells, whistles, and knobs away and now provides an in-kernel VPN in a fraction of the LOC of IPsec or OpenVPN. As a result, it can be proven to be significantly more secure, and it's more performant too.


> It's the same with the OpenBSD operating system. There's so much extremely obscure, complex code that attempts to address the same problems we have been dealing with for 30+ years. What if we started removing code and reducing the attack surface instead of trying to patch over them, or we came up with an entirely new approach?

OpenBSD absolutely removes things: Bluetooth, Linux binary compatibility, and sudo, off the top of my head, with the sudo->doas replacement being exactly what you're asking for.


Also Apache → httpd, sendmail → (open)smtpd, ntpd → (open)ntpd. Probably some other things I'm forgetting.

I've seen a number of reasonable criticisms of OpenBSD over the years. Not being minimalist enough is certainly a novel one.


I can't think of any long term, open source project that has removed and ripped out more code than OpenBSD.

They are known for doing exactly what you are suggesting.

Go ask @tedunangst. Ripping out old crusty code was literally called getting "tedu'd".


You can (and need to) do both. And OpenBSD does. LibreSSL as one example removed a huge amount of dead/spaghetti/obsolete code from OpenSSL. And they are removing old/obsolete features all the time. Do you use OpenBSD? Do you read the release notes?


That's not really good enough though; the distros just enable the build flags that let them do naughty things. The software needs to be opinionated about how to use it securely, not leave it up to the users, because the developers that wrote it probably know best! The code simply needs to not exist. If users want to fork and maintain their own insecure branch, let them.


As the parent comments note, LibreSSL ripped out tons of code. Not "hidden behind build flags". Deleted.

There's plenty of flaws with any project, but OpenBSD is pretty well known for doing exactly the thing you're claiming they don't do.


OpenBSD is also known for this. They constantly push back against adding configuration knobs or running non standard configurations.

Have you used OpenBSD? You're telling them they should be doing something, that is already basically their mission statement.


Looking at OpenSSH tells a different story. It is a massive, overly configurable behemoth. The 'WireGuard of SSH' would be 1% of the LOC. It would not provide password auth, or let you log in as root with password auth, or let you use old insecure ciphers.

Maybe OpenBSD itself is better at sticking to these principles than OpenSSH. I haven't used (experimented with) it for ~5 years but read about various updates every so often.


You seem to be confusing "OpenSSH" with "OpenSSH Portable Release". As explained here: https://www.openssh.com/portable.html

> Normal OpenSSH development produces a very small, secure, and easy to maintain version for the OpenBSD project. The OpenSSH Portability Team takes that pure version and adds portability code so that OpenSSH can run on many other operating systems.

Unless you actually run OpenBSD, what you think is "OpenSSH" is in fact "OpenSSH Portable Release". These are very different things.


By removing password auth from openssh, you're not reducing the complexity, you're just moving it somewhere else. I would argue that you're actually adding significantly more complexity because now users/admins can't just bootstrap by using a default or generated root password on a new machine, creating a user, copying over the public key, and then disabling password auth. Now you have to figure out how to get that key into the image, an image you may or may not control. God help you if you don't have physical access to the machine.

Edit: I realized after posting that I was equivocating on "complexity" a bit because you're talking about code complexity for openssh itself. I don't disagree with you that openssh itself would be less complex and more secure without password auth features, but I think it would have spillover effect that isn't a net positive when considering the whole picture.


>Now you have to figure out how to get that key into the image, an image you may or may not control

I'd say this is a good thing; initial secret distribution is an unavoidable complexity, and avoiding it leads to "admin/admin" logins which get hacked within seconds of internet access. There is plenty of tooling developed for this; even when setting up a VPS or flashing a Raspberry Pi you can put a public key on the device to be active on first boot.
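
e.g. with cloud-init, which most VPS images honour, the user-data is a few lines (name and key are placeholders):

  #cloud-config
  users:
    - name: admin
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...truncated you@laptop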


That's a pretty weird summary of openbsd development.


OpenSSH is used on a variety of platforms; enforcing key-only auth would prohibit its usage in a lot of places due to the added complexity.


Indeed. And then someone will just fork it and the situation will be messier.


Just for info: there are alternative SSH server implementations out there that disable features that are discouraged (e.g. password authentication)[0]

Tinyssh is just one I already knew, I suppose you would find more via a proper search.

[0] https://tinyssh.org/


If you want password auth, you already have to change a default setting in SSHD and restart it. How exactly is removing that as an option ‘less complex’ for the downstream distros?


I don't really understand your question. Removing password auth reduces code complexity and therefore attack surface whilst also preventing users from using the software with a dangerous configuration. Maybe the users don't want that, but tough shit, maybe it's the nudge they need to use SSH keys.


In practice, this will just result in people and organisations using the last version of OpenSSH that supports password authentication.


Last time I checked, "apt install openssh-server" on Debian still launched sshd with password login enabled
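
For what it's worth, turning it off afterwards is only a couple of lines (the drop-in directory assumes a recent OpenSSH with the default Include; the service is called sshd on some other distros):

  # /etc/ssh/sshd_config.d/99-keys-only.conf
  PasswordAuthentication no
  KbdInteractiveAuthentication no

  # then reload
  systemctl restart ssh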



