My experience is the opposite. Most production users can tolerate downtime.
It's usually the company leadership that can't tolerate having downtime.
And in fact, they get hit with unplanned downtime of one service or another every month or so because of an outage. They are used to it.
So if you plan for it, explain it, and limit its scope and duration, it usually goes very well, unless you are a Fortune 500, a hospital, or something similar.
I think you really underestimate the number of dependencies (documented and otherwise) that exist in even a medium-sized company.
I once caused a production outage at a retail company that made all the cash registers stop working. The team responsible had pushed in a last-minute change and didn't test whether it handled going offline gracefully.
Right now I'm on call for an identity provider used by people in various time zones, including logistics workers at sites that operate 24/7. Even when we do weekend upgrades, we still cause quite a bit of collateral damage: 10 minutes of downtime, multiplied by the number of employees affected, adds up fast.
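To put a rough, made-up number on it: if, say, 5,000 people each lose those 10 minutes, that's over 800 person-hours of disruption from a single maintenance window.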
I’d say a system not designed to support maintenance is not properly architected. You will need maintenance windows for individual subsystems to perform OS upgrades; DB upgrades are no different, and Postgres upgrades using pg_upgrade are relatively painless, provided you test them first in CI or against production snapshots, something the author’s company seemingly lacks.
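A minimal sketch of what that test step could look like in a CI job, assuming the production snapshot has been restored to a local data directory and both the old and new Postgres binaries are installed on the runner; all paths and version numbers below are illustrative assumptions, not anything from the original comment:

```python
# Sketch of a CI step that dry-runs pg_upgrade against a restored snapshot.
# Paths and major versions are assumptions for illustration only.
import subprocess

OLD_BIN = "/usr/lib/postgresql/14/bin"      # assumed current major version
NEW_BIN = "/usr/lib/postgresql/16/bin"      # assumed target major version
OLD_DATA = "/ci/pgdata-from-prod-snapshot"  # data dir restored from a snapshot
NEW_DATA = "/ci/pgdata-upgrade-target"      # empty dir for the new cluster

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Initialise the target cluster with the new binaries.
run([f"{NEW_BIN}/initdb", "--pgdata", NEW_DATA])

# --check performs compatibility checks only and changes no data;
# a non-zero exit code fails the CI job before anything touches prod.
run([
    f"{NEW_BIN}/pg_upgrade",
    "--check",
    "--old-bindir", OLD_BIN,
    "--new-bindir", NEW_BIN,
    "--old-datadir", OLD_DATA,
    "--new-datadir", NEW_DATA,
])
```

If the check passes against a recent snapshot, the real upgrade window is mostly a rehearsal of something you've already run end to end.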
"a system not designed to support maintenance is not properly architected"
Indeed, but who cares about system design anymore? How many companies or teams can honestly claim they even had someone with proper DBA competency while feature after feature was added in sprints, doing the minimum required to ship as soon as possible (usually one DB schema change with the feature, then one or more to add indexes after performance regressions)? DBA competency is only sought once the schema has been fubar'd to the point that frequent outages are the norm, or the version in use has been EOL'd for at least a few months. And by that time the people who "designed" the system are gone, never having documented why any given decision was made.
Indeed, you can get away with a lot now by just paying (a lot) more money; it feels like design is no longer needed because things still work. It's how I make some of my money; people come to me with: ‘we run some trivial online shop made by our team and with 100k uniques a month we pay $15k+ for RDS, is that normal?’ So I go in and fix it. Usually it's bad or nonexistent design of the DB schemas, countered by picking heavy RDS instances. Fun times, considering I expect this to be the case in a very high percentage of all RDS deployments (all that I have seen so far, so for me it's 100%), not only the ones that asked me for help. When asked, the story is the usual ‘people are more expensive than hardware’. It usually takes me around $1,000 in fees to cut the costs in half, so that is blatantly false. Not to mention that the human cost is a one-off: if someone pays me $20k to bring $15k/mo down to $1k/mo (which is typical), it's worth it. Unfortunately that's not the break-everything-and-go-bankrupt way of working, I guess! Still, I notice that in the current, harder financial times I do get more requests.
You design the complete system so it does not depend on any single component; that way each subsystem can have a proper maintenance schedule. It takes a lot more up-front work, along with defining what levels of service degradation are acceptable during a window, but that's the difference between a professionally run service and one run by amateurs. Look up Google's SRE methodology and their concept of an error budget.
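To give a rough sense of the arithmetic (the SLO here is just an illustrative assumption): a 99.9% monthly availability target leaves an error budget of about 43 minutes per month (30 days × 24 h × 60 min × 0.001 ≈ 43 min), and planned maintenance windows and unplanned incidents both have to fit inside it.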
You mentioned hospitals yourself. There are also alarm management solutions for alarm receiving centres, and for security and emergency services. Can't really have downtime there either. Of course there are quieter hours, but an alarm can come in at any moment and needs to be handled right then.