It absolutely is: BGP with geographically diverse paths, databases, app servers, etc. are all redundant. It's hosted in-house, so there is a cold standby database in AWS that would only be used if, say, an aircraft crashed into our server rooms.
We have everything in place to run from AWS if needed but do not operate from there because of cost.
This goes both ways -- there have been many AWS outages which have not affected us. I hear what you're saying, but we've had only one instance of extended (hours) downtime in the last 20 years.
The more experienced I become, the more such down-to-earth solutions seem OMG SO MUCH MORE reasonable than all the bells and whistles of "modern" engineering practices.
For Postgres, we ship WAL files to a server in AWS, which processes them. To bootstrap the database initially we sent ZFS snapshots, and the WAL files are applied on an ongoing basis. If our data center were to die a horrendous fiery death we could lose, at most, about 3 minutes of data, although monitoring shows it's closer to 30s under normal operating conditions.
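If it helps to picture it, the shipping side of something like this can be as small as a script that Postgres invokes via archive_command for every completed WAL segment; a worst-case bound in the minutes range generally comes from archive_timeout forcing a segment switch on a quiet system. A minimal sketch (the script name, staging directory, and the separate sync-to-AWS job are illustrative, not our exact setup):

    # Hypothetical archive script; postgres would invoke it via something like
    #   archive_command = '/usr/local/bin/ship_wal.py %p %f'
    # %p is the path to the completed WAL segment, %f is its file name.
    # Exit code 0 tells postgres the segment is safely archived; anything else makes it retry.
    import shutil
    import sys
    from pathlib import Path

    STAGING = Path("/var/lib/wal-staging")  # assumed local dir, synced to AWS by a separate job

    def main() -> int:
        wal_path, wal_name = sys.argv[1], sys.argv[2]
        dest = STAGING / wal_name
        if dest.exists():
            return 0  # already archived; archive_command must be safe to re-run
        tmp = dest.with_name(wal_name + ".part")
        shutil.copy2(wal_path, tmp)   # copy under a temp name first...
        tmp.rename(dest)              # ...then rename so the sync job never sees a partial file
        return 0

    if __name__ == "__main__":
        sys.exit(main())

On the AWS side the standby would replay whatever shows up (restore_command style); the initial ZFS snapshot is what gives it a consistent base to apply those segments against.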
For the app servers we use SaltStack, and we keep that state repository synchronized with everything needed to reproduce the production environment in AWS.
Obviously we'd have to provision servers, etc., but it's all possible in a worst-case scenario.
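For the "provision servers" part, the worst-case runbook could be little more than spinning up instances and letting Salt rebuild them from the synced states. A rough boto3 sketch (the region, AMI, instance type, and tags are placeholders, not our actual values):

    # Hypothetical DR provisioning sketch: start app-server instances in AWS,
    # then configuration management (SaltStack in our case) takes over.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    def provision_app_servers(count: int = 3) -> list[str]:
        resp = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",  # placeholder AMI
            InstanceType="m5.large",          # placeholder size
            MinCount=count,
            MaxCount=count,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "role", "Value": "app-server-dr"}],
            }],
        )
        return [i["InstanceId"] for i in resp["Instances"]]

    if __name__ == "__main__":
        print(provision_app_servers())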