It's endlessly surprising how people don't care / don't think about backups. And not just individuals! Large companies too.
I'm consulting for a company that makes around €1 billion annual turnover. They don't make their own backups. They rely on disk copies made by the datacenter operator, which happen randomly, and which they don't test themselves.
Recently a user error caused the production database to be destroyed. The most recent "backup" was four days old. Then we had to replay all transactions that happened during those four days. It's insane.
But the most insane part was that nobody was shocked or terrified by the incident. "Business as usual", it seems.
This is a side effect of SOC 2 auditor-approved disaster recovery policies.
At a company where I worked, we had something similar. I spent a couple of months going through all the teams, figuring out how the disaster recovery policies were actually implemented (all of them approved by SOC auditors).
The outcome of my analysis was that, in case of a major disaster, it would be easier to shut down the company and go home than to try to recover to a working state within a reasonable amount of time.
I'd go a step further: for a big corp, having a point of failure that lives outside its structure can be a feature, not a bug.
"Oh there goes Super Entrepise DB Partner again" turns into a product next fiscal year, that shutdowns the following year because the scope was too big, but at least they tried to make things better.
RTO/RPO is a thing. Even though many companies declare that they need an SLA of five nines and an RPO measured in minutes, situations like this make it quite evident that many of them are actually fine with an SLA of 95% and an RPO of weeks.
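For a sense of the gap between those two postures, here's a quick back-of-the-envelope sketch. The availability figures are the ones mentioned above; the yearly-hours arithmetic is just an illustration, not anyone's actual SLA math:

```python
# Rough downtime budget implied by an availability SLA, assuming a 365-day year.
def downtime_per_year_hours(availability: float) -> float:
    """Allowed downtime in hours per year for a given availability fraction."""
    return (1.0 - availability) * 365 * 24

for sla in (0.99999, 0.999, 0.95):
    hours = downtime_per_year_hours(sla)
    print(f"{sla:.3%} availability -> about {hours:.2f} hours/year of allowed downtime")

# Five nines works out to roughly 5 minutes a year; 95% is more than 400 hours,
# i.e. two-plus weeks -- which matches what many companies evidently tolerate.
```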
Wait, the prod DB, like the whole thing? Losing four days of data? How does that work? Aren't customers upset? Not doubting your account, but maybe you missed something, because for a €1 billion company that's likely going to have huge consequences.
Well it was "a" production database, the one that tracks supplier orders and invoices so that suppliers can eventually get paid. The database is populated by a data stream, so after restoration of the old version, they replayed the data stream (that is indeed stored somewhere, but in only one version (not a backup)).
And this was far from painless: the system was unavailable for a whole day, and all manual interventions on the system (like comments, corrections, etc.) that had been done between the restoration date and the incident, were irretrievably lost. -- There were not too many of those apparently, but still.
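For anyone unfamiliar with that recovery pattern, here is a minimal, hypothetical sketch of it: restore the last snapshot, then replay the stored data stream from that point forward. All names and the in-memory structures are illustrative, not the company's actual tooling:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    timestamp: datetime
    payload: dict

def recover(snapshot: dict, snapshot_time: datetime, stream: list[Event]) -> dict:
    """Rebuild database state from an old snapshot plus the replayable stream."""
    state = dict(snapshot)  # start from the last known-good copy
    for event in stream:
        if event.timestamp > snapshot_time:
            # Re-apply everything the upstream feed produced after the snapshot.
            # Manual edits made directly in the database are not in the stream,
            # so they stay lost -- exactly as in the incident described above.
            state.update(event.payload)
    return state
```

The catch, as the comment notes, is that the stream itself exists in only one copy: it enables replay, but it is not a backup of the database, and anything written only to the database is unrecoverable.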