This really needs to be more of a standard thing. I've been near (but as an engineer, never responsible for) production systems my whole career. None of these systems were as terribly maintained as the one in the linked article. Production data was isolated. Backups were done regularly. Systems were provisioned with fault tolerance in mind.
Not once have I seen a full backup restore tested. Not once have I seen a network failure simulated (though I've seen several system failures due to "kicking out a cable," which sort of acts as a proxy for that technique). On multiple occasions I've seen systems taken down by single points of failure[1] that weren't foreseen, but probably could have been.
[1] My favorite: the whole closet went down once because everything was plugged into a single, very expensive, giant UPS that went poof. $40/system for consumer batteries from Office Depot would have been a much better bet. And the best part? Once the customer service engineer replaced whatever doodad failed and brought the thing back up? They plugged everything right back into it.
I'll never forget when my boss was showing the girl scouts (literally) our very expensive UPS room. He explained how, even if the power goes off, we'll switch to batteries and then over to generator power. "See, watch," he says - then flicks the switch. Fooomm... our entire office goes dark.
This took down news information for a good chunk of Wellington finance for about half a day. (Fortunately Wellington, NZ is a tiny corner of the finance world).
Hilarious! But I admit I was super glad it was the boss playing chaos monkey, not me.
Back when I worked for a small ISP, we had a diesel generator in case the power went out for longer than our UPS batteries would last. This provided a great sense of security until we decided to test the system by powering off the main breaker and... it didn't start!
It turns out the emergency stop button was pushed in. Easy enough for us to fix then, but if the power had gone out at 4am it would have been quite another matter.
After that incident, we turned off the main breaker to the building weekly. It was great fun, as most of our offices were in the same building. We had complaints for the first couple of months, until everyone got used to it and had installed mini UPSes for their office equipment.
We did actually have to use the generator for real a while later. Someone had driven their car into the local power substation, and it took at least a month to fix. Electricity was restored through re-routing fairly quickly, but until the substation was repaired we were getting a reduced voltage that caused the UPSes to slowly drain...
The last time they tested the diesel generator failover at a customer's site, the generator went on just fine, but then it did not want to switch to mains again. The whole building was powered by the generator for almost two days, until they managed to convince the generator to switch.
> Not once have I seen a network failure simulated.
Reminds me of the webserver UPS setup at a previous company.
The router (for the incoming T1) and the webserver were plugged in to the UPS.
The UPS was connected (via serial port) to the webserver. Stuff ran on the webserver to poll whether the UPS was on mains power or batteries, send panic emails if it had been on batteries for more than 60 seconds, and eventually shut the webserver down cleanly if the UPS charge dropped below 25%.
The one thing not plugged in to the UPS: the DMZ network switch that provided the connectivity between the webserver and the router.
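For what it's worth, the monitoring half of that setup is easy to reproduce today. Here's a minimal sketch of the polling loop described above, assuming the UPS is exposed through Network UPS Tools (upsc) rather than a raw serial link; the UPS name, mail settings, and thresholds are placeholders:

    import subprocess
    import smtplib
    import time
    from email.message import EmailMessage

    UPS = "myups@localhost"      # placeholder NUT name for the UPS
    ADMIN = "ops@example.com"    # placeholder alert address

    def ups_var(name):
        # e.g. ups_var("ups.status") -> "OL" (on line) or "OB" (on battery)
        return subprocess.check_output(["upsc", UPS, name], text=True).strip()

    def alert(subject):
        msg = EmailMessage()
        msg["Subject"] = subject
        msg["From"] = ADMIN
        msg["To"] = ADMIN
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

    on_battery_since = None
    while True:
        status = ups_var("ups.status")
        charge = float(ups_var("battery.charge"))
        if "OB" in status:
            # panic mail once we've been on battery for more than 60s
            # (a real script would rate-limit the alerts)
            on_battery_since = on_battery_since or time.time()
            if time.time() - on_battery_since > 60:
                alert("UPS on battery for more than 60 seconds")
            if charge < 25:
                # mirror the setup above: shut the webserver down cleanly at 25%
                alert("UPS below 25%, shutting down")
                subprocess.run(["shutdown", "-h", "now"])
        else:
            on_battery_since = None
        time.sleep(10)

None of which helps, of course, if the switch between the webserver and the router isn't on the UPS in the first place.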
Doing that kind of testing is hard. It costs time and effort. If you want to see it done on a truly awe-inspiring scale (whole data centers being taken down by zombies ;) ), see: http://queue.acm.org/detail.cfm?id=2371516
Doing this kind of testing in a gold-plated, heavily-engineered way is hard. But that's not an excuse for not doing it at all. Just walking into your closet and pulling a cable gets you 80-95% of the testing you need, and is free. Setting up a sandbox and "restoring" a backup onto it and then doing some quick queries is likewise easy to do and eliminates huge chunks of the failure space of "bad backups".
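Concretely, that sandbox restore check can be a few lines of script run from cron. A minimal sketch, assuming a nightly PostgreSQL dump; the database, table, and path names are placeholders:

    import subprocess

    BACKUP = "/backups/prod_latest.dump"   # placeholder path to last night's pg_dump -Fc output
    SANDBOX = "restore_test"               # throwaway sandbox database

    # Recreate the sandbox and restore the backup into it.
    subprocess.run(["dropdb", "--if-exists", SANDBOX], check=True)
    subprocess.run(["createdb", SANDBOX], check=True)
    subprocess.run(["pg_restore", "--no-owner", "-d", SANDBOX, BACKUP], check=True)

    # A couple of quick queries: does the restored data look roughly like production?
    for sql in ["SELECT count(*) FROM customers;",
                "SELECT max(created_at) FROM orders;"]:
        out = subprocess.run(["psql", "-d", SANDBOX, "-tAc", sql],
                             check=True, capture_output=True, text=True)
        print(sql, "->", out.stdout.strip())

If the restore dies or the counts come back empty, you've just learned something important about your backups for the price of a cron job.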
Really, this attitude (that things have to be done right) is part of the problem here. To a seasoned IT wonk, the only alternative to doing something "The Right Way" is not doing it at all. And that's a killer in situations like these.
Don't hack your systems to make them work. Absolutely do hack at them to test.
"walking into your closet and pulling a cable" is not free, if your planned disaster recovery is not a seamless failover, but a process to recover data with some work and limited (nonzero) downtime/cost to business.
For example, our recovery plan for a financial mainframe in the case of most major disasters was to restore the daily backup to off-site hardware identical to the production hw; however, that (expensive) hardware wasn't sitting "empty" - it was used as an acceptance test environment.
Doing a full test of the restore would be possible, but it would be a very costly disruption: multiple days of work for the actual environment restoration, deployment, and testing, and then all of that once more to rebuild a proper acceptance-test environment. It would also destroy a few man-months' worth of long tests-in-progress and prevent any change deployments while it is happening.
All of this would be reasonable in any real disaster, but such costs and disruptions aren't acceptable for routine testing.
"Chaos Monkey" works only if your infrastructure is built on cheap, unstable, and massively redundant items. You can also get excellent uptime with an expensive, stable, massively controlled environment with limited redundancy (100% guaranteed recovery, but not "hot failover") - but you can't afford chaos there.
To paraphrase: if you go with an awful hack job for your disaster recovery plan, testing is more expensive. And to extend: you won't actually test because it's "too expensive", and your disaster recovery plan won't work.
How is this distinct from "Don't hack your systems to make them work. Absolutely do hack at them to test."? I don't see it.
This just sounds like "my business doesn't have the financial capacity to engineer data recovery processes". Well, OK then. Just don't claim to be doing it.
We did know that we could recover backups, because we did it for small parts of the data, and we knew that we could do disaster recovery because (a) we did test this, though very rarely; and (b) we had successfully recovered from actual full-scale disasters twice over ~7 years.
But a successful, efficient disaster recovery plan doesn't always mean "no damage" - it often means damage mitigation; i.e., we can fix this with available resources while meeting our legal obligations so that our customers don't suffer, not that there are no consequences at all. Valid data recovery plans ensure that data recovery really is possible and detail how it happens, but that recovery can be expensive. And while you can plan, document, train, and test activities like "those 100 people will do X, and those 10 sales reps will call the involved customers and give them $X credit", you really don't want to put the plan into action without a damn good reason.
For example, a recovery plan for a bunch of disasters that are likely to cut all data lines from a remote branch to HQ involves documenting, printing & verifying a large pile of deal documents of the day, having them shipped physically and handled by a designated unit in the HQ. The process has been tested both as a practice and in real historical events.
However, if you "pull a wire in the closet" and cause this to happen just so, then you've just 'gifted' a lot of people a full night of emergency overtime work, and deserve a kick in the face.
All I can say is that you're very lucky to have a working system (and probably a company to work for), and I'm very lucky not to work where you do. Seriously, your "test" of a full disaster recovery was an actual disaster! More than one!
And frankly, if your response to the idea of implementing dynamic failure testing is that someone doing that should be "kicked in the face" (seriously, wtf? even the image is just evil), then shame on you. That's just way beyond "mistaken engineering practice" and well on the way to "Kafkaesque caricature of a bad IT department". Yikes.
Admittedly, you have existing constraints that make moving in the right direction expensive and painful. But rather than admit that you have a fragile system that you can't afford to engineer properly, you flame on the internet against people who, quite frankly, do know how to do this properly. Stop.
I'd like not to stop, but to continue exploring the viewpoints. And I'd like you and others to also consider less-tech solutions to tech problems when they meet the needs, instead of automatically assuming that we made stupid decisions.
For example, any reasonable factory also has a disaster recovery process to handle equipment damage/downtime - some redundant gear, backup power, an inventory of spare parts, guaranteed SLAs for shipping replacements, etc. But still, someone intentionally throwing a wrench in the machine isn't "dynamic failure testing" but sabotage that will result in anger from coworkers who'll have to fix it. Should their system be called "improperly engineered"?
We had great engineers implementing failover for a few 'hot' systems, but after much analysis we knowingly chose not to do it 'your way' for most of them since it wasn't actually the best choice.
I agree that in 99% of companies talked about on HN your way is undoubtedly better, and in tech startups it should be the default option. But there, much of the business process was people & phone & signed legalese, unlike any "software-first" business; and the tech part usually didn't do anything better than the employees could do themselves - it simply was faster/cheaper/automated. So we chose functional manual recoveries instead of technical duplication. And you have to anyway - if your HQ burns down, who cares if your IT systems still work when your employees don't have planned backup office space to do their tasks? IT stuff was only about half of the whole disaster recovery problem.
In effect, we always had an available "redundant failover system" that was manual instead of digital. It wasn't fragile (it didn't break, ever - as I said, we tried) and it was fully functional (customers wouldn't notice), but it was very expensive to run - every hour of running the 'redundant system' meant hundreds of man-hours of overtime pay and hundreds of unhappy employees.
So, in such cases, you do scheduled disaster testing and budget the costs of these disruptions as necessary tests - but if someone intentionally hurts his coworkers by creating random unauthorised disruptions, then it's not welcome.
The big disadvantage of this actually is not the data recovery or systems engineering, but the fact that it hurts the development culture. I left because in such a place you can't "move fast and break things"; everyone tends to make sure that every deployment really, really doesn't break anything. We got very good system stability out of it, but all the testing/QA usually meant at least 1-2 months for any finished feature to go live - which fits their business goals (stability & cost efficiency rather than shiny features) but demotivates developers.
My favourite way to test restores is to restore the production backups to the dev server frequently - this keeps the dev data set up to date, and doubles as a handy test of the restore mechanism. Of course, if you have huge amounts of data or files on production this becomes more difficult, but not impossible, to manage.
This works well, though you may need an "anonymizer" (and maybe some extra compliance testing) if your systems have PCI or HIPAA data on them. We have federal restrictions against storing certain types of data on servers outside the US. Cloud computing sounds great, but neither Amazon nor Google will guarantee the data stays within the country's borders.
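If anyone wants a starting point, here's a rough sketch of what such an anonymizer pass might look like, run against the dev copy right after the restore. It assumes the same PostgreSQL setup as the sketch upthread; the table and column names are invented, and a real PCI/HIPAA scrub needs a reviewed list of fields rather than this handful:

    import subprocess

    DEV_DB = "dev_copy"   # the freshly restored dev database (placeholder name)

    # Overwrite personally identifiable fields with deterministic junk.
    # Table and column names here are invented for illustration.
    SCRUB = [
        "UPDATE customers SET email = 'user' || id || '@example.com', phone = NULL;",
        "UPDATE customers SET full_name = 'Customer ' || id;",
        "UPDATE payments SET card_last4 = '0000';",
    ]

    for sql in SCRUB:
        subprocess.run(["psql", "-d", DEV_DB, "-c", sql], check=True)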
Minor correction: the Chaos Monkey was Netflix's innovation. It just happened to be implemented on Amazon's cloud. It would have been just as useful if they had their own colocated servers or used a different cloud computing provider.
Apple did this before Amazon or Netflix in this regard [1], but the point needs to be made that a system needs to be tested, and not just in a controlled, aseptic way, because the real world isn't.
Another story supporting Chaos Monkey is what the Obama team did for their Narwhal infrastructure - they staged outages and random failures to prepare for their big day, while Romney's team, who outspent the Obama team by at least an order of magnitude, had their system fail on election day.
I'd like to see a source for Romney outspending the Obama team "at least" 10x, because while I can speak from experience that ORCA was a gigantic piece of shit, it's not like the Obama people were struggling to pay their bills.
I don't know what metric the parent comment is referring to, but in terms of technology stack, I can fully believe that the Romney team spent more than Obama's team. Here's a post by one of the creators of the fundraising platform:
I actually had that post in my mind when writing my reply, but I assumed r00fus was referring to ORCA and Narwhal specifically.
> ... what the Obama team did for their Narwhal infrastructure - they staged outages and random failures to prepare for their big day, while Romney's team, who outspent the Obama team by at least an order of magnitude, had their system fail on election day.
http://www.codinghorror.com/blog/2011/04/working-with-the-ch...
This only happened because nobody even asked "What happens if I press this button?"