The security exec at Pivotal, where I work, has been talking about "repaving" servers as a security tactic (along with rotating keys and repairing vulnerabilities).[0]
The theory runs that attackers need time to accrue and compound their incomplete positions into a successful compromise.
But if you keep patching continuously, attackers have fewer vulnerabilities to work with. If you keep rotating keys frequently, the keys they do capture become useless in short order. And if you rebuild the servers frequently, any system they've taken control of simply vanishes and they have to start from scratch.
I'm not completely sold on the difference between repair and repave, myself. And I expect that sophisticated attackers will begin to rely more on identifying local holes and quickly encoding those in automated tools so that they can re-establish their positions after a repaving happens.
But it raises the cost for casual attackers, which is still worthwhile.
Having everything patched as soon as patches are available (or within, say, 6 hours of availability for "routine" patches, with faster turnaround for critical patches) is a win.
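For a concrete picture of what that cadence can look like, here's a minimal sketch of an hourly patch loop; the Debian-style host with apt, the reboot handling, and running it from cron are all assumptions for illustration, not anything from the article.

    #!/usr/bin/env python3
    """Minimal patch-loop sketch, assuming a Debian-style host with apt.
    Run it from cron (hourly, say) so routine patches land well inside
    a 6-hour window; critical patches still deserve a human's attention."""
    import os
    import subprocess
    import sys

    def run(cmd):
        # Fail loudly: a silently broken patch loop is worse than none.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def main():
        run(["apt-get", "update"])
        # Non-interactive upgrade of everything with a pending update.
        run(["apt-get", "-y", "upgrade"])
        # Kernel/libc updates need a reboot; flag it for the orchestration
        # layer rather than rebooting under people's feet.
        if os.path.exists("/var/run/reboot-required"):
            print("reboot required", file=sys.stderr)
            sys.exit(2)

    if __name__ == "__main__":
        main()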
The rest: not so much.
Rebuilding continuously for security is not something I would recommend.
It's not worth the bother. Apart from keeping patches up to date --- which is a good idea --- it's probably not really buying you anything.
It's not crazy to periodically rotate keys, but attackers don't acquire keys by, you know, stumbling over them on the street or picking them up when you've accidentally left them on the bar. They get them because you have a vulnerability --- usually in your own code or configuration. Rebuilding just recreates those same vulnerabilities, and attackers will reinfect in seconds.
A lot of companies do lose their keys that way: www roots, gists, hardcoded in products, github history, etc.
The win to rotating them is not so much because you'll be regularly evicting attackers you didn't know had your keys, but because when you do have a fire, you won't be finding out for the first time that you can't actually rotate them.
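To make that concrete, here's a minimal sketch of a scheduled rotation, assuming AWS IAM access keys and boto3; the user name and the secret-store hand-off are placeholders. The value is that this runs routinely, so it's known to work before you need it during an incident.

    """Sketch of routine access-key rotation, assuming AWS IAM + boto3.
    The user name and publish_to_secret_store are placeholders."""
    import boto3

    iam = boto3.client("iam")
    USER = "example-service-user"  # hypothetical service account

    def publish_to_secret_store(key_id, secret):
        # Placeholder: push the new credential to wherever apps read it.
        raise NotImplementedError

    def rotate(user):
        old = iam.list_access_keys(UserName=user)["AccessKeyMetadata"]
        new = iam.create_access_key(UserName=user)["AccessKey"]
        publish_to_secret_store(new["AccessKeyId"], new["SecretAccessKey"])
        # Deactivate (don't delete) old keys first, so a missed consumer
        # shows up as an auth failure you can roll back, not an outage.
        for key in old:
            iam.update_access_key(UserName=user,
                                  AccessKeyId=key["AccessKeyId"],
                                  Status="Inactive")

    if __name__ == "__main__":
        rotate(USER)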
It also forces you to design things much more reliably, which helps continuity in non-security scenarios.
After redeploying and realizing that Todd has to ssh in and hand-edit that one hostname and fix a symlink that was supposed to be temporary so the new version of A can talk to B, that fix is going to get rolled into the automation pretty quickly. Large operations that don't do this tend to quickly end up in the "nobody is allowed to touch this pile of technical debt because we don't know how to re-create it anymore" problem.
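The way that fix gets rolled in is by turning it into an idempotent provisioning step instead of tribal knowledge. A sketch of what the codified "Todd fix" might look like, with the hostname, config path, and symlink entirely made up for illustration:

    """The "Todd fix" captured as an idempotent provisioning step.
    Hostname, config path, and symlink target are all invented; the
    real win is that a rebuild reproduces the fix without anyone
    ssh-ing in."""
    import os

    SERVICE_B_HOST = "service-b.internal.example.com"   # hypothetical
    CONFIG_PATH = "/etc/service-a/upstream.conf"         # hypothetical
    LINK_PATH = "/opt/service-a/current"                 # hypothetical
    LINK_TARGET = "/opt/service-a/releases/2024-01-01"   # hypothetical

    def ensure_upstream_config():
        desired = "upstream_host = %s\n" % SERVICE_B_HOST
        current = open(CONFIG_PATH).read() if os.path.exists(CONFIG_PATH) else None
        if current != desired:
            with open(CONFIG_PATH, "w") as f:
                f.write(desired)

    def ensure_release_symlink():
        # The "temporary" symlink, now a declared, repeatable step.
        if os.path.islink(LINK_PATH) and os.readlink(LINK_PATH) == LINK_TARGET:
            return
        if os.path.lexists(LINK_PATH):
            os.remove(LINK_PATH)
        os.symlink(LINK_TARGET, LINK_PATH)

    if __name__ == "__main__":
        ensure_upstream_config()
        ensure_release_symlink()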
It seems like it's good to be able to rebuild everything at a moment's notice after patching against a major exploit, though. You should have a fast way to rebuild secrets and servers after the next heartbleed-scale vulnerability.
Being able to rebuild critical infrastructure from source, and know that you'll be able to reliably deploy it, is a _huge_ win for security.
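One way to keep that guarantee honest is a periodic job that actually does the rebuild from source and fails loudly if it can't. A sketch, assuming a containerized service built from a pinned git revision; the repo URL, tag, and image name are placeholders:

    """Sketch of a "can we still rebuild this?" check, assuming a
    containerized service built from a pinned git revision. Repo URL,
    revision, and image tag are placeholders."""
    import subprocess
    import tempfile

    REPO = "https://git.example.com/team/service-a.git"      # hypothetical
    REVISION = "v1.42.0"                                      # hypothetical pinned tag
    IMAGE = "registry.example.com/service-a:rebuild-check"    # hypothetical

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def rebuild_check():
        with tempfile.TemporaryDirectory() as workdir:
            run(["git", "clone", "--depth", "1", "--branch", REVISION, REPO, workdir])
            # If this stops working, you find out on a quiet Tuesday, not
            # in the middle of the next heartbleed-scale scramble.
            run(["docker", "build", "--no-cache", "-t", IMAGE, workdir])

    if __name__ == "__main__":
        rebuild_check()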
In that case, you might be interested in bosh: http://bosh.io/docs/problems.html (the tool that enables the workflow jacques_chester was describing). It embraces the idea of reliably building from source for the exact reasons you've mentioned.
[0] https://medium.com/built-to-adapt/the-three-r-s-of-enterpris...