I disagree here - you've either lost the data or you haven't. You can make guesses as to the expected resources of the attacker, but if you're wrong and the attacker has more resources, then you might as well have not even bothered.
As an example, you have some fairly non-sensitive private health records. Here are three approaches:
(1) No security at all. You hope nobody is going to bother taking them and using them for anything malicious.
(2) You put in decent security, but a contractor for a new feature left open a vulnerability you didn't know about.
(3) You make sure everything is secure and run security audits over the code that catch and close the vulnerabilities the contractor introduced.
The data for (1) and (2) gets stolen and used in a bigger hack on a different service that results in money being stolen.
Now you could say that (1) gets an F, (2) gets a B because at least they tried, and (3) gets an A+ because the data wasn't stolen. This is rubbish - both (1) and (2) resulted in data being stolen and lost customers / lost money / insurance penalties / whatever. The security teams for both (1) and (2) failed utterly and get an F.
If (2) had guessed correctly and nobody had actually devoted those resources then (2) passes with flying colors because the data is safe - but it's just pure gambling. Gambling with security will always be a losing bet in the long run. Rather just make it secure. Going off some strange 'expected resources' is just asking for the time when your data somehow becomes valuable and those resources get brought to bear (or more likely, one of your employees annoys the wrong person with too much free time).
Explaining to your customers that their email addresses weren't valuable enough to do proper security is a great way to lose me as a customer.
I think the idea the parent was trying to express is that there are different risk appetites for various things/companies. If it would cost more than your profits to secure something 100%, obviously you need to look at other ways to go about it. Mitigation is a major force in information security. Mitigation doesn't eliminate the risk; it just makes sure the impact is low if the risk does get exploited. Likewise, while PCI data needs to be as locked down as possible, other data doesn't need that level of security because the tradeoffs are too massive to be cost effective or business effective.
What you should realize is that "security teams" are generally not responsible for the level of security at organizations. The information security team will generally present the risk to the business owner of that process, that data, that application, etc., and let the business owner decide whether to accept the risk, mitigate the risk, or avoid the risk. If I went to the CEO of Dropbox and told him the biggest security flaw in Dropbox is that users can share files with each other, he's going to tell me to jump in a lake, because that's their entire business.
Nothing is 100% secure, and nothing can be 100% secure. I'm not agreeing or disagreeing with what Prezi is doing, but your notions of all-or-nothing security seem a little out of touch with the reality of business.
> I disagree here - you've either lost the data or you haven't.
You seem to be implying that the fact there are two possible outcomes implies there are only two possible initial states - vulnerable and not vulnerable. If the attacker steals data, the initial state was vulnerable, and if the attacker fails, the initial state was not vulnerable.
This is what poker players call "results-oriented thinking". The initial state is much more like a point on a continuous range, where 0 is "having literally no security whatsoever" and 1 is "having security no earthly force can overcome in any scenario".
No private company has perfect security, and perfect security is not desirable, because incremental security has non-zero cost. Does it make sense for a typical firm to spend millions of dollars hardening their office building against the threat of attack by a heavily armed private militia? No, because for most firms the cost of preparing against such an attack outweighs the risk-weighted value of preventing such an attack.
Incrementally improving security narrows the range of successful attacks. It means fewer attackers will be skilled enough to infiltrate successfully, and fewer attackers with enough skill will go to the effort of doing so. The goal is not to guard against every conceivable attacker but, in a simplified model, to keep improving security until the marginal cost of the last improvement equals the marginal value of the reduction in attack scenarios.
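To make that stopping rule concrete, here's a toy sketch in Python - every number in it is invented for illustration, not drawn from any real breach data - of buying improvements while each one removes more expected loss than it costs:

    # Toy model: each step is (cost, reduction in breach probability).
    # All figures are made up for illustration only.
    improvements = [(1_000, 0.30), (5_000, 0.15), (20_000, 0.04), (100_000, 0.005)]
    loss_if_breached = 500_000  # assumed damage from a successful attack

    spent, breach_prob = 0, 0.5
    for cost, prob_reduction in improvements:
        marginal_value = prob_reduction * loss_if_breached
        if marginal_value < cost:
            break  # the next improvement costs more than the risk it removes
        spent += cost
        breach_prob -= prob_reduction

    print(spent, round(breach_prob, 3))  # stops well short of "perfect"

Under these assumed numbers you buy the first three improvements and decline the fourth, because spending $100,000 to remove $2,500 of expected loss is a bad trade.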
> If (2) had guessed correctly and nobody had actually devoted those resources then (2) passes with flying colors because the data is safe - but it's just pure gambling
"Gambling" has no particular meaning in this context, because every decision about security precautions involves weighing known costs against potential risks. The division of security plans is not between "gambling" and "not gambling" but rather between "positive expected value" and "negative expected value".
How do you relate this to something like home security? You have valuables at your home. People can come in and take them with varying degrees of force. Are you prepared for the maximum-force attack, or do you accept the typical security features which you know to be only minimally effective?
hahah was just gonna write about this, but you beat me to it.
Either way, the point is, there's a trade-off. Kinda like the 80-20 rule. It takes roughly 20% of the effort to protect against 80% of attacks (the casual, opportunistic ones - preventing SQL injection, or locking your front door) and 80% of the effort to prevent the last 20% (actual pros). So "you might as well not have bothered" is somewhat naive in my opinion.
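As a concrete example of that cheap 20%, here's a minimal Python sketch of the kind of casual attack a parameterized query shuts down. The table and the injection string are hypothetical; the point is only that binding input as data instead of splicing it into SQL is nearly free:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_input = "alice' OR '1'='1"  # classic opportunistic injection attempt

    # Vulnerable: input spliced straight into the SQL string.
    # conn.execute("SELECT email FROM users WHERE name = '%s'" % user_input)

    # Safe: the driver binds the value as data, never as SQL.
    rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())  # [] - the injection string matches no real user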
By your logic, there are only two kinds of chess (or any game) players: those who win every game they play, and those that don't.
Unfortunately, the real world isn't so black and white. The resources someone will put into hacking your site depends on their perceived value of success, and if someone with enough resources values it enough, they will hack your site, no matter what you do.
Anyone who claims to have constructed an unhackable internet service has either constructed a trivial and useless service, or doesn't understand the complexity of software.
"Because they tried" doesn't get a B. You're not graded on effort. What you're graded on - when it comes to defense as opposed to recovery, though both are a part of this - is how likely a breach is. Unfortunately, you don't always learn your grade (and when you do, it's bad).
"Gambling with security will always be a losing bet in the long run. Rather just make it secure. Going off some strange 'expected resources' is just asking for the time when your data somehow becomes valuable and those resources get brought (or more likely, one of your employees annoys the wrong person with too much free time)."
So, every site you deploy is going to indefinitely withstand armed assault by government forces?
No. But if the site gets hacked, I failed. Whether I asked users for their credit cards and stored them in a publicly accessible plain-text file or in a secure system that still got hacked, the end result is the same: my users are having unauthorized payments taken from their credit cards. I've failed.
Maybe I can sleep better at night if I didn't store them in plain text, and I can make excuses more easily, but I still failed. Regardless of how likely any breach was, I failed. My customers have probably jumped ship.
If I store them in plain text and never get hacked, then I've succeeded. I'm more likely to succeed the more security I add, but if the data gets stolen, then none of it matters anymore. Basically, I'm saying that success or failure is a boolean based on real-world results and does not depend on the amount of effort put into the security. The security can influence the result, but once the result occurs, the security I used or did not use is irrelevant.
So skimping on security is always a terrible idea. If you know of a way to increase security, then you should increase it. If you offer a bug bounty to improve security, make sure you give a reward for any possible breach that could cause you to get hacked, regardless of whose 'fault' the vulnerability is. If someone can social engineer your developer, then pay out the bounty. Maybe it won't happen next time because now the developer has learned something.
This is a fascinating discussion because it betrays two fundamental attitudes of society to risk.
RyanZAG is "correct". If someone breaks into my house and steals my TV, then my security was a failure.
This leads to the next problem - it's not a catastrophic failure in today's (western) society. I am probably out at work, I am insured, and the burglar is unlikely to be waiting when I get home to murder me.
However, there have been plenty of societies in the past, and are many now, where the expectation of loss would be almost total - someone breaches your security, they take the TV, kill you and your family, and burn the house down on the way out.
So it's not a judgement on the resources of the attacker that matters, it is the expected consequences of the breach - the expected value of damage.
Which side of the argument you come down on depends on whether you see the Internet as basically a nice London suburb with a few bad eggs in it, or a violent amalgam of Feudal Middle England and Mogadishu on a bad day.
"So its not a judgement on the resources of the attacker that matters, it is the expected consequences of the breach - the expected value of damage."
I nodded at this when I mentioned resiliency and recovery, but I still think resources of the attacker matters. A determined attacker could doubtless breach your front door with a battering ram or axe and enough time. Part of the reason you don't worry about this, I assert, is that it's not likely because the costs to the attacker (in terms of chances of getting caught and penalties if they are) are too high. Part of it, as you say, is that we have some amount of resiliency against the threats posed. And probably part of it is that most of us are not terribly inclined to do damage to each other without provocation and there are many possible targets for the few who are - I'm not really sure the degree to which we should legitimately consider that bit a part of "security" but it certainly merits weight in calculating risks.
That is a good point - I factor in the security of an effective police force, a legal system that will not tolerate using threats to sign over a business for $1 - all of these are part of our security.
Curiously I am not convinced of the total damage done by these various break-ins. Stealing credit card numbers is not the same as getting the loot into a laundered bank account. Grabbing bitcoin wallets is closer, but the liquidity does not exist to extract much.
The damage is seemingly more reputational, or other internal costs to the hacked company (like paying security consultants). The actual "money the thieves ran off with and could convert into real cash" is pretty thin - I would value some pointers to studies here.
You're conflating two things, inappropriately in my opinion:
> If you offer a bug bounty to improve security, make sure you give a reward for any possible breach that could cause you to get hacked, regardless of whose 'fault' the vulnerability is.
This is true. There's no upside for rejecting this as "out of bounds" except for a relatively tiny sum of cash.
> If you know of a way to increase security, then you should increase it.
This I disagree with completely. If there's anything you can do at negligible cost, you should do it; however, there are all kinds of costs: usability costs, operational costs, training costs, and so on.
You can't hand-wave these away by declaring that any breach is failure without recognizing the fact that there is no such thing as perfect security. In fact all security is gambling, and it should be a gamble based on the best odds we can come up with professionally against the cost of failure. If something requires 100% perfect security then that thing should not be done, period.
'This is true. There's no upside for rejecting this as "out of bounds" except for a relatively tiny sum of cash.'
There can be. If the attack involved something that - done broadly - would itself cause problems even without a vulnerability, then you don't want to reward people for probing those ways without arranging it first. As a sort of extreme example, imagine hundreds of security researchers getting in the way of your paying customers while trying social engineering attacks on your staff.
No, security is about trade-offs. If you throw absurd resources toward protecting against entirely unrealistic threats and your company goes out of business, you've failed. If you have legitimately made the risks small enough, for the resources and threat model (and that threat model sufficiently matches reality), you've succeeded. There are of course some legitimate caveats, including talk of externalities and questions about how one would measure things, but I still assert my basic model is more correct than yours.
Recognition that security is only as strong as the weak link does not imply that all links must be infinitely strong.
> So skimping on security is always a terrible idea. If you know of a way to increase security, then you should increase it.
This is what all of the "security" vendors would like you to believe. It completely ignores the value of the assets you are securing.
How many rounds do you use with PBKDF2 if you want to slow down attackers? You can always add more rounds to slow down brute forcing, so how would you reconcile that with your statement that you should always increase security? The same applies to bcrypt.
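To make the dilemma concrete: there is no "most secure" iteration count, only a latency budget you choose to spend. Here's a minimal Python sketch - the 100ms budget below is an arbitrary assumption, not a recommendation - of picking rounds by time budget rather than "as many as possible":

    import hashlib, os, time

    def calibrate_pbkdf2(target_seconds=0.1, iterations=100_000):
        """Double the iteration count until one hash costs roughly the budget."""
        salt = os.urandom(16)
        while True:
            t0 = time.perf_counter()
            hashlib.pbkdf2_hmac("sha256", b"test-password", salt, iterations)
            if time.perf_counter() - t0 >= target_seconds:
                return iterations
            iterations *= 2

    # "100ms per login attempt" makes the trade-off explicit: every extra
    # round slows attackers and your own users by the same factor.
    print(calibrate_pbkdf2(0.1))

The exact numbers don't matter; the point is that the knob only stops turning once you name a cost you're willing to pay.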
>>> You can make guesses as to the expected resources of the attacker, but if you're wrong and the attacker has more resources, then you might as well have not even bothered.
"Butler spent months plotting to infiltrate and overtake his four competitors, culminating in the two-day hackfest in his overheated safe house high above the Tenderloin. The sites blinked out of existence, their thousands of forum posts later rematerializing on CardersMarket. Iceman now had upwards of 6,000 users on his site, making it by far the biggest carder site on the Internet."
Your security people work 8-5, go home, and leave their work at the office. Most hackers have the ability to go days or weeks at a time banging away on your system until they find a crack wide enough to get in, and then it's game over.