
Having the government outline minimum penalties for data breaches would go a long way toward fixing the problem. It’s much easier to justify fixing a known issue or dedicating time to updating dependencies if you know there’s a defined cost (per customer!) of failing to do so.



> Having the government outline minimum penalties for data breaches would go a long way toward fixing the problem.

Then companies start covering up data breaches because disclosing them would cost millions in fines, resulting in people not even knowing when they've been compromised.

You also have the problem where politicians/media have no idea what they're talking about, e.g. calling the Google+ issue a "data breach" when it was actually a vulnerability discovered internally with no evidence of anyone having ever used it. If that's the standard then every time there is a vulnerability in a major operating system or TLS library, no one will be safe from the litigious trolls.


> Then companies start covering up data breaches because disclosing them would cost millions in fines, resulting in people not even knowing when they've been compromised.

Couldn't you make this same argument about any law that punishes bad behavior? As an extreme example, if we make murder illegal, that incentivizes covering up the act, at the expense of closure for victims' families. It seems flawed to me.


It's a lot harder to cover up a murder than a data breach. People notice when someone turns up dead or mysteriously disappears. If some criminals break into your servers, who has any way to know other than you and the criminals?

There is also the issue of intent. Murder is illegal when you intend to do it. Nobody intends to have a data breach. In that case sunlight is more important than punishment because it's in everyone's interest to prevent it happening again, which requires understanding how it happened, which requires cooperation. Putting otherwise-aligned people on opposite sides creates unnecessary conflict at odds with the common goal.


I'm not sure you could cover up a data breach that easily. Those data dumps are going to be sold on the black market eventually, and I speculate that in many cases government agencies will be able to identify unannounced breaches.

Slap a 10x (or even 100x) fine on companies whose data breaches are discovered independently and covering stuff up won't look like such a good idea anymore.
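
As a back-of-the-envelope sketch in Python (every number below is a made-up assumption, not a real fine level), the expected-cost comparison such a multiplier is meant to tilt looks roughly like this:

  # Illustrative only: invented figures, purely to show the trade-off.
  base_fine = 5_000_000     # hypothetical fine for a promptly disclosed breach
  multiplier = 10           # proposed extra factor when a cover-up is uncovered
  p_discovered = 0.2        # assumed chance an unannounced breach surfaces anyway

  expected_cost_disclose = base_fine
  expected_cost_cover_up = p_discovered * multiplier * base_fine

  print(f"{expected_cost_disclose:,.0f}")  # 5,000,000
  print(f"{expected_cost_cover_up:,.0f}")  # 10,000,000; at 100x it would be 100,000,000

With those assumptions, covering up already costs more in expectation; at 100x it's a bad bet unless the odds of independent discovery are tiny (under 1% here).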


> Those data dumps are going to be sold on the black market eventually, and I speculate that in many cases government agencies will be able to identify unannounced breaches.

Sure, but how do you prove it was covered up rather than merely discovered externally before it was discovered internally?


Covering up a data breach sounds like criminal behavior in the example above. Which, you know, lands people in jail? I seriously doubt many employees will risk jail time just so their company is spared a fine.

If it is mandatory to disclose data breaches and equally mandatory to cooperate with full transparency to fix the issue, then we are assuming that employees would act criminally, and be personally accountable for it, just because it would be best for the company?


  Covering up a data breach sounds like criminal
  behavior in the example above. Which, you know,
  lands people in jail?
Only if the cover-up isn't successful.


I would like to meet the employees who are so loyal that they will risk jail time for their company. Sure, some people will risk persecution to work in outlawed political groups, but that would take some real dedication in this case. Especially as they have nothing to gain whatsoever.


> Especially as they have nothing to gain whatsoever.

Other than their stock options, their relationships with other employees and potentially their job and career. True, whistleblowers have little to gain, but they have much to lose.


They have their freedom to lose and nothing to gain by not sending in an anonymous tip.

You have groups of multiple people there, where only one has to talk for everyone to go to jail. This is how organized crime has been prosecuted for years. The only one of the guilty who gets out is the snitch.


> They have their freedom to lose and nothing to gain by not sending in an anonymous tip.

Sending an anonymous tip that could result in their company losing a lot of money, if not going out of business, has a highly undesirable effect on their continued employment, future raises, stock options, etc.

> You have groups of multiple people there, where only one has to talk for everyone to go to jail. This is how organized crime has been prosecuted for years. The only one of the guilty who gets out is the snitch.

Prosecuting organized crime works by busting the little fish and cutting a deal to go after the big fish. There is no starting point for that process when you're dealing with an otherwise non-criminal organization. If you're not already aware of their offense you have no reason to be investigating them to begin with and nobody there has the incentive to tell you when they don't expect you to have any other way to find out.

"The only one of the guilty who gets out is the snitch" is also obviously incompatible with remaining anonymous. Anyone would be able to deduce what happened.

You get whistleblowers when someone is sufficiently outraged at what the company is doing to take the risk of trying to stop them. Not when the government is threatening severe penalties for a past mistake that has already been remediated.

The NTSB method produces better outcomes than the War On Drugs method.


>It's a lot harder to cover up a murder than a data breach.

Is it? You can murder someone by yourself and be the only person who knows what happened. You as the murderer are strongly incentivized to never tell anyone if you want to remain free.

In a corporate IT department a bunch of people will have to know just to make a decision as to whether or not to publicly disclose it. An anonymous tip could have zero consequences for the individual, even while they remain in their current job. What is the turnover among IT staff? Once they have another job they have virtually zero motivation to keep their former employer's dirty secrets.


You could totally make the same argument. And so you go with whichever rules lead to the best societal outcomes. It may be different per crime.


> You also have the problem where politicians/media have no idea what they're talking about, e.g. calling the Google+ issue a "data breach" when it was actually a vulnerability discovered internally with no evidence of anyone having ever used it.

When (1) there's no evidence of anyone ever having exploited the issue; and (2) the logs where that evidence would appear, if it existed, only go back two weeks...

...it seems fine to assume that people have exploited the issue and that the evidence was there once, but isn't now.


By this logic everything is already compromised, because there are no major operating systems that have never had a security vulnerability and most logs don't go back more than a couple of months.


Is that a problem?

"Our logs don't show evidence of any data compromise" is not stronger evidence of anything than "our logs for the last two weeks don't show evidence of any data compromise" if you don't have logs that go back more than two weeks. How much evidence do you think that is? How do you think it might play in the press if the denial was accompanied by the two-week qualifier?


Allowing class action lawsuits of unlimited scope would encourage whistleblowers and incentivize investigators. Not to mention scare the living daylights out of would-be violators.


A treble damages multiplier with criminal charges and mandatory minimum sentences measured in decades would be a rather effective deterrent. A breach can be just as devastating to someone as losing a partner or parent to senseless violence and should be punished in proportion to the number of victims. Fines are ineffective. There need to be life sentences and company dissolution for incidents like Equifax.


The War on Drugs method then.

High penalties are irrelevant when people don't expect to get caught. They often make things worse by creating a "no snitching" culture because the disproportional penalties are seen as unfair by would-be informants who are then less inclined to cooperate.


I think it is important to make the distinction that it is the impact of the breach that matters and not the breach itself. If the information gained from a breach, say credit card numbers, is immediately rendered useless by the action of the breached company, should they be penalized? I also find it unlikely that penalties would be accurately priced, which is a whole separate conversation about perverse incentives.

I think most data breaches boil down to poor culture and management. Software maintenance gets cheaper as you do it more frequently since the complexity of deferred maintenance scales exponentially. It's a lot easier to justify doing things when the costs to do them are nearly nothing. This is ideally where we should aim as an industry.


I think it's not simple to predict the impact of a breach. I would rather the breach itself be penalized.

I prefer incentives that prevent spilling the milk to post-spill arguments about how damaging the spill may have been.


> I prefer incentives that prevent spilling the milk to post-spill arguments about how damaging the spill may have been.

Then what you want is subsidies to audit popular software/hardware for vulnerabilities.

This is a classic high transaction cost tragedy of the commons. The manufacturer has no incentive to make secure devices because customers still buy the insecure ones. Imposing liability is difficult because the issues are highly technical (difficult for judge/jury to understand) and the damages are highly speculative and hard to calculate. Imposing specific security standards is equally problematic because of the same bad interaction between technical complexity and politicians.

But the solutions are known -- it basically just requires money for security hardening. So have the government provide the money. Without specific byzantine standards it allows the job to be done properly, and providing the money removes the incentive to cut corners.


> But the solutions are known -- it basically just requires money for security hardening. So have the government provide the money. Without specific byzantine standards it allows the job to be done properly, and providing the money removes the incentive to cut corners.

If you don't force the audit, why would a company want to take the risk of testing their product? I feel that at best, it'll end up like "quality seals" on food items. Yes, I can draw a quality seal in Photoshop too.

But even if companies were somehow willing (how, without forcing them?), you still need to ensure the audits are reliable, and prevent companies from creating a fake rubber-stamping auditing entity and going to market either way. What's the preferred way to accomplish that?

Companies are in a race to the bottom, and they'll do their best to weasel out of "unnecessary" costs.


The point of the audit isn't to get a seal of approval, it's to identify the security problems, which is 97% of the work of fixing them.

There will always be the company which is literally on fire because it's 0.0013% cheaper in the short term, but that's true no matter what you do because that company will be out of business in six months regardless. You can't change their behavior because they're already in the midst of self-destruction by the time you even become aware of their existence.

Any kind of normal company is going to be happy to have a free confidential security audit, and offering that would in practice significantly improve the security of this garbage.


I see your point better now. I'm still not sure a normal company is really going to be so happy about free audits (due to IP concerns and the extra administrative workload). Do we have an existing precedent of something like this working in other industries, or is this something that hasn't been tested before?


It's the same general principle as insurance companies offering no-copay annual medical checkups.


s/milk/oil/, and suddenly your metaphor is an order of magnitude stronger.


I don't disagree, but it's really really hard to determine what proper liability should be. It's easy with things like "Company X left their S3 bucket publicly accessible then put PII in it." It's much less easy when "Company Y got hacked because of a 0-day in their OS kernel, or in their web server that is an open-source project, or even a 0-day in a codebase that they own but have generally very good practices on (stuff happens, even very good devs write bugs sometimes)."

Does the Linux Foundation or a group of random devs on github that gave their code away for free get handed a massive and possibly bankrupting fine for that? And if so, why would anybody release code for free? If not, and you say that "GPL/MIT/etc. warrants no serviceability so it's on the adopter," then why would anybody use open source and open themselves up to the liability when they have zero control over the project and its associated quality control and process?

For this reason I don't think governments can do a good job at making fair regulation that doesn't have severe unintended consequences.

Something that might help tho, is providing easier civil recourse for those affected. For example, if Equifax gets hacked and leaks my identity and somebody buys a car in my name, it should be very easy for me to sue them to make it right. That's something that is clearly broken with our current system.


I say companies should pay a painful fine for all data breaches no matter what. They'll have to buy insurance against it, and the insurers will eventually learn to price the risk properly. And it will also be in the insurers' interest to learn how to audit their client companies.
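
A minimal sketch (in Python, with assumed round numbers, none of them real market figures) of how an insurer might price such a policy as expected loss plus a loading margin:

  # Hypothetical actuarial sketch: all inputs are invented for illustration.
  records_held = 2_000_000     # customer records the insured company stores
  fine_per_record = 50         # assumed statutory fine per exposed record
  annual_breach_prob = 0.03    # insurer's estimate after auditing the client
  loading = 1.25               # margin for expenses, uncertainty, and profit

  expected_loss = annual_breach_prob * records_held * fine_per_record
  premium = expected_loss * loading
  print(f"{premium:,.0f}")     # 3,750,000 per year with these assumptions

The audit incentive falls out of the same arithmetic: a client who demonstrably lowers annual_breach_prob gets a cheaper premium.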


Yup. Take GDPR. It doesn't try to price user data. If you screw up real bad, it just says, "up to €20 million, or 4% of the worldwide annual revenue of the prior financial year, whichever is higher".
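
That higher-tier cap (GDPR Art. 83(5)) is a one-liner; a quick Python sketch of how it scales (the function name is just for illustration):

  # The greater of EUR 20M or 4% of the prior year's worldwide annual turnover.
  def gdpr_fine_cap(worldwide_annual_turnover_eur: float) -> float:
      return max(20_000_000, 0.04 * worldwide_annual_turnover_eur)

  print(f"{gdpr_fine_cap(100_000_000):,.0f}")     # smaller firm: 20,000,000
  print(f"{gdpr_fine_cap(50_000_000_000):,.0f}")  # large firm: 2,000,000,000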


Failing to disclose a breach should be treated like covering up a crime. Companies are dissolved, fines levied, people go to jail, damages are paid.

Failing to address security vulnerabilities quickly, or to publicize their existence and suspend operations until they're fixed, is negligent and most similar to reckless endangerment. Similarly, hiring inexperienced, unqualified, or otherwise knowingly incapable people to perform a job is negligence. Failing to know and take the precautions one needs to take is likewise negligent. People go to jail, fines are levied, damages are paid.

A previously unknown exploit used to gain access to a company is like a car accident and should be treated as such. Company pays damages and moves on.


Imo, the most effective part of GDPR is the self-disclosure stuff. Transparency has already improved and pressure to avoid disclosable leaks has gone up.

Idk that fines would help that much. I would just double down on the disclosure part. Improved policing of nondisclosure (with penalties).


The problem with fines as an enforcement mechanism is that incidents are rare and big.

Fines work when they are assured, fast, and high. But because the incident that triggers fines is not the actual behaviour, but only its later consequences, it's too easy to put off security "for some other time".

It's like a single superhero randomly killing shoplifters every few months: the penalty is far out of proportion, yet the policy is still not going to stop anyone.


This is how I feel also. Businesses will always weigh the potential costs of a breach against the cost of proper security measures. Until you can quantify that with hard numbers, they will not take it seriously.



