Prison time for executives is the only thing that gets taken seriously. Fines levied on executives can simply be paid by executive insurance, and unless you crank them up to something like GDPR's 4% of global revenue, fines on the company will likely just become a cost of doing business.
But threaten the executives with prison, and they'll suddenly make sure that the company complies with the law. I bet e.g. SOX would be taken a lot less seriously without those teeth.
Besides that, I don't see a reason why wilful privacy violations should not be met with prison terms. Don't want to go to prison? Don't collect/share data that you're not allowed to collect/share.
Even for negligence, there is precedent for sending people to prison, although that usually requires the negligence to result in death. But software mistakes can often cause damage at a larger scale than mechanical or structural engineering failures: a collapsing building kills hundreds of people. A collapsing Equifax database doesn't kill anyone directly, but it affects over a hundred million people and has the potential to ruin their lives (imagine, say, private Facebook profiles outing people in intolerant regions - that would probably lead to more deaths than most modern-day building collapses).
The 2008 crash is full of executives who did not comply with the law and did not take the threat of prison seriously. I think to this day you can count the number who went to prison on two hands, and most of those were more foot soldiers than masterminds.
So why pass a law if it can't or won't be enforced? And do you realize how political prosecutors' offices are? Laws like this also become patronage for the drafters when they're out of office: "Well, you REALLY should hire our compliance services."
Laws only work when cultural norms already do 99% of the heavy lifting.
> Laws only work when cultural norms already do 99% of the heavy lifting.
And this is the aspect where a lot remains to be done in the USA. Many people who buy, sell, or plain steal personal data don't perceive it as anything negative. Some of these people are high-level executives and get praised for "disrupting the industry", some do their jobs quietly, and very often their victims don't even realize the extent of the harm done to them.
They published opinions about the future performance of certain securities. Those opinions were incorrect. Lying would mean misrepresenting the present state of those securities, which they didn't do. Everyone knew they were buying subprime mortgages; they just expected them to trade like prime mortgages.
> Isn't the only difference between lying and knowingly misleading the point in time of the subject in question?
If I say “this stock will rise in value” and then it doesn’t, that isn’t lying. If I say that while knowing the CEO is committing felonies, it still wouldn’t be lying, but it would be problematic. If I say “this is a share of Apple” when it’s actually Twitter, that’s lying. Bad forecasts aren’t lies, they’re mistakes.
The rating agencies’ role in the crisis is interesting and nuanced, and it is difficult to fault anyone other than the investors who over-relied on (and arguably misinterpreted) their guidance.
If you look at every security the agencies rated AAA, they performed as expected in cash flows. The underlying mortgages were mostly garbage, but some kept paying. That meant the top tranche of those structures, the tranches that got a high rating, kept paying.
If you held to maturity, the securities performed as promised. (Cf: if the government hadn't bailed out AIG, they might not have.) The problem was that they crashed in value in the interim because people began doubting whether they would perform. That was a problem for liquidity-constrained investors, which was almost everyone in 2008. Rating agencies don't say "this bond won't trade at 50¢ on the dollar." They say "this bond will probably pay you back." And in the latter assessment, they were surprisingly accurate. The problem was that people took the second to mean the first.
As the Devil's Advocate, one can argue that the fraud was perpetrated at the peon level of the loan originators, who originated loans to people who could not afford them. The executives' hands were washed clean by the ratings agencies that gave out AAA ratings. Those high ratings were "reasonable" because the loans were consolidated into CDOs, and the top tranche(s) of a CDO can handle a few missed payments from some of the debtors. In hindsight, many of those products probably should not have been AAA rated (especially the ones that consolidated lower tranches of other CDOs). If there were a law stating that ratings agency executives go to jail when AAA rated products fail (i.e., making them strictly liable for top-rated products instead of relying on reputation alone), those executives would likely pay better attention to which products get AAA ratings. Looking back, there was no such law, and the ratings agencies were free to give out improper ratings backed by their reputations.
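To make the tranche mechanics concrete (the numbers here are purely illustrative, not from any actual deal): suppose a CDO pools 1,000 subprime mortgages and the senior tranche only starts absorbing losses once cumulative defaults exceed 20% of the pool. If "only" 15% of borrowers default, the AAA tranche still receives every scheduled payment and the rating looks vindicated; it only fails once defaults blow through that subordination cushion - which is exactly the risk that was understated in structures built out of the lower tranches of other CDOs.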
Skipping back to private consumer data, a law that clearly states that executives go to jail when there is a breach of private consumer data would likely increase the chances that the executives will pay better attention to what data is collected, how data is stored, and how data is protected.
Tangential topic: an interesting regulatory idea would be to have mandatory jail time for all executives (and board members) of a company above a minimum size that is successfully sued in a class action for more than X dollars. Make it strict liability, so that no direct knowledge is required to convict, in order to punish executives who allow their companies to harm the public. If an executive allows their company to harm the public, then that person is not fit to serve as an executive.
That reminds me of the old system of private prosecutors, which was itself highly dysfunctional - especially around double jeopardy, since without it the system becomes an easy way to harass anyone. One obvious flaw is that the best defense strategy isn't to hire a good defense attorney - it's to hire the absolute worst prosecutor, against yourself.
Prosecutors with conflicts of interest are infamous for this - it's often seen in cases of outrage over police misconduct, where the prosecutor proceeds to present the weakest possible case in what is an /unopposed/ hearing. There is a reason the saying is 'they could indict a ham sandwich' - the standard is low enough that they only need to present a very preliminary case to begin the process of formally proving guilt.
That sounds somewhat naive, but on the other hand it also might help get all these unenforced legacy laws off the books - the ones that leave everyone at all times with one foot in prison if someone decided to enforce every law.
I agree it sounds naïve. The thing is, we've tried a lot of things that attempt to be pragmatic and reasonable, with limited success: innocent people get hurt, while bad actors weasel out. I know the standard is supposed to be letting a hundred guilty people go free rather than imprisoning one innocent person, but given how many people are in prison, there might be more innocent people in there than we'd like to admit.
That would end up using money against law enforcement or prosecution as a proxy for lack of fairness/effectiveness, and we already know that money is a bad proxy for morality. That’s why this law has prison time.
If the blockchain could send people to prison, then we’d probably have other problems.
Fair point. Maybe we should use this technique to protect software. Software engineers should go to jail for shipping known vulnerabilities. SQL injection possible? You go to jail for ten years. Someone gets root on your webserver? Fifteen years.
It reads to me like the prison time in this bill is reserved for the crime of knowingly mis-certifying annual reports to the FTC on data protection (ie, for defrauding the government) --- those reports are also only required from businesses with $1B+ of revenue. The rest of the penalties in the bill seem to take the form of liability under FTC's "Unfair Trade Practices" authority.
Journalists are much better than me at obtaining and reporting information. This story just isn't very important; it's a "discussion draft" of controversial legislation by a lone member of the minority party. It wasn't allocated many resources by Reuters.
I disagree. The current standard is that privacy and security are only as important as needed to prevent lawsuits, unless there is a law that says otherwise. That is a pretty low bar.
With this, privacy and security must be taken seriously from the start. I am of the opinion that wanton neglect that results in massive consumer harm (think the Equifax hack) should warrant prison time.
If management asks you to cut corners to ship the app, ask them to put it in writing.
There's nothing wrong with iterating quickly, but developers are responsible for making sure their systems are secure and resisting shipping applications that aren't. It is a developer's responsibility to push back on shipping broken software. If they're forced to, they need to make sure there's a paper trail that shows that. Otherwise it's indistinguishable from a developer just having done a bad job of their own accord.
> It needs punishment for the management who doesn't allow time to set things up properly
It's hard to show it was management's fault, and not the result of a developer who really didn't have the AWS skills (though that's probably still the fault of management, for not verifying skillsets before hiring or before handing out AWS keys).
Everything we know about iteration cycles, quick access to devops resources, etc, will change when prison time becomes an option.
> Hard to show it was management's fault, and not the result of a developer...
No. A leader is ultimately responsible for everything that happens or fails to happen under his or her leadership. Full stop.
The people in charge of your hypothetical developer are the only ones with the ability to put processes in place to prevent it from happening. They are the least-cost avoider. Therefore, the power and the responsibility belong there.
Strict liability for the least-cost avoider is a sound strategy from both a moral perspective and a law & economics perspective. When proving knowledge and proximate cause is difficult, someone is clearly in charge, and the harm is great - you place the liability on the people in charge whenever something goes wrong, and be done with it.
As a member of upper management, how would you address this then? More QA? More management of development practices? Move away from "devops" and back toward a world where there's a clear isolation between ops and development?
Over the past few years, developers have gained more and more autonomy and power. I can't imagine there's a way to avoid walking some of that back (if it can be walked back at all).
> As a member of upper management, how would you address this then?
By specifying what is and isn't allowed wrt. user data in products. By ensuring it gets included in new-employee training, and communicating that exposing the company to data-related risks is a serious, fireable offense. By having internal checks and audits that monitor data risk and keep it low. I.e., basically the same way you'd address the risk of financial malfeasance.
> Move away from "devops" and back toward a world where there's a clear isolation between ops and development?
It's not really about "devops" vs. dev/ops separation - it's about not moving fast and breaking other people's things. You can solve that with good professional practices, but there needs to be an incentive to adopt them (as opposed to the existing incentives to ignore them in pursuit of short-term profit).
Say you have a medical office and you installed a cheap door lock that most burglars can easily defeat. A burglar breaks into your office and steals SSNs and other private info. Should the doctor who runs the office go to prison for that?
You'd have to walk it back to who owns the medical office. If it's the doctor who owns the office, then that would be where the responsibility ends.
Doctors already have a responsibility to safeguard the privacy of their patients, e.g., interviewing minors-turned-young-adults on their own and asking whether the doctor has permission to share the patient's health information with their parents or guardians.
If the doctor installed the lock manually and there was a reasonable expectation to install a lock of a certain strength, then yes the doctor would be responsible.
If the doctor hired someone else, e.g. a locksmith or the property management, then the people who were hired would be responsible, so long as they were informed that they needed a particular kind of lock, they were certified to perform the installation, and there were laws on the books for each of those kinds of responsibilities.
It definitely seems onerous, burdensome, and expensive, but the costs of these kinds of security breaches have been severely discounted by those who have the power to otherwise act.
The question isn't whether the door lock was great, but whether there was a door lock in the first place and whether it was locked. If not, do you continue to operate with an unlocked door, or do you lock the damn door?
You cannot stop all criminals, but you can take reasonable actions (due diligence) to ensure a reasonable effort according to industry standards, and take timely corrective action once a breach is known.
And one solution is enforcing consequences for incompetence.
If a construction company sends a bunch of untrained yahoos out with explosives, well, maybe the yahoos should have known better, but the company absolutely should have known better and I have no problem holding them liable.
I think it would be reasonable to have a law that says you need to have X level of security for certain information. Then it would be criminal to provide less than that level of security, especially just to save money. I just think it would have to somehow have exemptions for honest mistakes, and clever attackers.
The problem with that is that the devil is in the details: such a law fails to account properly for changes in technology and excludes far better alternatives.
Say, for instance, there is a requirement that passwords must be encrypted with DES. That would be a downright terrible law on several levels - the first of which is that the best way to secure passwords is not to keep them at all, but to store only a hash. The encryption standard itself is as laughable now as requiring banks to lock their vaults with a simple warded lock - the kind where skeleton keys work, because shaving the teeth off the key means it no longer has anything to catch on while it turns the lock.
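To make the hashing point concrete, here is a minimal sketch (Python standard library only; the salt size and iteration count are illustrative choices, not anything mandated) of storing a salted hash instead of the password itself:

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)  # random per-user salt
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest    # store these; the plaintext password is never kept

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

There is nothing here to decrypt and nothing to leak in plaintext, which is exactly why a law mandating a specific cipher would miss the point.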
Failure to disclose a significant data breach isn't just about deceit against users, it's also about defrauding investors. To borrow a phrase from Matt Levine (Bloomberg), "everything is securities fraud," including failure to disclose a data breach certain to have a material effect on the company's stock price.
Failure to disclose (or unreasonable delay in disclosing) a massive data breach by the executives of a public company is already a criminal offense! But I don't know of any cases where criminal charges have actually been brought. It's possible that assembling a securities fraud case is extremely challenging, or that US Attorneys are not keen to go after tech co. execs. If the latter is true, I don't suspect a new data privacy law would be any better enforced.
That's already sort of the case in the investment management world with trading algorithms. Axa Rosenberg comes to mind. When that case first came down, a lot of people were wary of how the relationship between the PM and the programmers would shake out. For the most part it's been okay. PMs were required to become a bit more familiar with what the tech guys were doing and allow a little more oversight into the process, so that the programmers had time for a maker/checker process, as opposed to coding it all in as fast as they could and hoping they didn't make a mistake.
Hire good developers. Or prove the developer is at fault; then the developer should be jailed. Developers should be responsible for deploying good code. Software engineers act like they are professionals, want to get paid like professionals, and want to be treated like professionals, but don't want to take on the ethical obligations. Maybe they should be required to take ethics exams like other professions.
To those who say white collar crime is out of control, I agree, but we don't get it under control by criminalizing new things. We control it through fairer and better enforcement of things that are already illegal.
I strongly suspect the only people criminally prosecuted under such a bill will be patsies, leaving the politically powerful free to continue rear-ending Americans while prosecutors continue to whistle and look the other way, occasionally rounding up (and convicting) the usual suspects to appear busy, and laughing all the way to the bank when they run for higher office and the same politically powerful underwrite their campaigns.
> I strongly suspect the only people criminally prosecuted under such a bill will be patsies
The only people that can be prosecuted under it are the chief executive officer, the chief privacy officer, and the chief information security officer.
The thing that is criminal under this bill and thus can subject them to prosecution is, despite what most news stories imply, not violating privacy.
The bill requires the company to file an annual privacy report with the FTC. The aforementioned three offices are required to provide a written certification to accompany that filing certifying that the report follows the rules of the bill.
The crime is certifying a report that they know does not follow the rules.
The annual report has to describe in detail whether the company complied with the regulations in accordance with subparagraphs (A) and (B) of section 7(b)(1), and, to the extent that the company did not comply, to list which regulations were violated and how many consumers' personal information was impacted.
7(b)(1)(A) requires the company "to establish and implement reasonable cyber security and privacy policies, practices, and procedures to protect personal information used, stored, or shared by the covered entity from improper access, disclosure, exposure, or use".
7(b)(1)(B) requires the company "to implement reasonable physical, technical, and organizational measures to ensure that technologies or products used, produced, sold, offered, or leased by the covered entity that the covered entity knows or has reason to believe store, process, or otherwise interact with personal information are built and function consistently with reasonable data protection practices".
To be criminally liable the officer has to certify the report "knowing that the annual report accompanying the statement does not comport with all the requirements set forth in this section".
In other words, to be criminally liable the officer has to lie to the FTC.
The main risk, it seems to me, is that the officers might be given false information by underlings, leading them to believe the report is accurate when it is not. If the FTC discovers this, its initial suspicion will be that the officers were the ones who lied. If the officers keep good records of where they got the information they relied on when certifying the report, they should be OK, although it will certainly be something of a hassle.
This only applies to companies with $1 billion or more in annual revenue that handle personal information on more than 1 million consumers or consumer devices, or to companies that handle personal information on more than 50 million consumers or devices.
I'd expect a CPO or CISO at such a place is paid well enough to handle this.
This bill is linked from a prominent story on the front page of Wyden's website. It's hard to see how much easier he could have made it for you to read it. I found it in less than a minute.
Yeah, I'm not going to even bat an eye about millionaires and billionaires going to jail for actual crimes and damage done to significant parts of the population. We need to start putting more millionaires and billionaires in jail for crimes instead of black teenagers for having 2 grams of weed in their pocket.
When a $30B+ corp sits on a security vulnerability for 14 months while their customers in K-12 and local government sit exposed, something has to change. Students' futures and safety will be impacted.
Indeed. Most people have absolutely no idea what prison is, how it works, who actually suffers, the lasting secondary effects, and how little it changes anything.
Throwing around prison lightly is dangerous. For some reason data privacy gets people emotionally charged instead of thinking clearly, and it's easy to say "lock them up", but that's exactly the same attitude that led to such heavy criminalization of other things that now takes up resources, fills up courts, wastes lives, and accomplishes almost nothing.
It should be heavy per-leaked-datum fines, like HIPAA. This would transform huge troves of personal data from assets into liabilities, which is what we need.
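For a rough sense of scale (illustrative numbers, not anything in the bill or in HIPAA): at even $100 per exposed record, a breach the size of Equifax's (~147 million affected consumers) would imply fines on the order of $14.7 billion - the kind of figure that makes hoarding data a balance-sheet liability rather than an asset.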
I wonder if the process could be modeled after the recording industry, where the owner of the information can sue for damages that are set at a pre-determined amount per violation.
agreed, but at the same time, we send small time druggies to prison for years, which is also madness.
these are all valuable humans, despite the othering identity politics of law and order rhetoric, and our focus should be bringing them back into the fold, not harshly and debilitatingly punishing them. very few people are so far gone that (long) prison terms make sense.
with that said, it'd be an injustice if we didn't also send privacy violators to prison, as it would unfairly advantage them in the eyes of the law. above all, the legal system should strive evermore toward fairness. obviously the best solution is not to send any of these folks to prison.
ideally, privacy violators should have to face the consequences of their actions as part of their punishment (like personally making whole each victim, as an extreme example).