The default "better idea" seems to be "let the government do it", but if you've been keeping up with the news in the past few years, "the government" doesn't exactly have a stellar track record either. Where a corporation may prioritize making money over security, governments prioritize politics over security: they want to spend money on things that visibly win them political points or power, not on preventing incidents that never happen and are therefore invisible to everyone. It's the same problem in a lot of ways. And both corporations and governments share the problem that specific individuals can be empowered to make very bad security decisions, because nobody has the power to tell them that their personal convenience must take a back seat to basic operational security.
Even the intelligence agencies have experienced some fairly major breaches, which count against them even if they are inside jobs.
"The market screws this up!" isn't a particularly relevant criticism if there isn't something out there that doesn't screw this up.
> "The market screws this up!" isn't a particularly relevant criticism if there isn't something out there that doesn't screw this up.
My usual reply to this is that we use government to nudge market incentives, which is also what I think would be reasonable here: define a class of records covering PII, and create HIPAA-like laws governing those records for the kinds of information brokers that keep them on people.
You then provide a corrective force to the market by attaching penalties to violations, which raises the cost of breaches and shifts corporations' focus toward security.
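A back-of-the-envelope sketch of how penalties shift that calculus (all numbers are invented for illustration, not real breach statistics):

```python
# Toy expected-cost model (illustrative numbers only): a firm weighs
# the cost of a security program against the expected cost of a
# breach. Statutory penalties raise the expected breach cost and can
# flip the economically "rational" choice.

def expected_breach_cost(p_breach: float, cleanup: float, penalty: float) -> float:
    """Expected annual cost of skimping: breach probability times total loss."""
    return p_breach * (cleanup + penalty)

security_budget = 2_000_000   # assumed annual cost of a serious security program
p_breach = 0.05               # assumed yearly breach probability
cleanup = 10_000_000          # assumed direct incident-response / PR cost

# Without penalties: 0.05 * 10M = 500k per year, far less than the
# 2M security budget, so skimping "wins".
no_penalty = expected_breach_cost(p_breach, cleanup, penalty=0)

# With a HIPAA-style statutory penalty of 100M, the calculus flips:
# 0.05 * 110M = 5.5M per year, so investing in security is cheaper.
with_penalty = expected_breach_cost(p_breach, cleanup, penalty=100_000_000)

print(no_penalty < security_budget < with_penalty)  # True
```

The point isn't the specific figures; it's that the penalty is the only lever regulation controls directly, and it enters the firm's cost function linearly.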
HIPAA and the financial regulatory regimes aren't perfect, it's true, but they hold data to a higher standard than most of our extremely personal data is stored at today, so we know we can do better if we choose to as a society.
These laws would also be a lot more effective if they held the executive staff accountable rather than the shareholders. The model that "corporations seek profit" breaks down in some cases; really it's a group of individuals each seeking personal profit.
There's a worthwhile conversation to be had about the corporate liability shield, and whether A) major security/privacy breaches should have some sort of ruinously high statutory damage award rather than requiring people to prove how they were harmed, and B) more suits -- not just over breaches -- should be able to pierce the corporation's protective structure and cause personal liability for corporate officers who make careless or overly-short-term decisions.
Adjusting the incentive structure of the market in which companies operate could do a lot.
It was already done by the DOD under Walker's Computer Security Initiative. It succeeded, with numerous high-assurance products coming to market with far better security than their competitors. Here are the components it had:
1. A clear set of information-security criteria for businesses to develop against, with various sets of features and assurance activities representing various levels of security.
2. Private and government evaluators to independently review the product with evidence it met the standard.
3. Policy to only buy what was certified to that criteria.
The criteria were called the TCSEC, with the Orange Book covering systems and the "Rainbow Series" covering the rest. IBM was the first to be told no, in an embarrassing moment. Many systems at B3 or A1, the most secure levels, were produced, a mix of special-purpose (e.g. guards) and general-purpose (e.g. security kernels or VMM's). The extra methods consistently caught more problems than traditional development, with pentesting confirming the products were superior. Changes in policy to focus on COTS rather than GOTS... whether for competition or campaign contributors, I'm not sure... combined with NSA's MISSI initiative, killed the market off. The criteria were afterward simultaneously improved and neutered into the Common Criteria.
For an example, see the security kernel model in the VAX hypervisor done by the legendary Paul Karger. See the Design and Assurance sections especially, then compare to what the OSS projects you know are doing:
So, that is what government, corporations, and the so-called IT security industry threw away in exchange for the methods and systems we have now. No surprise the results disappeared with them. Meanwhile, a select few products under Common Criteria and numerous projects in CompSci continued to use those methods, with the amazing results predicted by the empirical assessments from the 1970's-1980's that got them into the criteria in the first place. Comparing CompCert's record under Csmith testing with that of most C compilers will give you an idea of what A1/EAL7 methods can do. ;)
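For anyone unfamiliar with what Csmith does: it's differential testing, i.e. feed the same randomly generated program to multiple implementations and flag any disagreement as a bug in one of them. A toy analogy in Python (random arithmetic expressions instead of random C programs, two evaluators instead of two compilers):

```python
# Toy analogy for Csmith-style differential testing: generate random
# well-formed arithmetic expressions and check that two independent
# evaluators agree on every one. Csmith does the same with random C
# programs across real compilers; CompCert famously survived it.
import ast
import operator
import random

OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def eval_ast(node):
    """Evaluator #1: walk the parsed AST ourselves."""
    if isinstance(node, ast.Expression):
        return eval_ast(node.body)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](eval_ast(node.left), eval_ast(node.right))
    raise ValueError("unsupported node")

def random_expr(depth=4):
    """Generate a random parenthesized expression over +, -, *."""
    if depth == 0:
        return str(random.randint(0, 9))
    op = random.choice("+-*")
    return f"({random_expr(depth - 1)} {op} {random_expr(depth - 1)})"

random.seed(0)
for _ in range(1000):
    src = random_expr()
    # Evaluator #2 is CPython's own eval(); any disagreement is a bug
    # in one of the two implementations.
    assert eval_ast(ast.parse(src, mode="eval")) == eval(src), src
print("1000 random programs, evaluators agree")
```

The real thing is vastly harder (undefined behavior, generating valid C), but the oracle is exactly this cheap: implementations disagreeing on the same input.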
So, just instituting what worked before, minus the military-specific stuff and red tape, would probably work again. We have better tools now, too. I wrote up a brief essay on how we might do the criteria that I can show you if you want.
I posted this elsewhere, but I think I intended to post it in response to your post:
Well, there are a few possible solutions, and they don't all involve corporate incentives:
1. Government regulation
2. Technical solutions (alternatives to communication that have end-to-end encryption, for example)
3. Eschew corporations entirely when it comes to our data and communications (i.e. open source and personal hardware solutions)
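The end-to-end encryption in option 2 rests on key agreement that excludes the intermediary entirely. A toy Diffie-Hellman sketch of that core idea (toy parameters, NOT secure; real systems use vetted groups or curves and authenticated protocols):

```python
# Toy Diffie-Hellman key agreement: the primitive that lets two users
# share a key a relaying server never learns. Parameters here are a
# toy choice (2**127 - 1 is a Mersenne prime) picked only so the code
# is self-contained; do not use this for anything real.
import secrets

P = 2**127 - 1   # toy prime modulus, for illustration only
G = 3            # toy generator

# Each side keeps a private exponent and publishes only G^x mod P.
a = secrets.randbelow(P - 2) + 2   # Alice's secret
b = secrets.randbelow(P - 2) + 2   # Bob's secret
A = pow(G, a, P)                   # sent over the wire
B = pow(G, b, P)                   # sent over the wire

# Both sides derive the same key; the server that relayed A and B
# cannot, because it never sees a or b.
key_alice = pow(B, a, P)
key_bob = pow(A, b, P)
assert key_alice == key_bob
```

The business consequence is the interesting part: a service built this way *cannot* leak message contents in a breach, because it never holds them in readable form.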
Personally, I think some combination of 2 and 3 is my ideal endgame, but we aren't there yet technically. 1 isn't really a great option either, because government is so controlled by corporate interests, and corporations will never vote to regulate themselves. But we can at least build some short-term partial solutions with option 1 until technology enables 2 and 3.
However, none of these options will happen while people hold onto the naive idealism that the free market will solve all our problems.
> Eschew corporations entirely when it comes to our data and communications (i.e. open source and personal hardware solutions)
Unless you're proposing a large rise in non-profit foundations, these are mostly funded by for-profit corporations operating in a market.
> However, none of these options will happen while people hold onto the naive idealism that the free market will solve all our problems.
I don't think most people beyond libertarians or knee-jerk conservatives believe that. Heck, most economists don't really believe the market is "self regulating"; there's just too much evidence that it's not.
However, most do believe that a regulated market solves the problem of large-scale resource allocation better than planning in most cases. In some cases, no: healthcare is a well-studied case of market failure and of why centralized/planned players fare better. It's not clear to what extent data/communications/security is a case of market failure warranting alternative solutions.
> Unless you're proposing a large rise in non-profit foundations, these are mostly funded by for-profit corporations operating in a market.
You bring up a deep problem that I admit I'm not sure how to solve. I'd love to see a large rise in non-profit foundations, but I'm not actually convinced even that would solve the problem.
I think solutions like those proposed by e.g. the FSF, where there's an up-front contractual obligation to follow through on their ideals, may be better, but we're beginning to see very sophisticated corporate attacks on that model, so it remains to be seen how effective it will be.
> I don't think most people beyond libertarians or knee-jerk conservatives believe that. Heck, most economists don't really believe the market is "self regulating"; there's just too much evidence that it's not.
> However, most do believe that a regulated market solves the problem of large-scale resource allocation better than planning in most cases. In some cases, no: healthcare is a well-studied case of market failure and of why centralized/planned players fare better. It's not clear to what extent data/communications/security is a case of market failure warranting alternative solutions.
This argument is pure sophistry. You step back and talk about a more general case to make your position seem more moderate, admitting that the free market isn't self-regulating, but then return to the stance that the free market solves this problem because a "regulated market" (regulated how? by itself? in a debate of free market versus government regulation, "regulated market" is a very opaque phrase) solves most cases. And on that principle, we supposedly can't say whether the very general field of data/communications/security warrants "alternative solutions" (now we can't even say "government regulation" and have to euphemistically call it "alternative solutions" as if government involvement is an act of economic deviance?).
We're speaking about a case where the free market didn't work, de facto: Yahoo exposed user data, hid that fact, and will likely get the economic equivalent of a slap on the wrist because users simply aren't technical enough to know how big a problem this is.
So let's not speak in generalizations here: the free market has failed already in this case, and you admit that the free market doesn't self-regulate, so you can't argue that the free market will suddenly start self-regulating in this case. Regulation isn't an "alternative solution", it's the only viable solution we have that hasn't been tried.
"(now we can't even say "government regulation" and have to euphemistically call it "alternative solutions" as if government involvement is an act of economic deviance?)"
I presume an unregulated market is preferable to regulation at the outset, yes. Government regulation should be done in the face of systemic failures while retaining Pareto efficiency.
Put another way, I think the market can be very effective with well-thought-out regs, but I don't believe there are better general/default alternatives than to start with a free market and use the empirical evidence to guide policies...
"likely will get the economic equivalent of a slap on the wrist because users simply aren't technical enough to know how big of a problem this is."
I disagree that this is a case of market failure.
This is a case of "you know better than the market" and you want to force a specific outcome through regulation. But I'm not sure that's what people want.
What if people don't really care about their data being exposed all that much? It's a risk they're willing to take to use social networks. The penalty is that people might move off your service if you leak their information (as seems likely to some degree with Yahoo). That, to me, is the evidence here. That's not a market failure; that's a choice.