> That's the current mindset of the technological world, estimating whether the cost of atoning for the problem is lower than the cost of securing the systems.
And for the record, this will always be the mindset of corporations whose only concern is the bottom line. Until we as a culture accept that the market does not solve all problems, we're not going to solve these kinds of problems.
> > That's the current mindset of the technological world,
> > estimating whether the cost of atoning for the problem
> > is lower than the cost of securing the systems.
>
> And for the record, this will always be the mindset of
> corporations whose only concern is the bottom line.
> Until we as a culture accept that the market does not
> solve all problems, we're not going to solve these
> kinds of problems.
My immediate reaction is "Of course". A return on investment or risk analysis should drive activities at both the corporate and the government level.
This is particularly true in the security space, because no system is 100% secure. And since resources aren't infinite, where do you stop? 90%? 99%? 99.9%? What if addressing that incremental 0.9% costs as much as the rest of the security apparatus combined? As much as the rest of the product combined? As much as your total revenue?
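To put very rough numbers on that diminishing-returns curve (everything below is made up purely for illustration, not anyone's actual figures), a toy sketch:

    # Toy illustration: made-up numbers showing how the marginal cost of
    # risk reduction explodes as you approach "100% secure".
    levels = [
        # (fraction of risk addressed, cumulative spend in $)
        (0.90, 1_000_000),
        (0.99, 3_000_000),
        (0.999, 10_000_000),
        (0.9999, 40_000_000),
    ]

    prev_frac, prev_cost = 0.0, 0.0
    for frac, cost in levels:
        marginal = (cost - prev_cost) / (frac - prev_frac)
        print(f"{frac:.2%} addressed: ${cost:,} total, "
              f"${marginal:,.0f} per unit of residual risk removed")
        prev_frac, prev_cost = frac, cost

Each extra "nine" costs an order of magnitude more per unit of risk removed, which is exactly why the line has to be drawn somewhere.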
What's the other option? It can't be "not release anything", so a middle ground is found. We're arguing about shades of grey.
And sure, the government can help. Either by bearing some of the cost (e.g., investment, tax breaks) or by increasing the impact of an incident (e.g., penalties).
But this isn't a big, bad, greedy corporate problem. This is a broader issue about how much risk we're willing or unwilling to absorb, and how efficiently we can address that risk.
> My immediate reaction is "Of course". A return on investment or risk analysis should drive activities on both the corporate and the government level.
You're looking at this in only monetary terms, or at least Yahoo is. But frankly, I don't give a fuck about whether Yahoo succeeds financially--I want my life and the lives of other people to be better. And I want that to be the goal of my government.
> But this isn't a big, bad, greedy corporate problem.
Of course it's a big, bad, greedy corporate problem. The reason "return on investment" matters in a financial sense is because big, bad, greedy corporations only care about their bottom line. And quite frequently Yahoo's bottom line is in direct opposition to improving my life and the lives of other people.
>... But frankly, I don't give a fuck about whether Yahoo succeeds financially--I want my life and the lives of other people to be better. And I want that to be the goal of my government.
In this situation it doesn't matter that Yahoo is a private corporation - the same cost/benefit analysis essentially needs to be done no matter what the structure of the organization. Let's pretend that email had been created by a government agency and that agency has to decide how much of its budget to spend on security. If it costs X dollars to make something 90% secure, 10X for 95% secure, and 10,000X for 99.9999% secure, and so on, eventually you have to choose how much to spend - resources aren't infinite for that government agency either. (And to make it much more difficult, they only have a guess that X dollars will make their product N% secure.) It isn't as black and white as you are trying to portray it.
I think it is fair to criticize Yahoo for how they prioritized security, but the same kind of issue has happened with non-profit companies and with government organizations, so no, it isn't just a "big, bad, greedy corporate problem."
You're the one trying to make it black and white; he's simply saying that unlike private industry, government can have another motive be primary rather than profit, i.e. help its citizens as the primary goal. Yeah, budgets aren't unlimited, but not having to be profitable makes a huge difference in which actions can be taken. Profit is not the correct goal for every action that can be taken by an organization; government isn't a business.
If "profit" is defined as: "generating more value than is consumed in the production process"...
Then yes, we damn well better demand that profit be the correct goal for every action regardless of organizational structure.
If our system is distorted to inaccurately measure profit locally, without properly accounting for negative externalities, then that's a legitimate problem, but the way to solve it is by factoring those hidden costs back into the profit calculation, not giving up on "profitability" properly defined.
If profit is defined as $income - $expenses = $profit, then you'd be using the word the way everyone else is using it, and you'd be participating productively in the conversation.
> ... government can have another motive be primary
> rather than profit, i.e. help its citizens as the
> primary goal.
But there's still ROI here, and there's still a budget (no matter how big the deficit gets). So the question remains: how do I spend that money? Do I spend all of it on security apparatuses, or do I have to scale back and spend some on other social services? How much? What's the best bang for my buck?
> budgets aren't unlimited, but not having to be profitable makes a huge difference in which actions can be taken.
Profits are still required for gov't spending, but they are just made by someone else in the country and transferred to the gov't via taxation. Even deficit spending is just the choice to spend money today that will be obtained from taxation at a later date.
I'm looking at this in quantitative terms. Money is one measure. Effort, time, security, and others may be harder to quantify, but they're still important factors. "Security at any cost" quickly becomes simply impossible.
This is the general sense. Yahoo is probably on the "wrong" side of average.
But in some sense, you can vote with your feet. Companies who don't value security won't get your business. If enough people feel as you do, then the ROI calculation changes. And the same applies to politics as well: if you think more money should be spent on security and there's a societal good here, write to your congressman, or elect one who's receptive. Again, if enough people feel as you do, the political ROI makes this an imperative as well.
The fiction of markets is that costs and value can be reasonably determined. The truth is that in far too many instances, they cannot. Surface appearances or gross misbeliefs drive costing or valuation models and behavior, and as a consequence, goods are tremendously misvalued.
That's on top of the problems of externalities in which the costs or benefits aren't fully contained to the producer or consumer of a particular good or service.
A misprioritisation of values is what the drunk waking up with a hangover, the sweet-tooth spending 40 years dealing with the systemic effects of diabetes, or the smoker suffering 20 years of emphysema and COPD comes to realise. The externalities are the drink-driving victim, the socialised medical costs (and privatised profits of the sugar firms), and the second- and third-hand smoke victims.
There are rather larger issues far more fundamental than these in the modern industrial economic system, but I'll spare you that lecture.
The point being that trusting "the market" to offer corrections simply doesn't work.
>The reason "return on investment" matters in a financial sense is because big, bad, greedy corporations only care about their bottom line.
I would argue that it's ALL corporations that only care about their bottom line. The entire reason a corporation exists is to make money; any other considerations, like employee well-being, care for the environment, etc., are driven entirely by either legal requirements or a need to retain talent in order to make that money. Any corporation that successfully projects an image of being "different" just has a good marketing team.
"Externalities" is a word we use to describe costs we find hard to model, but I find that most externalities do cost corporations real money. They just often aren't aware of it and haven't developed enough sophistication in their business cases to account for it. The best companies, the ones that support their security teams, understand this. They understand that broken things lose them trust, customers, and goodwill, and those things are, even from a purely monetary and numerical perspective, incredibly valuable for a successful business in the long term.
The problem is not merely whether or not a profit motive exists to do right, but whether or not a business is insightful enough to model the full costs and include what we normally let go unexamined as mere "externalities".
Externality != "hard to model". Rather, it means difficult to internalise.
Garrett Hardin's "Tragedy of the Commons" gives a quite simple model of what an externality can be (overgrazing). The problem isn't in the modelling, but rather in the mutual enforcement of a collectively beneficial behavior.
That isn't to say that there aren't costs which are hard to model, but that's an orthogonal issue, and can apply just as well to internalised effects (e.g., the goodwill loss of a massive security breach) as to externalities.
Goodwill loss is not an externality.
I agree, adamantly, with your comment that businesses are frequently not enlightened or intelligent enough to model full costs. I'm seeing the issue of the long-term development of both cost and benefit awareness as a pressing issue, general to economics. It undermines many of the assertions of market efficiency.
I'd argue it >is< a corporate problem, and the article we are looking at shows exactly why. There should be consequences for running a company in this manner, and there are not. The people who made this decision did it because they were protected from the damage they did.
No, that assumes people are rational actors, and they are not; preying on human psychology doesn't absolve you of guilt. The companies are the problem, not their victims for not leaving.
It's similar to a company selling defective products or contaminating a city's water supply. The market response is too late to deal with those types of problems, and undervalues individual lives.
Yup, and it's too reactive to problems that could easily be avoided by regulation, food safety for example. If it were up to the market, people would be dropping like flies, because safety doesn't tend to increase short-term profits as well as corner-cutting does.
I don't think you need to even concede the idea that users are rational actors--there are plenty of reasons why a rational actor would prioritize another factor over security. For example, many people got Yahoo email addresses a long time ago, and built a personal contact list of people who only know their Yahoo email. A rational actor might value keeping in contact with those people over their privacy. That doesn't mean that it's okay to expose that person's data.
The consequences should be that the company loses its ability to run a business. You've arbitrarily decided that the only acceptable mechanism for this happening is users choosing a different company. There are a whole host of reasons that doesn't work, and simply shifting the blame onto users for not making it work doesn't solve the problem.
> The consequences should be that the company loses its ability to run a business.
Or gains ability to run it properly.
> the only acceptable mechanism for this happening is users choosing a different company.
I didn't state it should be the only mechanism. There could be others. Those class action lawsuits mentioned in the article prove there are some. But the primary mechanism is users' responsible choice.
> shifting the blame onto users for not making it work
Actually I think the blame is on us, techies. We should create a culture where security matters as much as performance, pleasant design or simple UI. Both among users we live with and companies we work in.
And one fundamental problem of security for the masses is not solved yet: how a user can see if a product they use is secure without being a security expert.
> I didn't state it should be the only mechanism. There could be others. Those class action lawsuits mentioned in the article prove there are some. But the primary mechanism is users' responsible choice.
That's simply not realistic on technical issues. Users can't take responsibility for choices they can't be reasonably expected to understand.
> Actually I think the blame is on us, techies. We should create a culture where security matters as much as performance, pleasant design or simple UI. Both among users we live with and companies we work in
If you believe that, in your own words, users' responsible choice should be the primary mechanism of enforcement of this, you've rejected any effective means of achieving the above trite and obvious truisms.
In fact, security should matter to us a lot more than performance, pleasant design, or simple UI, because unlike those, security can be a matter of life and death. Which is why I don't want to leave it up to users.
> And one fundamental problem of security for the masses is not solved yet: how a user can see if a product they use is secure without being a security expert.
Which raises the question of why you want to leave security regulation up to users moving away from the product.
Security people grade issues from two simultaneous yet different perspectives: security risk and business risk. It sounds like you are describing accountants, not security people.
The default "better idea" seems to be "let the government do it", but if you've been keeping up with the news in the past few years, "the government" doesn't exactly have a stellar track record either. Where a corporation may prioritize making money over security, government prioritize politics over security, wanting to spend money on things that visibly win them political points or power, not on preventing things that don't happen, which aren't visible to anyone. It's the same problem in a lot of ways. And both corporations and governments have the problems that specific individuals can be empowered to make very bad security decisions because nobody has the power to tell them that their personal convenience must take a back seat to basic operational security.
Even the intelligence agencies have experienced some fairly major breaches, which count against them even if they are inside jobs.
"The market screws this up!" isn't a particularly relevant criticism if there isn't something out there that doesn't screw this up.
> "The market screws this up!" isn't a particularly relevant criticism if there isn't something out there that doesn't screw this up.
My usual reply to this is that we use government to nudge market incentives, which is also what I think would be reasonable here: simply create a class of records related to PII, and create HIPAA-like laws regarding those records that certain kinds of information brokers keep on people.
You then provide a corrective force to the market by attaching penalties to violations, which raises the cost of breaches and shifts the focus of the corporation towards security.
HIPAA and financial systems aren't perfect, it's true, but they hold data to a higher standard than most of our extremely personal data is held to, so we know we can do better, if we choose to as a society.
These laws would also be a lot more effective if you held the executive staff accountable as opposed to the shareholders. The model that corporations seek profit doesn't work in some cases; it's really a group of individuals all seeking personal profit.
There's a worthwhile conversation to be had about the corporate liability shield, and whether A) major security/privacy breaches should have some sort of ruinously high statutory damage award rather than requiring people to prove how they were harmed, and B) more suits -- not just over breaches -- should be able to pierce the corporation's protective structure and cause personal liability for corporate officers who make careless or overly-short-term decisions.
Adjusting the incentive structure of the market in which companies operate could do a lot.
It was already done by the DOD under Walker's Computer Security Initiative. It succeeded, with numerous high-assurance products coming to market with far better security than their competitors. Here are the components it had:
1. A clear set of criteria for information security for businesses to develop against, with various sets of features and assurance activities representing different levels of security.
2. Private and government evaluators to independently review the product with evidence it met the standard.
3. A policy to only buy what was certified to those criteria.
The criteria were called the TCSEC, with the Orange Book covering systems plus the "rainbow series" covering the rest. IBM was the first to be told no, in an embarrassing moment. Many systems at B3 or A1, the most secure levels, were produced, a mix of special-purpose (e.g., guards) and general-purpose (e.g., kernels or VMMs) designs. The extra methods consistently caught more problems than traditional approaches, with pentesting confirming they were superior. Changes in policy to focus on COTS rather than GOTS... for competition or campaign contributors, I'm not sure... combined with NSA's MISSI initiative killed the market off. It got simultaneously improved and neutered afterward into the Common Criteria.
An example is the security-kernel model in the VAX hypervisor done by the legendary Paul Karger. See the Design and Assurance sections especially, then compare to what the OSS projects you know are doing:
So, that's what government, corporations, and the so-called IT security industry threw away in exchange for the methods and systems we have now. No surprise the results disappeared with them. Meanwhile, a select few under the Common Criteria and numerous projects in CompSci continued to use those methods, with amazing results predicted by the empirical assessments from the 1970s-1980s that led to them being in the criteria in the first place. Comparing CompCert's testing with Csmith to most C compilers will give you an idea of what A1/EAL7 methods can do. ;)
So, just instituting what worked before minus the military-specific stuff and red tape would probably work again. We have better tools now, too. I wrote up a brief essay on how we might do the criteria that I can show you if you want.
I posted this elsewhere, but I think I intended to post it in response to your post:
Well, there are a few possible solutions, and they don't all involve corporate incentives:
1. Government regulation
2. Technical solutions (alternatives to communication that have end-to-end encryption, for example)
3. Eschew corporations entirely when it comes to our data and communications (i.e. open source and personal hardware solutions)
Personally, I think some combination of 2 and 3 is my ideal endgame, but we aren't there technically yet. 1 isn't really a great option either, because government is so controlled by corporate interests, and corporations will never vote to regulate themselves. But we can at least make some short term partial solutions with option 1 until technology enables 2 and 3.
However, none of these options will happen while people hold onto the naive idealism that the free market will solve all our problems.
> Eschew corporations entirely when it comes to our data and communications (i.e. open source and personal hardware solutions)
Unless you're proposing a large rise in non-profit foundations, these are mostly funded by for-profit corporations operating in a market.
> However, none of these options will happen while people hold onto the naive idealism that the free market will solve all our problems.
I don't think most people beyond libertarians or knee-jerk conservatives believe that. Heck, most economists don't really believe the market is "self-regulating"; there's just too much evidence that it's not.
However, most do believe that a regulated market solves the problem of large-scale resource allocation better than planning in most cases. In some cases, no: healthcare is a well-studied case of market failure and of why centralized/planned players fare better. It's not clear to what extent data/communications/security is a case of market failure warranting alternative solutions.
> Unless you're proposing a large rise in non-profit foundations, these are mostly funded by for-profit corporations operating in a market.
You bring up a deep problem that I admit I'm not sure how to solve. I'd love to see a large rise in non-profit foundations, but I'm not actually convinced even that would solve the problem.
I think solutions like those proposed by e.g. the FSF, where there's an up-front contractual obligation to follow through with their ideals, may be a better approach, but we're beginning to see very sophisticated corporate attacks on that model, so it remains to be seen how effective that will be.
> I don't think most people beyond libertarians or knee-jerk conservatives believe that. Heck, most economists don't really believe the market is "self-regulating"; there's just too much evidence that it's not.
> However, most do believe that a regulated market solves the problem of large-scale resource allocation better than planning in most cases. In some cases, no: healthcare is a well-studied case of market failure and of why centralized/planned players fare better. It's not clear to what extent data/communications/security is a case of market failure warranting alternative solutions.
This argument is purely sophistry. You take a step back and talk about a more general case to make your position seem more moderate, admitting that the free market isn't self-regulating, but then return to the stance that the free market solves this problem because a regulated market (regulated how? by itself? In the context of free market versus government regulation, "regulated market" is a very opaque phrase) solves most cases, and on that principle, we don't know whether the very general field of data/communications/security warrants alternative solutions (now we can't even say "government regulation" and have to euphemistically call it "alternative solutions" as if government involvement is an act of economic deviance?).
We're speaking about a case where the free market didn't work, de facto: Yahoo exposed user data, hid that fact, and likely will get the economic equivalent of a slap on the wrist because users simply aren't technical enough to know how big of a problem this is.
So let's not speak in generalizations here: the free market has failed already in this case, and you admit that the free market doesn't self-regulate, so you can't argue that the free market will suddenly start self-regulating in this case. Regulation isn't an "alternative solution", it's the only viable solution we have that hasn't been tried.
" (now we can't even say "government regulation" and have to euphemistically call it "alternative solutions" as if government involvement is an act of economic deviance?)."
I presume an unregulated market is preferable to regulation at the outset, yes. Government regulation should be done in the face of systemic failures while retaining Pareto efficiency.
Put another way, I think the market can be very effective with well-thought-out regs, but I don't believe there are better general/default alternatives than to start with a free market and use the empirical evidence to guide policies...
"likely will get the economic equivalent of a slap on the wrist because users simply aren't technical enough to know how big of a problem this is."
I disagree that this is a case of market failure.
This is a case of "you know better than the market" and you want to force a specific outcome through regulation. But I'm not sure that's what people want.
What if people don't really care about their data being exposed all that much? It's a risk they're willing to take to use social networks. The penalty is that people might move off your service if you leak their information (as is likely to some degree with Yahoo). That to me seems to be the evidence here. That's not a market failure; that's a choice.
With legislation, we can change the market. In the Fight Club example, legislation can make C ten times as big and change the equation (see the rough sketch below). For Yahoo, legally mandated fines, or restrictions on what they can do in future[1], could make them wake up.
[1] Maybe if you run email and you get hacked, you're not allowed to run email again for a few years? That'd have woken them up.
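To make that concrete with the Fight Club-style formula (A x B x C versus the cost of fixing the problem), here's a toy sketch; every number and the fine_multiplier knob are invented purely for illustration, not anyone's actual figures:

    # Toy "recall formula": pay out for incidents, or pay to fix the problem?
    # All numbers are invented for illustration.
    def expected_incident_cost(users, breach_probability, settlement_per_user,
                               fine_multiplier=1.0):
        # A * B * C, optionally scaled up by legally mandated penalties.
        return users * breach_probability * settlement_per_user * fine_multiplier

    users = 500_000_000
    breach_probability = 0.05      # chance of a major breach over the period
    settlement_per_user = 0.50     # expected payout per affected user
    cost_to_secure = 50_000_000    # what proper security would cost up front

    for multiplier in (1, 10):
        incident_cost = expected_incident_cost(
            users, breach_probability, settlement_per_user, multiplier)
        decision = "secure the systems" if incident_cost > cost_to_secure else "eat the breaches"
        print(f"penalty multiplier {multiplier}x: expected incident cost "
              f"${incident_cost:,.0f} -> {decision}")

With a 1x multiplier the cheap option is to eat the breaches; multiply the penalty by ten and the same spreadsheet says to secure the systems. That's the lever legislation gives us.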
And until we figure out how to incentivize this behavior (or discourage malicious behavior), corporations won't be willing to solve these kinds of problems.
If only we had some sort of structure in our society that could solve the problem but wasn't profit-driven. Maybe something that could oversee these corporations. We could call it "government" or something.
Pretty sure the US has one of those; it doesn't seem to be working. In fact, it often acts against that (preventing sharing of encryption algorithms, trying to force inclusion of backdoors...).
Maybe you can try Bernard Stiegler. His English Wikipedia page is a little thin on information, so I put a link about his latest book (not yet translated).
Not sure why you're getting down-voted - if the market is set up to incentivize a certain behavior, then someone will do it eventually. People can get mad all they want, but they should be furious that the government has laws in place to incentivize that type of behavior (or at least allow it to happen.)
Well, there are a few possible solutions, and they don't all involve corporate incentives:
1. Government regulation
2. Technical solutions (alternatives to communication that have end-to-end encryption, for example)
3. Eschew corporations entirely when it comes to our data and communications (i.e. open source and personal hardware solutions)
Personally, I think some combination of 2 and 3 is my ideal endgame, but we aren't there technically yet. 1 isn't really a great option either, because government is so controlled by corporate interests, and corporations will never vote to regulate themselves. But we can at least make some short term partial solutions with option 1 until technology enables 2 and 3.
However, none of these options will happen while people hold onto the naive idealism that the free market will solve all our problems.
The market seems to be solving the problem just fine: nobody uses Yahoo anymore and companies with solid security practices (e.g. Google, Apple, Facebook) are thriving. If Google had a serious security breach, you can bet the market would respond to it and Google knows it.