> > That's the current mindset of the technological world,
> > estimating whether the cost of atoning for the problem
> > is lower than the cost of securing the systems.
>
> And for the record, this will always be the mindset of
> corporations whose only concern is the bottom line.
> Until we as a culture accept that the market does not
> solve all problems, we're not going to solve these
> kinds of problems.
My immediate reaction is "Of course". A return on investment or risk analysis should drive activities on both the corporate and the government level.
This is particularly true in the security space, because no system is 100% secure. And since resources aren't infinite, where do you stop? 90%? 99%? 99.9%? What if addressing that incremental 0.9% costs as much as the rest of the security apparatus combined? As much as the rest of the product combined? As much as your total revenue?
What's the other option? It can't be "not release anything", so a middle ground is found. We're arguing about shades of grey.
And sure, the government can help. Either by bearing some of the cost (e.g., investment, tax breaks, etc.) or increasing the impact of an incident (e.g., penalties, etc.).
But this isn't a big, bad, greedy corporate problem. This is a broader issue about how much risk we're willing or unwilling to absorb, and how efficiently we can address that risk.
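To make the "where do you stop?" trade-off concrete, here's a back-of-the-envelope expected-loss calculation. Every number is a made-up assumption; it's the shape of the result that matters:

```python
# Toy annualized-loss comparison. All figures are hypothetical.

def annualized_loss(prob_per_year, loss_per_incident):
    """Expected yearly cost of an incident if nothing more is done."""
    return prob_per_year * loss_per_incident

# Assumed baseline: 5% yearly breach chance, $10M cleanup if it happens.
ale_baseline = annualized_loss(0.05, 10_000_000)   # $500K/year

# Control A costs $300K/year and cuts the breach probability to 1%.
ale_with_a = annualized_loss(0.01, 10_000_000)     # $100K/year
net_a = (ale_baseline - ale_with_a) - 300_000      # +$100K/year: buy it

# Control B costs $2M/year to cut the remaining 1% down to 0.1%.
ale_with_b = annualized_loss(0.001, 10_000_000)    # $10K/year
net_b = (ale_with_a - ale_with_b) - 2_000_000      # -$1.91M/year: stop

print(f"Control A nets ${net_a:+,.0f}/year")
print(f"Control B nets ${net_b:+,.0f}/year")
```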
> My immediate reaction is "Of course". A return on investment or risk analysis should drive activities on both the corporate and the government level.
You're looking at this in only monetary terms, or at least Yahoo is. But frankly, I don't give a fuck about whether Yahoo succeeds financially--I want my life and the lives of other people to be better. And I want that to be the goal of my government.
> But this isn't a big, bad, greedy corporate problem.
Of course it's a big, bad, greedy corporate problem. The reason "return on investment" matters in a financial sense is because big, bad, greedy corporations only care about their bottom line. And quite frequently Yahoo's bottom line is in direct opposition to improving my life and the lives of other people.
>... But frankly, I don't give a fuck about whether Yahoo succeeds financially--I want my life and the lives of other people to be better. And I want that to be the goal of my government.
In this situation it doesn't matter that Yahoo is a private corporation - the same cost/benefit analysis essentially needs to be done no matter what the structure of the organization. Let's pretend that email had been created by a government agency, and that agency has to decide how much of its budget to spend on security. If it costs X dollars to make something 90% secure, 10X for 95% secure, and 10,000X for 99.9999% secure, eventually you have to choose how much to spend - resources aren't infinite for that government agency either. (And to make it much more difficult, they only have a guess that X dollars will make their product N% secure.) It isn't as black and white as you are trying to portray it.
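To put the shape of that cost curve in code (the growth factor of 10 per "nine" is just as made-up as my X/10X/10,000X figures above):

```python
import math

# Each order-of-magnitude cut in residual risk multiplies the price.
# The growth factor of 10 per "nine" is an illustrative assumption.

def cost_of_coverage(coverage, base_cost=1.0, factor=10.0):
    nines = -math.log10(1.0 - coverage)   # 0.9 -> 1 nine, 0.99 -> 2, ...
    return base_cost * factor ** (nines - 1)

for c in (0.9, 0.99, 0.999, 0.999999):
    print(f"{c:.6f} secure costs ~{cost_of_coverage(c):,.0f}x")
# 0.900000 -> 1x, 0.990000 -> 10x, 0.999000 -> 100x, 0.999999 -> 100,000x
```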
I think it is fair to criticize Yahoo for how they prioritized security, but the same kind of issue has happened with non-profit companies and with government organizations, so no, it isn't just a "big, bad, greedy corporate problem."
You're the one trying to make it black and white; he's simply saying that unlike private industry, government can have another motive be primary rather than profit, i.e. help its citizens as the primary goal. Yeah, budgets aren't unlimited, but not having to be profitable makes a huge difference in which actions can be taken. Profit is not the correct goal for every action that can be taken by an organization; government isn't a business.
If "profit" is defined as: "generating more value than is consumed in the production process"...
Then yes, we damn well better demand that profit be the correct goal for every action regardless of organizational structure.
If our system is distorted to inaccurately measure profit locally, without properly accounting for negative externalities, then that's a legitimate problem, but the way to solve it is by factoring those hidden costs back into the profit calculation, not giving up on "profitability" properly defined.
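A minimal sketch of what factoring those hidden costs back in looks like, with purely hypothetical numbers:

```python
# Hypothetical figures: "book" profit vs. profit with the
# externalized costs factored back into the calculation.

revenue            = 1_000_000
internal_costs     =   800_000  # what the firm's own books record
externalized_costs =   350_000  # e.g. breach cleanup borne by users

book_profit = revenue - internal_costs                        # +200,000
true_profit = revenue - internal_costs - externalized_costs   # -150,000

print(f"Book profit: {book_profit:+,}")  # looks like value created
print(f"True profit: {true_profit:+,}")  # value destroyed overall
```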
If profit is defined as $income - $expenses = $profit, then you'd be using the word the way everyone else is using it, and you'd be participating productively in the conversation.
> ... government can have another motive be primary
> rather than profit, i.e. help its citizens as the
> primary goal.
But there's still ROI here, and there's still a budget (no matter how big the deficit gets). So the question remains: how do I spend that money? Do I spend all of it on security apparatuses, or do I have to scale back and spend some on other social services? How much? What's the best bang for my buck?
> budgets aren't unlimited, but not having to be profitable makes a huge difference in which actions can be taken.
Profits are still required for gov't spending, but they are just made by someone else in the country and transferred to the gov't via taxation. Even deficit spending is just the choice to spend money today that will be obtained from taxation at a later date.
I'm looking at this in quantitative terms. Money is one measure. Effort, time, security, and others may be harder to quantify, but they're still important factors. "Security at any cost" quickly becomes simply impossible.
This is the general sense. Yahoo is probably on the "wrong" side of average.
But in some sense, you can vote with your feet. Companies who don't value security won't get your business. If enough people feel as you do, then the ROI calculation changes. And the same applies to politics as well: if you think more money should be spent on security and there's a societal good here, write to your congressman, or elect one who's receptive. Again, if enough people feel as you do, the political ROI makes this an imperative as well.
The fiction of markets is that costs and value can be reasonably determined. The truth is that in far too many instances, they cannot. Surface appearances or gross misbeliefs drive costing or valuation models and behavior, and as a consequence, goods are tremendously misvalued.
That's on top of the problems of externalities in which the costs or benefits aren't fully contained to the producer or consumer of a particular good or service.
A misprioritisation of values is what the drunk waking up with a hangover, the sweet-tooth spending 40 years dealing with the systemic effects of diabetes, or the smoker suffering 20 years of emphysema and COPD comes to realise. The externalities are the drink-driving victim, the socialised medical costs (and privatised profits of the sugar firms), and the second- and third-hand smoke victims.
There are rather larger issues far more fundamental than these in the modern industrial economic system, but I'll spare you that lecture.
The point being that trusting "the market" to offer corrections simply doesn't work.
>The reason "return on investment" matters in a financial sense is because big, bad, greedy corporations only care about their bottom line.
I would argue that it's ALL corporations that only care about their bottom line. The entire reason a corporation exists is to make money; any other considerations, like employee well-being or care for the environment, are driven entirely by either legal requirements or a need to retain talent in order to make that money. Any corporation that successfully projects an image of being "different" just has a good marketing team.
"Externality" is a word we use to describe costs we find hard to model, but I find that most externalities do cost corporations real money. They just often aren't aware of it and haven't developed enough sophistication in their business cases to account for it. The best companies, the ones who support their security teams, understand this. They understand that broken things lose them trust, customers, and goodwill, and that those things are, even from a purely monetary and numerical perspective, incredibly valuable for a successful business in the long term.
The problem is not merely whether or not a profit motive exists to do right, but whether or not a business is insightful enough to model the full costs and include what we normally let go unexamined as mere "externalities".
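As a sketch of what modeling the full costs might look like; every figure below is an assumption for illustration, not any real company's numbers:

```python
# Assumed: breach-driven churn priced into the business case.

customers    = 1_000_000
revenue_each = 20       # $/customer/year
breach_prob  = 0.05     # assumed yearly chance of a major breach
churn_if_hit = 0.15     # assumed fraction of customers who then leave

naive_revenue = customers * revenue_each                # $20M/year

expected_churn_loss = (breach_prob * churn_if_hit
                       * customers * revenue_each)      # $150K/year

print(f"Naive case:   ${naive_revenue:,}")
print(f"Modeled case: ${naive_revenue - expected_churn_loss:,.0f}")
# That $150K/year gap is the security budget this business case can
# justify on churn alone, before counting fines, cleanup, or lawsuits.
```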
Externality != "hard to model". Rather, it means difficult to internalise.
Garrett Hardin's "Tragedy of the Commons" gives a quite simple model of what an externality can be (overgrazing). The problem isn't in the modelling, but rather in the mutual enforcement of collectively beneficial behavior.
That isn't to say that there aren't costs which are hard to model, but that's an orthogonal issue, and can apply just as well to internalised effects (e.g., the goodwill loss of a massive security breach) as to externalities.
Goodwill loss is not an externality.
I agree, adamantly, with your comment that businesses are frequently not enlightened or intelligent enough to model full costs. I'm seeing the issue of the long-term development of both cost and benefit awareness as a pressing issue, general to economics. It undermines many of the assertions of market efficiency.
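For what it's worth, here's how little modelling the overgrazing case actually takes (numbers assumed for illustration):

```python
# Hardin's overgrazing commons in miniature; all numbers assumed.
# Each herder keeps the full gain from one more animal, while the
# grazing damage is split across everyone.

HERDERS = 10
GAIN_PER_ANIMAL = 100     # private benefit, kept entirely by the owner
DAMAGE_PER_ANIMAL = 300   # total cost to the pasture, shared by all

def payoff_to_owner():
    """What one herder nets by adding one more animal."""
    return GAIN_PER_ANIMAL - DAMAGE_PER_ANIMAL / HERDERS

def payoff_to_commons():
    """What the group as a whole nets from that same animal."""
    return GAIN_PER_ANIMAL - DAMAGE_PER_ANIMAL

print(payoff_to_owner())    #  70.0 -> individually rational to add
print(payoff_to_commons())  # -200  -> collectively ruinous
```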
I'd argue it >is< a corporate problem, and the article we are looking at shows exactly why. There should be consequences for running a company in this manner, and there are not. The people who made this decision did it because they were protected from the damage they did.
No, that assumes people are rational actors, and they are not. Preying on human psychology doesn't absolve you of guilt; the companies are the problem, not their victims for not leaving.
It's similar to a company selling defective products or contaminating a city's water supply. The market response is too late to deal with those types of problems, and undervalues individual lives.
Yup, and it's too reactive to problems that can easily be avoided by regulation (food safety, for example). If it were up to the market, people would be dropping like flies, because safety doesn't tend to increase short-term profits as well as corner-cutting does.
I don't think you need to even concede the idea that users are rational actors--there are plenty of reasons why a rational actor would prioritize another factor over security. For example, many people got Yahoo email addresses a long time ago, and built a personal contact list of people who only know their Yahoo email. A rational actor might value keeping in contact with those people over their privacy. That doesn't mean that it's okay to expose that person's data.
The consequences should be that the company loses its ability to run a business. You've arbitrarily decided that the only acceptable mechanism for this happening is users choosing a different company. There are a whole host of reasons that doesn't work, and simply shifting the blame onto users for not making it work doesn't solve the problem.
> The consequences should be that the company loses its ability to run a business.
Or gains the ability to run it properly.
> the only acceptable mechanism for this happening is users choosing a different company.
I didn't state it should be the only mechanism. There could be others. Those class action lawsuits mentioned in the article prove there are some. But the primary mechanism is users' responsible choice.
> shifting the blame onto users for not making it work
Actually I think the blame is on us, techies. We should create a culture where security matters as much as performance, pleasant design or simple UI. Both among users we live with and companies we work in.
And one fundamental problem of security for the masses is still unsolved: how a user can tell whether a product they use is secure without being a security expert.
> I didn't state it should be the only mechanism. There could be others. Those class action lawsuits mentioned in the article prove there are some. But the primary mechanism is users' responsible choice.
That's simply not realistic on technical issues. Users can't take responsibility for choices they can't be reasonably expected to understand.
> Actually I think the blame is on us, techies. We should create a culture where security matters as much as performance, pleasant design or simple UI. Both among users we live with and companies we work in
If you believe that, in your own words, users' responsible choice should be the primary mechanism for enforcing this, you've rejected any effective means of achieving the above trite and obvious truisms.
In fact, security should matter to us a lot more than performance, pleasant design, or simple UI, because unlike those, security can be a matter of life and death. Which is why I don't want to leave it up to users.
> And one fundamental problem of security for the masses is still unsolved: how a user can tell whether a product they use is secure without being a security expert.
Which raises the question of why you want to leave security regulation up to users moving away from the product.
Security people grade issues from two simultaneous yet different perspectives: security risk and business risk. It sounds like you are describing accountants, not security people.