Defending Against Hackers Took a Back Seat at Yahoo, Insiders Say (nytimes.com)
357 points by apetresc on Sept 28, 2016 | 319 comments



From the article:

> The "Paranoids," the internal name for Yahoo’s security team, often clashed with other parts of the business over security costs. And their requests were often overridden because of concerns that the inconvenience of added protection would make people stop using the company’s products.

That's the best summary of the problem for the industry as a whole, not only tech but any industry where failures are uncommon but carry grave consequences.

A quote from Fight Club that illustrates that problem:

> Narrator: A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall?

> Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X.

> If X is less than the cost of a recall, we don't do one.

That's the current mindset of the technological world, estimating whether the cost of atoning for the problem is lower than the cost of securing the systems.
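To put that formula in code, here's a minimal sketch of the expected-cost comparison; the numbers are invented purely for illustration:

  // Fight Club recall math: expected settlement cost vs. the cost of fixing the problem.
  function shouldRecall(vehiclesInField: number, failureRate: number,
                        avgSettlement: number, recallCost: number): boolean {
    const expectedPayouts = vehiclesInField * failureRate * avgSettlement; // A * B * C = X
    return expectedPayouts >= recallCost; // recall only when X is at least the recall cost
  }

  // Invented numbers: 1M cars, 1-in-10,000 failure rate, $2M settlements, $300M recall.
  shouldRecall(1000000, 0.0001, 2000000, 300000000); // false: X = $200M < $300M, so no recall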


Calling them the 'Paranoids' probably seemed like a fun idea at the time, but I wonder if it set up a subconscious bias against their work. I wonder whether, had they been called 'The Guardians' or 'The Defenders', there would have been a different outcome.

Seems trivial, but words matter.


The Yahoo Paranoids chose their own name. It was designed to be light-hearted in a way that didn't make them seem stuffy so that engineering teams would be more receptive to their work. In my experience, this is incredibly important from the outset.

Anyone who has worked in information security for a month knows that the relationship between product engineering and security engineering defaults to antagonistic. It takes a lot of work to make it friendly and productive, and as a security professional I think "Paranoids" is much better for overall collaboration than something like "Defenders", which in my opinion reeks of self-importance.

The more pertinent issue here is management not fostering the culture enough.


Where I'm working now, we've got security engineers assigned to seating in each development team.

They're not managed by, or working for, our teams. They have their own manager and security work that they're getting on with.

Having them sitting amongst the team, however, is resulting in a much different narrative than any I've been around before. There's a much higher-quality, less antagonistic kind of engagement going on. They've become someone you chat with at the watercooler, or at their desks, instead of having to file tickets or wait for scheduled reviews to raise things.

People can quickly consult with them and deal with a whole heap of small potential risks way early on in the development process, and it's paying serious dividends down the road.


That approach works well with QA too.


You're basically talking about Squads: bringing different people together in the same group. And yeah, QA is similar to security in some ways, but if you think about it, QA really should include security. It's weird to say that software has quality without security included, but the truth is that security is specialized enough that regular QA usually can't handle it.


You've capitalized Squad, but it's hard to Google. Where did you get that term, and where is it defined outside your head?


As xxr said, Squad is how Spotify names their (previously Scrum) teams. Other interesting concepts they use are "Tribes" and "Guilds". Take a look at Spotify's engineering practices; they are really inspiring.


Not the commenter you're replying to, but at least at my organization we borrow the term from Spotify.


Security engineers are seen as experts you consult about something you don't know. QA are not seen this way. Some QA engineers actually are experts that can give good advice on structuring an application in a more testable way, but that's not the norm.


Most QA guys only check that something meets the spec/story requirements, not that the code is sane or testable... many don't even go beyond UI testing. That said, I think GP was referring to having a QA embedded as part of a team.


You know... I keep thinking that with source control systems like Bitbucket Enterprise, etc., why don't more mid-to-large-sized orgs require a security sign-off on every pull request, with PRs to master/release branches being the trigger point?

I do a lot of PR reviews, and while I may not catch everything, I will catch a few things here and there... someone with that mindset would be in a better position to handle that from the start...

Having a few security guys that do PR reviews for about half their workload would go a long way to improving things.

We're going through an audit for an internal application now... there's one major flaw (SSLv2/3 is enabled), a minor one (the session cookie isn't HTTPS-only), and a couple of trivial (really non-issue) findings concerning output caching on API resources and allowing requests with changed referrers (which can be spoofed anyway).

In any case, having auditing earlier on, and as a potential blocker, would make each minor change easier to deal with than potentially much larger changes... the app in question was developed for the first 8 months without even a pull request check in place... by then, many code-quality issues were already too late to fix completely. :-(
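As an aside, the session-cookie finding above is usually a one-line fix; here's a minimal sketch assuming an Express-style Node app (the framework is my assumption, not something from the audit):

  import express from "express";

  const app = express();

  app.post("/login", (req, res) => {
    // Mark the session cookie Secure (HTTPS-only) and HttpOnly so scripts can't read it.
    res.cookie("session", "opaque-session-id", {
      secure: true,   // only ever sent over HTTPS
      httpOnly: true, // not readable from client-side JS
    });
    res.send("ok");
  });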


Nobody wants this.

No "security guy" who has a choice wants to spend half their workload waiting for PRs to come in so they can chime in with feedback about default configurations.

No product programmer wants to deal with some "security guy" parroting the results of an automated tool to them over a code review platform.

No product manager wants to see progress stall because the product programmer and "security guy" are arguing over whether or not a call to strncpy should be replaced with a call to strcpy_s.

In the immortal words of my generation, ain't nobody got time for that.


Honestly, someone should have time for that; that's part of the problem... I go out of my way to comment on as many PRs as I can, because I'll catch things that will become problems later far more often than peers who just click approve.

The same can be said for security guys... they have their own work to get through as well, but seeing a bunch of smaller things fly by is just as valid as a periodic big audit. It's easier to catch a lot of things before they become big, too...

There are plenty of times I'll comment ("Okay, letting this through, but in the future revise to do it this way"); sometimes I'll push back, but not always; that's what the review process is for. I'm just suggesting multiple approvers for PRs, where one is someone who is security-minded.

It's funny how many issues I'll see in other systems where someone does something per the spec and it has a flaw precisely because they were completely compliant. Someone crafts an exploit, and I'm interested because I'd usually be more pragmatic in implementation. Last year there was a huff about JWT allowing cert overrides in some frameworks, as they don't ensure the origin cert matches a whitelist... when I'd implemented JWT, I only checked against our whitelist and ignored the property.
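A minimal sketch of that kind of check, assuming the jsonwebtoken npm package (my assumption; the frameworks with the issue aren't named here): the verification key always comes from our own whitelist, never from anything the token claims about itself:

  import * as jwt from "jsonwebtoken";

  // Our whitelist: the only issuers and keys we will ever accept. Anything the token
  // says about its own certs (x5u/jku/x5c headers) is simply ignored.
  const trustedIssuerKeys: Record<string, string> = {
    "https://auth.example.com": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
  };

  function verifyToken(token: string): string | object {
    // Peek at the unverified claims only to decide which whitelisted key applies.
    const claims = jwt.decode(token) as { iss?: string } | null;
    const issuer = claims && claims.iss;
    const key = issuer ? trustedIssuerKeys[issuer] : undefined;
    if (!issuer || !key) {
      throw new Error("token issuer is not on our whitelist");
    }
    // Pinning the algorithm also blocks the classic alg=none / RS256-vs-HS256 confusion.
    return jwt.verify(token, key, { algorithms: ["RS256"], issuer });
  }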

Sometimes security guys will see things and think of things in a way others won't... for me, one thing I often catch that others don't is potential DDoS targets. Some of that comes from using Node, where you do NOT want to tie up your main event loop thread. Others don't think about putting limits on JSON size, or compute-heavy tasks, etc.
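As one concrete example of those limits, here's a minimal sketch assuming an Express-based Node service (my assumption; only Node is mentioned above): cap the JSON body size so one oversized payload can't tie up the parser and the event loop:

  import express from "express";

  const app = express();

  // Reject request bodies over 100 kB before they hit the JSON parser; oversized
  // payloads get a 413 instead of chewing up CPU on the single event loop thread.
  app.use(express.json({ limit: "100kb" }));

  app.post("/api/items", (req, res) => {
    res.json({ receivedKeys: Object.keys(req.body).length });
  });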

And, frankly, I'm tired of fixing bugs related to patterns that were broken from the start... turtles all the way down, but the turtles are eating all the errors.


In the immortal words of every other generation :), "someone is going to find your issues. It's either you or your customers."

You don't seem to have an appreciation for the difference between a secure and an insecure product. Yahoo didn't either.


Personally, I have an appreciation for it. I'm a working security professional.

However, for a decade and a half I've been part of many different security regimes at many different organizations. None of them had an appreciation for the difference between a secure and insecure product, and additionally, none of them were punished by the market for it. Products have success or failure because of other factors. Security is something that organizations invest in, in the best case, because it's something they believe in, and in the worst case, for compliance reasons.

So now Yahoo has a big problem because they had this breach. First of all, is this actually a big problem? Yahoo has many other big problems. Is this going to make or break the company? No. Has any security issue made or broken a company? Microsoft thought they could be broken by security, so they invested billions into it. They were wrong. They were broken because they had crappy products that people were forced to buy. They figured this out and shut down their security organization. What about Target? What badness has befallen them? Surely not to their earnings or stock prices. What about any company that has suffered a breach? The biggest thing that happens is the CSO gets fired. Maybe some vendors get fired. That's it.

This is where the questions end when you start to push for more security involvement in the product. Ultimately you will (personally!) stand in front of the CEO who will ask you "will I lose my job, or suffer some other negative outcome on that scale, if I don't listen to you?" and you will answer, truthfully, "no." And that is the end of the conversation.


> Target’s chairman and chief executive officer, Gregg Steinhafel, a 35-year company veteran, is stepping down, as the massive pre-Christmas data breach suffered by the Minnesota retailer continues to roil the company. The decision is effective immediately, according to a statement posted today on the company’s website. John Mulligan, Target’s chief financial officer, has been appointed as interim president and CEO.

http://www.bloomberg.com/news/articles/2014-05-05/as-data-br...


Well, I am most certainly a working security professional. It sounds as if you've given up and become a bean counter.

If the answer you give your CEO is "no," then you aren't giving the proper answer. You are just being a "yes man," saying comforting words.

>> So now Yahoo has a big problem because they had this breach. First of all, is this actually a big problem?

I mean this in absolutely the best way possible, you shouldn't ever be allowed near either a business or a security decision that affects people's lives or livelihood. If you think that disclosing hundreds of millions of records (many of which must contain PII) is without repercussion, then I have a pretty good idea of which end of the security stick you are holding. You are describing a business model where you piss on your customers by transferring 100% of the risk to them.


You don't pay me. The C-suite pays me. Thanks for making this personal when it has no need to be, by the way.

Personal attacks aside, let's you and me go out to a bar and sing songs of how things should be. Tomorrow, we have to go back to how things are. In the land of how things are, to the business, the disclosure doesn't matter. Full stop. Does it matter to the customers? Oh yes. Dearly. It's a really big deal to humanity. The business and humanity are discrete.

Is that a tragedy? Yes. I weep. I go home and drink every night for this reason. Until I don't want to work for people that pay money, though, you have to think about the business first. Humanity second. Anything else is a fairy tale or communism.


Much more valuable to have the security folks be a critical part of reviewing the _frameworks_, and then pushing adoption of those frameworks. Human reviewers won't catch everything no matter what, but you can make entire classes of problems go away by making them impossible to commit.


Does that mean we can kill angular 1.x because it encourages points of disconnect, undiscoverable code, too much pfm (pure fucking magic) and failure?


I understand what you are saying, but having been around similar dynamics in the past, I think that kind of self-deprecation is a little like starting off the relationship by apologizing for what they're supposed to be doing.


Indeed, regulatory and security are the two parts of the company that are supposed to be antagonistic in order to keep the company out of trouble. How that plays out in practice has a lot to do with the personalities involved.


I used to slip in words like 'awesome', 'clever' and 'amazing' when talking to colleagues from other teams about the work that I was doing in the hope that it would influence their perception of the work. I've no idea if it worked though.


That was my first thought when I read it. A better name would have been "Tron", "Patronus", or "Endor".

Calling it "Paranoids" or "Inquisition" is just giving it another reason for people to loathe it.


Infosec aren't the ones doing the defending or guarding or any of that. They're typically working with the teams that do the building and maintaining, to ensure their policies and procedures lead to and maintain a secure posture.


As a developer who is not very much into security, I am guilty of this crime. Infosec teams are very important and deserve respect and attention.


My experience with Yahoo (admittedly ending more than a decade ago, so I'm sure much has changed) was that cost probably was a huge deal. I ran engineering for the European billing platform. We processed many millions of dollars worth of transactions a year.

Yet when I had to ask for a new database server, I had to submit a written request to a committee in Sunnyvale, with graphs and other supporting documentation to demonstrate that the load on the server we already had was high enough to justify it. Then I had to join a hardware review meeting that included maybe a dozen people, one of them being either Jerry Yang or David Filo (the Yahoo founders; I've forgotten which of them did these).

The people in the meeting, even excluding whichever one of the founders, easily cost Yahoo more in salaries for the time they spent discussing my request for one lonely server than the fully loaded amortised cost of operating it for a couple of years.

It's not that I have an issue with reviews and cost controls - on the contrary - but some degree of delegation and trusting staff with budgets would have been nice. I mean, I could have trivially cost Yahoo millions of dollars with a few keypresses if I had wanted to or hadn't paid attention - they trusted me with the ability to mess up their entire European payments platform with basically no oversight, yet I couldn't approve a single cent of hardware expenditure for the production platform, and neither could my manager, nor, I believe, my manager's manager, who was responsible for all of engineering across Europe.

I suspect a structure like that may have created a lot of resistance to recommendations from the Paranoids (who seemed generally very well respected; one of my old developers is part of the Paranoids now - he'd wanted to join for years), even when engineering would have liked to accommodate them, for the simple reason that getting approvals would be a massive hassle and slow things down.


Marcus Aurelius specifically talks about how important it was to have governors that he could trust, because the empire was so large that he could not possibly know everything about the empire in its current state. His lesson about task delegation is timeless. Well, his lessons are timeless, full stop.


I've been thinking a lot about how ancient empires operated and functioned, and what institutions they required.

Realise that Egypt, Greece, Macedonia, Rome, Persia, and China each spanned a thousand miles or more, that the most effective transportation was over water, either along rivers or across seas or oceans, that ocean travel was impossible for much of the year (Roman vessels were restricted to port from November through May, and this lasted until the 1300s in Europe), and that the minimum time for a message to traverse a thousand miles was easily ten days, if not months.

You needed autonomous lieutenants in place who could be given general orders (much like goal-seeking AI, now that I think about it), be trusted to be only modestly corrupt, not collude with enemies or others against the centre (a frequent problem), and to truthfully report what they'd experienced, in words -- writing existed, but not photography, video, audio, etc. Testimony, that is, someone's testament or attestation of fact, was all you had, though multiple testimonies could be compared against one another.

I find it interesting that every major imperial power had some intrinsic religion, probably serving as a moral check and guidance, a role that's often underappreciated today. Also that, other than a set of strictures, the religions themselves often had little in common with one another: polytheistic vs. monotheistic, theistic vs. meditative, commandments vs. ancestor worship or reverence.

It's a topic on which I'm almost wholly ignorant, but find fascinating.


Communication delays were a big part of this.

I think an unappreciated problem with modern communications tools is that, by default, they enable and encourage micromanagement.


And as a consequence, deprecate trust.


That works... before the IPO. After that, relentless success is the expectation; delegation has built-in risk, so it is very hard to justify.


His column is pretty cool too.


> That's the current mindset of the technological world, estimating whether the cost of atoning for the problem is lower than the cost of securing the systems.

And for the record, this will always be the mindset of corporations whose only concern is the bottom line. Until we as a culture accept that the market does not solve all problems, we're not going to solve these kinds of problems.


  > > That's the current mindset of the technological world, 
  > > estimating whether the cost of atoning for the problem 
  > > is lower than the cost of securing the systems.
  >
  > And for the record, this will always be the mindset of 
  > corporations whose only concern is the bottom line. 
  > Until we as a culture accept that the market does not 
  > solve all problems, we're not going to solve these 
  > kinds of problems.

My immediate reaction is "Of course". A return on investment or risk analysis should drive activities on both the corporate and the government level.

This is particularly true in the security space, because no system is 100% secure. And since resources aren't infinite, where do you stop? 90%? 99%? 99.9%? What if addressing that incremental 0.9% costs as much as the rest of the security apparatus combined? As much as the rest of the product combined? As much as your total revenue?

What's the other option? It can't be "not release anything", so a middle ground is found. We're arguing about shades of grey.

And sure, the government can help. Either by bearing some of the cost (e.g., investment, tax breaks, etc.) or increasing the impact of an incident (e.g., penalties, etc.).

But this isn't a big, bad, greedy corporate problem. This is a broader issue about how much risk we're willing or unwilling to absorb, and how efficiently we can address that risk.


> My immediate reaction is "Of course". A return on investment or risk analysis should drive activities on both the corporate and the government level.

You're looking at this in only monetary terms, or at least Yahoo is. But frankly, I don't give a fuck about whether Yahoo succeeds financially--I want my life and the lives of other people to be better. And I want that to be the goal of my government.

> But this isn't a big, bad, greedy corporate problem.

Of course it's a big, bad, greedy corporate problem. The reason "return on investment" matters in a financial sense is because big, bad, greedy corporations only care about their bottom line. And quite frequently Yahoo's bottom line is in direct opposition to improving my life and the lives of other people.


>... But frankly, I don't give a fuck about whether Yahoo succeeds financially--I want my life and the lives of other people to be better. And I want that to be the goal of my government.

In this situation it doesn't matter that Yahoo is a private corporation - the same cost/benefit analysis essentially needs to be done no matter what the structure of the organization. Let's pretend that email had been created by a government agency, and that agency had to decide how much of its budget to spend on security. If it costs X dollars to make something 90% secure, 10X for 95% secure, and 10,000X for 99.9999% secure, etc., eventually you have to choose how much to spend - resources aren't infinite for that government agency either. (And to make it much more difficult, they only have a guess that X dollars will make their product N% secure.) It isn't as black and white as you are trying to portray it.

I think it is fair to criticize Yahoo for how they prioritized security, but the same kind of issue has happened with non-profit companies and with government organizations, so no, it isn't just a "big, bad, greedy corporate problem."


You're the one trying to make it black and white; he's simply saying that unlike private industry, government can have another motive be primary rather than profit, i.e. helping its citizens as the primary goal. Yeah, budgets aren't unlimited, but not having to be profitable makes a huge difference in which actions can be taken. Profit is not the correct goal for every action that can be taken by an organization; government isn't a business.


If "profit" is defined as: "generating more value than is consumed in the production process"...

Then yes, we damn well better demand that profit be the correct goal for every action regardless of organizational structure.

If our system is distorted to inaccurately measure profit locally, without properly accounting for negative externalities, then that's a legitimate problem, but the way to solve it is by factoring those hidden costs back into the profit calculation, not giving up on "profitability" properly defined.


If profit is defined as $income - $expenses = $profit, then you'd be using the word the way everyone else is using it, and you'd be participating productively in the conversation.


  > ... government can have another motive be primary 
  > rather than profit, i.e. helping its citizens as the 
  > primary goal.

But there's still ROI here, and there's still a budget (no matter how big the deficit gets). So the question remains: how do I spend that money? Do I spend all of it on security apparatuses, or do I have to scale back and spend some on other social services? How much? What's the best bang for my buck?


Given the current state of computer security, a government program that fines companies for poor security practices could easily pay for itself.


> budgets aren't unlimited, but not having to be profitable makes a huge difference in which actions can be taken.

Profits are still required for gov't spending, but they are just made by someone else in the country and transferred to the gov't via taxation. Even deficit spending is just the choice to spend money today that will be obtained from taxation at a later date.


I know this is snarky, but: tell it to the OMB.

Corporations do not have any sort of exclusive lock on cost-benefit analysis.

Edit: including bad cost-benefit analysis.


I'm looking at this in quantitative terms. Money is one measure. Effort, time, security, and others may be harder to quantify, but they're still important factors. "Security at any cost" quickly becomes simply impossible.

This is the general sense. Yahoo is probably on the "wrong" side of average.

But in some sense, you can vote with your feet. Companies who don't value security won't get your business. If enough people feel as you do, then the ROI calculation changes. And the same applies to politics as well: if you think more money should be spent on security and there's a societal good here, write to your congressman, or elect one who's receptive. Again, if enough people feel as you do, the political ROI makes this an imperative as well.


The fiction of markets is that costs and value can be reasonably determined. The truth is that in far too many instances, they cannot. Surface appearances or gross misbeliefs drive costing or valuation models and behavior, and as a consequence, goods are tremendously disvalued.

That's on top of the problems of externalities in which the costs or benefits aren't fully contained to the producer or consumer of a particular good or service.

A misprioritisation of values is what the drunk waking up with a hangover, the sweet-tooth spending 40 years dealing with the systemic effects of diabetes, or the smoker suffering 20 years of emphysema and COPD comes to realise. The externalities are the drink-driving victim, the socialised medical costs (and privatised profits of the sugar firms), and the second-hand and tertiary smoke victims.

There are rather larger issues far more fundamental than these in the modern industrial economic system, but I'll spare you that lecture.

The point being that trusting on "the market" to offer corrections simply doesn't work.


>The reason "return on investment" matters in a financial sense is because big, bad, greedy corporations only care about their bottom line.

I would argue that it's ALL corporations that only care about their bottom line. The entire reason a corporation exists is to make money; any other considerations, like employee well-being, care for the environment, etc., are driven entirely by either legal requirements or a need to retain talent in order to make that money. Any corporation that successfully projects an image of being "different" just has a good marketing team.


Or they’re just a small-to-medium-business with a consistent set of ethics? Ever thought about that?


Externalities are a word we use to describe costs we find hard to model, but I find that most externalities do cost corporations real money. They just often aren't aware of it and haven't developed enough sophistication in their business cases to account for it. The best companies who support their security teams understand this. They understand that broken things lose them trust, customers and goodwill and those things are, even from a purely monetary and numerical perspective, incredibly valuable for a successful business in the long term.

The problem is not merely whether or not a profit motive exists to do right, but whether or not a business is insightful enough to model the full costs and include what we normally let go unexamined as mere "externalities".


Externality != "hard to model". Rather, it means difficult to internalise.

Garrett Hardin's "Tragedy of the Commons" gives a quite simple model of what an externality can be (overgrazing). The problem isn't in the modelling, but rather in the mutual enforcement of a collectively beneficial behavior.

That isn't to say that there aren't costs which are hard to model, but that's an orthogonal issue, and can apply just as well to internalised effects (e.g., the goodwill loss of a massive security breach) as to externalities.

Goodwill loss is not an externality.

I agree, adamantly, with your comment that businesses are frequently not enlightened or intelligent enough to model full costs. I'm seeing the issue of the long-term development of both cost and benefit awareness as a pressing issue, general to economics. It undermines many of the assertions of market efficiency.


I'd argue it >is< a corporate problem, and the article we are looking at shows exactly why. There should be consequences for running a company in this manner, and there are not. The people who made this decision did it because they were protected from the damage they did.


> There should be consequences for running a company in this manner, and there are not

And the consequences should be users choosing another company, and they don't. So the core problem is users.


No, that assumes people are rational actors, and they are not; preying on human psychology doesn't absolve you of guilt. The companies are the problem, not their victims for not leaving.


It's similar to a company selling defective products or contaminating a city's water supply. The market response is too late to deal with those types of problems, and undervalues individual lives.


Yup, and it's too reactive to problems that can easily be avoided by regulation - food safety, for example. If it were up to the market, people would be dropping like flies, because safety doesn't tend to increase short-term profits as well as corner-cutting does.


I don't think you need to even concede the idea that users are rational actors--there are plenty of reasons why a rational actor would prioritize another factor over security. For example, many people got Yahoo email addresses a long time ago, and built a personal contact list of people who only know their Yahoo email. A rational actor might value keeping in contact with those people over their privacy. That doesn't mean that it's okay to expose that person's data.


The consequences should be that the company loses its ability to run a business. You've arbitrarily decided that the only acceptable mechanism for this happening is users choosing a different company. There are a whole host of reasons that doesn't work, and simply shifting the blame onto users for not making it work doesn't solve the problem.


> The consequences should be that the company loses its ability to run a business.

Or gains the ability to run it properly.

> the only acceptable mechanism for this happening is users choosing a different company.

I didn't state it should be the only mechanism. There could be others. Those class action lawsuits mentioned in the article prove there are some. But the primary mechanism is users' responsible choice.

> shifting the blame onto users for not making it work

Actually I think the blame is on us, techies. We should create a culture where security matters as much as performance, pleasant design or simple UI. Both among users we live with and companies we work in.

And one fundamental problem of security for the masses is not solved yet: how a user can see if a product they use is secure without being a security expert.


> I didn't state it should be the only mechanism. There could be others. Those class action lawsuits mentioned in the article prove there are some. But the primary mechanism is users' responsible choice.

That's simply not realistic on technical issues. Users can't take responsibility for choices they can't be reasonably expected to understand.

> Actually I think the blame is on us, techies. We should create a culture where security matters as much as performance, pleasant design or simple UI. Both among users we live with and companies we work in

If you believe that, in your own words, user's responsible choice should be the primary mechanism of enforcement of this, you've rejected any effective means of achieving the above trite and obvious truisms.

In fact, security should matter to us a lot more than performance, pleasant design, or simple UI, because unlike those, security can be a matter of life and death. Which is why I don't want to leave it up to users.

> And one fundamental problem of security for the masses is not solved yet: how a user can see if a product they use is secure without being a security expert.

Which raises the question of why you want to leave security regulation up to users moving away from the product.


Security people grade issues from two simultaneous yet different perspectives, security risk and business risk. It sounds like you are describing accountants not security people.


But what's the concrete proposal?

The default "better idea" seems to be "let the government do it", but if you've been keeping up with the news in the past few years, "the government" doesn't exactly have a stellar track record either. Where a corporation may prioritize making money over security, government prioritize politics over security, wanting to spend money on things that visibly win them political points or power, not on preventing things that don't happen, which aren't visible to anyone. It's the same problem in a lot of ways. And both corporations and governments have the problems that specific individuals can be empowered to make very bad security decisions because nobody has the power to tell them that their personal convenience must take a back seat to basic operational security.

Even the intelligence agencies have experienced some fairly major breaches, which count against them even if they are inside jobs.

"The market screws this up!" isn't a particularly relevant criticism if there isn't something out there that doesn't screw this up.


> "The market screws this up!" isn't a particularly relevant criticism if there isn't something out there that doesn't screw this up.

My usual reply to this is that we use government to nudge market incentives, which is also what I think would be reasonable here: simply create a class of records related to PII, and create HIPPA like laws regarding those records that certain kinds of information brokers keep on people.

You then provide a corrective force to the market by providing penalties to violations, which raises the costs of breaches, and shifts the focus of the corporation towards security.

HIPPA or financial systems aren't perfect, it's true, but they're at a standard above what most of our extremely personal data is stored at, so we know we can do better, if we choose to as a society.


These laws would also be a lot more effective if you held the executive staff accountable as opposed to the shareholders. The model that corporations seek profit doesn't work in some cases, it's a group of individuals all seeking personal profit.


s/HIPPA/HIPAA/rg


So adjust the market.

There's a worthwhile conversation to be had about the corporate liability shield, and whether A) major security/privacy breaches should have some sort of ruinously high statutory damage award rather than requiring people to prove how they were harmed, and B) more suits -- not just over breaches -- should be able to pierce the corporation's protective structure and cause personal liability for corporate officers who make careless or overly-short-term decisions.

Adjusting the incentive structure of the market in which companies operate could do a lot.


It was already done by DOD under Walker's Computer Security Initiative. It succeeded with numerous, high-assurance products coming to market with way better security than their competitors. Here's the components it had:

1. A clear set of criteria for information security for the businesses to develop against with various sets of features and assurance activities representing various levels of security.

2. Private and government evaluators to independently review the product with evidence it met the standard.

3. Policy to only buy what was certified to that criteria.

Criteria was called TCSEC, with the Orange Book covering systems plus the "Rainbow Series" covering the rest. IBM was first to be told no, in an embarrassing moment. Many systems at B3 or A1, the most secure levels, were produced, with a mix of special-purpose (eg guards) and general-purpose (eg kernels or VMM's) designs. The extra methods consistently caught more problems than traditional systems, with pentesting confirming they were superior. Changes in policy to focus on COTS not GOTS... whether for competition or campaign contributors I'm not sure... combined with NSA's MISSI initiative killed the market off. It got simultaneously improved and neutered afterward into Common Criteria.

Summary here:

http://lukemuehlhauser.com/wp-content/uploads/Bell-Looking-B...

Example of security kernel model in VAX hypervisor done by legendary Paul Karger. See Design and Assurance sections especially then compare to what OSS projects you know are doing:

http://lukemuehlhauser.com/wp-content/uploads/Karger-et-al-A...

Best production example of capability-based security was KeyKOS. Esp see "KeyKOS NanoKernel" & "KeySAFE" docs:

https://www.cis.upenn.edu/~KeyKOS/

So, that's what government, corporations, and the so-called IT security industry threw away in exchange for the methods and systems we have now. No surprise the results disappeared with them. Meanwhile, a select few under Common Criteria and numerous projects in CompSci continued to use those methods, with amazing results predicted by the empirical assessments from the 1970's-1980's that led to them being in the criteria in the first place. Comparing CompCert's testing with Csmith to most C compilers will give you an idea of what A1/EAL7 methods can do. ;)

So, just instituting what worked before minus the military-specific stuff and red tape would probably work again. We have better tools now, too. I wrote up a brief essay on how we might do the criteria that I can show you if you want.


I posted this elsewhere, but I think I intended to post it in response to your post:

Well, there are a few possible solutions, and they don't all involve corporate incentives:

1. Government regulation

2. Technical solutions (alternatives to communication that have end-to-end encryption, for example)

3. Eschew corporations entirely when it comes to our data and communications (i.e. open source and personal hardware solutions)

Personally, I think some combination of 2 and 3 is my ideal endgame, but we aren't there technically yet. 1 isn't really a great option either, because government is so controlled by corporate interests, and corporations will never vote to regulate themselves. But we can at least make some short term partial solutions with option 1 until technology enables 2 and 3.

However, none of these options will happen while people hold onto the naive idealism that the free market will solve all our problems.


> Eschew corporations entirely when it comes to our data and communications (i.e. open source and personal hardware solutions)

Unless you're proposing a large rise in non-profit foundations, these are mostly funded by for-profit corporations operating in a market.

> However, none of these options will happen while people hold onto the naive idealism that the free market will solve all our problems.

I don't think most people beyond libertarians or knee jerk conservatives believe that. Heck most economists don't really believe the market is "self regulating", there's just too much evidence that it's not.

However, most do believe that a regulated market solves the problem of large-scale resource allocation better than planning in most cases. In some cases, no: healthcare is a well-studied case of market failure, and of why centralized / planned players fare better. It's not clear to what extent data/communications/security is a case of market failure warranting alternative solutions.


> Unless you're proposing a large rise in non-profit foundations, these are mostly funded by for-profit corporations operating in a market.

You bring up a deep problem that I admit I'm not sure how to solve. I'd love to see a large rise in non-profit foundations, but I'm not actually convinced even that would solve the problem.

I think solutions like the FSF's, where there's an up-front contractual obligation to follow through on their ideals, may be a better approach, but we're beginning to see very sophisticated corporate attacks on that model, so it remains to be seen how effective that will be.

> I don't think most people beyond libertarians or knee jerk conservatives believe that. Heck most economists don't really believe the market is "self regulating", there's just too much evidence that it's not.

> However, most do believe that a regulated market solves the problem of large-scale resource allocation better than planning in most cases. In same cases, no: healthcare is a well-studied is a case of market failure and why centralized / planned players fare better. It's not clear to what ends data/communications/security is a case of market failure and warranting alternative solutions.

This argument is purely sophistry. You take a step back and talk about a more general case to make your position seem more moderate, admitting that the free market isn't self-regulating, but then return to the stance that the free market solves this problem because a regulated market (regulated how? by itself? In the context of free market versus government regulation, "regulated market" is a very opaque phrase) solves most cases, and on that principle, we don't know whether the very general field of data/communications/security warrants alternative solutions (now we can't even say "government regulation" and have to euphemistically call it "alternative solutions" as if government involvement is an act of economic deviance?).

We're speaking about a case where the free market didn't work, de facto: Yahoo exposed user data, hid that fact, and likely will get the economic equivalent of a slap on the wrist because users simply aren't technical enough to know how big of a problem this is.

So let's not speak in generalizations here: the free market has failed already in this case, and you admit that the free market doesn't self-regulate, so you can't argue that the free market will suddenly start self-regulating in this case. Regulation isn't an "alternative solution", it's the only viable solution we have that hasn't been tried.


" (now we can't even say "government regulation" and have to euphemistically call it "alternative solutions" as if government involvement is an act of economic deviance?)."

I presume an unregulated market is preferable to regulation at the outset, yes. Government regulation should be done in the face of systemic failures while retaining Pareto efficiency.

Put another way, I think the market can be very effective with well thought out regs. but I don't believe there are better general/default alternatives than to start with a free market and use the empirical evidence to guide policies...

"likely will get the economic equivalent of a slap on the wrist because users simply aren't technical enough to know how big of a problem this is."

I disagree that this is a case of market failure.

This is a case of "you know better than the market" and you want to force a specific outcome through regulation. But I'm not sure that's what people want.

What if people don't really care about their data being exposed all that much? It's a risk they're willing to take to use social networks. The penalty is that people might move off your service if you leak their information (as is likely to some degree with Yahoo). That to me seems to be the evidence here. That's not a market failure, that's a choice.


With legislation, we can change the market. In the Fight Club example, legislation can make C ten times as big, and change the equation. For Yahoo, legally mandated fines, or restrictions on what they can do in future[1], could make them wake up.

[1] Maybe if you run email and you get hacked, you're not allowed to run email again for a few years? That'd have woken them up.


And until we figure out how to incentivize this behavior (or discourage malicious behavior), corporations won't be willing to solve these kinds of problems.


If only we had some sort of structure in our society that could solve the problem but wasn't profit-driven. Maybe something that could oversee these corporations. We could call it "government" or something.


Pretty sure the US has one of those, doesn't seem to be working. In fact, it often acts against that (preventing sharing of encryption algorithms, trying to force inclusion of backdoors..).


If only Hobbes, Locke, Rousseau et al. were around today..


Maybe you can try Bernard Stiegler. His English Wikipedia page is a little thin on information, so I've put up a link about his latest book (not yet translated).

https://en.wikipedia.org/wiki/Bernard_Stiegler

http://www.samkinsley.com/2016/06/28/how-to-survive-disrupti...


bookmarked this interview link for later. Thanks!


Not sure why you're getting down-voted - if the market is set up to incentivize a certain behavior, then someone will do it eventually. People can get mad all they want, but they should be furious that the government has laws in place to incentivize that type of behavior (or at least allow it to happen.)


Well, there are a few possible solutions, and they don't all involve corporate incentives:

1. Government regulation

2. Technical solutions (alternatives to communication that have end-to-end encryption, for example)

3. Eschew corporations entirely when it comes to our data and communications (i.e. open source and personal hardware solutions)

Personally, I think some combination of 2 and 3 is my ideal endgame, but we aren't there technically yet. 1 isn't really a great option either, because government is so controlled by corporate interests, and corporations will never vote to regulate themselves. But we can at least make some short term partial solutions with option 1 until technology enables 2 and 3.

However, none of these options will happen while people hold onto the naive idealism that the free market will solve all our problems.


The market seems to be solving the problem just fine: nobody uses Yahoo anymore and companies with solid security practices (e.g. Google, Apple, Facebook) are thriving. If Google had a serious security breach, you can bet the market would respond to it and Google knows it.


I mean, I am not agreeing with Yahoo here... but isn't that a reasonable thing to do?

Every act of securing or ensuring quality has a cost, and there is a line. I think most of us would agree that the line is very broken currently, but it appears you're citing a problem with the line in general, not the location of said line.

Everything has a cost, from a recall to better security to even a human life, the debate should be what we think should be paid, not whether or not we should worry about costs at all.

(If i misunderstood your intent, apologies)


The problem here is that the people who pay the costs of security are different from the people who are hurt when security is breached.


Loss of user trust hurts Yahoo.


Not enough that it isn't in Yahoo's favor to take that risk (you can't argue this--this is what happened).


And who are you to decide that that isn't a legitimate decision made by the users? If people cared more about security, they'd move away from Yahoo after something like this, and Yahoo would be more incentivized to keep this from happening.

Your problem is that you disagree with other users - but that's totally legitimate, not everyone has to care about the same things you care about.


> And who are you to decide that that isn't a legitimate decision made by the users? If people cared more about security, they'd move away from Yahoo after something like this, and Yahoo would be more incentivized to keep this from happening.

Who said this wasn't a legitimate decision by users? Certainly I didn't and wouldn't say that. There are a lot of reasons why a rational actor would choose to stick with Yahoo--that doesn't mean Yahoo exposing their private data is okay.

The other thing to realize here is that users aren't rational actors. My grandma is senile--is it okay for Yahoo to expose her private data because she doesn't know they aren't secure?

You've arbitrarily decided that users have to take all the responsibility here, and that the only way we can judge or punish Yahoo is by users leaving. But a) in many cases Yahoo is the only actor with agency to make a decision, and b) there are other ways Yahoo could be punished for using that agency to make decisions that harm users.

> Your problem is that you disagree with other users - but that's totally legitimate, not everyone has to care about the same things you care about.

No, I don't think that I disagree with other users--I think that many people care about their privacy, they simply a) don't know enough to make pragmatic decisions on how to protect their privacy, or b) have other priorities. And this is beside the point--none of this makes it okay for Yahoo to endanger their users' privacy.


> If people cared more about security, they'd move away from Yahoo after something like this

That's why Yahoo's failure to disclose this immediately bothers me so much.


Maybe the long-run solution is to make the coupling explicit: publicly post the value the company places on an account not being breached. (Ideally, this would work in tandem with some insurance policy that pays out for that amount, to validate that they really do so value it.)

Then, you can choose the provider with a high enough value to make you feel comfortable, in the understanding that higher-valued accounts will cost more.


This would work in many more contexts: The window sticker on my car can include the value they placed on passengers' lives when making cost-benefit trade offs.


It is reasonable, when your estimates are good and you're honest with regulators and customers. Sometimes your estimates are off by a factor of 10.

https://en.wikipedia.org/wiki/General_Motors_ignition_switch...

And you kill over 100 people, lie to regulators, lie to consumers, and end up spending billions trying to rectify the situation (recalls, settling suits, fines).


Yes, using the outcome of a formula to determine your actions generally relies on the formula being accurate.


It also relies on whoever is modeling the reductive, simplistic "cost model" knowing the effect of all the other variables that factor into the company's success. Do these people really think that the legal/compensation costs are the only effect? How many sales did Ford miss out on because they were labeled as the "there is a known issue in this car that might kill you, but until your life is worth more than a replacement part we won't repair it" car company? Did they factor those costs into their revenue model projections? Did they factor in the sag in price point demand - "Boss, I wouldn't bid the same on that contract, because they've shown themselves to sell a known defective product, and we'll open ourselves to legal issues if one of their cars kills one of the customers we're transporting in their vehicles"?

Despite what an MBA will tell you, the world is more complicated than X < Y*Z.


There will always be things you can do that increase safety at a cost, but some of them will necessarily not be worth the effort, or you're forced to spend without bound on ever-more safety to the point that it's not worth using (and which may push people into still-riskier alternatives).

>How many sales did Ford miss out on because they were labeled as the "There is a known issue in this car that might kill you but until your life is worth more than a replacement part we wont repair it"

If you're turning down a company for making such a tradeoff, that's like saying "I'll buy a Ford rather than a GM because people might die in GMs."

You're right that you can legitimately criticize a company for failing to include certain things as costs, but it's not fair to fault them for somehow making this inevitable tradeoff, especially in the belief that you have some alternative provider that isn't.

(An example of such a cost -- one they can legitimately be expected to include but don't -- would be something like "impact on general perception of risk" or "impact on the reputation of the car industry".)

>Despite what an MBA will tell you, the world is more complicated that X<Y*Z

It sounds more like you're agreeing that it's that simple, but that Z (events worthy of consideration) is not as simple as in typical models.


Which is why actuarial reports have around 2 pages of conclusions and 20 pages explaining the assumptions underlying them.


I also agree with this; security is always in a balancing act with convenience. Yahoo fell too far onto the convenience side on this one, but that debate over security vs convenience is happening everywhere. The issue I've seen is that many companies are bad at doing risk analysis about these choices. That's the bigger issue in my view.


> security is always in a balancing act with convenience

I don't think that's always the case. A whole lot of security can be had with little or no inconvenience, given an appropriate mindset, though one might argue that such a mindset is an inconvenience in itself. :)

> many companies are bad at doing risk analysis about these choices

Amen to that!

I think that having a basic, security-aware mindset goes a long way, even if there is very little 'budget' or 'ability' to do inconvenient things.


Philosophically speaking, you cannot improve security without sacrificing usability. What I mean by usability is the capability for someone to do something, not simply convenience for the users themselves. No amount of security can be added without a concurrent decrease in usability, even if that usability is something you didn't expect or want to do.

For example, the user might not see a capability decrease if you use MD5 or bcrypt, but you certainly see a capability decrease because you can no longer see their passwords and you have to do extra work to maintain them securely. Sometimes security decisions are easy, like hashing passwords, because these days no one wants that capability. But sometimes they are not easy decisions.

You can pass a lot of convenience savings on to users by assuming the capability sacrifice yourself (for example, choosing the password hashing algorithm behind the scenes), but you can't do this for everything (for example, mandating two-factor authentication or password resets en masse).

This might come across as pedantic, but it's very important to maintain a mental model this way because it helps you understand risk analysis for more complicated security and usability tradeoffs. Starting from the premise that you can have any security without a decrease in usability is not helpful in that regard.


Your argument is assuming something that I don't believe is true, which is that we're already on the Pareto optimality frontier for security/convenience. It is certainly true that you can not forever increase security without eventually impacting usability, but I don't think many people are actually in that position.

I've improved a lot of real-world security by replacing functions that bash together strings to produce HTML with code that uses functions to correctly generate HTML, and the resulting code is often shorter, easier to understand, easier to maintain, and would actually have been easier to write that way in the first place given how much of the function was busy with tracking whether we've added an attribute to this tag yet and a melange of encoding styles haphazardly applied. What costs you can still come up with ("someone had to create the library, you have to learn to use it") are generally trivial enough to be ignored by comparison, because the costs can be recovered in a single-digit number of uses.
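To illustrate, here's a minimal sketch of that kind of replacement (the helper names are invented for the example): the escaping lives in one place, so callers can't forget it:

  // Hypothetical helpers: escape once, centrally, instead of bashing strings together.
  function escapeHtml(text: string): string {
    return text
      .replace(/&/g, "&amp;")
      .replace(/</g, "&lt;")
      .replace(/>/g, "&gt;")
      .replace(/"/g, "&quot;");
  }

  function el(tag: string, attrs: Record<string, string>, text: string): string {
    const attrText = Object.entries(attrs)
      .map(([name, value]) => ` ${name}="${escapeHtml(value)}"`)
      .join("");
    return `<${tag}${attrText}>${escapeHtml(text)}</${tag}>`;
  }

  // Before: '<a href="' + url + '">' + label + '</a>'  (an injection waiting to happen)
  // After: attributes and text are escaped whether or not the caller remembers to.
  const link = el("a", { href: "/search?q=a&b" }, 'Results for "a & b"');

Nested elements would need a slightly richer node type, but the shape of the win is the same.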


"Your argument is assuming something that I don't believe is true, which is that we're already on the Pareto optimality frontier for security/convenience. It is certainly true that you can not forever increase security without eventually impacting usability, but I don't think many people are actually in that position"

It's true that we aren't at the sweet spot yet, but that's what I meant by companies being bad at the risk-analysis judgement of security versus usability.

On your second point, languages have gone through that cycle. Look at Java doing bounds checks. That helps avoid a whole class of security issues, but at the cost of making things that C was able to do easily more difficult. These tradeoffs happen at every layer.


> No amount of security can be added without a concurrent decrease in usability, even if that usability is something you didn't expect or want to do.

It seems strange to describe this this way for something like fixing a memory corruption bug or switching from a vulnerable cryptographic algorithm to a less vulnerable one. The capability that you're giving up is ... potentially breaking your own security model in a way that you weren't even aware was possible?


I think I might not be conveying my point very well. Let me clarify this as succinctly as I can.

Usability doesn't just mean things users want to do. Usability means things anyone (users, developers) can do. By definition, "securing" things means limiting the capability of certain users or developers to do (hopefully) specific things. How efficient you are at this determines whether or not you'll also reduce the capability users or developers want to have when you reduce the capabilities they don't want to have.

To give a concrete example: using a cryptographic algorithm immediately impacts usability along performance and capability axes. Previously, you could arbitrarily read and manipulate that data because it was plaintext. Afterwards, you could not. Now you need to be careful about handling that data and spend developer time and resources implementing and maintaining the overhead that protects that data and reduces its direct usability.

It doesn't matter if you wanted that capability - it's gone either way. That was a trade-off, and it is an easy decision to make, but not all decisions are easy to make. Every security decision can be modeled as a trade-off.


I fondly remember the convenience advantages of plaintext password storage, both as a user and somebody supporting users.

Occasionally I wonder if there are user accounts in my life that are irrelevant enough I'd be happy to buy that convenience advantage with the necessary security risks ... but of course people's tendency towards password re-use makes that trade-off basically unofferable in any sort of ethical way.

At least bcrypt makes it moderately easy to not completely screw up the hashing part.
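
For instance, with the third-party `bcrypt` package in Python the happy path is a couple of calls, and the salt and cost factor are handled for you (a minimal sketch, not production login code):

  import bcrypt

  # Hashing: the salt and cost factor are baked into the stored value.
  hashed = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt(rounds=12))

  # Verification: the library does the constant-time comparison for you.
  assert bcrypt.checkpw(b"correct horse battery staple", hashed)
  assert not bcrypt.checkpw(b"wrong guess", hashed)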


Although I'm tempted to argue against your view, it ended up reminding me of

http://www.oreilly.com/openbook/freedom/ch07.html

and somewhat relatedly https://web.archive.org/web/20131210155635/http://www.gnu.or...

which tend to support your point.


That's a good example but a bit cherry-picked. I could just as easily point out the opposite with accessing an account. If insecure, it still requires a certain amount of information and time upfront, then some login step just to identify the user. The server will compare that to its local data. The time due to network latency or server load means it usually happens in seconds.

Adding a password that gets quickly hashed by the application to be sent instead costs some extra time. Almost nothing given libraries are available and CPU cycles cheap. If remembering the password, the user has to just type it in once or rarely. The hashing happens so fast that the user can't tell it happened on top of already slow network. Most of the time the user of this properly-designed system will simply type the URL, the stuff will auto-fill, and exchange will take same time. No loss in usability except one time whose overall effect is forgotten by many interactions with identical, high usability.

Likewise, a user coding on a CPU like SAFE or CHERI tagged for memory safety in a language that's memory-safe will not be burdened more than someone coding in C on x86. They will be burdened less by less mental effort required in both prevention and debugging of problems. They could theoretically get performance benefits without the tagging, but that's only if incorrect software + much extra work is acceptable. If the premise is that it's correct, which requires safety much of the time, then the more secure CPU and language are better, plus they improve productivity. Easier to read, too.

A final example is in web development. The initial languages are whatever crap survived and got extended to do things it wasn't meant to. So, people have to write multiple kinds of code with associated frameworks for incompatible browsers, server OS's, and databases. Many efforts to improve this failed to generate productivity/usability and security. Opa shows you can get both by designing a full-stack, ML-like language with strong types that makes many problems impossible by default. Easier to write and read plus more secure. Ur/Web does something similar, but it's a prototype and functional programming rather than production.

Conclusion: usability and security aren't always at odds. They are also sometimes at odds in some technical, philosophical way that doesn't apply in real-world implementations. Sometimes getting one requires a small sacrifice of the other. Sometimes it requires a major sacrifice or several.

It's not consistently a trade-off in the real world.

Note: I cheat with one final example. An air-gapped Oberon System on Wirth's RISC CPU uses far fewer transistors and cycles, and far less energy and time, than a full-featured, Internet-enabled desktop for editing documents + many terminal-style apps. Plus you can't get hacked or distracted by Hacker News! :P


> Likewise, a user coding on a CPU like SAFE or CHERI tagged for memory safety in a language that's memory-safe will not be burdened more than someone coding in C on x86. They will be burdened less by less mental effort required in both prevention and debugging of problems.

In the parent commenter's framework, I suppose the safer language still comes at a cost in terms of the ability to use unsafe programming techniques -- like type punning and self-modifying code.


Hmm. You could use those as examples. There would be cases where type punning might save developer time. There would be cases where self-modifying code might buy you better memory or CPU efficiency. Yet, self-modifying code is pretty hard to do and do right for most coders that I've seen. Type punning happens automatically in a dynamic, safe language with decent conversion rules. You often only write the conversion rules once or when you change the class/type, but you mentally do that in your head anyway if you're analyzing for correctness. The difference is you typed it out with the conversions being mechanically checked.

The examples you bring up seem to be double-edged swords like the others: they can have almost no negative impact or a significant one depending on context.


Ok, let's not talk philosophy and talk capability-based security with CapDesk instead:

http://www.combex.com/tech/edesk.html

They already demonstrated that integrating POLA at language and security level with simple, user authorizations could knock out most problems automagically. Did a web browser that way, too. KeyKOS previously used that model for whole systems that ran in production on IBM's mainframes with checkpoints of apps and system state every 30 seconds on top of that.

Still think you have to screw usability to improve security? And does it matter that it might be true in an absolute sense of some sort if in practice it might be no different (eg File Dialog on Windows vs on E/CapDesk)?


The point is that not ensuring security also has a cost, one which is harder to see.


> Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X.

> If X is less than the cost of a recall, we don't do one.

This is a reasonable expected value calculation - and not really that controversial. The real issue is that the model for cost isn't quite accurate; if you ask an actuary, whose livelihood is based on accurately measuring and accounting "risk," he/she will tell you that you would need to account for the probable loss in future revenues due to negative customer sentiment. Once you account for that, the cost of recall is a _much_ better proposition.


I know it's not good economics, but if you see a human life as priceless, the numbers don't work out quite the same. I think that's what 'Fight Club' was about. I guess the conversation departs the realm of economics at that point, and becomes one of philosophy and/or religion.


I think I'd challenge the assertion that Fight Club in either medium was about human life being priceless (but I understand you probably meant the quote). Quite the opposite, I'd think.


Seems to me that the character, the author, and the audience find something tragic in a life measured only in calculation, and that they all think there ought to be more to life than what's apparent. 'Priceless' may be a stretch, I agree.


If you've built an organization on numbers based decision making, you can no longer consider anything priceless, because an infinity (especially if there are two competing infinities) will cripple your ability to decide.

Companies run on strictly utilitarian ethics, which is why so many ethical complaints are invisible to them. For example: (Customer) ad tracking is bad for me! (Company) But tally the value of our services, we're clearly in the black!


I would contend that those equations are a bit more nuanced than you give credit.

Let's say I hypothetically give you that the Customer sees ad tracking as "bad" (whatever that means ...let's just accept it for argument.)

(1) Then the [Customer] utility function is: (Value from free services) - (Negative experience from ad tracking) + (Possible positive experience from learning about a new product or service from better targeted ads)

(2) The [Company] utility function is: (Value from ad revenue alone) - (Negative feedback on ad targeting) + (Revenue gained from higher ROI on marketing spend resulting in more purchases/subscriptions/whatever.)

In (1), I think people on average don't care about "privacy" related news because users don't see the negative experiences outweighing the other parameters. In (2), the negative feedback on ad targeting isn't really that large at the [Company] level to warrant much change (at least if you leave the echo chamber of HN every now and then.)

In the case of Yahoo, I still hold the hypothesis that they underestimated the (Negative feedback on a breakdown in security) as well as (Positive revenue gained from trust in security.) Then again, I doubt myself because if this were true, Box would be lightyears ahead of Dropbox; sometimes the coefficient on UI _really is_ larger than that of security...?


> In the case of Yahoo, I still hold the hypothesis that they underestimated the (Negative feedback on a breakdown in security)

Yahoo's stock is up (+53% since February, with a small dip in late June). Where is the miscalculation?

Volkswagen is back to positive sales growth, and their stock has recovered 50% since their discovery last September. Their calculation was correct, too.


Ah good point - at least for Volkswagen...Yahoo has other confounding factors (their sale, etc.) but overall the impact is probably a short-term shock with few longer-term lagging effects.


And that's why, if the problem was obvious/known, we need to fine companies enough that X becomes way bigger than the cost of a recall.


If you make cost X so high that X is an existential risk, people/companies will chance it because security isn't binary and "Either way we're fucked if we get a breach".

So then companies just never disclose.


Or it makes the cost so high that the underlying product becomes impractical. I'm pretty happy to live in a world where I can buy a car for less than $100k, even if that car ends up being much less safe than an S-class.


That's true, but a company can only play that game so many times before it catches up to them. "Never disclose" isn't a workable policy because eventually someone will leak the data.

It's also worth noting that you're talking about a hypothetical, but there are real life examples of this sort of security working despite your claim that it won't work. I've worked for HIPAA-regulated companies. It's certainly difficult to meet their requirements, but it's not impossible, and the regulations do have a real impact on the security of the data.

I'm also not convinced that security isn't a binary. You're either secure or you're not, and you're only as secure as the weakest link in your system: that seems pretty binary to me.

A more accurate statement might be that perfect security is prohibitively expensive in many cases. But in many of those cases, data is actually not needed, and is collected because business wants visibility into users, even if that means compromising user security. This divides companies into three camps:

1. Companies where security is cost-effective.

2. Companies where security is cost-prohibitive, but which don't need to collect data.

3. Companies where security is cost-prohibitive, but which need to collect data.

I'd posit that the vast majority of companies are in categories 1 and 2, and that it would be a net benefit to people if all companies in category 3 stopped existing.


> I'm also not convinced that security isn't a binary. You're either secure or your not, and you're only as secure as the weakest link in your system: that seems pretty binary to me.

You cannot use the phrase "as secure as your weakest link" and then assert that security is binary. You're using terms that indicate varying levels of security.

More to the point, security is clearly not binary. You can support login over HTTP, which is quite insecure. You can support login over TLS which is much more secure. You can support only more recent algorithms over TLS which is more secure still. You can enforce two factor authentication, which adds more security. You can make your clients use certificate pinning which makes you more secure yet. You can allow easy access only from known clients and otherwise make the clients go through some extra authentication steps (secret questions, email verification, etc.). You can do the same for known locations.

Each of these options provides different levels of security. None of them are "secure" in any binary sense.
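
To make one of those rungs concrete, here's a rough sketch of enforcing modern TLS only on a Python server socket (using the stdlib ssl module; the certificate file names are just placeholders):

  import ssl

  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  ctx.minimum_version = ssl.TLSVersion.TLSv1_2     # refuse SSLv3 / TLS 1.0 / TLS 1.1 clients
  ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")   # AEAD suites with forward secrecy only
  ctx.load_cert_chain("server.crt", "server.key")  # placeholder paths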


I think the missing piece in what you are saying is that there's an unspoken question here: "Secure against what?"

Let's use your examples to explain:

> You can support login over HTTP, which is quite insecure. You can support login over TLS which is much more secure. You can support only more recent algorithms over TLS which is more secure still.

Secure against what? If it's password exposure you're worried about, then HTTP is definitely not secure unless some other security is used. But given the attacks I know of against older versions of TLS, I don't think it makes sense to say that older versions of TLS are less secure against password exposure than newer versions of TLS, because the vulnerabilities I know of in old versions don't leak passwords[1]. So HTTP: not secure, TLS: secure, for password exposure. It's a binary whether it's secure for password exposure.

If, however, it's unauthorized access we're worried about, the CRIME and BREACH attacks are usable against all versions of TLS for session hijacking, so we could say that neither HTTP nor TLS is secure against unauthorized access. Again, it's a binary whether you're secure for unauthorized access.

So yes, actually each of these options is secure in a binary sense, when you ask what it's secure against.

Security, as a whole, as I see it, is a big `&&` of all the smaller binary pieces of security that matter for a given product. In reality, for most products, you have to be secure against password exposure and unauthorized access. It doesn't matter if you're secure against one if you're insecure against the other--that's what I mean when I say you're only as secure as your weakest link. So when talking about your security as a whole, it really is a binary: either you're secure or you aren't.

[1] This is for the sake of argument--don't take my word that older versions of TLS are secure against password exposure, as I haven't investigated that claim fully.


You're trying really hard to fit this into your binary model. Security is all about managing risk. It's not absolute. TLS didn't change when the CRIME attack was revealed, but it suddenly became less secure because the risk profile changed. But before CRIME, TLS wasn't perfectly secure. There was always the risk that the protocol could have undiscovered flaws, that an attacker could guess the private keys, that a cert authority could issue a valid cert to an attacker, etc.

In a world of imperfect security, talk of binary security is meaningless.


Security is all about managing risk for you because you've already chosen to compromise on security.


Not compromising on security is an unrealistic ideal. A perfectly secure system is a perfectly unusable system.


That isn't something made up for the film, in real life at least one "no recall" decision has been made using exactly that sort of cost/benefit analysis: https://en.wikipedia.org/wiki/Ford_Pinto#Cost-benefit_analys...


Isn't this the case for any business? Strictly speaking, they're profit-generating machines. The purpose of regulation is to offset this equation by some amount so that it balances where society collectively deems it reasonable. That's the intended purpose, anyway.


  Narrator: A new car built by my company leaves somewhere traveling at 60 mph. The rear differential 
  locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall?

  Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by 
  the average out-of-court settlement, C. A times B times C equals X.

  If X is less than the cost of a recall, we don't do one.
The example reminds me of this discussion[0] between Milton Friedman and a student.

[0]: https://www.youtube.com/watch?v=jltnBOrCB7I


Note that:

1. Friedman positions the student's view as wrong. And changes the question.

2. Friedman argues himself around to the student's argument, without acknowledging this.

3. Friedman never once acknowledges that the problem was that Ford was aware of the risks but chose to conceal them from the public, such that the public was fundamentally unable to make an informed choice.

4. Allowing people to bargain with their own lives leads to numerous other slippery-slope and logically-constrained tragic inevitabilities. Individuals almost always think they can beat the odds. They're almost always wrong.

What cost-benefit analysis almost always fails to consider are the moral and goodwill costs of making a decision which is intrinsically harmful to the customer. Most especially when not informing the customer of the full risks.

That specific clip is among the more prominent reasons I find Friedman an entirely unfaithful and bad-faith debater. He keeps moving the goalposts and using equivocations just enough that unless you're quite attuned to the fact, you'll miss it completely. And that is where he's not lying outright. Curiously enough, his son David does pretty much precisely the same thing.

Neither seem capable of admitting error either, which is the final loss of credibility.


But cost is one - pretty good - way to figure out which branch of that tree to take. You can decide to (arbitrarily?) weight the decision towards the cost of securing the system.

I don't see how it is that security isn't totally analogous to a lighthouse, which is the classic example used to explain public goods. Yet we're expecting Yahoo to underwrite security on its own?

And how is it that we simply let the attackers off the hook? Using naval piracy as a metaphor, the response was rather violent suppression by (primarily the British) Naval forces.

The seas were commons, and pirates were hung from yardarms as a public service.

I could cynically project that "security" is being used somewhat as a make-work program for engineering staff. The concern is that language systems of inappropriate weight may be used simply because they're "secure". Granted, the hardware ecosystem certainly makes this less than problematic.

And I am sure these metaphors break down at some point, but they work for me, for now.


  That's the current mindset of the technological world,
  estimating whether the cost of atoning for the problem is lower
  than the cost of securing the systems.
That mindset can be changed if the companies are fined heavily for breach of customer data.


This was so fking stupid because it's not even how this works. Yes, the manufacturer can initiate a voluntary recall, but there are other paths to recall. Insurance companies are the point of the spear for payouts -- if they think that a particular model or manufacturer has a problem they're going to say something if only to reduce their costs.


That's not just the tech industry, that's more or less the foundation of society: maximizing value. The only aberration here is that Google, not Yahoo, sees that there is a second component of X: [how much the average customer believes they will lose in future security breaches] * [total number of customers].


that fight club example is amusingly cynical, but a true cynic might think it idealistic to believe that high-level decisions are made according to any formula. if they were, what calculations could explain the decision to allocate resources to Katie Couric?


A lot of the good ones left a long time ago.

Y! has been stuck in a rut, coasting, like AOL, for a long time... hence Verizon sees their old white grandparents with email as a stable user base. Most old people won't change email addresses no matter what happens.



To this add what I call "pre-breach failures of imagination". Most often manifesting itself as "Why would anyone hack us?"


It sounds to me like Fight Club just repurposed the Ford Pinto lawsuit.


I'm not a lawyer, but if the "probable rate of failure" passes a certain threshold, criminal negligence should be a consideration. Certainly in the case of vehicles, and also in the case of computer security where lives are at stake, hospital systems for instance.


Funny timing. My in-laws (both 70+) just had to change their passwords (both using @yahoo.de email addresses) and my mother in law probably botched it / managed to type the wrong thing twice or something.

Password reset requires 2 security questions (ugh - already ugly) and while she's 100% certain that she knows the answer to both the second one isn't accepted - probably another spelling issue (think St. Marlo vs. St Marlo vs. Saint Marlo vs. Sankt Marlo vs ..).

All of this is her fault, not Yahoo's. But now she's stuck. There are no ways to contact support, at all, and by now her 'resolve this problem' links already contain an "In rare cases like these, we suggest creating a new account" line.

Anecdotal moral of the story: Yahoo has no customer support at all. Migrate your elder family members away while you still can. :)


A certain elder member of my family simply cannot remember passwords. Or that she had a password at all. Or that she had an account to see what she wanted to see. So she uses password reminders everytime she wants to log in. If you log her out from a service to login yourself she asks you why "the computer stopped working". Password reminder works well until she gets logged out from the mail account, at which point creating a new account is a great solution according to her. So now when she gets a password reminder sent you have to check multiple mail accounts and hope that she is logged in to the right one. Otherwise you need to use one mail account to gain access to another until you find the password reset link. At this stage you have to hope for that the reset link is for the right Facebook account, and not any of the duplicate accounts she created when "Facebook wasn't working".

Her comment: "What? Do I need a password for this? I hate passwords! Why do they make me use passwords?" Proceeds to click the password reminder link.

True story. Would be funny if it wasn't so tragic.


Hate to be the one pointing this out, but your relative is unfit to use online services now.

You would not let her drive, would you? Even if the car provided her more autonomy, the risk of her running into another car would be too high. Why would you then let her expose herself to a world full of predators that are specifically targeting senior citizens to wipe their bank accounts?

Better to monitor her use of computers as if she was a young girl.


Actually, she isn't that far gone. But she hates computers with a passion and treats them accordingly. I have tried for many years to teach her the concepts, but she refuses based on the premise that she thinks it's too complicated and that it "should be simpler".

Kind of reminds me of myself following useless bureaucracy.


Ok, not what I had in mind when I wrote my first comment, sorry about that.

Have you tried providing written instructions? In my opinion, for her vectors of attack, it is much less risky to have her passwords written down on paper somewhere in a drawer at home than to have an online mess.


As long as she isn't using internet banking, is it really that much of an issue?

Assuming she doesn't fall for any scams (which is a big assumption).


Ideas that might help:

* set all redundant accounts to redirect mail to a main one

* do write the identifiers and passwords on paper notes, and have her keep them, for instance, inside her wallet


I just had to convert my online bank account to their new (2nd in a year) system, which now requires five security questions. As a single man who doesn't have a "favorite" anything, this proved to be a challenge. All questions were either regarding spouses, favorite somethings, or questions about descendants which I have no clue about.

I ended up just picking random questions and setting them all to the same answer, which has nothing to do with any of the questions - something that I can't even misspell if I tried. This is going to be my new strategy moving forward.

And don't even get me started on their silly username and password "security" rules.


I use 1Password and randomly generate answers for each question and log them in the "Notes" of the account.

I'm sure other tools like KeePass have similar sections to do the same thing.

That way the answers aren't reproducible and you have them safely stored somewhere.
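
If you want to script the generation part, a minimal sketch with Python's secrets module (any password manager's built-in generator does the same job):

  import secrets

  # A throwaway answer with no relation to the real question;
  # store it in the password manager entry's notes.
  answer = secrets.token_urlsafe(18)
  print(answer)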


As an added bonus, this means someone armed with the real answers to those questions won't be able to get access.

When Sarah Palin's "personal" email was hacked during the 2008 election, the attacker used her Wikipedia page and recovery questions.[0]

[0]: http://nypost.com/2008/09/19/dem-pols-son-was-hacker/


'hacking'


"to circumvent security and break into (a network, computer, file, etc.), usually with malicious intent"

Just because they didn't impress you by finding a side channel timing attack for the password hashing algorithm used by Yahoo, doesn't make it any less of a hack.

Why spend millions investing in a network of computers to break encryption, when the key can be gained far more easily with a $20 tire wrench applied with sufficient force to the DBA's knee caps?


I do this but it poses a problem for, as an example, banking. When they ask you the answer to your security question over the phone and you don't have access to your computer/password manager. Let's say you're one of those weird people without a laptop and your account is frozen while travelling overseas.

Having a cat named 1FD362BW9L6MBOWRD23SEF43 becomes a huge problem...


That's why I like 1Password. It's on my phone, so it's accessible, and I can do "words" instead of "characters".

So, I could very well have my mother's maiden name be "panda porpoise flutist sandpile", but I understand what you're saying. It may not be for everyone, but I work in the security sector and usually over-paranoid is better than ill-prepared.
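
A word-based answer like that is easy to generate too; here's a sketch, assuming a real word list (e.g. the EFF diceware list) in place of the toy one here:

  import secrets

  # Toy list for illustration; use a large published word list in practice.
  WORDS = ["panda", "porpoise", "flutist", "sandpile", "quasar", "lantern", "mosaic", "ferret"]
  answer = " ".join(secrets.choice(WORDS) for _ in range(4))
  print(answer)   # e.g. "quasar ferret panda lantern"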


Yes, this is the right way to go. Unfortunately it is limited to the tech savvy. Also, I know somebody who needed to contact a company by phone, and he needed to tell the rep his security answer. He had to read off his 20 random characters. Pretty lol.


Right? Like "what is your favorite food"... what do you mean what's my favorite food? What am I, five years old? Who the hell has a favorite food? I like pizza. I like wings. I like burgers. I like bbq ribs. Who has a single favorite food? "What street did you grow up on" like many people, we moved quite a few times when I was a kid. "What was your favorite vacation/What was your favorite concert/What is your favorite movie" change way too often for any long term kind of thing.

"What city did you meet your spouse in", "what was your first car" and "where were you born" are way too common of knowledge to work reliably. Security questions suck.


Principal.com asked me a question about which car I've owned, and gave me a list of cars with year, make, model, and trim level. I've owned at least a dozen cars, and can't remember them all, much less in that detail. Needless to say, I failed that one.


I have a similar story. I wanted to get a copy of my birth certificate sent to me. In order to do so, I had to answer some security questions, which were apparently drawn from public records.

E.g.: "what was the name of the person you bought your house from?"

Really? Not only was that transaction about 15 years earlier, but I never even met the people. They moved out of town long before their house sold. Who the fuck cares what their names were?!?! Nowadays all that stuff is done through real estate agents and title companies.

So, while filling out an online form, I'm supposed to go rummaging through 15 year old records to find the names? And what if I stored those papers offline for safekeeping (e.g. at a bank)? How long will the question on the screen remain active before it times out?

There has to be a better way of doing these things.


I ultimately ended up calling Principal. The person on the phone asked some silly questions too, but at least she let me pass on a couple. She then reset my account and sent a link to reset my password. That wouldn't be so easy with all sites, since they don't all have reachable humans.


United airlines recently switched to 5 security questions with a predefined list of acceptable answers. You get a drop down to select from.


I'm almost tempted to sign up just to see what that's about, but I'm sure I would just be torturing myself at this point. I've been going through a mortgage refi, and my bank was just the worst offender, but some of the other site from which I needed to retrieve documents were almost as bad (I'm looking at you Principal).


Think of it as a new opportunity to invent a new "favorite" at random. It will confuse whoever tries to break into your account.


Yahoo has also taken over email for Verizon, which I use. Two days ago, without warning or notice, I suddenly couldn't log into my verizon email. I called verizon customer support, and the lady had no idea why I was locked out of my account. After jumping through hoops, I was able to reset.

Yesterday, my wife (same account) was locked out as well.

I'm wondering just how big of an issue this is.


Something similar happened to me with one of my rarely-used Gmail accounts, got prompted for the security question after it took me until my 3rd attempt to get the password correct. I had set the answer to gibberish some 8 or 9 years ago, and had zero way of recovering the account. Found lots of people with similar issues on the Google product forums, all answered (not by Google employees, of course) that it was completely unrecoverable.

I wrote off the account at that point and migrated a few accounts attached, but 2 weeks later the system let me enter a phone number to be able to log in.

Moral of the story: pay for support if you want it.


If only Google would let you pay for support. Someone unintentionally locked out of their account would probably happily pay $50 to get back in.

I had to deal with this a while back because TFA was enabled on my account and I got a new phone and number at the same time so my login options were all broken, and the recovery flow didn't work for the same reasons. Thankfully this was a Google for Work account so there was a way for me to prove domain ownership and they restored my access. If this had been my personal Gmail account (not attached to the domain), I would have had no way to recover it, partly because I couldn't prove ownership so easily, but mostly because they have basically no support for personal accounts.


They do let you do that. For $60 a year, you can have 24x7 phone/email support via their Google Apps platform. No restriction on it needing to be for work AFAIK.


https://apps.google.com/pricing.html

Would that be the landing page? If so it asks a lot of 'company' type questions. Is there a personal landing page?


I think that's just a convenience thing. I'm fairly sure you can sign up without and they'll still give you support. I've had Google Apps for years now and it's been good.


Interesting. I was not aware of that. Thanks


I would say the moral of the story is you get what you pay for.


There's a paid option I think (I wouldn't consider that for myself, but stumbled upon it while searching for a way to solve this issue). But I'd also argue that

a) you're paying, if someone offers a free service that is shoving ads in your face all the time, tries to hijack your home page and lures you to install browser toolbars and whatnot. It isn't free, you generate revenue.

b) no service, free or not, should cause you to lose access to your data. While you might mumble "Backups! You should have those" that .. isn't exactly a decent reaction or something to tell a 70+ person either. Plus, the email address is an online identity. Tied into other services (password recovery there is now broken). Causing loss of future data, backups or not. If you provide a service like that, you should provide excellent ways to regain access to that in my world.

c) I'd probably (well, not me. I'd make her..) pay for a ticket. Given the option..


Yet who would pay for email? Not sure there's even an option.


Fastmail?


I managed to contact support myself to regain access to a Yahoo account. I remember Yahoo customer support being on Twitter to deal with these kinds of issues.


Honestly, what do you expect from support in this situation? To give someone access to an account without authentication?


Nah, I expect a human to do various things. Examples below, sorted by decreasing preference and increasing complexity:

* Compare the security question's answer with a brain vs. strict equality ("Is the answer provided by the person essentially the correct one, spelling ignored?"). Right now the account is (forever?) inaccessible although the correct answer is known. It's a name of a place. She could provide GPS coordinates. For some reason she cannot come up with her original way of answering the question and you're very heavily rate limited, so she was able to try a couple variations only.

* Potentially validate the former password (if that is stored somewhere) or metadata ("I changed the password on day X, local time was around ...")

* Ask specific questions about account access or emails (last resort, but still better than losing the account forever imo)


I can relate to a company not putting value on security, or thinking the cost of securing systems may be higher than the cost of getting hacked.

I once worked for a company where I inherited a RESTful API. It stored the company's core data, including private customer information. It had no authentication, completely open for anyone on the internet to read or update any of our data.

I alerted my manager about this and that made its way to the highest levels of the company. The decision was to create a backlog item. It took about a year before we got to it.

The reason we ended up finally fixing it was because we were contacted by a security researcher one day. He said he had found a vulnerability in our system, but wouldn't tell us what it was until we disclosed our bug-bounty terms (basically promising to pay him if he had found a real vulnerability). If we wouldn't do this, he was going to write a blog post about it.

My manager used some delay tactics to buy us some time, while we spent the next 24 hours slapping a bandaid on the API. Once we had fixed it and agreed to pay the researcher, he disclosed his vulnerability and it had nothing to do with our API. It was a minor XSS that couldn't leak any sensitive information.
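
For context, the 24-hour bandaid in situations like this is often nothing fancier than a shared-secret header check in front of every route. A purely illustrative sketch (Flask, with made-up names, not what we actually deployed):

  import hmac, os
  from flask import Flask, request, abort

  app = Flask(__name__)
  API_KEY = os.environ["INTERNAL_API_KEY"]   # hypothetical env var

  @app.before_request
  def require_api_key():
      supplied = request.headers.get("X-Api-Key", "")
      # Constant-time comparison; reject everything without the shared secret.
      if not hmac.compare_digest(supplied.encode(), API_KEY.encode()):
          abort(401)

  @app.route("/customers")
  def customers():
      return {"ok": True}   # stand-in for the real resource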


That sounds a lot like blackmail


I wrote a few days ago about how easy it is to compromise your ethics when trying to save a company. The problem is that once you compromise once, it's very easy to do it again.

https://news.ycombinator.com/item?id=12557163

The problem is, it's way too easy to look past the action you are taking because you can talk yourself into believing it's for the greater good.

And this is a huge ethical breach by Mayer; if she did this way back then, it's pretty reasonable to assume there are some more skeletons hiding in the closet.

I don't really think I'd be wanting to give Verizon a reason to reconsider the takeover......


You might be interested in the research on "normalization of deviance". Most of the solid research is in safety-critical operations like aerospace; however, plenty of people have made parallels to other industries.


I am not sure what end-to-end encryption would have done to defend Yahoo's users against the entity that broke in and hoovered up its databases. Similarly: the password reset situation is sad (understandable --- it would have cost them millions of users at a point where their declining user base was being carefully watched by the market --- but infuriating) but again, what difference would it have made with respect to the most recent breach?

There are just a few companies in the whole world who both run tens of thousands of servers and are equipped to go head to head with serious attackers. Yahoo isn't one of them. Has it ever been? No.


In the interest of discussion, would you be up for naming those companies, in your opinion?


Google, Microsoft, Facebook, Apple.

Maybe 3-4 other "smaller" big startups that have anomalously great programs, which I'm not going to name.


You're giving Apple way too much credit.


Or you're giving Google and Microsoft too much.


At some point Apple didn't even store Netscaler configurations under source control. Glad to hear that they got better.


Microsoft certainly, not so sure about Google. Anyone can get access to the Redmond intranet by spending a day searching for credentials on GitHub; harder to do with Google.


There's not one "security"; of these 4, each has some specific strengths, and some weaknesses.


Of the 3, which have you hacked in some form?


Seems like the banking industry has at least a degree of competency here as well. Who would bother with identity theft if you could just hack the banks and steal the money directly?


Strong disagree. The reason you don't see tons of bank compromises --- apart from the fact that banks don't routinely disclose breaches --- is that it's harder to monetize a breach than people tend to assume it is.

Think it through. So: you "hacked" a "bank". You're on their internal network. Talk me through "stealing the money directly".


>So: you "hacked" a "bank". You're on their internal network. Talk me through "stealing the money directly".

The attack on SWIFT in Bangladesh [1] earlier this year gives one example.

1. http://baesystemsai.blogspot.com/2016/04/two-bytes-to-951m.h...


Big respect for you usually, but I think you're completely wrong here. See my comment a bit upstream - having that sort of access to mortgage/credit card systems would be a great way to steal money, or at the very least fudge things up left and right.

Internal access would also include access to low latency markets and internal risk management systems. Being able to see what kinds of trades - particularly in FX - are made across many, many international counterparties would go very, VERY far to make large amounts of cash.

At that same bank, many green engineers had the ability to do trades on behalf of many large counterparties. Lots of room for monetization there - or more.


I helped start a specialty practice at Matasano focused on trading firms and exchanges. That doesn't make me right or anything, but I'm pretty familiar with the attack surface we're discussing. I think the kind of attack you're contemplating is a lot harder to pull off than you think it is. (Not from a technical perspective.)

I have no trouble believing that you can make money from PII stolen from a bank breach. The issue is that the same PII exists in all sorts of other firms that are lower-hanging fruit, both from a technical perspective and from a "degree to which law enforcement will be invested in tracking you down" perspective.

In any case: banks staff decent-sized security groups, but they're generally nothing like the force Google can bring to bear on the same problems. There's a reason Chris Evans works at Google and not at some random bank.


Yep - I was actually hoping to see your talk on Starfighter last week, but work kept me in a different state. :)

My experience has been in finance for the past ten years at an ops/sysad level and can frankly say that security at these places may be a lot worse than you think. Much of it just seems to be out of laziness/not understanding best practice. Attacks would by no means be trivial, but they're certainly possible with (edit: relatively) low risk.

Keep in mind that the finance field is largely based on very, very old legacy systems that only upgrade as a last resort, including patching. Managing legacy systems on top of improper management of organizational complexity leads to some very, very poorly implemented security. It's pretty frightening.

Things I've seen in finance -

(edit: deleted long list that probably shouldn't stay within easy internet accessible reach)


Technical security at financials, especially in application code and especially in application code that is closer to infrastructure than to line-of-business or retail, is very bad.

But the business processes that are driven by that infrastructure tend to be surprisingly manual and/or reversible, and, for reasons having little to do with technical security, are heavily audited.

I think unless you're the online equivalent of the robbery crew from Heat, if you SQLI your way into a bank (or trading firm or exchange) and try to move large volumes of cash directly, what's really going to happen is you're going to end up in prison before you get a spendable dollar.

This is a better conversation over beer than on HN. There's definitely stuff you can do! But I don't think financial firms are low-hanging fruit.


Fair enough - I can definitely see how auditing on a non-technical level would "do the trick".

I'll definitely take you up on that beer ;). hubblefisher at gee mayl.


Isn’t Evans working for Tesla these days?


Yup. Forgot. Thanks!


Isn't that the point? If you can't monetize a breach of their network then it seems like they're doing their job.


Imagine that you have hacked a bank. You could try to transfer money directly from the hacked customers' accounts into yours (or one you have set up for that purpose), in which case the account that the funds have been diverted to will instantly be passed on to law enforcement, and any attempts to move money out of that account will result in a SWAT team showing up in your location and arresting you. Remember that banking is double-entry: any debit from an account is a credit into another account, with an audit trail of exactly how the money has flowed.

Or, you could take the names, addresses, social security numbers, occupations, and income levels of all the bank's accounts and sell them on the black market. Your customer could then open credit cards in the name of the breached accounts, adjusting the billing address to an insecure mailbox nearby or hiring local kids to rifle through your mail when not home. (Or just steal credit card numbers.) They can then intercept the resulting card, charge a bunch of purchases to it, and ignore the bills. They won't be found out until the target checks their credit report and notices a bunch of cards they never signed up for, possibly a year or more in the future. The target is responsible for clearing up their identity. The credit card company is responsible for the financial losses. The only way to track the criminal is through their string of purchases, and remember that's not the guy who hacked into the bank in the first place (who is probably sitting on a beach in the Cayman Islands), it's the guy who bought the data.

Not a hypothetical scenario. Data breaches of this type have been reported against Mastercard [1], Bank of America [2], JP Morgan Chase [3], and others, and the mailboxes of both of my previous apartment complexes have been physically broken into.

[1] http://www.advfn.com/nyse/StockNews.asp?stocknews=BAC&articl...

[2] http://www.bankinfosecurity.com/bofa-breach-a-big-scary-stor...

[3] http://www.lowcards.com/major-data-breach-jp-morgan-chase-hi...


An attacker could still at least get the bank's customers' personal information in that case.


That is presuming that the end target of the attackers has to do with money. Very often it is not. Consider all the health care breaches of the last few years.


There are many ways to monetize a breach.


Yeah. I'm not suggesting that it's an A+ job well done. But at the same time, relative to the target they are, seems like they're doing something right.


No, that is not the point.


I'm genuinely interested in why not? Maybe the way I wrote it was a bit flippant, but surely bank computers that control the flow of trillions of dollars are a huge target. The fact there has never (?) been a massive breach that resulted in billions being stolen must be a sign that someone is doing something right. No?


I'd be interested in how many SOCs or FI security people you've talked to. It isn't perfect, and things will happen. But when I hear how medicine/education/retail secures their stuff (generically), I think the FIs are doing a bang-up job.


I think finance spends more on security than health or retail, but I don't think their outcome scales linearly with that investment.


  rm -rf /


This works on mainframes?


Hell no they don't.

I worked at a larger bank that used rot13 to encrypt the RW passwords for the internal databases it connected to. Since risk management in this case included mortgage and credit card systems, it would have been disastrous if there was any sort of compromise. The dev's excuse was, "We didn't have enough time to do proper password storage."
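
For anyone who hasn't seen why that's effectively plaintext: rot13 is its own inverse, so anyone who can read the config can read the password. A sketch with a made-up password:

  import codecs

  stored = codecs.encode("sup3r_s3cret_db_pw", "rot13")
  print(stored)                           # fhc3e_f3perg_qo_cj
  print(codecs.decode(stored, "rot13"))   # sup3r_s3cret_db_pw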


I complained to Wells Fargo last year that they shouldn't be storing user passwords. Their response was to not worry about it because they are the ones responsible for fraud.


> I am not sure what end-to-end encryption would have done to defend Yahoo's users against the entity that broke in and hoovered up its databases.

Why? Wouldn't it mean the hackers are unable to decrypt the data of users who had a strong password? If the server has the ability to decrypt user data, is that really 'end-to-end encryption'?


If an attacker is in your system, and if the data in question ends up in a database, then it is either decrypted or trivially decryptable.


It sounds like the system you describe does not use end-to-end encryption.

To put it another way: WhatsApp claim to use end-to-end encryption for their messaging service [1]. If a hacker gained unrestricted access to their online server and database, could that hacker read any user messages?

[1] https://www.whatsapp.com/faq/en/general/28030015


> It is believed that the hack compromised personal data from the accounts including names, email addresses, telephone numbers, dates of birth, hashed passwords (the majority with bcrypt) and, in some cases, encrypted or unencrypted security questions and answers.

https://en.wikipedia.org/wiki/Yahoo!_data_breach

i.e. messages weren't stolen, account information was.


Can I just inject some perspective and say that the question would (should?) have gone something like this?:

"So we got 500 million passwords stolen. We're using bcrypt with an adequate number of rounds, so we only anticipate 1000 of those passwords ever being broken. Should we issue a mass reset?"

It's never black and white, you have to weigh things against each other.


Didn't the breach disclosure say "most" passwords were hashed with bcrypt? Obviously I don't know what everyone else got, but it can't have been better or they'd have said so...

I don't mean to detract from your point, good prevention beats reactionary resets. It just raised my eyebrows at the time as a strange weasel word in a claim that users were safe.


Now that you mention it, I remember that too. Seems weird, I don't know why you'd have some passwords hashed in other ways. Even if you've migrated, why not migrate everyone at once?


You need the user to log in once to get their raw password to rehash it. Unless you like rewrapping old hashes in every new one as it comes along.


Yep, exactly. You wrap them all in the new one, and migrate when the user next logs in.
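
A sketch of that wrap-then-migrate idea, assuming (purely for illustration) that the legacy scheme was unsalted MD5:

  import bcrypt, hashlib

  def wrap_legacy(md5_hex):
      # One-time batch job: wrap every stored MD5 digest in bcrypt immediately,
      # so no plaintext password is needed for the upgrade.
      return bcrypt.hashpw(md5_hex.encode(), bcrypt.gensalt())

  def verify_and_migrate(password, wrapped):
      # At the user's next login: check bcrypt(md5(password)), then re-hash the
      # plaintext directly and store that, dropping the MD5 layer for good.
      md5_hex = hashlib.md5(password.encode()).hexdigest()
      if not bcrypt.checkpw(md5_hex.encode(), wrapped):
          return None
      return bcrypt.hashpw(password.encode(), bcrypt.gensalt())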


The users table would surely contain more things than just usernames and bcrypt-hashed passwords.


Sure, and even if the passwords stay secure this is bad for users.

But I'm specifically reacting to "hashed passwords (the vast majority with bcrypt)". That's the sort of thing that's usually code for "except the ones which are horribly secured and will be compromised in a week".


Referring to the infosec team as "paranoids" is a really bad idea. I have our infosec team report into me and they terrify me on a regular basis, but they are not paranoid. They worry, they poke around, they find stuff and they fix it.


The team has called themselves "Paranoids" since at least 1999. I worked with them and have only praise.

"We try to be somewhat lighthearted about security," [head of department] said. "As important as it is, I also think it helps adoption if it is not too serious." http://www.zdnet.com/article/at-yahoo-it-pays-to-be-paranoid...


I disagree. I know one of the former members, and I can assure you that his peers in development think very highly of him. It was upper management that did not want to foot the bill.

And a willingness to be paranoid is a virtue that we all should have, not just the security team.


That's the name the infosec team chose for itself something like a dozen years ago.


a wise board member once said "Congratulations on [good product launch]. The optimists in your team were right. But you should remember, it's clear that the pessimists on the team are the ones that made it so."


It's humor.


  “At Yahoo, we have a deep understanding of the threats facing our users and continuously strive to stay ahead of these threats to keep our users and our platforms secure,” 
Why do I always get the almost irresistible urge to yell at my flat screen whenever a corporate spokesdrone opens his or her mouth?

Is the ability to talk platitude-gibberish a requirement for such a job?


Is the ability to talk platitude-gibberish a requirement for such a job?

It's not just a requirement, it is their job. How would you re-phrase that sentence in a way that A) isn't an actual lie, B) doesn't admit to any wrong-doing, C) keeps your customers calm, and D) keeps your shareholders calm?


I generally agree with this, and it's been my consistent experience too.

But since I'm in full-on Elon Musk fanboy mode right now, have you noticed that he's the exact opposite? He's extremely up-front about things, good, bad and ugly, and it's so refreshing.

Additional tangent: I suspect that's what drew so many people to Trump and Sanders. Though it's hard for me to put both of them in the same sentence together, one thing they have in common is, at least, the appearance of being straightforward. In Sanders' case, I think he is truly being straightforward.

</tangent>


He's extremely up-front about things, good, bad and ugly, and it's so refreshing.

As much as I love Elon Musk as well, I have to admit that some of his comments around Autopilot and the accidents and possible problems surrounding it have had a distinct whiff of corporate newspeak to them.


I think I know what you're talking about, but to be clear, are you talking about his 'stern' tone about driver error? I won't disagree with that point.


Do you remember Ferdinand Piech telling Americans they didn't know how to drive when the Audi acceleration thing was happening back in the late eighties?


I don't.


There's got to be a book of these, where you can fill in your company name and other details, Mad-lib style.

"At [COMPANY] we take [THING WE SCREWED UP] very seriously. We will be launching an investigation into potential lapses of [THING WE SCREWED UP] and take steps to ensure it remains strong going forward."

"Our customers' [THING WE LOST] is extremely important to us. It is [COMPANY]'s top priority to recover this [THING WE LOST]."


Haha this reminds me of Grayson Moorhead's principles of business:

We must take special care of the list with each client's name and the amount of money he has invested. If we were to lose that list, we would be ruined.


Funny how many people are saying this is gibberish.

I don't think it's gibberish, I think it's meaningful (though terribly bloated), and a lie.

The PR-speak makes it generic enough that while it's a lie, it's a lie that can't easily be proven to be one. But it is one. They do not strive to stay ahead of threats: they grudgingly take some steps, without adequate resources.


You're looking at this logically and rationally. That's not how PR works and I believe this is poor PR because it sounds fake, generic, and uninspiring. So they basically would have been in the same position had they said nothing at all.

Which IMO greatly reflects Yahoo these days, they are simply maintaining the status quo. And that status quo is pure mediocrity from which they haven't shown any aptitude at reforming.

This security breach handling is no different from the rest of their business. Yawn.


...Yahoo these days, they are simply maintaining the status quo.

Was there a time, this millennium, when that wasn't the case? I've never seen a Yahoo service I wanted to use. In 2001, I would have been shocked to learn that Yahoo would still exist in 2016.


> Is the ability to talk platitude-gibberish a requirement for such a job?

Literally, yes?


Yes, it is, as I'm sure you know.

Such nonsense talk is difficult to latch onto and attack precisely because it has the air of substance while actually saying nothing.


It reminds me of the scene from Foundation where an analysis of a politician's multi-day stay and conversation amounted to no words. He had literally said nothing.


It's a deep understanding alright. They deeply understand that their security is utterly inadequate so they have to strive extra hard to try to secure their services.

It appears they've been striving so hard they have given themselves a collective security haemorrhoid.


I have been thinking of Yishan-style CEOs with board of directors also tweeting a lot for a while now. It is time to move on.


That is their job, bruh.


>>"...said the company spent $10 million on encryption technology in early 2014..."

What does that mean, exactly? Is it really possible to spend $10 million on encryption or is that some kind of marketing spin on things? I'm genuinely curious about this.


You can spend $10 million on any project by putting a team of 30 engineers at 150K each on it and letting them buy and deploy 1000 IPsec gateways or SSL accelerators or what have you at a cost of $5500 each including a 15% maintenance contract and internal hosting costs.
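
The arithmetic in that sketch does land right on $10 million:

  engineers = 30 * 150_000     # $4.5M/year in salaries
  hardware  = 1_000 * 5_500    # $5.5M in gateways/accelerators, incl. maintenance
  print(engineers + hardware)  # 10000000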


I would guess that the bulk of the costs is installing the processing power that enables the encryption.


Ah, I just reread that article and now I see:

"The current and former employees say he inspired a small team of young engineers to develop more secure code, improve the company’s defenses — including encrypting traffic between Yahoo’s data centers"

So that makes more sense now, I guess it's possible to spend that much on "encryption". People and processing power are expensive I suppose.


They really are!


And then there's the cost of the people to implement and run the IDS, etc. $10 million doesn't go that far in terms of hardware and FTEs.


Maybe they bought 250 HSMs.


It's the same issue since Grog tried to hide the first rock from Og:

  The “Paranoids,” the internal name for Yahoo’s security team, often clashed with other parts of the 
  business over security costs. And their requests were often overridden because of concerns that the 
  inconvenience of added protection would make people stop using the company’s products.

Infosec is never easy, and part of making things secure is that you give up conveniences for peace of mind. It's 2016 and I'm still a bit surprised that people willingly open themselves and their data up to hackers for the sake of stockholders and customer retention. It's unreal when you stop and think about it.


It's the same problem compliance departments at financial institutions face.

Compliance, by definition, is bad for business. They will, after all, try to crack down on doing business with criminals, shady oligarchs, or dodgy dictators, which can be highly profitable.

In the end, though, the best working banks are those where compliance has real teeth.


If Yahoo had ended up being successful like, for example, Slack or Dropbox, these security issues would not be discussed here at all.

I did not see any outrage over the Slack security issues back in May 2015 [1]: the majority of people were saying "but it is great software". Dropbox the same.

So Yahoo made the same bet as all the other companies (focus on features, we will fix security later), but they lost that bet.

[1] http://www.makeuseof.com/tag/slack-hack-need-know-collaborat...


Perlroth is a Gawker-themed Krebs wannabe. Everyone involved in this story should be ashamed of helping fuel an unsourced article that is purely CYA for the former security team. Shame.


The iron cliches:

- Security is a process, not a product.

- You always pay for security. Up front, after the compromise, or both, if you're unlucky or bad at your job.


If Yahoo had indeed positively identified the breach to have originated from a 'state-sponsored actor', it is possible that their thinking was something along the lines of "Resetting the passwords wouldn't help us much anyway against someone with so many resources."

Of course, I'm just speculating based on what I see in news reports. Perhaps the 'state-sponsored' actor was just PR spin to save face? I really just don't know what to think.


"Mr. Bonforte said he resisted the request because it would have hurt Yahoo’s ability to index and search message data to provide new user services. 'I’m not particularly thrilled with building an apartment building which has the biggest bars on every window,' he said."

How about an apartment building where everybody's shit keeps getting stolen, then? Everybody tries like hell to move out, and the only tenants left are those with no place else to go. Which on the internet is nobody.


If this is true it taints Marissa and any business that hires her in the future, because it's hard to think of a more stark example of putting the interests of the user last.


Did she put the interests of the _customers_ last or first, though?


I stopped using Yahoo the first time I set up an account for a friend and they were already on Yahoo's spam email reseller list faster than I could disable the opt-out setting. There was spam waiting in the inbox on an email account less than 5 minutes old.

I appreciate some of what they've done for the larger community, but decisions like that, which make users take such a distant backseat to the bottom line, make me not want to be a Yahoo user ever again.


See also: https://news.ycombinator.com/item?id=12563798

Quote:

> Whenever mega-hacks like the Yahoo! fiasco hit the news, inevitably the question gets asked as to why the IT security systems weren't good enough. The answer could be that it's not in a company's financial interest to be secure.


Reality check: security (especially IT security) takes a back seat in 90% of businesses. The only exceptions are corps that gain significant power over governments and users by being secure (I'm thinking of Google and Facebook here), and cases where regulations require a corp to do some kind of security fundamentals, which are then applied only as far as necessary to avoid fines.


> Google hired hundreds of security engineers with six-figure signing bonuses, invested hundreds of millions of dollars in security infrastructure and adopted a new internal motto, “Never again,” to signal that it would never again allow anyone — be they spies or criminals — to hack into Google customers’ accounts.

Wow! Security starts at the top!


That raises the question: what was in the front seat? Has Yahoo achieved anything of note since the '90s?


Things are not going so well for Marissa. She's a technical person and should have known better.


I think she is a person focused on her own financial wellness, and she is doing rather well for herself.


I think that beyond a certain level of financial success, it's the reputation and power that matters more. Marissa will never have to worry about money in her life. But power and reputation are ephemeral. Doesn't take much to ruin them.


Concretely, though, what consequence does this have in that respect? Is there a marginal party she's not going to get invited to because of the loss of rep from this breach? Is there an introduction she's not going to get?

Is there someone who matters to her who will deprive her personally (i.e. not Yahoo) of something due to having remembered this event?


Agreed. It explains Carly Fiorina as well. CF has sufficient money. She will never have the power+prestige she enjoyed as HP CEO.

I don't think CF, MM or the Ponytail, Jonathan Schwartz will work again.


For $0.5B you can buy some reputation :-)


I think she was formerly of Google too.


I'm not a big fan of regulation, but it feels like there is very little _internal_ motivation for a place like Yahoo to take security seriously.

Not sure what the solution is, but unless there is a financial incentive to do better, I don't think we'll see much change.


Keep in mind, shit like this is what happens when people get fired and blame management: rumors get started. We don't know the real story.


Companies should have a Chief Risk Officer who has a big component of their bonus based upon the success of their risk management strategies.


I just returned from DerbyCon, an amazing security conference that covers both attack and defense. All talks are on YouTube. Here is a good summary of the PowerShell talks. Really good stuff.

https://blogs.msdn.microsoft.com/powershell/2016/09/27/power...


Newsflash: struggling company doesn't spend time and effort on things that don't directly make them money.


Google hired hundreds of security experts with six-figure bonuses... Is this kind of bonus the norm in the Valley?


Interesting how internal politics works. Breach news coming right after the acquisition. Blame the new owner.


How did they know it's the "Chinese military hackers" who are behind the attack?


Reading between the lines, it is the same attack that hit Google way back when. Google disclosed it, and disclosed that 20 other companies were affected, and none of the other companies came forward.


Quite plausible, but what evidence is there to support that?


I don't have any, other than that there were 20 other companies that were not named at the time, Google was hacked by the Chinese military, and now Yahoo claims China.

Deduction on my part, but no further evidence than that.


Sure, but China has also (allegedly) been responsible for many other breaches unrelated to the Google incident.


Because the Russian hacker blame game is passe.


The way Yahoo! has been running I think their front seat is completely empty.


Marketing by deception... It's postponing the inevitable.


Cannot wait for the class-action suit. I wonder if everyone across the world can join if it's based in the U.S., or if they would need to create class-action suits in their home country.


> Google hired hundreds of security engineers with six-figure signing bonuses

Who left jobs at other places, in theory leaving those places more vulnerable, and drove up the cost of hiring as well.


I never would have guessed!


Can anybody cite a single good decision Marissa Mayer made? Honestly?!


Good depends on the perspective. She made a bunch of people fairly rich by buying their apps, even useless ones.


There were probably lots of good decisions made when she negotiated her compensation.


I think she's secretly a double agent of Google, still on the Google payroll.


Leaving Google?


I think the logo looks great.


When she said she might go to prison if she defied the NSA?

http://nypost.com/2013/09/12/yahoo-ceo-marissa-mayer-feared-...

Wait no...that was a terrible one, too. Okay, I got nothing.


[flagged]


> She's nothing more than a skilled self-promoter who was lucky

Please don't post like this to HN. Your comment isn't great to begin with (it veers away from substantive discussion into a classic flamewar topic and, if you think about it, celebrity gossip), but the sweeping personal denunciation thing is just ugly.

This holds even if we assume you're correct (though: "nothing more"? you're talking about a human being) because it isn't about Mayer, but our own culture at HN.


> She's nothing more than a skilled self-promoter who was lucky enough to join the right company at the beginning of her career.

I'd argue that most Silicon Valley Success Stories boil down to "lucked into the right company early in career".


I agree with the "self-promoter" characterization (and support the other reply that most CEOs of such companies are so). However, the $7m figure for the holiday party is incorrect. I was at Yahoo then, and we were shown some numbers about the true cost. I was still angry that they spent even a cent on a party in December, then laid off a lot of people in February, cut people working on open source projects, and weren't funding any conference trips even at that point. All of those are more important to me than parties, but the Y! holiday party was well loved by many people there.


Wow, that presentation paints a pretty bad picture of her.


Does Twitter seem to take security seriously?


I haven't even read the article, but based on the headline I'd say "AND 99% of ALL companies EVERYWHERE." The headline makes it seem like this is an unusual thing.

Nothing is secure, anywhere. A few companies actually prioritize it. Very few.

And I truly think the economy could not bear the cost if everyone actually tried to prioritize security above all else.


This is incredible. Hiding a breach like this is bad enough, but not even forcing password resets should be criminal. I think we're living in an age where information security is still in the cowboy stage of things. I think we're due for some tough regulations here. Clearly businesses do not have our interests in mind, and in these cases our interests will conflict with theirs.


Regulations would be impossible to enforce. You can write up as many laws as you want but trying to enforce security constraints on new websites alone is an impossible task.


Good regulations do not need enforcement.

Example:

* if your company lets someone steal your users' data, your users can sue you and get lots of $$$;

* if you are the CEO of a company and you let someone steal your users' data, when your users sue your company, you have to pay X%.

Edit: formatting


Or the CEO and the board members concerned are flagged as not a "fit and proper person" to be a director. Oops, you have just fired yourself; kiss your share options goodbye.


Impossible? No, but hard and requiring the willingness to do so.

Besides, it's not that difficult if the requirements are minimal best practices - the regulatory body need not even get involved until a breach has been traced back to someone who didn't follow the rules.

We're not trying to make security perfect, just deal with blatant disregard of user safety.

I suggest 3 to start:

1: No storing passwords in plaintext.

2: Hashed passwords must not be calculated with fast algorithms like MD5.

3: Hashed passwords must be salted.

These three things would have solved most of the major breaches everyone has heard of, don't take meaningful development time to implement, and so there is zero excuse for not doing them.
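
To make the three rules concrete, here is a minimal Python sketch (the helper names are hypothetical, and PBKDF2 from the standard library stands in for any deliberately slow KDF; bcrypt, scrypt, or Argon2 would satisfy rule 2 just as well):

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)                    # rule 3: unique random salt per user
        digest = hashlib.pbkdf2_hmac(            # rules 1 & 2: store only a slow, salted
            "sha256", password.encode(), salt,   # hash, never the plaintext, and never
            200_000,                             # a fast digest like MD5
        )
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

Store the salt alongside the digest, never the password itself, and compare with a constant-time check as above.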


They could just go under a whistleblower-equivalent program for security concerns, but I fear that we don't have such a program in place in the US.


> trying to enforce security constraints on new websites alone is an impossible task.

Why 'new websites alone'? Enforcing security standards on new/small websites would be harder than on large ones, and the large ones are more important. There are already many cyber-security related laws that are in place and more-or-less enforced. Having these laws makes large companies invest money into at least attempting to follow them for fear of legal repercussions.


Who would enforce it on any website? The US government, or W3C, or IANA? Everyone talks about a free and open web and no one wants someone poking around behind the scenes. I don't trust anyone to enforce password rules for fear of exploitation. The user is responsible for their password security and it should stop there.

That being said, Yahoo should have force reset passwords.


> The user is responsible for their password security and it should stop there.

That would be true if the user created, stored, and authenticated the password himself.


> Enforcing security standards on new/small websites would be harder than on large ones

Wait, you think the larger a property gets, the easier it gets to secure it?


I'm guessing s/he means that there are fewer total such sites, so any regulatory body wouldn't have as many individual properties to monitor.

That is to say, it's a reference to the effort of the regulatory body, not of the property managers themselves.


No, but the larger the company the more incentive it has to meet regulatory demands. There are also much fewer such companies so they are easier to identify and check.



