> Is it time for us to simply accept that it's inevitable that, at some point, everything will be hacked, and hacked often?
I disagree. I’d take the economist's route, which is to look for the incentives that drive behavior. If companies were held to a higher standard of accountability, imagine how many would beef up their security. For decades, security researchers have been poking fun at how ridiculously some of these sites handle security, and nothing ever happens.
Now, imagine if there were severe economic accountability for a company that was hacked. Perhaps payouts to each person affected (in this case, to all 150m). I imagine you’d see security become a top priority very quickly at most companies.
As a developer, do you really want to live in a world where "security is a top priority" at every company? Does such a world even make economic sense after accounting for the opportunity cost of the time that most developers would otherwise spend actually building new products and features?
While companies could probably do better than they are right now, hacks like this are probably never going to be eliminated. There are too many companies and too many developers for nobody to make mistakes, even when they're being mindful not to. Investing in solutions that assume hacks will happen seems reasonable to me.
Yes, yes I do as a developer! The thing is that a lot of these "hacks" aren't even that sophisticated. A lot of them are engineers not paying enough attention. The security dimension of many, many products can be improved tremendously by picking off some low-hanging fruit. Ever since companies like Google pushed for HTTPS, it's proliferated all over the place. Just by Google emphasizing it and talking about the need for secure communication even inside one's own network, my own company started doing the same. Enabling HTTPS and SSL wasn't that hard, especially once companies like Let's Encrypt came along. It just wasn't prioritized. Once it was, our engineering team made it super easy to get certificates from LE and we all learned standardized ways to secure our traffic. Security is often low priority because people are really bad at planning for unlikely events with potentially catastrophic consequences.
I'm not saying we can be invulnerable, but we need to raise the lowest common denominator so that it's not a walk in the park to steal millions of records. It only takes one weak link to make everyone vulnerable, but I do think positive collective behavior can counter that -- especially when you make it easy with things like Let's Encrypt.
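To make it concrete, here's roughly what "just turn on TLS" amounts to once LE has issued a certificate (a minimal sketch: the domain is a placeholder, and it assumes certbot's conventional /etc/letsencrypt layout):

```python
import http.server
import ssl

# Assumes a cert was already issued, e.g. with:
#   certbot certonly --standalone -d example.com
# (example.com is a stand-in domain)
CERT = "/etc/letsencrypt/live/example.com/fullchain.pem"
KEY = "/etc/letsencrypt/live/example.com/privkey.pem"

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile=CERT, keyfile=KEY)

# Toy file server, now speaking HTTPS instead of HTTP.
httpd = http.server.HTTPServer(("0.0.0.0", 443), http.server.SimpleHTTPRequestHandler)
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```

The hard part was never the code; it was getting it prioritized.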
> Yes, yes I do as a developer! The thing is that a lot of these "hacks" aren't even that sophisticated. A lot of them are engineers not paying enough attention.
I don't think you have quite thought it through. Do you honestly want to have to do a code audit on every library you use? Freeze all versions? Have a chain of signoffs for every change?
I have briefly done consulting in a place like that -- developers were absolutely miserable. Think about every single corporate IT policy that exists and apply it not just to your desktop/laptop/phone but to what you do on that desktop/laptop/phone.
> I don't think you have quite thought it through. Do you honestly want to have to do a code audit on every library you use? Freeze all versions? Have a chain of signoffs for every change?
If developers demand that the tools they use are better built, then the market will deliver tools/frameworks/etc... that are secure from the start.
"Good" coding has become "good enough" coding, and the problem exists from the bottom of the stack to the top.
> If developers demand that the tools they use are better built, then the market will deliver tools/frameworks/etc... that are secure from the start.
This is never going to happen because what is considered secure in one place is not considered secure in another place.
> "Good" coding has become "good enough" coding, and the problem exists from the bottom of the stack to the top.
Because it is about risk management, not about absolutes. It is absolutely irrelevant that the smart Samsung TV in my office has garbage security, because it is used as one thing and one thing only: a dumb 48" HDMI monitor not connected to any wireless network. Its WiFi antenna connector has been cut. It matches my risk profile.
It’s not “either or”, but as someone who has worked at various places along the spectrum of practices ranging from “default password is password” to DO-178B [1], I greatly prefer environments with strict and rigorous design, testing, change control, and security auditing. The chaos of moving fast and breaking things (and fixing them, and breaking them again, and fixing them again, then getting hacked and having to pull 24 hour days to mitigate...) is a recipe for burnout.
If DO-178B were applied to "the internet", I would not be surprised if we were still regarding UUCP as an amazing invention in 2018.
I'm going to repeat it again - we do not have a security problem with software. We have a risk management problem.
There's absolutely no reason for Marriott to store information on previous guests past a certain statute of limitations. In fact, they could probably have offloaded it to Iron Mountain after 180 days. Storing it online has a certain risk profile. That risk was not correctly evaluated (probably not evaluated at all) and hence it was not minimized.
Storing credit card information (even encrypted) after the card has been charged and the transaction settled creates another risk profile. It also was not evaluated and it was not mitigated.
Businesses are obsessed with data without understanding the risk.
'As a car designer, do you really want to live in a world where "safety is a top priority" at every company? Does such a world even make economic sense after accounting for the opportunity cost of the time that most designers would otherwise spend actually building new products and features?'
Most professions and companies are (at least in theory) held accountable for their impacts.
No car on the market is as safe as the absence of a car. Car companies make tradeoffs towards safety where it's reasonable and economical, but still fulfill their baseline mission, which is inherently dangerous. People are injured and killed in car crashes every day; car companies are not "held accountable" unless there's a specific defect and they should have known better.
Such as a company that should have known to keep its servers patched, to have a process to make sure its servers are patched, and to have a process that shows a list of servers that are _not_ patched, etc.
There are a lot of really stupid mistakes made in a lot of these data disclosures that a competent IT team (and dev team) can prevent from happening. The current state of things is that there are hardly any consequences for losing people's data: just make a bulk purchase of credit monitoring and call it a day. That is cheaper than actually hiring the right people and implementing the correct processes.
As a car driver, do you want to live in a world where "braking for pedestrians in crosswalks is a top priority" on every trip? Does such a world even make economic sense after accounting for the opportunity cost of the time that most drivers would otherwise spend moving toward their destinations?
Haha THIS is spot on. Sure, 1 person's address isn't the end of the world ... but 500,000,000 people's information in 1 incident is class action material
And it's not like I'm advocating that every single company needs bulletproof security that can stand up to nation-state adversaries with budgets bigger than the company's. I agree with GP that it just wouldn't be economical.
To stretch the car/driver analogy, you could limit all cars to 10 mph so that they can stop fast enough when a deer runs into the road unexpectedly, but that's probably not worth the tradeoff.
Pedestrians, on the other hand, are a predictable fact of life that you need to deal with when you get in a car. So are bad people on the internet. If you put something on an internet connection and aren't constantly aware of that, you should not be putting it on the internet.
Car companies absolutely quantify risks and make decisions based on them. It is still more about the bottom line than safety. When a version of a car fails some tests, they will estimate the cost of a recall versus the expected cost of lawsuits. Whichever is smaller wins.
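That calculus is literally just an expected-value comparison. A toy sketch (all numbers below are made up):

```python
# Hypothetical recall-vs-lawsuit math; every figure is invented.
units_in_field = 1_000_000
recall_cost_per_unit = 300

failure_rate = 1e-5          # estimated probability a given unit fails
avg_settlement = 2_000_000   # estimated payout per failure

recall_cost = units_in_field * recall_cost_per_unit
expected_lawsuit_cost = units_in_field * failure_rate * avg_settlement

# Whichever is smaller "wins" -- note that safety only enters the
# decision through the settlement term, not on its own merits.
print(recall_cost)            # 300,000,000
print(expected_lawsuit_cost)  #  20,000,000  -> no recall
```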
I really wish more developers had at least a basic ethical grounding and didn't just go "fuckit, revenue!". (Or, in larger companies, "fuckit, my boss told me")
And when you consider opportunity cost: even if just double-checking you aren't affected takes one minute of each consumer's time, this hack wasted close to a thousand years of human life (500 million minutes works out to roughly 950 years).
Where's the accounting for the opportunity cost of that?
There is no such thing as being "done" with security. You can as deep as you want, with as large a team as you want, and never be able to say "okay, we're secure now."
If basic ethical grounding requires security to be the top priority, and security work is inexhaustible, then it must be unethical to ever work on the product being secured.
No, but there is such a thing as "following best practices".
An ethical approach requires you to reason about which actions are moral, not to be "done" with something. As I said, even a basic grounding would be really helpful.
It's easy to reference amorphous "best practices." As Tanenbaum said of standards, the nice thing is that there are so many to choose from. The real challenge is deciding which practices apply, and what authority figure to recognize when determining "best."
I agree. But following best practices is a completely different thing from treating security as the top priority. Best practices include tradeoffs that balance security risk with cost and business needs.
There is no such thing as being "done" with safety. You can go as deep as you want with as large a team as you want and never be able to say "okay we're safe now."
If basic ethical grounding requires safety to be the top priority, and safety work is inexhaustible, then it must be unethical to ever work on the product being made safe.
Absolutely correct. Safety is about managing risks, not eliminating them completely at all costs. An airline which truly saw safety as the top priority would never put a plane in the air. Making money is the top priority; safety (or security) is one consideration that influences how you go about it.
As an architect do you really want to live in a world where "structural stability is a top priority" at every company?
Does such a world even make economic sense after accounting for the opportunity cost of the time that most building designers would otherwise spend actually building funky new shapes?
Investing in solutions that assume buildings will collapse seems reasonable to me.
I want to unpack a few assumptions before responding.
1) There are 150 million vehicles which can be remotely controlled via the vehicle manufacturer's software, which has generally mediocre application security.
2) The software in question is vulnerable to SQL injection, allowing up to 150 million vehicles to be remotely commandeered by a small group of attackers.
3) No hostages are taken and no owners of cars are deliberately harmed, because this is an application security scenario and not a kidnapping scenario (which is orthogonal).
The scenario you've posed is oddly florid...thinking through it, no, I don't think the robbery of 150 million vehicles is as serious as a bridge collapse with 50 (presumably occupied) vehicles on it.
Speaking more directly to the point - I think this is a really poor comparison. Logistically speaking it's hard to take seriously the idea that 150 million cars would actually be stolen because of any single SQL injection vulnerability. SQL injection is really bad, but it doesn't directly result in injury or loss of life. It's also hard to conceive of a situation in which SQL injection has the potential to cause systemic collapse like you're describing...maybe SQL injection to a database containing credentials that have write access to a server which can launch ICBMs?
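For concreteness, the whole class of bug assumed in (2) is roughly this -- a hypothetical lookup (table and column names invented) where the fix is just a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE owners (vin TEXT, name TEXT)")
conn.execute("INSERT INTO owners VALUES ('1HGCM82633A004352', 'alice')")

def find_owner_unsafe(vin: str):
    # Vulnerable: attacker-controlled input is spliced into the SQL.
    # vin = "' OR '1'='1" makes the WHERE clause match every row.
    return conn.execute(f"SELECT name FROM owners WHERE vin = '{vin}'").fetchall()

def find_owner_safe(vin: str):
    # Fixed: the driver binds the value; input can't change the query shape.
    return conn.execute("SELECT name FROM owners WHERE vin = ?", (vin,)).fetchall()

print(find_owner_unsafe("' OR '1'='1"))  # leaks all rows
print(find_owner_safe("' OR '1'='1"))    # []
```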
In the modal case, I think it's okay to admit that application security is not as serious a concern as architectural stability. But this entire discussion is pretty much a sideshow; we can just all agree that security needs to be taken seriously and that some bureaucratic scar tissue is okay to make that happen.
It is funny that after all the posturing in this thread, @throwawaymath's post, one of the few discussing risk assessment, is getting downvoted while posts throwing around absolutes and lofty goals are getting upvoted.
It's hard to imagine an SQL injection scenario ever being worse than a building structurally collapsing (Rana Plaza in Bangladesh comes to mind), but anything is possible I guess.
Yes. If I have to give up my security and privacy so that you can have less friction, then I most certainly want to take that away. If the online business or organization can give you less friction or security without impacting my security or privacy, then I do not see an issue.
The situation is identical to you wanting to have an untamed lion in your back yard. Provided you have the right security in place to ensure it can't hurt me, your neighbor, then the litter box is your problem. If however you do not have the right protections in place, then I have every right to ensure the lion is removed from the neighborhood.
Yes, and it will feel weird, but developers who value privacy will have the insurance companies backing up their advocacy for storing less data on customers.
If you don't store valuable data, you won't have large premiums.
If your business model requires storing such data, you better have the revenue to pay the premiums.
As a developer, I want to live in a world where I can make a business case for security along the lines of "if we don't do this, we'll be hit by crippling fines".
The problem with the internet is that security is an afterthought. The solution is to build security into the communication protocols, and that involves data structures like merkle trees and blockchains.
Why do we need a blockchain or a merkle tree? We have TLS, SSH, PGP, a number of VPN solutions...blockchains and merkle trees are consensus and versioning protocols, not security protocols. Their use of cryptography is orthogonal to the traditional security goals of confidentiality and authentication.
The problem is that incentives aren't aligned. Companies don't care about your personal data, only your metadata, so they won't invest resources into protecting your personal security. TLS, SSH, and PGP are all communications protocols; they provide no rules concerning value exchange, which is what account creation is. When you create an account on a 'free' platform, you're de facto making an exchange of data for value. The issue here is that the transaction is one-sided because there are no guarantees on personal security. If your account information follows you around the web in the form of a public key, then you're in control of your personal security.
As a developer, yes. If that cost can't be baked into building new products, either the developer needs to learn how to emphasize the importance, or that company needs to go out of business.
That's wrong for many reasons. Others have covered the simple fact that you couldn't start any app with lots of users and zero capital. The downside is huge. Barriers to entry become more impossible than they already are.
But that's not the worst of it. The Economist here is doing a static analysis, oddly enough. They're making the simple observation that if things cost more or have more risk, they get more attention.
That's if they have more risk today. Once you collect data, it doesn't go anywhere. Every bit that sits on your servers can easily be copied to another server, today, tomorrow, ten years from now. Do you know what all the bits are on your computers?
This isn't DRM'd copyrighted content or porn. You could have a blob of hashes and userids. If I put that on your computer, would you know? Could you be expected to find it? Know what it was?
As Facebook and the other platforms are demonstrating, this data continues to have value many years after it was collected. And once somebody gives some data to you, it's effectively both invisible and trackless. Over long periods of time, your cost becomes infinity to maintain this risk. Meanwhile, attack vectors get better and people come and go out of your offices all the time. Could you manage that risk? Forever?
I can't think of _any_ sensitive data on the web that's stayed safe. Why would attaching any amount of value change that?
That's hardly even a speeding ticket for Uber. As long as the fines are this low companies of sufficient size simply treat this as a cost of doing business.
It appears that the maximum fine is 4% of a corporation's global earnings[1] which could be a lot of money, but still "just a cost of doing business" at the same time.
Uber is somewhere around $10b gross revenue, so $400m fine for every breach. Sure it's "just a cost of doing business". It also means that it's better to spend $200m beefing up their security to reduce from 1 data breach every year to one every 5 years.
Marriott revenue is $23b, so that's a potential $920m fine.
IHG (say), who invest in security and don't have a breach, get to charge less for their hotels, or make more profit.
I thought the same thing, but I was corrected here on HN: if you read the exact same link you posted, it says "a fine up to €20 million or up to 4% of the annual worldwide turnover of the preceding financial year in case of an enterprise, whichever is greater", so they ARE allowed to fine you EUR 20 million.
Much more than "just a cost of doing business" for the majority of companies.
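Putting the two rules together, the cap is just a max. Plugging in the (rough, USD-quoted, treated as EUR for simplicity) revenue figures from upthread:

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    # GDPR Art. 83(5): up to EUR 20m or 4% of worldwide annual
    # turnover of the preceding financial year, whichever is greater.
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(gdpr_max_fine(10_000_000_000))  # Uber-ish:     400,000,000
print(gdpr_max_fine(23_000_000_000))  # Marriott-ish: 920,000,000
print(gdpr_max_fine(100_000_000))     # small firm:    20,000,000 (floor kicks in)
```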
Fining the company does nothing for the user whose data got leaked, and identity theft isn't a matter of degree: either there's enough information on the black market to impersonate someone, or there isn't, so deterring future leakage has zero value for someone already exposed.
So far the market has decided that the economics of protecting users and protecting data just aren't there, and that's why we see what we see.
That's why GDPR happened. "Ok, if you're not going to do anything about it, we'll make you do something about it."
So you're not taking the economist's point of view, at least not from the perspective of the free market; rather, you're thinking about which economic levers you could pull to effect change from a regulator's point of view.
Either your information is known to an attacker, or it isn't. Great security "at most companies" in a hypothetical future doesn't help. You need security better than the best attacker, at every company, all the time.
That's a pipe dream. Instead we should take advantage of public-key cryptography, so that authenticating to one company does not leave behind infinitely reusable credentials for others.
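A minimal sketch of that model (using the pyca/cryptography package; the "server" and "client" roles here are stand-ins collapsed into one script):

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Client generates a keypair; the site only ever stores the public half.
client_key = Ed25519PrivateKey.generate()
stored_public_key = client_key.public_key()  # all the server keeps

# Login: the server issues a fresh random challenge...
challenge = os.urandom(32)

# ...the client proves possession of the private key by signing it...
signature = client_key.sign(challenge)

# ...and the server verifies against the stored public key.
# Raises cryptography.exceptions.InvalidSignature on a bad signature.
stored_public_key.verify(signature, challenge)
```

If the server's database leaks, the attacker gets public keys and stale challenges -- nothing replayable against any other site.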