With every leak of government-produced malware, I hope the issue gets pushed a bit further up the political agenda so that an international treaty can be reached on software-implemented weapons. There need to be defined limits on what agencies can do, and on what happens when those weapons are sooner or later discovered and turned into problems like WannaCry.
It's clear to most politicians that it's a problem if criminals use guns acquired from the military or police, and that it's partly the fault of those agencies when that happens. We are not there yet with malware.
Your diagnosis is at odds with the basis of open security.
The primary thing that needs to happen is an acceptance of responsibility by those who administer critical systems. Mathematically, what we call a developed "exploit" is really just an existence proof that something is already insecure (hence "PoC or GTFO"). The blame needs to be properly assigned to the developers/integrators of these systems for negligence (currently gross negligence - eg relying on Turing-complete languages) instead of scapegoating those who discover the emperor has no clothes, regardless of their underlying motivations and lack of full disclosure.
Nasty exploits have always existed - the worrisome development is the scale of execution brought on by entities with such resources, and the trend of governments becoming overtly adversarial against the people, both their own citizens and foreign individuals.
Even if it were practical (see: China's attitude to imaginary property), having these entities form treaties (ie collude) with one another is not going to resolve this. If anything, treaties will form gentleman's agreements between allies, while enshrining the attacking of individuals "of interest" as standard procedure.
I would not object to the idea that companies should be held liable for critical faults which get exploited. That was the suggestion Schneier made several years ago as a way to make security funding financially sound. However, we are very far from this reality, where it's questionable if companies can be held liable for security faults in extreme cases like voting machines, airplane systems, and medical devices. I recall reading that Schneier has more or less given up on it.
We need to get to a point where people in positions of power are held responsible for the damage caused by government-developed malware that ends up in attacks like WannaCry. That is hard to do without the issue rising on the political agenda.
> it's questionable if companies can be held liable for security faults in extreme cases like voting machines, airplane systems, and medical devices
> We need to get to a point where people in positions of power are held responsible
You're contradicting yourself. If it's possible to hold people in power responsible for what a single agency under them does (not even requiring their knowledge!), then it's certainly possible to hold them responsible for approving public use of insecure voting machines! (especially at a more local political level)
And if it's not possible to hold people in power responsible (which is a reasonable assumption), then the philosophy of distributed defense becomes even more important!
> the damage caused by government-developed malware
Attempts to silence messengers never end well. And I don't think that changes, even and especially when the messenger is a well-funded government. But normalizing that philosophy certainly sets the stage to silence and demonize smaller messengers!
The problem with holding manufacturers liable is that I'm not sure it's the economically optimal solution. Fundamentally, let's break down exploits into two categories:
1) Shoddy programming creates an exploit that is obvious to the manufacturer (let's say Chinese Android TV stick makers)
2) An exploit exists despite the manufacturer's efforts otherwise (let's say Microsoft)
Putting exploit risk on software companies solves #1. But, especially in critical industries, it balloons the cost of their systems due to #2, because they must now cover a risk they can't even model (an unknown unknown). Can you imagine what accountants and actuaries would do with "You may have a nation state targeting the firmware controller in an HDD to compromise your system"?
I think a more optimal arrangement would be the government mandating liability for lapses in known coding standards, but then providing a liability-shield provision if the manufacturer can deliver a fix for a critical vulnerability within X days.
Ideally, we'd want any legislation to do two things that address both of the above problems: increase adherence to generally accepted secure coding standards (helps #1) and increase the ability to deliver a timely fix to customers (aka codebase maintenance and agility; helps #2 and is especially important in mass-use IoT devices).
> The problem with holding manufacturers liable is that I'm not sure it's the economically optimal solution
Either you hold someone liable or the effect will just be hiding the risk.
How about just requiring that the use of critical software systems be insured against malware/failure in general? Seems like we want that anyway, and if we can't find anyone to insure a piece of software, it probably shouldn't be used in a critical system in the first place.
Importantly, it's the users of software in critical systems alone that need to be insured. Neither software vendors nor regular users need insurance. The insurance company, alone, should handle the job of shielding a critical system from the mistakes of software vendors. We need to allow software vendors to be able to make mistakes, or nothing would get made, ever (source: I'm a programmer).
And the insurer would be wise to spend some of the premium on bug bounties for the software they're insuring (to minimize the cost of failure). In the end, all white hats would end up being employed by an insurance company, helping assess software security.
The difference between your approach and mine is that you propose to solve it by making rules (regulations), as opposed to adding a separate party that can absorb risk (insurance), thus shielding a creative industry -- software development -- from adhering to a list of rules, which surely will only grow in size.
Ah. Insurance doesn't act as a separate party to absorb risk in the way you're talking about.
They act as a party to amortize known risk, in exchange for a monetary premium set based on that known risk.
Without the government stepping in and limiting catastrophic liability to some degree (ideally in exchange for signaling the market to produce a social good), the premiums charged would be so large as to just suck money out of tech. There's no creativity shield if you're paying an onerous amount of your profits in exchange.
Which is why I said any solution has to be two part: (1) require risk liability on a better-defined subset of risk & (2) provide a liability shield on the remaining less-defined risk iff a company demonstrates an ability to handle it (aka prompt patching). This creates a modelable insurance risk market, therefore reasonable premiums, and still does something about nation-state level attacks.
I don't think "economically optimal" is a goal to strive for, especially under the current inflationary busywork treadmill.
In fact, the current regime seems pretty close to economically optimal: (1) cheap/quick to develop because the required diligence is not done; (2) incremental cost to keep the zombie going via "patches", lobbying, and legal threats; (3) planned obsolescence when the defects can no longer be ignored.
The downsides only show themselves as mortal risk accumulating outside the system, hence the spectre of "nation-state" attackers. But the economic puppeteers will easily move to another country if this one collapses!
"But, especially in critical industries, balloons the cost of their systems due to #2, because they must now cover a risk they can't even model (unknown unknown)."
This is where "best current practices" and "standards of care" enter the picture.
But to revisit the analogy about stolen police/military arms, are we to blame casualties/victims of these stolen arms for not being able to defend themselves?
Yes, those tasked with network and infrastructure security should be held to rigorous and constantly increasing standards. But there has to be some accountability from the state purveyors of this malware when it gets intercepted and weaponized. Publication of these tools is one way to help in lieu of that, but it becomes a toss-up as to whether the world will harden against published malware before bad actors, state-sponsored or otherwise, weaponize it for their own gain.
There is a world of difference between physical and informational security, so it is an improper analogy.
First, it is possible to perfectly defend (and conversely, a vulnerability means one is entirely vulnerable. Don't confuse mitigations, which affect the probability of being exploited, with not actually being vulnerable. There is no middle ground as to whether something is secure or not; it is a binary proposition!)
Second, attacks are practically free and untraceable. And all the status-quo anonymity-destroying attempts cannot actually change this, as making "network identity" trusted would mean turning every other system into part of each system's attack surface - eg break a single person's credit card and use Starbucks wifi.
Even if a lot of the vulnerable machines still run insecure software, there's just no reason why they all need to be networked the way they are now. E.g. medical machines at the NHS running Windows recently got ransomware'd.
There's no intrinsic reason why these machines need to be connected to even an internal network, other than laziness prioritizing convenience over security.
I don't know anything about your background or area of expertise, but you sound completely disconnected from reality.
> Your diagnosis is at odds with the basis of open security.
> The primary thing that needs to happen is an acceptance of responsibility by those who administer critical systems. Mathematically, what we call a developed "exploit" is really just an existence proof that something is already insecure (hence "PoC or GTFO"). The blame needs to be properly assigned to the developers/integrators of these systems for negligence (currently gross negligence - eg relying on Turing-complete languages) instead of scapegoating those who discover the emperor has no clothes, regardless of their underlying motivations and lack of full disclosure.
To paraphrase:
"I assert that I am intelligent by using the word 'diagnosis' and demonstrating that I also know of another word for 'exploit'. Therefore, if I sound arrogant it's because I'm actually just intelligent. Moving on:
When malicious hackers take advantage of an exploit to hurt people, it's not fair to blame the CIA, NSA, or any other agency who knew the exploit existed but chose not to disclose it to the people who wrote the software. They wanted the option of using the exploit themselves--which of course is perfectly fine--so you see, it wouldn't make sense for them to disclose it.
It would be silly to blame these agencies for keeping these secrets from the public and from the people who wrote the software, and equally silly to blame them for being unable to keep these secrets from falling into the hands of malicious hackers. Do not scapegoat these people, for they have done nothing wrong.
No, it's the fault of software developers who write buggy code! We need to properly blame the people who try to write secure code but make mistakes. After all, malicious hackers have to maliciously hack innocent people, since that's what they do. The CIA has to keep secrets because that's what they do. The CIA also has to keep the secrets in the pocket of their coat which they lost at the bar. After all, the CIA is just a bunch of people, and people make mistakes.
People make mistakes, but software developers are not allowed to. Any good developer knows how to write code that has no mistakes in it. One of the easiest ways to write mistake-free code is to program in a language that isn't Turing-complete. I've been writing code my entire life and have never introduced a single security flaw into a system, because the only two programming languages I use are English and, occasionally, arithmetic. Any software developer who makes a mistake or uses a Turing-complete language is guilty of gross negligence and should be punished and blamed severely. I see no need to provide any sort of rationale for the things I have stated."
...Imagine a certain model of commercial airliner that's been in widespread use for well over a decade. The planes have their quirks and some parts wear out and have to be replaced, but they are frequently inspected and repaired. One day, the wings completely fall off of every single plane. Anyone unlucky enough to be on one of these planes while they were in the sky dies. The people are shocked and the government pledges to find out what happened.
Some of the best aeronautical engineers in the world had worked for years to design these planes, and the plans had been scrutinized and approved by many people. The manufacturing plants were known for their high standards. Nevertheless, it was discovered that a flaw in the design had indeed been the cause for the wings falling off. The enormous bolts used to attach the wings to the fuselage were incredibly sturdy, but if you blasted them with a specific ultrasonic frequency, they would resonate in a wildly unexpected fashion and quickly explode.
A terrorist group claims responsibility for the attacks, and upon closer inspection of the planes it is discovered that the seat-back TV screens near the wings of every plane had been replaced with ones that contained devices capable of emitting the exact frequency needed to cause the bolts to explode. They were designed with a clock-based system that had been set two years in advance to trigger simultaneously on every single plane on that day. The terrorists had spent years patiently buying flight tickets and performing the replacements en route. Since the devices looked like ordinary tablets, they had no problem getting through security even though it took ages to get everything in place.
The inspection also uncovers a second set of devices, very similar in nature to the screens. These are far more elaborate-- the entire seat base contains a powerful ultrasonic emitter and an antenna tuned to the same communication frequencies used by the plane itself. It's designed in such a way that a special signal from air traffic control could cause the wings to fall off a specifically-chosen plane.
Due to the advanced nature of the second device, it's clear that it had to have been installed by people with far more resources and access to the planes and an intricate understanding of the plane's communication systems. Before the speculation goes any further, the director of the CIA comes forward and admits that they are responsible for the second devices. Having known about the faulty bolts even as the planes were passing final approval for use in commercial flight well over a decade ago, the agency had sent teams to install the systems under the pretense of doing security sweeps. The grim purpose of installing these systems was to give the agency a last-resort method of stopping a hijacked plane from flying into a building or crowded area.
Finally, the director admits that there had been a data breach three years ago, and though they couldn't be sure, it appeared that documents relating to the purpose and design of these devices were among the stolen data.
And now you come along to share your expert opinion.
You say that we shouldn't blame the CIA for knowing about the faulty bolts and installing systems to take advantage of them, instead of reporting the flaw to the company that designed the plane so that the problem could be fixed. After all, they put those systems in place to reduce casualties in a catastrophe.
You agree that terrorists are bad, but hey-- that's what they do, right? They aren't the real cause, just an inevitable outcome. No, there's another party who's really to blame...
Blame the people who designed and built the plane! Simple as that! If they hadn't built a plane with bad bolts, the CIA wouldn't have been forced to take advantage of the mistake and design their secret remote kill system. Those documents wouldn't have existed when the data breach happened, so the terrorists wouldn't have been able to devise their own plan to take advantage of the bad bolts. No bad bolts means no horrific catastrophe, it's as plain as day.
Since you are a world-renowned expert in everything, you are interviewed and asked about how to prevent things like this from happening in the future. Should we put some laws in place to prevent the CIA from keeping such dangerous secrets to themselves? Do they have the right to make their own internal risk analysis of whether it's in the public's interest for them to be able to build a secret system to remotely drop the wings off a plane, even though it means that people have been flying around for a decade in planes where the wings can fall off? Is it worth talking about the bitter irony that the CIA kept the bolt flaw a secret from the public and the plane company, but couldn't keep it secret from the terrorists? Are there legislative steps that might be taken?
You reply, "Nope! Of course not, how silly and stupid of you to say that. Laws don't do anything. The plane company built a bad plane, and they are to blame. Specifically the stupid engineer who picked that dumb bolt. We'll prevent this in the future by building planes where it's impossible for the wings to fall off. Anyone who knows the first thing about plane building knows that it's actually very simple to build planes where the wings don't fall off. In fact, I've been doing it for years, and anyone who doesn't use my method is grossly negligent. You see, I build my planes without any wings! Now just take a moment to look at this excellent proof I've written. You can see that it's impossible for the wings to fall off of a plane that doesn't have any."
By the way, I'm curious: Where do you point your majestic finger of blame in what happened with OpenSSL and Heartbleed?
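(For reference on that last question, here is a minimal sketch of the kind of bug Heartbleed was - illustrative C only, not the actual OpenSSL code; the struct and function names are made up. The heartbeat handler trusted a peer-supplied length field and echoed that many bytes back, leaking adjacent heap memory.)

    /* Illustrative sketch only - NOT the real OpenSSL code. A Heartbleed-style
     * over-read: the handler trusts the peer's claimed payload length instead
     * of the number of bytes actually received. */
    #include <stdlib.h>
    #include <string.h>

    struct heartbeat {
        unsigned char *payload; /* bytes received from the peer */
        size_t claimed_len;     /* length field set by the peer */
        size_t actual_len;      /* bytes actually received      */
    };

    /* Vulnerable: copies claimed_len bytes, reading past the real payload
     * and returning whatever happens to sit next to it on the heap. */
    unsigned char *echo_vulnerable(const struct heartbeat *hb) {
        unsigned char *resp = malloc(hb->claimed_len);
        if (!resp) return NULL;
        memcpy(resp, hb->payload, hb->claimed_len);
        return resp;
    }

    /* Fixed: refuse to echo more than was actually received. */
    unsigned char *echo_fixed(const struct heartbeat *hb) {
        if (hb->claimed_len > hb->actual_len) return NULL; /* drop malformed request */
        unsigned char *resp = malloc(hb->claimed_len);
        if (!resp) return NULL;
        memcpy(resp, hb->payload, hb->claimed_len);
        return resp;
    }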
(Ignoring the continual ad-hominems based on particular word choices)
Your comment seems primarily motivated by anger/frustration at the NSA/CIA/etc - an anger which I greatly share. Politically, I think the entirety of the NSA deserves the firing squad as the bunch of traitors that they are, but alas, until the public comes out from under the spell of their disinfo games, no action will happen on that front.
Speaking of disinfo games, which do you see as the more likely outcome from this current scare story of the week - these citizen-hostile government agencies are reformed and actually become responsive to the people, OR they court this fear about how bad exploit-finders are to acquire more power, especially the power to go after competing hackers?
That's the crux of the matter - when one chooses the wrong philosophical analysis, one can only go down a path where any "solution" compounds the problem. Responsible disclosure is not the law or even the full extent of ethics - it's a gentleman's agreement as to what is prudent and polite. Regardless of how bugs are fixed, who finds them, or their motivations, the fundamental open-society truth is that responsibility actually rests on buggy software itself, as opposed to the people who point out the bugs. Never mix that up, unless you'd like to get back to the dark ages where even good-faith full disclosure results in draconian legal thuggery!
In the context of your plane example, the company who designed the plane and marketed it for passenger use didn't even bother using a CAE program. When previously informed that the tail easily falls off, they added duct tape and a redundant tail. I've said nothing absolving the CIA/foreign fighters - all bad actors are to blame for their parts. But where that blame is focused matters, and blaming the whole situation on one bad actor (the CIA) will guarantee that the company keeps right on selling the known-defective planes.
I hope this helps the politicians realize encryption back doors can never be made safe. If even the NSA can't keep their top secret tools safe, generic 'law enforcement' surely won't be able to either.
What is your proposed test? Detecting a nuclear explosion or traces of chemical-weapons manufacturing is easy compared to figuring out who made some malware component.
I don't disagree with you on any particular point. However, who do you suppose we trust to decide what is a software implemented weapon and what is not? Of course there are very clear black and white examples, but the in between is where we should be concerned. Consider the USA stance on export of encryption as a historical example of where it can go terribly wrong.