Meh. The bug requires you to connect Windows to a malicious SMB server.
Now that everybody knows that, if anybody is really concerned, they can stop SMB connections from LAN to WAN by blocking TCP 139, 445 and UDP 137, 138.
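For anyone who wants to do that at the perimeter, here's a minimal sketch. It assumes a Linux gateway running iptables with a LAN-facing interface named eth1 (both the toolchain and the interface name are assumptions; the same ports apply to whatever firewall you actually run):

```
# Drop outbound SMB/NetBIOS traffic from the LAN toward the WAN.
# Assumes eth1 is the LAN-facing interface on a Linux gateway; adjust to your setup.
iptables -A FORWARD -i eth1 -p tcp -m multiport --dports 139,445 -j DROP
iptables -A FORWARD -i eth1 -p udp -m multiport --dports 137,138 -j DROP
```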
Wait, when did everyone become aware of that? I'm willing to bet the vast majority of Windows users have no idea. _Some_ people only know _because_ he released the bug.
I'm now aware, and I was able to block connections in my organization's firewall, which protects a few thousand users. Not every single user needs to be aware for it to be effective.
Yes, but the point is you wouldn't be aware unless he released the info. He gave individual users an option to protect themselves in the absence of a patch from MS.
You're absolutely right! The researcher acted in a questionably ethical manner here by waiting to disclose the vulnerability. The only ethical approach is full and immediate public disclosure.
Full and immediate public disclosure seems irresponsible and counterproductive IMO.
The last thing I'd want as a developer or a manager is to wake up in the morning with a PR shitstorm and angry users on my hands because some inane script kiddie found it appropriate to disclose a zero-day without reaching out to me or my team first. Sure, somebody else might know about the vulnerability, or find it and exploit it, before a patch comes out; but it's guaranteed that they will if you release a 0day.
We can discuss all we want about what a reasonable delay for releasing a patch might be, but absolutely not about the notion that immediate public disclosure is the right thing to do. It wastes everyone's time, disrupts workflows, and puts fellow developers, their managers, and their users under intense pressure and stress, all so some kid can enjoy an ego trip. To me it just seems gross and childish.
The last thing I'd want as a developer or manager is to wake up in the morning with a PR shitstorm and enraged users because I shipped some vuln.
What full disclosure does is put everyone on the same footing: developers, users, and attackers, all at once. It reduces the window for potential abuse as much as possible. As policy, it sharpens the incentives to be very careful in your development processes and to improve security measures.
It's worth considering that this is actually a long-running historical debate. One of the commonly espoused positions is yours: contact devs privately, give them a reasonable amount of time to patch, then disclose after a patch. After all, it minimizes disruption to production planning and workflows and still protects users. Seems reasonable, right? Everyone wins!
The catch is, that approach has historically been abused by companies more interested in their production schedules than in the security of their users. Maybe that's not you! In which case, well done, you're completely awesome! Historically, though, it has described rather a lot of software companies.
Full disclosure, the policy I advocated for, seeks to short-circuit this. It offers maximum information to a maximum of people in a minimum of time. It pressures companies to fix their products rapidly and to ship better products in the first place. It also offers users the ability to be aware that they may be under attack and protect themselves in lieu of a patch which may or may not ever come into being.
At the end of the day, the question is this: who are you protecting with your disclosure policy? I would suggest that the policy you have advanced seeks to balance the interests of users and of developers/managers. It's perhaps worth considering that your users may prefer a policy that aligns your incentives more with theirs. Perhaps your customers might prefer policies that encourage a proactive stance.
> It reduces the window for potential abuse as much as possible.
Immediate public disclosure to everyone, including blackhats, reduces the window for potential abuse as much as possible? Cynical answer: you are technically correct … it reduces the window of potential abuse to 0, while at the same time opening the window for actual (guaranteed) abuse.
Generally speaking, immediate public disclosure is harmful to the very people you want to protect with the disclosure, because they are left defenseless against hordes of intruders who, before the disclosure, probably didn't even know about the issue. And by "put[ting] everyone on the same footing", you force the developers to scramble to release something, anything, that kind of works to mitigate the situation, which causes the software quality to suffer.
Even high quality software with careful developers will occasionally suffer from security vulnerabilities. You put every company and individual under general suspicion of misusing responsible disclosure, and by immediate public disclosure you want to get back at them for having a security issue in their software. You don’t care about protecting anyone.
The key difference between before and after disclosure is that people are vulnerable and ignorant before, with no chance whatsoever to defend themselves. After disclosure, people are vulnerable and warned, with the potential to defend themselves. In both scenarios, there is the very real threat of attackers.
I care about protecting people. I hold the idiosyncratic belief that keeping secrets from the vulnerable does not make them safer. I understand that many people do not agree with this.
> After disclosure, people are vulnerable and warned, with the potential to defend themselves.
Only in your wildest wet dreams are people able to defend themselves. You maybe, but certainly not random Joe down the street. And that's assuming Joe reads tech news to begin with.
The only people this significantly affects in practice are a) the black hats, who now have a window of opportunity to do mischief, and, much more importantly, b) the devs who end up needing to patch software under intense pressure.
But anyway, as you pointed out, it's been an ongoing debate for decades.
It also affects professionals who read CVEs and posts to full-disclosure to learn what mitigations are available. Those tend to be the people responsible for protecting whole networks, who are capable of deploying Snort signatures or roping off vulnerable boxes. Or just people who appreciate knowing that their servers might be vulnerable. I've been in a couple of those positions.
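For a bug like this one, that can be as simple as a policy rule pushed out while waiting for a patch. A rough sketch in Snort rule syntax, assuming the stock HOME_NET/EXTERNAL_NET variables; the sid is an arbitrary placeholder and real published signatures for the bug would be more specific:

```
# Flag internal hosts initiating SMB sessions to hosts outside the network,
# which is the precondition for triggering this bug.
alert tcp $HOME_NET any -> $EXTERNAL_NET 445 (msg:"Outbound SMB to external host"; flow:to_server,established; classtype:policy-violation; sid:1000001; rev:1;)
```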
The standing assumption in security is that for any given vuln, the black hats already know. This is a defensive assumption, stemming both from the general unknowability of the subject and from how often it turns out to be demonstrably true. It's the devs who need to patch software under intense pressure, the product organization that sets their priorities, and the growth hackers who just want things shipped now whose priorities could perhaps stand a little adjustment.
I've worked places where engineers would have welcomed that sort of outside pressure.
To put it another way, I do not believe that keeping people ignorant keeps them safe. I fully understand why some people might prefer to believe otherwise.
If we're gonna be brutally realistic about human nature: most people won't budge if they're comfortable and can maintain the illusion that things are under control, unless there's external pressure. I have no links to scientific studies, but IMO that's common sense and widely observable.
I too find the actions of the researcher slightly questionable, but he himself said this isn't the first time, and that he's sick of important fixes being delayed.
You know what? It worked. You might disagree with the approach, but what about the results? MS is absolutely gonna release a fix now; there's no denying that.