
This is the vulnerability exploited by EXTRABACON: https://xorcatt.wordpress.com/2016/08/16/equationgroup-tool-...



So it seems the dump contains at least one legit 0-day, and it's been in use for 3 years.


Which does at least HINT that it might be what it claims to be. That's a pretty impressive 0-day which they just gave away as a freebie; who knows what they didn't give away.

I will say we'll never get real confirmation that this was actually stolen from the NSA, but if the other bundle contains a bunch of nice original vulnerabilities, people will presume it was.


The Washington Post got former NSA TAO employees to confirm (anonymously) that the leaked toolkit comes from the NSA:

https://www.washingtonpost.com/world/national-security/power...


Good. Given that these tools can no longer be considered available only to the NSA, they might start working with vendors to close this particular set of holes.


I wonder how this leak affects their "vulnerabilities equities process".

The publicly available data would suggest that thus-far NSA-hoarded vulnerabilities are definitively known to actors who appear willing to act against US interests.

Vendor disclosure means those vulnerabilities can be patched and US interests can cease being vulnerable, but could also confirm NSA awareness of vulnerabilities - which could in turn cause attribution concerns for past or present operations the NSA is undertaking or has undertaken using these vulnerabilities (in addition to providing additional credibility to the leaker).

What a tangled web.


I've worked with the US govt (selling to it), and just from browsing those files I can tell there is a high chance they came from a three-letter US govt agency: the stuff they reference, the packages and tools they use, the language and phraseology in the comments (excluding bundled software like requests and scapy, of course). After many years you start to get a feel for stuff like that.


Yes, I think so, too.


Makes you wonder if they could have made more money by pretending to find them and reporting them to the respective bug bounty programs.


Bug bounties almost never pay market value for exploits. Only reason to participate in them is charity.


And legality. I'm not sure why people seem to entirely discount that portion. There's more reward by selling on the black market, but there's also more risk associated with that.


Yeah. Homeowners don't pay market value for me not robbing them, either. After all, think how much that jewellery is worth. And the damage from stolen ID cards and passports.

A laptop alone could get me $250, but no one wants to give me even $10 for telling them their door is unlocked.


Most people only care about tangibles. When I politely advised them about security holes, I was told "we don't need people like you" or they just called the police. I understand.


They discount it because it's not true. Nothing illegal about looking for vulnerabilities in products and being compensated for your findings. It's only illegal to attack someone else's deployment.


What's illegal about selling them? Is there anti-security-consulting-market legislation?

In general, what are some of the risks involved (I am just not very familiar and am wondering in general)? Is it a tax issue, the chance the IRS could come after you for undeclared income?


Depending on jurisdiction and the particulars of the sale and who you sold it to, I think it's possible you could be charged as an accomplice if the exploit is used in a crime. For example, if you had any reason to believe the individual or organisation you sold it to might use it illegally, and someone singles you out after they do use it illegally, I don't think it would be hard for a prosecutor to make a case. I also don't think under those particular circumstances that's necessarily a bad thing. IANAL though.


Nothing; there are businesses doing it in the US, paying taxes on their income.


> Only reason to participate in them is charity.

Maybe believing that it's good when fewer vulnerabilities exist and when attackers are less able to exploit things? Does that count as charity?


...noun: the voluntary giving of help to those in need.


Getting a CVE on your resume isn't bad either.


> and it's been in use for 3 years.

At least 3 years.


This is why "responsible disclosure" is a joke. The companies putting in these flaws are not being responsible. (Sometimes people make mistakes, but at this point it's carelessness.)


That may feel good to say, but as someone whose job it was to find these kinds of bugs in software from companies ranging from tiny startups to financial exchanges to major tech vendors, this is a kind of carelessness shared by virtually everyone shipping any kind of software anywhere.

That said, the term "responsible disclosure" is Orwellian, and you should very much avoid using it.


How is "responsible disclosure" Orwellian?


It's coercive. It redefines language to make any handling of vulnerabilities not condoned by the vendors who shipped those vulnerabilities "irresponsible", despite the fact that third parties who discover vulnerabilities have no formal duty to cooperate with those vendors whatsoever.

The better term is "coordinated disclosure". But uncoordinated disclosure is not intrinsically irresponsible. For instance: if you know there's an exploit in the wild for something, perhaps go ahead and tweet the vulnerability without notice!


Do you think there's a moral imperative for researchers to responsibly disclose discovered vulnerabilities?

I see it as a kind of Hippocratic Oath in the field.


No.


Maybe I don't understand you. Are you suggesting that, if you find a vulnerability in a piece of software, you aren't ethically obligated to confidentially disclose the vulnerability to the maintainer so it can be patched before the vulnerability becomes publicly known? If so, why? What is a person who found a vulnerability ethically obligated to do?


No, of course you aren't. Why would you be?


... because if you don't and someone malicious also discovers this vulnerability they can use it to do bad things? If I can get a vulnerability patched before it can be exploited, I can potentially prevent a hacker from stealing people's identity, credit card numbers, private data, etc. To have that opportunity and not act seems irresponsible.

I must be misunderstanding. Would you mind expanding on this more?


You are not misunderstanding. I do not in the general case have a duty to correct other people's mistakes. The people deploying broken software have a duty to do whatever they can not to allow its flaws to compromise their users and customers. Merely learning something new about the software they use does not transfer that obligation onto me.

I would personally in almost every case report vulnerabilities I discovered. But not in every case (for instance: I refused to report the last CryptoCat flaw I discovered, though I did publicly and repeatedly warn that I'd found something grave). More importantly: my own inclination to report doesn't bind on every other vulnerability researcher.


Well, I'm glad you do report the vulnerabilities you find. Maybe it's my own naive, optimistic worldview, but I profoundly disagree with your stance that a researcher is not obligated to report. I think it is a matter of public safety. If you found out a particular restaurant was selling food with dangerously high levels of lead, aren't you obligated to tell someone, anyone for the public good? If you don't, you aren't as culpable as the restaurant serving this food, but that's still a lot of damage you could have prevented at no real cost to yourself.

I understand morality is subjective, but that's my 2 cents on the matter.

EDIT: about the vulnerabilities you didn't disclose, I really can't understand why not. Why not just send an email to the maintainer: "hey, when I do X I cause a buffer overflow"? You don't even have to help them fix it. You probably won't answer this, but can you tell me why you wouldn't disclose a vulnerability?


I do not report all the vulnerabilities I find, as I just said.

I confess to being a bit mystified as to how work I do on my own time, uncompensated by anyone else, which work does not create new vulnerabilities but instead merely informs me as to their existence, somehow creates an obligation for me to act on behalf of the vendors who managed to create those vulnerabilities in the first place.

Perhaps you have not had the pleasure of trying to report a vulnerability, losing several hours just trying to find the correct place to send the vulnerability, being completely unable to find a channel with which to send the vulnerability without putting the plaintext for it on the Internet in email or some dopey web form, only to get a response from first-line tech support asking for a license or serial number so they can provide customer support.

Clearly, you have not had the experience of being threatened with lawsuits for reporting vulnerabilities --- not in software running on someone else's servers (which, absent a bug bounty, you do not in the US have a legal right to test) but on software you download and run and test on your own machine. I have had that experience.

No. Finding vulnerabilities does not obligate someone to report them. I can understand why you wish it did. But it does not.


I see your point about it being overly difficult to report vulnerabilities, and the legal threats especially, that seriously sucks. I guess I believe you have an obligation to make some effort to disclose, but if a project is just irresponsible and won't fix their shit, or will try to sue you, it's out of your hands.


Somehow my doing work on my own time creates an obligation for me to do more work on behalf of others.

Can't I just flip this around on you and say you have an ethical obligation to spend some of your time looking for vulnerabilities? If you started looking, you'd find some. Why do you get to free-ride on my work by refusing to scrutinize the stuff you run?


> Somehow my doing work on my own time creates an obligation for me to do more work on behalf of others.

To some small extent, yes, though how much work is up for debate. Maintainer's email and PGP public key is right there on the website? Yeah, I think you're obligated. No email you can find, no way to contact them, or are just outright hostile? No, I think you shouldn't have to deal with that.

But I feel like you agree with that, though maybe not in those exact words. After all, you've had to jump through all kinds of hoops to disclose vulnerabilities, been threatened with lawsuits for doing the right thing, and yet you still practice responsible disclosure in almost every case in spite of the burden of effort and potential risk. Aren't you doing it because you think disclosure is the right thing to do? That's all I mean by obligation.

EDIT: sorry, not "responsible disclosure," "cooperative disclosure" or whatever term you want to use for disclosing the vulnerability to the maintainer.
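
To make the "key is right there on the website" case concrete: here is a rough sketch of how small that reporting step can be, assuming the vendor publishes a security contact and a PGP key (the address, filenames, and paths below are placeholders for illustration, not anything real):

  import subprocess

  # Hypothetical placeholders: the report you wrote, the PGP key the vendor
  # publishes on its site, and the vendor's security contact address.
  REPORT = "vuln-report.txt"
  VENDOR_KEY = "vendor-security-key.asc"
  VENDOR_ADDR = "security@vendor.example"

  # Import the vendor's published key, then encrypt the report to it so the
  # plaintext never has to travel over email or some dopey web form.
  # (gpg will ask you to confirm the key if you haven't verified/signed it.)
  subprocess.run(["gpg", "--import", VENDOR_KEY], check=True)
  subprocess.run(
      ["gpg", "--encrypt", "--armor",
       "--recipient", VENDOR_ADDR,
       "--output", REPORT + ".asc", REPORT],
      check=True,
  )

The armored vuln-report.txt.asc can then go over whatever channel the vendor offers, since only the holder of the matching private key can read it.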


I think it is a matter of degree. Here - not sure how this is handled in other countries - it is a crime if you come across an accident and do not attempt to help. And to me this is obviously not only the right thing to do because it is required by law but because there is a moral obligation to do so.

Nobody has to enter a burning car and risk his life but at least you have to call the emergency service or do whatever you can reasonably do to help. And it really doesn't matter whether you are doing your work delivering packages, whether the accident was the fault of the driver because he was driving intoxicated, if somebody else could also help or whatnot.

Discovering a vulnerability is of course different in most respects - the danger is less imminent, the vendor may have a larger responsibility and so on. But the basic structure is the same - more or less by accident you end up in a situation where there is a danger and you are in the position to help make the outcome probably better.

So I think one cannot simply dismiss, based on just the structure of the situation, that there might be a moral obligation to disclose a vulnerability to the vendor; one has to either argue that there is also no moral obligation in the accident scenario, or argue that the details are sufficiently different that a different action - or no action in this specific case - is the morally correct, or at least a morally acceptable, action.


Accidents and vulnerabilities are not directly comparable, so a position on vuln disclosure does not necessarily imply a particular position on accident assistance.

I would feel a moral obligation to help mitigate concrete physical harm to victims of an accident. I feel no such obligation to protect against hypothetical threats to computer systems.

Chances are, you recognize similar distinctions; for instance, I doubt you feel obligated to intervene in accidents that pose only minor personal property risks.


That is also my point of view; severity and other factors matter. But that also seems to imply the same thing for vulnerabilities - discovering a remote code execution vulnerability in Windows might warrant a different action than a hidden master password in obscure forum software no one has really used in a decade. The danger is still more abstract, but it can still cause real harm to real people.


I would personally disclose RCE in Windows, not least because I think Microsoft does a better-than-average job in dealing with the research community.

But I need to be careful saying things like that, because it is very easy for me to say that, because I don't spend any time looking for those kinds of flaws. Security research is pretty specialized now, and I don't do spare-time Windows work. I might feel differently if I did.

I would not judge the (many) researchers who would not necessarily disclose that flaw immediately.


If there is a vulnerability, it might already be in use by hackers. People need to know about it immediately, so they can defend themselves (by closing a port, or switching to a different server, or something). Companies need to be encouraged to find and fix this kind of thing themselves, without waiting for someone to embarrass them by finding it.
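
For what it's worth, the "closing a port" defense is also easy to verify from the outside. Here is a rough sketch using scapy (which, per a comment above, the leaked toolkit itself bundles) to check whether SNMP on UDP/161 still answers from an untrusted network segment; the target address and community string are placeholders, and this only makes sense against gear you administer:

  import socket
  from scapy.asn1.asn1 import ASN1_OID
  from scapy.layers.snmp import SNMP, SNMPget, SNMPvarbind

  TARGET = "192.0.2.1"  # placeholder: a device you administer

  # A plain SNMP GET for sysDescr.0 using the default "public" community.
  probe = bytes(SNMP(
      community="public",
      PDU=SNMPget(varbindlist=[SNMPvarbind(oid=ASN1_OID("1.3.6.1.2.1.1.1.0"))]),
  ))

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.settimeout(3)
  sock.sendto(probe, (TARGET, 161))
  try:
      data, _ = sock.recvfrom(4096)
      print("SNMP still answering:", SNMP(data).summary())
  except socket.timeout:
      print("No SNMP response; UDP/161 looks filtered from this vantage point.")

If that times out everywhere except the management network, the interim mitigation is doing its job; it's no substitute for patching, of course.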


I object strongly to your claim that I practice "responsible disclosure", for the reasons stated earlier in the thread.


There is no such thing as responsible disclosure. The concept is nonsensical. Also, you're overestimating the consequences of a single bug. The boring reality is that bugs rarely matter.


When you say obligation, do you actually mean that? An obligation is enforced by some sort of penalty, either legal (ultimately a threat of violence) or social (public shaming). There is no incentive for meeting an obligation outside of avoiding punishment, so why would individuals and private enterprises do any infosec work?


You assume that your own research machine can't be compromised, and that the communication channels of the organization at fault can't be either.

So, it won't be fixed.

Hopefully only one or two people know about the same flaw you found...

Oh, but you would know ahead of time if concrete physical harm could possibly come to the victim of an accident?

Well good for you! You should probably be in charge of defending all infosec research, since apparently you can't be hacked.


[flagged]


Then you misunderstood your own logical conclusion...

You said (and I quote):

  Can't I just flip this around on you and say
  you have an ethical obligation to spend some
  of your time looking for vulnerabilities?
No. No, you can't. Unless you could convince me that my Dwarf Fortress skills have a similar magnitude of real-world effect as the vulnerabilities I've discovered on my own and decided to pocket for one reason or another.


By your logic, I am better off not doing vulnerability research in my spare time --- as is virtually everybody else. How is that a good outcome?


No. By my logic, you are better off not doing vulnerability research in your spare time if you have to worry about the legal ramifications of your actions.

The ethical conundrums are unavoidable, and those calculations are indeed difficult.

The legal consequences are artifice, and by agreeing to those (while ignoring externalities and not going public), you are likely putting others at risk.


This is a fascinating exchange. Now I wonder how much of the general population, or even the tech-but-not-security population thinks like this.


To your second question: because some projects are fundamentally irresponsible, and providing vulnerability reports to them means making an engineering contribution, which decreases the likelihood that the project will fail.


The meaning of the words "responsible" and "irresponsible" extends beyond "formal duty".


I'm sure that's true, but that's not responsive to my argument.


I obviously thought so otherwise I wouldn't have said it.


The only responsive argument I can come up with based on your original comment depends on you not knowing what the term "responsible disclosure" means, and instead trying to back out its meaning from the individual words "responsible" and "disclosure". But that's not what the term means.

A good shorthand definition for "responsible disclosure" is "report to the vendor, and only to the vendor, and disclose to nobody else until the vendor chooses to release a patch, and even then not until a window of time chosen by the vendor elapses."

Maybe you thought I was saying "the only way to disclose responsibly is to honor a formal duty to the vendors of insecure software". No, that was not my argument. If you thought it was, well, that's a pretty great demonstration of how the term is Orwellian, isn't it?

Or I could be missing part of your argument (it was quite terse, after all). Maybe you could fill in some details.


> this is a kind of carelessness shared by virtually everyone shipping any kind of software anywhere.

I don't feel wrong saying that all of those are irresponsible. There are some people who write good code, who at least make an effort to avoid vulnerabilities, and those are the responsible ones.


If you find one of them in the wild, take a picture, so we can have some evidence they exist.


They exist all over the place: OpenBSD, DJB, Knuth, and at companies I've worked for you'll find people who care and code responsibly. The rest of you need to get your act together.


Someone mentioned selling vulnerabilities on the black market as a better alternative to "responsible disclosure" and bug bounties. What's your take on that? Is it a better route to take?


For the most part I think selling vulnerabilities on an actual "black market" is intrinsically unethical, and makes you a party to the bad things people who buy exploits on an actual black market do with them.

Thankfully, the black market doesn't want 99.99999% of the vulnerabilities people find.

I have friends who have sold vulnerabilities to people other than vendors. I do not think they're unethical people, and I don't know enough about those transactions to really judge them. So, it really depends, I guess. But if it were me, I'd be very careful.


It's dangerous, and might be illegal, so be careful if you decide to do that.


Yep, here's Cisco's statement on this: http://blogs.cisco.com/security/shadow-brokers


...secure asymmetrical (public-key) cryptography...

Hmmmm.



