There are security companies (like antivirus vendors) that buy this kind of information from you, so that they can patch the flaws themselves and proudly announce that they discovered a breach and that only their software can protect you.
I don't know how legal it is, and I understand that the breach finder wants to publish his findings himself (for "reputation points", maybe?), and he might lose this right by selling the info, but at least he's getting something out of this. IANAL, but I'm pretty sure you could get in trouble for publicly posting information on how to hack a public service (or pretty much anything else, for that matter).
When I posted my comment, the reddit post hadn't been edited to say that it's a rate limiting bug. Indeed, nobody is going to buy such a thing. It's pretty useless for any purpose, black-hat or not.
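For what it's worth, a missing or naive rate limit is also usually cheap for the vendor to fix, which is part of why such a bug has little market value. A minimal sketch of per-client throttling, assuming a generic Python service; the token-bucket class, its parameters, and the client id are hypothetical illustrations, not anything from the app in question:

    import time
    from collections import defaultdict

    class TokenBucket:
        # Per-client token bucket: each client may burst up to `capacity`
        # requests, refilled at `rate` tokens per second.
        def __init__(self, rate=1.0, capacity=5):
            self.rate = rate
            self.capacity = capacity
            self.tokens = defaultdict(lambda: float(capacity))
            self.last = defaultdict(time.monotonic)

        def allow(self, client_id):
            now = time.monotonic()
            elapsed = now - self.last[client_id]
            self.last[client_id] = now
            # Refill for the time elapsed, never exceeding capacity.
            self.tokens[client_id] = min(self.capacity,
                                         self.tokens[client_id] + elapsed * self.rate)
            if self.tokens[client_id] >= 1.0:
                self.tokens[client_id] -= 1.0
                return True   # request goes through
            return False      # request throttled

    bucket = TokenBucket(rate=1.0, capacity=5)
    for i in range(8):
        print(i, bucket.allow("203.0.113.7"))  # first 5 pass, the rest are rejected

Anything fancier (shared state across servers, per-endpoint budgets) is an operations problem, not a research finding.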
If only the company is put in danger and they stubbornly refuse to resolve the issue, I'm not exactly sure why anyone would work so hard to convince a company to do this. The job of reporting the issue is done, a corporate decision has been made. If that decision is to remain vulnerable, as long as it does not affect users directly, why bother?
Unless, as others have suggested, you can legally make a profit out of it, in which case, by all means! Otherwise, just let it go...
1) It can be difficult to know whether customers are (or could be) affected. Just because the author can't find a way to affect them doesn't mean someone else can't.
2) If the company refuses to fix this broken window, they may decide other broken windows aren't worth fixing either, and those may affect users. By releasing the vulnerability, one can force the company to become more security-conscious in the long term.
> If that decision is to remain vulnerable, as long as it does not affect users directly, why bother?
Because if that company is storing sensitive information belonging to others (emails, credit cards, etc), it would be irresponsible to not disclose it. Chances are someone else found out and has been actively exploiting that vulnerability.
He is very, very unlikely to be sued provided that (i) he didn't explicitly agree to a contract forbidding security research when he acquired the application, (ii) he acquired the application lawfully, (iii) he at no point solicited business from the vendor of the application, (iv) he didn't exploit the vulnerability in any way that could be construed as having caused direct damages to the vendor, and (v) he is scrupulously honest and careful about how he writes the finding up.
Contrary to popular opinion on HN, finding vulnerabilities in software you yourself run on your own computer is rarely fraught. We hear about the exceptions in the news because they're exceptional. In reality, people publish vulnerabilities all the time.
The same thing obviously CANNOT BE SAID about finding vulnerabilities in other people's web applications. Finding web vulnerabilities without permission is highly fraught. You can easily find yourself both civilly and criminally liable for doing so.
From an ignorant and slightly tongue-in-cheek POV...
...is there a difference between discovering a new exploit and discovering that a company is open to an old or well-known exploit? This sounds like the latter.
I'm all for disclosure of a newly found exploit, because by doing so you are informing everyone who might have the problem, and that allows them to take action, etc. But if this is just one business that refuses to fix a known problem then, well, that's their stupidity, no?
See, the bit that bothers me is that publishing the "news" that one company is vulnerable has to be a bit iffy. It's like publishing a list of buildings that don't have good door locks or something. We don't see that in the real world, so why would it be reasonable for the IT world? I mean, there is no legitimate list of vulnerable buildings created by white-hat burglars, is there? It's never been legit for such burglars to gain access to a building and leave a note describing the poor security on the CEO's desk.
I've had "Surely You're Joking" on my Kindle for almost a year now and have never read it, but every time I see anything written about Feynman I realize that I'm almost certainly missing out. He sounds like the most interesting man.
> I mean, there is no legitimate list of vulnerable buildings created by white-hat burglars, is there?
But the interesting question is not whether such a list has ever been written. The interesting question is whether such a list is legal to write.
Maybe such a list would be beneficial in the long run. Anyone who has practiced lock-picking knows that most lock-based security is little more than an elaborate honor system.
> I'm all for disclosure of a newly found exploit because by doing so you are informing every one who might have the problem and that allows them to take action
You also assume that it is the company that will suffer and that they are the ones who have to take action. A lot of companies are public-facing and store and maintain sensitive customer information. I thought the main reason to disclose the research is not to help the company avoid losing millions at the end of the quarter, but to warn their customers that this company can potentially leak their information.
> It's like publishing a list of buildings that don't have good door locks or something.
It is more like publishing a list of buildings that store other people's belongings (like a bank) but don't have locks on them. You want to disclose that fact, because chances are someone else has found the vulnerability and is exploiting it. It would actually seem very irresponsible not to disclose it in that case (after, say, it turns out that many people's stuff has gone missing).
I don't know how big the company is, but past a certain size, all of the people who could fix problems like this have moved on. The only people left are managers who fix "problems" with lawyers. A classic "when all you've got is a hammer" situation.
They might not be refusing to fix the problem. They might actually be unable to, with the tech talent they have left.
If you contacted them non-anonymously first, you made a mistake, because they can and will sue you if you disclose it. Judges don't understand computers, and US courts are all about draining money from someone, so they still might ruin you out of spite, even if you disclose it in a way that leaves no proof it was you, or even if someone else discovered and released it on their own.
The correct way would be: 1) discover the vulnerability, 2) contact them anonymously, 3) if they don't fix it, anonymously release it to the general public.
That way, you can still help them while protecting yourself. The third step is optional of course.
You almost sound like you're laying down an ultimatum to the company. You've done your job by notifying them, so let sleeping giants rest. If it's a known exploit I don't see any reason to publish your findings; if it's something you've come across that hasn't been published, then by all means publish away.
The linked post is talking about a DoS vulnerability in the service. It doesn't impact entities other than the service provider (beyond the obvious potential for a service outage for its users). I think telling them about it is all that's required. Either they fix it or they don't; that's between them and their users.
Did you reach out to each company and tell them, or did you assume that by creating a public blog post about them and submitting it to Hacker News they were bound to find out?
I'm curious what we could change legally to make this less of an issue. There's a clear conflict of interest between doing a public good by disclosing a vulnerability and not wanting to risk (at worst) the FBI coming after you or (at best) losing clients. I would certainly consider it unethical to know of a vulnerability and not disclose that information publicly, but there are so many hurdles to doing so that I don't blame some people (especially those who are less established) for not doing so.
It almost makes me feel that there should be a law requiring disclosure of vulnerabilities.
The FBI is not going to come after you for publishing a DoS vulnerability in a mobile app; in fact, you could find and publish a remote code execution bug in an extremely popular application (say, Instagram or Twitter) without even telling the vendor and still not be in any trouble. People do it all the time.
Most of the stories you hear about people getting in actual trouble over vulnerability research involve web vulnerabilities. You cannot hack someone else's web site to make a point, even if the underlying point is unimpeachable ("this application is insecure and people should know about it").
He could just leave them alone and do nothing. It's their service, and if they don't want to respond, then let them leave the vulnerability open. It doesn't affect user privacy, so there is no duty to fellow users as there is in some other cases, where a vulnerability being left open means people could be losing private information on an ongoing basis.
Couldn't this vulnerability simply be published without mentioning who is directly affected? E.g. "under x and y circumstances, it is possible to do z and everyone is advised to check and correct this".
If this is not an option, it means it is something very specific to that company, and then what would be the purpose of releasing the vulnerability to the public?
I think you're supposed to exploit the vulnerability in relatively innocuous but deeply disturbing ways, get banned, then complain about how you only meant well, then be lauded on Hacker News as a martyr who should have been embraced by the hacked company.
Or rather: you contact them. Then they ban you and possibly send the FBI after you for "illegally accessing a remote computer system" or some other such crime, and then you are punished for all your work. If you tell them you will disclose your research on a certain date, they'll go after you for extortion.
I wrote this before and I'll say it again: I don't believe in "White Hacker" as a label. Corporations do not do well when their vulnerabilities are exposed. They don't have a way to handle "White Hackers" unless they are the ones hiring them. Most will strike back and punch you in the face no matter how good your intentions are. So if you've already spent the time researching and finding the vulnerability, just disclose it on a security forum, or, if you want to profit, sell it on the black market.
I don't believe it is extortion since all he is asking them to do is fix their own vulnerability. I believe extortion requires the demand of money or services in exchange for action/inaction.
Doubtful, or a lot of consumer demands would technically be extortion. In particular, model jury instructions for extortion tend to refer specifically to property (usually money).
I believe you mean "White Hat Hacker"... I think everyone gets the gist of what you mean but just wanted to clarify in case someone's thinking you're a racist hating on "Whitie" or something :)
I've heard the phrase "white hat" used frequently to describe hackers. I've never heard the phrase "white hacker".
"white hat" hacker: about 526,000 results
http://www.google.com/#hl=en&q=%22white+hat%22+hacker

"white hacker": about 65,000 results
http://www.google.com/search?hl=en&q=%22white%20hacker%22
I prefer the homakovs of the world to the Anons of the world (who would take full advantage). Having one vulnerability that could lead to another is undesirable. Homakov's actions could be considered aggressive, but sometimes that's exactly what is needed in order to push something. (No pun intended.)
Nothing. If they're unwilling to fix it, they'll end up facing the consequences when someone less scrupulous than yourself discovers it. If you do publish it, odds are they'll issue a DMCA takedown and try to sue.
> If you do publish it, odds are they'll issue a DMCA takedown and try to sue.
My experience is quite to the contrary. Even Intel, as poor as their security response was, didn't try to take legal action against me. (I was lucky that I was unemployed at the time, though...)
But that is an interesting attitude. Instead of being indignant that they didn't offer to pay you for doing their security research for them (or at least publicly thank you), you just seem glad that they didn't sue you.
It is like volunteering to help someone and then just being glad they didn't beat you up in the end.
So it seems like there is not much benefit to doing this. There is a benefit if you prevent other people's information from being stolen, but there is no immediate upside for you: you either get ignored or you get sued. And if anyone gets sued by a company with a full department of lawyers on retainer, it is guaranteed they'll have a pretty bad time.
Security research is exempt from the DMCA. And even before the exemption, the DMCA applied only to research that circumvents content protection schemes.