What to do when a company refuses to fix a vulnerability I disclosed to them? (reddit.com)
91 points by moooooky on April 5, 2012 | hide | past | favorite | 69 comments



There are security companies that buy this kind of information from you (like antivirus companies), so that they can patch the breach themselves and proudly announce that they discovered it, and that only by using their software can you be protected.

I don't know how legal it is, and I understand that the breach finder wants to publish his findings himself (for "reputation points", maybe?), and he might lose that right by selling the info, but at least he's getting something out of it. IANAL, but I'm pretty sure you could get in trouble for publicly posting information on how to hack a public service (or pretty much anything, for that matter).


Nobody is going to buy a rate limiting bug in some random mobile application. Actually: nobody is going to buy a rate limiting bug at all.


When I posted my comment, the reddit post hadn't yet been edited to say that it's a rate limiting bug. Indeed, nobody is going to buy such a thing; it's pretty useless for any purpose, black-hat or not.


The link mentioned responsible disclosure, and its Wikipedia page ( http://en.wikipedia.org/wiki/Responsible_disclosure ) mentions two of the primary players who buy vulnerabilities, as you describe.


If only the company is put in danger and they stubbornly refuse to resolve the issue, I'm not exactly sure why anyone would work so hard to convince them. The job of reporting the issue is done; a corporate decision has been made. If that decision is to remain vulnerable, and as long as it does not affect users directly, why bother?

Unless, as others suggested, you can legally make a profit out of it, then by all means! Otherwise, just let it go...


I think this raises two issues:

1) It can be difficult to know whether customers are (or could be) affected. Just because the author can't find such a case doesn't mean someone else can't.

2) If the company refuses to fix this broken window, they may decide other broken windows aren't worth fixing either, which may affect users. By releasing the vulnerability, one can force the company to become more conscious of security in the long term.


> If that decision is to remain vulnerable, as long as it does not affect users directly, why bother?

Because if that company is storing sensitive information belonging to others (emails, credit cards, etc), it would be irresponsible to not disclose it. Chances are someone else found out and has been actively exploiting that vulnerability.


It appears he wants to publish the vulnerability (might be a novice security researcher) without getting sued.


He is very, very unlikely to be sued provided that (i) he didn't explicitly agree to a contract forbidding security research when he acquired the application, (ii) he acquired the application lawfully, (iii) he at no point solicited business from the vendor of the application, (iv) he didn't exploit the vulnerability in any way that could be construed as having caused direct damages to the vendor, and (v) he is scrupulously honest and careful about how he writes the finding up.

Contrary to popular opinion on HN, finding vulnerabilities in software you yourself run on your own computer is rarely fraught. We hear about the exceptions in the news because they're exceptional. In reality, people publish vulnerabilities all the time.

The same thing obviously CANNOT BE SAID about finding vulnerabilities in other people's web applications. Finding web vulnerabilities without permission is highly fraught. You can easily find yourself both civilly and criminally liable for doing so.


I would adjust "other people's web applications" to be "in other people's deployments."

For example, it is fine to take someone else's commercial web app, install it on your own server, and beat it up.


That is a good point, thanks for amending.


I agree with you. Given some of the stories we've seen lately, my approach, after disclosing the vulnerability once, would be a three step process:

1) Do nothing. 2) Fuck 'em. 3) Not my problem.


From an ignorant and slightly tongue-in-cheek POV...

...is there a difference between discovering a new exploit and discovering a company is open to an old or well known exploit? This sounds like the latter.

I'm all for disclosure of a newly found exploit, because by doing so you are informing everyone who might have the problem, which allows them to take action, etc. But if this is just one business that refuses to fix a known problem then, well, that's their stupidity, no?

See, the bit that bothers me is that publishing the "news" that one company is vulnerable has to be a bit iffy. It's like publishing a list of buildings that don't have good door locks or something. We don't see that in the real world, so why would it be reasonable for the IT world? I mean, there is no legitimate list of vulnerable buildings created by white hat burglars, is there? It's never been legit for such burglars to gain access to a building and leave a note describing the poor security on the CEO's desk.


  It's never been legit for such burglars to gain access to a building and leave a note describing the poor security on the CEO's desk.
Unless, of course, you happen to be Richard Feynman. Which most of us aren't.

http://www.silvertrading.net/articles_lagniappe_01_richard_f...


I've had "Surely You're Joking" on my Kindle for almost a year now and have never read it, but every time I see anything written about Feynman I realize that I'm almost certainly missing out. He sounds like the most interesting man.


You are missing out on a readable book divided into short chapters. It's basically all anecdotes. Easy to intersperse with your other reading.


>I mean, there is no legitimate list of vulnerable buildings created by white hat burglars, is there?

But the interesting question is not whether such a list has ever been written. The interesting question is whether such a list is legal to write.

Maybe such a list would be beneficial in the long run. Anyone who has practiced lock-picking knows that most lock-based security is little more than an elaborate honor system.


> I'm all for disclosure of a newly found exploit because by doing so you are informing every one who might have the problem and that allows them to take action

You also assume that it is the company that will suffer and that they are the ones who have to take action. A lot of these companies are public-facing and store and maintain sensitive customer information. I thought the main reason to disclose the research is not to help the company avoid losing millions at the end of the quarter but to warn their customers that this company could potentially leak their information.

> Its like publishing a list of buildings that don't have good door locks or something.

It is like publishing a list of buildings that store others' belongings (like a bank) but don't have locks on them. You want to disclose that fact because chances are someone else found the vulnerability and is exploiting it. It would actually seem very irresponsible not to disclose it in that case (after, say, it turns out many people's stuff goes missing).


I don't know how big the company is, but after a certain bigness, all of the people who could fix problems like this have moved on. The only people left are managers who fix "problems" with lawyers. A classic "when all you've got's a hammer" situation.

They might not be refusing to fix the problem. They might actually be unable with the tech talent they've got left.

My advice? Don't look like a nail.


If you contacted them non-anonymously first, you made a mistake, because they can and will sue you if you disclose it. Judges don't understand computers, and US courts are all about draining money from someone, so they might still ruin you out of spite even if you disclose it in a way that leaves no proof it was you, or even if someone else discovered and released it on their own.

The correct way would be: 1) discover the vulnerability, 2) contact them anonymously, 3) if they don't fix it, anonymously release it to the general public.

That way, you can still help them while protecting yourself. The third step is optional of course.


You almost sound like you're laying down an ultimatum to the company; you've done your job by notifying them, so let sleeping giants rest. If it's a known exploit, I don't see any reason to publish your findings; if it's something you've come across that hasn't been published, then by all means publish away.


The linked post is talking about a DoS vulnerability of the service. It doesn't impact other entities than the service provider (beyond the obvious potential for service outage of its users). I think telling them about it is all that's required. Either they fix it or they don't, that's between them and their users.


Public disclosure won't help, btw. Half of the sites here didn't fix anything (http://homakov.blogspot.com/2012/03/hacking-skrillformer-mon...)


Did you reach out to each company and tell them, or did you assume that by creating a public blog post about them and submitting it to Hacker News they were bound to find out?


Did you read his blog post and see that he did report the vulnerabilities and noted which companies fixed it?


I read the comment he wrote on HN where he said he didn't. But if Egor Homakov says he did, my next question is "who did he report it to?"

I've been doing this for a while; maybe there's useful advice I can offer him.


I'm curious what we could change legally to make this less of an issue. There's a clear conflict of interest between doing a public good by disclosing a vulnerability and not wanting to risk (at worst) the FBI coming after you or (at best) losing clients. I would certainly consider it unethical to know of a vulnerability and not disclose that information publicly, but there are so many hurdles to doing so that I don't blame some people (especially those who are less established) for not doing so.

It almost makes me feel that there should be a law requiring disclosure of vulnerabilities.


The FBI is not going to come after you for publishing a DoS vulnerability in a mobile app; in fact, you could find and publish a remote code execution bug in an extremely popular application (say Instagram or Twitter) without even telling the vendor and still not be in any trouble. People do it all the time.

Most of the stories you hear about people getting in actual trouble over vulnerability research involve web vulnerabilities. You cannot hack someone else's web site to make a point, even if the underlying point is unimpeachable ("this application is insecure and people should know about it").


He could just leave them alone and do nothing. It's their service and if they don't want to respond then let them leave the vulnerability open. It doesn't affect user privacy so there is no duty to fellow users as there is in some other cases where a vulnerability being left open means people could be losing private information on an ongoing basis.


Pastebin, then on a disclosure list.


Seems pretty straightforward to me -- since it's a DoS that doesn't put users' information at risk, just publish it without naming the company.


That's the first sensible and ethical suggestion in this thread.


Couldn't this vulnerability simply be published without mentioning who is directly affected? E.g. "under x and y circumstances, it is possible to do z and everyone is advised to check and correct this".

If this is not an option, it means it is something very specific to that company, and then what would be the purpose of releasing the vulnerability to the public?


You should contact them. If that fails, make a commit to the Rails project.


There should be a scientific journal for this sort of thing.


Taking your business elsewhere is step 1. Your information/service is not guaranteed if they aren't willing to protect it.


Everything takes time and money. It may get fixed, eventually. What does 'refusal' amount to?


Move on, work on something new.

I recommend two shots of wheatgrass and a smoothie.


I'm sure Randal Schwartz can offer some advice on this.


Name and shame.


If you really want them to fix it, whatever you decide, be anon about it. If you want your name attached to it, move on.


Use It.


Illegal. And juvenile.


Illegal maybe. Juvenile? If illegally making use of hacks is juvenile, someone better inform the Mexican Zetas. Very politely.


I think you're supposed to exploit the vulnerability in relatively innocuous but deeply disturbing ways, get banned, then complain about how you only meant well, then be lauded on Hacker News as a martyr who should have been embraced by the hacked company.


Or rather you contact them. Then they ban you and possibly send the FBI after you for "illegally accessing a remote computer system" or other such crime and then you are punished for all your work. If you tell them you will disclose your research on a certain date they'll go after you for extortion.

I wrote this before and I'll say it again. I don't believe in "White Hacker" as a label. Corporations do not do well when their vulnerabilities are exposed. They don't have a way to handle "White Hackers" unless they are the ones hiring them. Most will strike back and punch you in the face no matter how good your intentions are. So if you already spent the time researching and finding the vulnerability, just disclose on a security forum or if you want to profit, sell on a black market.


If you tell them that unless they pay you or retain you as a contractor by a certain date that you'll publish, you are in fact extorting them.

People who have found vulnerabilities and also been naive about the law have run aground on this before.


Do you have any examples?


I'm worried that if I start Googling this I'll lose a couple hours of my day to a "researching vulnerability extortion" jag.


I don't believe it is extortion since all he is asking them to do is fix their own vulnerability. I believe extortion requires the demand of money or services in exchange for action/inaction.


Isn't fixing the vulnerability a demand of services?


Doubtful, or a lot of consumer demands are technically extortion. In particular, the model jury rules for extortion tend to refer specifically to property (usually money).


I believe you mean "White Hat Hacker"... I think everyone gets the gist of what you mean but just wanted to clarify in case someone's thinking you're a racist hating on "Whitie" or something :)


Sorry, of course you are right. And it is too late to 'edit' the comment. Thanks for pointing it out.


Are there really people in this community who don't know this?


I've heard the phrase "white hat" used frequently to describe hackers. I've never heard the phrase "white hacker".

  About 526,000 results
  http://www.google.com/#hl=en&q=%22white+hat%22+hacker

  About 65,000 results
  http://www.google.com/search?hl=en&q=%22white%20hacker%22


You know what? I totally mentally replaced the word "white hacker" with "white hat", and only realized it after you pointed it out.


I prefer the homakovs of the world to the Anons of the world (who would take full advantage). To have one vulnerability that could lead to another is undesirable. Homakov's actions could be considered aggressive, but sometimes that's exactly what is needed in order to push something. (no pun intended)


The world does not divide into those two kinds of people.


Who said it did? I surely did not and did not imply that at all. I simply expressed my preference of the interests of two kinds of people.


We can still agree homakov doesn't deserve this kind of lingering resentment from the OP.


I don't agree that there is resentment. That comment seemed to choose its words carefully to avoid judging.

But ideally this isn't going to be a subject you & I are going to end up having to argue about today.


Nothing. If they're unwilling to fix it, they'll end up facing the consequences when someone less scrupulous than yourself discovers it. If you do publish it, odds are they'll issue a DMCA takedown and try to sue.

Speaking from experience...


If you do publish it, odds are they'll issue a DMCA takedown and try to sue.

My experience is quite to the contrary. Even Intel, as poor as their security response was, didn't try to take legal action against me. (I was lucky that I was unemployed at the time, though...)


> didn't try to take legal action against me

But that is an interesting attitude. Instead of being indignant that they didn't offer to pay you for doing their security research for them (or at least publicly thank you), you just seem glad that they didn't sue you.

It is like volunteering to help someone and then just being glad they didn't beat you up in the end.

So it seems like there is not much benefit to doing this (there is a benefit if you prevent other people's information from being stolen), but there is no immediate upside. You either get ignored or you get sued. And if anyone gets sued by a company with a full department of lawyers on retainer, it is guaranteed they'll pretty much have a bad time.


It is like volunteering to help someone and then just being glad they didn't beat you up in the end.

I didn't publish the hyperthreading vulnerability to help Intel. I published it to help Intel's customers.


Building a reputation can still be valuable. (E.g. Colin's work on hyper-threading and side channels did help me decide to sign up for tarsnap.)


Security research is exempt from the DMCA. Even before the exemption, the DMCA applies only to vulnerabilities that circumvent content protection schemes.


Do nothing; it's none of your business. Why bother?



