A Funny Thing Happened on the Way to Coursera (webpolicy.org)
124 points by boynamedsue on Sept 5, 2014 | 32 comments



Here is Coursera's official response: http://blog.coursera.org/post/96686805237/response-to-report....

> We have already implemented fixes and mitigation strategies for all of the vulnerabilities - including completely disabling type-ahead hinting for email address on our instructor interfaces (instructors must now enter the complete email address of a learner in order to manually enroll him or her in the instructor's course) and rate limiting and referrer header checking on APIs to slow down and stop enumeration attacks by third parties to discover learner enrollment status for courses.

> Finally, we would like to thank Dr. Mayer for reporting these security problems and helping us make Coursera a more secure and privacy conscious platform for our learners.
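
For the curious, here's a rough sketch of what that rate limiting and referrer checking could look like, as hypothetical Flask-style Python (Coursera's actual implementation isn't public, and the limits below are made up):

  import time
  from collections import defaultdict
  from flask import Flask, request, abort

  app = Flask(__name__)

  ALLOWED_REFERRERS = ("https://www.coursera.org/",)  # hypothetical
  WINDOW_SECONDS = 60
  MAX_REQUESTS = 30  # illustrative per-client limit
  hits = defaultdict(list)  # client IP -> recent request timestamps

  @app.before_request
  def check_referrer_and_rate():
      # Referrer check: drop API requests initiated from other sites.
      ref = request.headers.get("Referer", "")
      if not ref.startswith(ALLOWED_REFERRERS):
          abort(403)
      # Naive in-memory rate limit to slow down enumeration attacks.
      now = time.time()
      recent = [t for t in hits[request.remote_addr] if now - t < WINDOW_SECONDS]
      if len(recent) >= MAX_REQUESTS:
          abort(429)
      recent.append(now)
      hits[request.remote_addr] = recent

Neither check is bulletproof -- the Referer header can be spoofed by non-browser clients, and per-IP limits can be spread across many hosts -- but together they raise the cost of bulk enumeration.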


Tech debt is like monetary debt -- you still have to pay it back, and quick. When it's security / API related debt, you have to pay it back even quicker, because if you don't, someone inevitably forecloses on your metaphorical house and repossesses your metaphorical car.


The analogy I like is, "If you have too much technical security debt, someone will eventually report you to the credit agency"


> I reported the issue to Coursera on Sunday, and I have not yet received a response. Possible remediation steps include rate limiting (again), referrer checking, and configuring APIs to always return the same HTTP status.

Wouldn't the 2nd issue (the cross-origin data leak) be better solved by a CSRF token instead?


That would presumably solve this issue. Perhaps that is what they will end up doing -- hard to say since they haven't addressed this issue yet. The author was guessing at solutions that might help (though, as you've noted, he was mostly off base).
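
For anyone following along, here's a minimal sketch of the CSRF-token approach, assuming a hypothetical Flask app (the endpoint and parameter names are made up):

  import hmac
  import secrets
  from flask import Flask, request, session, abort, jsonify

  app = Flask(__name__)
  app.secret_key = secrets.token_bytes(32)

  @app.route("/login", methods=["POST"])
  def login():
      # ...authenticate the user, then mint a per-session token...
      session["csrf_token"] = secrets.token_urlsafe(32)
      return "ok"

  @app.route("/api/enrollment_status")
  def enrollment_status():
      # An attacker's page can make the victim's browser send this
      # request with cookies attached, but it can't read or guess the
      # session token, so the request fails with the same 403 whether
      # or not the victim is enrolled -- nothing leaks cross-origin.
      sent = request.args.get("csrf_token", "")
      expected = session.get("csrf_token", "")
      if not expected or not hmac.compare_digest(sent.encode(), expected.encode()):
          abort(403)
      return jsonify(enrolled=False)  # real lookup elided

Note this also covers the "always return the same HTTP status" suggestion from the article: requests without a valid token fail identically regardless of enrollment state.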


I don't mean to say his suggestion is bad; for what it's worth, I just wanted to clarify this :) I'm not too good with security stuff either (still learning!), so thanks!

Seriously though, he's the kind of privacy person you'd never guess wrote a law textbook, because he's obviously pretty good with web technology.


Hey, you've got other interests aside from coding, don't you? Some people like to bike, some like to play games, some like to study law. Some like to do all three ;)


Accepting a role in an organization, only to turn around and immediately publicly humiliate and embarrass them (for no good reason[1]), is just about the biggest dick move one can make.

I hope this guy finds his 5 seconds in the limelight to be worth sacrificing his common decency for.

[1] Because they appear to be in the process of addressing the issues in a timely manner.


Well, from what I can see, it's not really a role in an organisation so much as being a user of that organisation's product. The author appears to be teaching a course on Coursera later this year, not working directly for Coursera.

It's probably more like a Facebook user finding a security exploit, disclosing (responsibly or not as has been discussed in this thread), then posting a blog on their findings, rather than one of Facebook's team posting a blog talking about how bad Facebook's security is.


Hiding the IDs doesn't have to be a security feature at all. Maybe they just didn't want to publicly show how many students they've got, simply for marketing reasons?


There is a benefit in having an id used in APIs that is difficult to iterate through sequentially or otherwise anticipate. I'm guessing the "internal" id is a surrogate key in a database, which normally wouldn't be used in other contexts.
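
As a sketch of that idea (Python, with hypothetical names): expose a random opaque id at the API boundary and keep the sequential surrogate key internal.

  import secrets

  def new_public_id():
      # Random, unguessable identifier for URLs and API responses; the
      # auto-incrementing surrogate key never leaves the database, so
      # records can't be enumerated by counting upward.
      return secrets.token_urlsafe(16)

  # e.g. store both columns -- (internal_id=12345, public_id="kX3v...") --
  # and resolve public_id -> internal_id when handling API requests.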


Was Coursera contacted before this was published?


1. "I reported this issue to Coursera last Thursday; to the company’s credit, in less than a day ..."

2. "I reported the issue to Coursera on Sunday, and I have not yet received a response."

3. "I notified Coursera of this issue last Thursday."

Kind of, but not very responsibly.


How would you define "responsibly" in this case? Isn't a week enough time?


There are norms for this type of thing. Normally a few weeks is the minimum that people are expected to give.

At worst, they don't fix it in a timely manner. At that point, give them a heads up that you're about to publish. Then you can publish the bug, the timeline, and the communications, which should serve as a red flag to their users, government regulators, and the community.

However, you need to bear in mind publishing before they fix the issue puts existing users at risk by publicizing a flaw that can be leveraged by bad guys. You need to weigh this against any slowness to fix the issue (99.99% of companies will fix the issue). Protecting users can be a tricky line to walk in this scenario.

Also, security fixes might seem simple from the outside, but there might be hidden complexity or dependencies that you don't see. Or worse, your report is only the tip of the iceberg and there is a much larger issue that they will have to tackle all at once (and the company won't share these related issues, in order to protect their users and reputation).

Giving them the extra time will achieve the goal of getting the bug fixed and protecting users. If you suspect the bug is actively being exploited you can always email them and share your concern/frustration with the timeline.

My core advice is try your best to communicate with the company and inform them of your thoughts and concerns.


I think it's reasonable to expect a response or acknowledgement in a week, not necessarily a fix.


Totally agree. However, I feel the correct response in that situation is to criticize them when you publish, rather than publishing before the fix.


I think waiting for Coursera to resolve the issue isn't too much to ask. They seem cooperative, and it was a holiday weekend.


If there was irresponsibility in the disclosure, it was in waiting so long, presumably allowing many more people to sign up and become exposed to Coursera's negligence.

The cargo cult of "responsible disclosure" needs to die. Responsibility lies with us, the developers.


Well, yes and no. Ultimately yes -- Coursera should have been more diligent about considering the implications of their API endpoints.

On the other hand, telling the whole internet before they've addressed the items he told them about, and potentially opening them up to more scrutiny (likely less white-hat), is not an especially helpful thing to do.


Coursera is a large, high-profile site. I am quite certain it did not take a law professor's blog post to bring black-hat scrutiny to it or any other site, server, or technology deployed on the public Internet.


It may or may not have. These holes have presumably been present since the launch of these APIs. However, the author has now made public specific vectors of attack. You may be right that hackers were already aware of them. In either case, does making these publicly known benefit Coursera, or its users, in any way? I can't think of how it could possibly help, but I can certainly see how it might hurt -- anyone who comes across that page now might feel the urge to further 'explore' these findings.


It's telling that your first concern is for Coursera (which deserves no concern at all) and only then its users.

There are definite benefits for Coursera's existing users -- at the very least, they now know it is vulnerable to cross-site attack and can be sure to log out before visiting other sites.

Another set of people clearly benefiting are those I've already alluded to, who now know not to sign up for Coursera.


I'm not really sure how you infer the first line there. I'm in no way defending Coursera -- clearly they need to run their services through some better security checks.

I still generally disagree with your second point -- informing users of a security breach/flaw could (and should, even now after this article was published) be done by Coursera. In this situation, they should be the ones who come forward to their users and A. describe what the issues at hand are and B. describe how to avoid falling victim to them. The author of this article doesn't provide any suggestions for the non-tech-savvy.

Regarding users not signing up, perhaps you're right. It does prevent them from potentially losing their private information. In all likelihood, though, users who don't sign up after reading this article will never sign up. Yes, I realize this primarily hurts Coursera, so in this case, my concern is for them. It also means that potential users miss out on whatever they might gain from the site. A better option that Coursera itself might offer is a temporary "hey, we aren't accepting new users right now -- check back in a week" or something.

And again, I do not believe Coursera should just be forgiven for something like this. As I mentioned, I've never been on the site, probably never would have signed up, and am now even less likely to do so, as I have no faith in them. I still don't believe that publishing open security holes is the right solution, unless they specifically said something along the lines of "yeah, we're not gonna fix that."


Your exact words were: "does making these publicly known benefit Coursera".

Whether it benefits Coursera isn't just the last question anyone should ask, it should never be asked. It's nobody's responsibility to provide any benefit to Coursera.

Your argument that "[Coursera] should be the ones who come forward" seems out of place, save as another attempt to deflect attention from Coursera's failings. That someone has a duty to act does not generally preclude others from acting.

"The author of this article doesn't provide any suggestions for the non-tech-savvy." implies that we should be less concerned for tech-savvy users. Why would that be, exactly? Do they have less to lose, or is this another attempt to deflect responsibility?

Your claims to not be defending Coursera sit uneasily with clear attempts to deflect responsibility from them.


If you pick just those words, then yes, I'm only considering Coursera. If you take the comma into consideration, sure, you could even claim that I'm taking their best interest before their customers. It's a stretch, but if we want to pick apart words, sure.

Why would you never ask if it benefits Coursera? There are people who have an interest in them not failing, whether it be people making money, or people using the service. It seems you're only considering the people with money at stake. What about the people who rely on Coursera for the educational content?

I don't understand how Coursera being the one to come forward in any way deflects attention from their failings. It would literally be them bringing attention to the issue. Maybe I'm misunderstanding you here, but to me the company coming forward shows not only responsibility but maybe even some humility -- acknowledging their failure rather than trying to cover it up. And maybe you'll find that a bit of a stretch, but I'll give some credit for at least attempting it.

That said, the message they did publish (see the current top comment on the post) doesn't really seem to do a great job of stating what actually happened, or what they're doing now, so it's not the best I've ever seen. It's also going to be 'too little, too late' in many people's eyes, as they now look like they're trying to backtrack to cover themselves after being exposed by the author of the article.

The "precluding others from acting" thing is what I've been trying to say this whole time. No, his findings, and being told that they are in the process of fixing it does NOT prevent him from publishing his findings (clearly, as he did). However, they made clear they were working on it (or at least enough so that he acknowledged they were), and it seems to me that he has now kind of cut their legs out from under them, exposing their failings while they were working on them and before they made the announcement themselves. It just seems tactless to me, regardless of who it benefits. I don't mean this as "oh, poor Coursera, they've been made fool of on the internet" or anything like that, it just kind of rubs me the wrong way. Take that for what you will, clearly you don't feel the same way.

Regarding the less tech savvy, you've got it backwards. I in no way mean that we should be less concerned for less savvy users -- just the opposite. If we're going to expose a vulnerability that affects them, we should go out of our way to defend them (or, the responsible party should). The author of this article does not do this, and consequently leaves them exposed without any opportunity for remediation.

I really think you're misunderstanding me. I think Coursera should have all the blame, and I think they should be the ones responsible for coming forward with their problems, what they're doing about it, and what their users can do about it. If they had failed to do so, then yes, absolutely someone should come forward and warn the public. That wasn't the situation here -- when contacted, they immediately began working to fix the problems.

The only thing they didn't do was immediately announce to everyone that there may be security flaws. Should they have? Perhaps, but then at that point, they're making themselves a target until they complete their fixes. Announcing the problem after the fix seems a pretty standard procedure to me. So again, to make clear and alleviate any remaining doubts you have, yes, Coursera screwed up. I think everyone knows this by now. They've also gone through the steps they needed to take to fix the problems now (as far as we know). I still don't think it was responsible for the author of the article to release this before they had completed those fixes, though.


It's amusing that you read my statements exactly backwards, but think I'm the one misunderstanding you.

You're reciting all the standard arguments in favor of "responsible disclosure". You're literally saying nothing new. I've heard it all a thousand times. It's crap.

The longer vulnerabilities are hidden, the longer users are left at the mercy of black hats, unable to protect themselves, and the less incentive there is for developers to act.

You see it even here, where the developer "acted", but only after being exposed. You even acknowledge it, but fail to reach the logical conclusion.

This scenario has played out over and over again throughout history. Corporations will never act in the best interests of anyone but themselves. The people holding them to account are not the villains.


I love how the security researcher, and not the website with the sloppy code, is at fault according to you. You live in a strange world, my friend.

Responsible disclosure doesn't really exist, bugs may be used anywhere, any time, and it's perilous to assume that there is a window of "safety" for fixing security bugs.


I love how all blame can be assigned to a single entity, and it's not possible for multiple entities to act irresponsibly. You live in a strange world, my friend.

Edit (since you added more):

> Responsible disclosure doesn't really exist, bugs may be used anywhere, any time, and it's perilous to assume that there is a window of "safety" for fixing security bugs.

It is true that you can't assume that there is a window of "safety" for fixing security bugs, but on the other hand, once an exploit is published widely, you know for certain that it's in the hands of everyone. Prior to that you can only speculate.

Now, I know that there is a dance between "give them time to patch it before you guarantee that everyone has their hands on the exploit" and "giving users full disclosure so that they can take their own steps to protect themselves." I don't really see how that works here though. Thinking about it:

1. Current users are just screwed. They can't protect themselves in any meaningful way. Their information is already in the system. [Note: they are a little less screwed without disclosure because there is at least a possibility that no one else has found the exploit yet]

2. New users know to wait to get on the site until after fixes are announced.

That's about it. This isn't some exploit in (e.g.) GnuPG where notifying users potentially prevents them from sending encrypted messages that (e.g.) the NSA could be reading.

Edit 2:

I missed the CSRF attack. In this case, it makes sense to notify users so that they can protect themselves. But users don't need to know the details of the attack to protect themselves; they just need to know that they shouldn't visit other sites while logged into Coursera. A blog post saying "details to follow..." could be followed by the full write-up after waiting a reasonable amount of time for a fix.


I never assigned blame.


> is at fault

According to Wikipedia:

  Blame is the act of censuring, holding responsible,
  making negative statements about an individual or
  group that their action or actions are socially 
  or morally irresponsible, the opposite of praise.

I would say that claiming (via sarcasm) that the company that did the "sloppy coding" is "at fault" qualifies as blaming them. You're de facto blaming them by deflecting "fault" away from the security researcher and onto them.


You assigned blame to the website.



