CSRF in Doorkeeper OAuth2 gem (homakov.blogspot.com)
249 points by homakov on Dec 17, 2014 | 79 comments



Is there a browser extension (or browser) that could either open each new tab "clean" (no cookies or anything) or, even better, lock it to the first domain visited?

Example:

I open a new tab with example.com, and with this a new session would be created which only contains data relevant to example.com. Anything I open later in this tab won't see anything else. I can always open a link in a new tab to start a "relevant" session (though that would require a browser that can work with tabs, i.e. not Chrome).


Most browsers have a setting to disable 'third party cookies' - e.g. if you're on www.evil.com and that page sends a request to www.example.com, the connection to example.com won't send cookies or store received ones.

You can set this up easily in Firefox [1] and Chrome [2]. It will break a handful of things; for example, some sites embed comments from Facebook, Disqus, Google+, etc. So when you visit www.youtube.com you won't be able to comment, as comments are in an iframe loaded from plus.google.com, which you can't log into without third-party cookies enabled.

IMHO this is no great loss, and I block third party cookies all the time.

Of course, it's still possible to do certain attacks by redirecting the entire browser window or opening a popup window.

[1] https://support.mozilla.org/en-US/kb/disable-third-party-coo... [2] https://support.google.com/chrome/answer/95647?hl=en-GB


What will I do without my YouTube comments?? /s

In all seriousness I blocked Third Party Cookies in Chrome and never looked back. Nothing of value was lost.


I did this after I learned that Safari on iOS does this by default. If I didn't run into issues in all of my phone browser usage, I don't see why I would on the desktop.


Developers often make special exceptions for iPhones that they won't make for, e.g., Linux users.


How so? You can't really get around not using third-party cookies by doing something different, and if you could, are you really going to detect the device type rather than whether the UA accepted third-party cookies? I guess anything is possible. All I know is that turning off third-party cookies has not affected me in any way over the past three months. Then again, I block advertisers at DNS and via ABP, and they are the main users of third-party cookies, so I guess you could say the ads didn't properly work for me for a while.


Yup, I made a browser that did that and used it for a while, and even went one better: rather than per tab, it was flexible, and by default it was locked to the domain you were on. I created an experimental browser with as much new stuff as I could, but sadly users don't care that much. I mean, just look at Chrome and how horrible its history control is for users. Even really basic stuff like deleting every history entry from site X, or from the last N minutes, is missing. Where is the outrage?

If anyone is interested, here is a much more fleshed out description of what a next generation browser could have: http://benjamin-meyer.blogspot.com/2009/08/next-generation-d... If a company out there is looking to see some browser R&D work done, send me an email.


I just skimmed through it quickly, but have it open in a tab to read later :P

For bookmarks/history and window/tab management, I recommend checking out old Opera (<=12). I'd argue it's still the best browser when it comes to UI/UX; unfortunately it's dead[1] and not really usable nowadays. For a no-URL experience, see Yandex Alpha[2].

1. there's an open source project to get that feel back http://otter-browser.org/ (not quite possible without touching Blink)

2. http://browser.yandex.com/future/


Yeah, I have played around with those, and I have either written myself or been part of a team that wrote at least a half dozen browsers that were released, plus dozens of little R&D browsers. I have also seen many, many browsers that offer very little beyond Chrome/Firefox come and go. Yes, they might have a very good idea, but it is not enough to get someone to change browsers. A tweak climbs higher up the local hill but doesn't fundamentally change the game. A few of the big points in the article I wrote have to do with the idea that private browsing as known today is flat out wrong and not what users want, having a split view, and robust local access control. Living with a hacked-up browser for a month that did these really showed that they were a big deal. But alas, after an acquisition my new (former) employer forbade me from working on it, so it had to be shelved. But even today I don't know if what I wrote about goes far enough. For most users browsers are like internet speed. The important thing isn't what speed you connect at, but that you are connected, period.

Unfortunately (or fortunately, depending on how you look at it) it isn't 2004, when almost no browser development was occurring and something "new" like Firefox could come to light and grow quickly with very few features against almost no competition. These days you have many large companies funding development teams to keep what we have today working and make it marginally better each day. To make a new browser you can't just take Chrome and tack on a tiny tweak; you need step changes. That is what my 2009 article was about, and with a few more years of thought and experience, that is what I have been messing around with lately: making a browser where users abandon Chrome/Firefox/IE not because it is a tiny bit better, but because it is so much better that it isn't about switching, it's about moving to a different product.


Chrome supports multiple users pretty well, but it's per-window not per-tab. (I think that might be better anyway, it seems like it would be easy to lose track of which tab is logged into what)


Do you just mean the "incognito" feature, or is there another way that Chrome deals with "multiple users" (multiple sets of cookies?), because if there is, I'm interested!


Chrome added the concept of different user profiles recently; there's more info here: https://support.google.com/chrome/answer/2364824?hl=en-GB


Indeed, and once you enable it, it's very easy to switch between profiles. It's one feature I miss since switching back to Firefox.


> It's one feature I miss since switching back to Firefox.

You can use multiple profiles in Firefox. There is even an addon to make the switch easy: https://addons.mozilla.org/en-us/firefox/addon/profileswitch....


Do you still need to restart Firefox to switch users (or switch across all open windows at once), instead of being able to simultaneously have different users in different windows, like the new Chrome feature seems to do?

It's the simultaneous different 'users' (i.e., cookie stores, mostly) in different windows that is exciting to me.


Firefox allows multiple profiles to be in use at once (in different windows). You need to pass the -no-remote flag when you start up the other profiles, or else the new Firefox process just signals the original to open a new window and then closes itself.
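For example, assuming a second profile named "work" has already been created through the profile manager, something like this opens it alongside your main session:

    firefox -no-remote -P work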


I'm using MultiFox now and it works, but it's not nearly as seamless. I'll give the addon a try.


Awesome, this is something I've wanted for a while and didn't realize Chrome had finally added it!

Does it let you have different users in different windows simultaneously, or does it make you switch the entire environment all at once?

Thanks!


I just so happened to see such a Firefox extension[1] announced[2] on planet.mozilla.org the other day, though it might be a bit more manual than you are envisioning, and I haven't personally tried it out yet.

[1] https://addons.mozilla.org/en-US/firefox/addon/priv8/

[2] http://143th.net/post/105342885096


There's a Firefox extension called Priv8 [1] which is meant to do that.

However, given that FF doesn't have sandboxes and SOP is (AFAIK) the main isolation system, I wonder whether it's possible for one origin to somehow sniff the presence of different sessions in the same client via, say, some JS info leak.

[1] http://143th.net/post/105342885096/priv8-is-out


There is a suggestion for this on the Firefox Developer Tools UserVoice:

https://ffdevtools.uservoice.com/forums/246087-firefox-devel...


Doesn't "private navigation" work for this?


You can disable third-party cookies in most browsers.


That is a great idea. Why hasn't any browser done this?


Hey there! You may upgrade to 1.4.1 or 2.0.0 to avoid this. Thanks for posting.


This is why it's increasingly important to not only test your code, but also develop useful vulnerability scanners. I know, it sounds impossible to write a complete vulnerability scanner, but many companies would pay for such a thing.

Existing solutions are either lacking or way too difficult to operate.


That's true; checking all `rake routes` for proper CSRF handling is a trivial job, but it would help to spot this bug in 5 seconds.
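A rough sketch of what such a check might look like, run inside a Rails app (e.g. via `rails runner`); it assumes the standard `protect_from_forgery` callback and is illustrative only:

    # Flag controllers whose action callbacks don't include CSRF verification.
    Rails.application.eager_load!
    ApplicationController.descendants.each do |controller|
      filters = controller._process_action_callbacks.map(&:filter)
      unless filters.include?(:verify_authenticity_token)
        puts "#{controller.name} may be missing CSRF protection"
      end
    end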


Any sufficiently advanced vulnerability scanner contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of a type system.

Seriously, the solution isn't better scanners or better tests, it's better languages. Make whether something is taking a potentially-destructive action part of its type, and then the compiler ensures that all such routes are appropriately protected; you can do this today in e.g. Spray (which I'm using in production, so this is not some ivory-tower theoretical solution).


This suggestion eliminates bugs that are introduced by unsafe typing, sure. But race conditions, logic flaws, misconfigured tools and almost every class of web vulnerability wouldn't be eliminated just by implementing this.

Most vulnerabilities do not derive from a single faulty tool or framework, but from the incorrect combination of several (faulty or not!) tools over a sufficiently large attack surface.


> This suggestion eliminates bugs that are introduced by unsafe typing, sure. But race conditions, logic flaws, misconfigured tools and almost every class of web vulnerability wouldn't be eliminated just by implementing this.

How often are security flaws a subtle, complex thing that couldn't have been caught by a sensible type system? Almost all of the flaws we hear about, at least here, are the really simple dumb mistakes that a better language absolutely would have caught.


Do you mean a language with a very strong type system like Haskell, where you could create a web framework in which things like user input have different types associated with them? Those can offer some real advantages by never mixing untrusted input with page output and various calls to other systems.

But you don't get a lot of web application security advantages when you compare, say, Python and Java.

Of course, a very weak type system like PHP's can definitely introduce additional security flaws.


A common pattern (at least it's getting there) in Scala/F#/Haskell is to have paired types, UserInput and ValidatedUserInput. All the business logic requires the Validated kind. A programmer can certainly be lazy and just return the supplied value, but this is a conscious choice to do the wrong thing.

You can't accidentally forget to validate form parameters; it won't compile. This is pretty easy to do in Java. On the weaker side, I would assume PHP and Python have something like Perl's taint mode. It's such a simple thing and it avoids so many XSS problems.
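To make the pattern concrete in the language this thread is about (with the caveat that Ruby can only enforce it at runtime, whereas Scala/Haskell reject the code at compile time), a hypothetical sketch:

    # UserInput wraps raw, untrusted data; business logic only accepts
    # ValidatedUserInput. All names here are illustrative.
    UserInput = Struct.new(:raw)

    class ValidatedUserInput
      attr_reader :value
      def initialize(input)
        @value = input.raw.strip # real escaping/validation goes here
      end
    end

    def update_profile(name)
      raise TypeError, 'validate input first' unless name.is_a?(ValidatedUserInput)
      # ... safe to use name.value from here on
    end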


I'm mostly comparing with Scala because that's what I use most of the time. But even in Java it's very possible to get the same safety advantages - it's just that working with that level of type detail gets extremely cumbersome, so for most projects it's not worth it.


Agreed. I try to explain sometimes that I'm merely tired of experiencing the same problem, over and over, for years.

I would like to experience the next problem, please. If only for variety's sake.


Types are not that useful for security - it's as easy to create a MutatingController in Ruby as it is to concatenate some strings containing SQL and a string containing raw input in Haskell.

Better typing can help, and Ruby programs are more likely to be "stringly typed" than Haskell programs in practice - but the difference is a lot more subtle than one might expect.


The issue is that Ruby can't give you the tools you need to catch such things before runtime. Haskell can.


Assuming the application code is completely secure, then that's a big part of the problem solved. But there's also the application server and any other services running on the machine hosting it.


So what, we give up entirely?

Most times I'm aware of a site being hacked it's been through a hole in their application code, not the stack they're running on top of.


I was not suggesting we give up entirely. I'm just pointing out that it isn't a complete solution of the problem. But I agree that application code is probably the bigger problem to solve.


There is a fine line between automating fully-automatable penetration testing tasks and automated scanning.

Unfortunately, there is a very idealistic drive to feature-creep penetration testing tools away from "useful when used by a professional" to "half as useful when used by anybody and full of false positives."

The real solution is not more or better security software, but rather more secure coding practices. People generally don't accept this idea, but the state of software security is such a moving target that no amount of automation short of strong AI will find every bug. The onus is on developer education.


> "useful when used by a professional" to "half as useful when used by anybody and full of false positives."

Where I work we have a desire for both.

We want developers to be able to scan their own code -- preferably automatically as part of a CI process -- for things that are clearly wrong, without needing to be appsec experts, and clear out a lot of the low-level brush.

We also want complex tools that are used by appsec to be able to do their jobs faster.

This is timely, since I'm about to search for tools for this. I'd love, for example, a way to see the permissions for all lines returned by "rake routes": whether they are resolved by CanCan, where CSRF is disabled, or whether authorization has been overruled by some skip_authorization_check.


I think it is impossible to write a complete vulnerability scanner, if by complete you mean that your application is guaranteed "secure" after running it. Security has always been a moving target. However, I've heard great things about the job that Acunetix[0] does.

[0]


> Existing solutions are either lacking or way too difficult to operate

Give Netsparker[1] a go... couldn't get much easier than this!

[1] https://www.netsparker.com/



Have you looked at IBM AppScan? It does a pretty good job.


Can someone provide a better "forwardable" explanation of CSRF than the wikipedia version of "let me f-ing google that for you"?

I already unsuccessfully searched for an xkcd treatment.

Something I can forward to non-devs, not written at dev level. Also no videos (who has time for those?). It sounds like a graphic-arts type project, like an xkcd topic, so I guess something like that is what I'm looking for.

The wiki explanation is pretty good, just curious if anyone has anything better for non-technical-ish people.


"It's expected that I can send you a link to facebook.com. But it's not okay if I can send you a link that immediately deletes your account."


Isn't this easily mitigated by making all requests that change state (update data in any way) POST requests?


No. Check out the example in the article, an attacker can make your browser submit a form with a POST request using JavaScript.

It's slightly harder to exploit, as the attacker can't just send you a link to facebook.com, but they can send you a link to example.com which has the form and uses JavaScript to submit the form.


The way CSRF works is that I put a form on evil.com that submits to example.com. If there's no CSRF protection, example.com will accept that form submission as if it had actually come from a page on example.com. GET/POST has nothing to do with it.
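For Rails apps, the standard mitigation is a per-session authenticity token verified on every state-changing request. A minimal sketch of enabling it:

    class ApplicationController < ActionController::Base
      # Raise on any non-GET request whose authenticity token doesn't
      # match the one Rails embedded in the rendered form.
      protect_from_forgery with: :exception
    end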


    <form method="POST" action="FACEBOOK">
      <input type="hidden" name="DELETEMYACCOUNT" value="1">
    </form>
    <script>document.forms[0].submit();</script>

(The above code may contain bugs.)


While I believe that allowing CSRF is terrible practice, as a user of Doorkeeper, I think the problem here is DigitalOcean's atypical usage of OAuth2. When you request an access token for a resource owner in OAuth2, you are supposed to _actually authenticate_ the owner. According to the OAuth2 spec[1], username and password are REQUIRED fields. Allowing clients to generate tokens based off of cookies is reckless.

Useful CSRF exploits depend on the server trusting session data to authenticate client actions. OAuth2 is designed to allow external (third-party) applications to communicate with you. Cross-site requests are an expectation in OAuth2. If you ignore the spec and skip proper authentication, you're in a bad spot anyway.

[1] https://tools.ietf.org/html/rfc6749 (page 37 & 38)
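For reference, the grant the parent describes looks roughly like this (a sketch using Ruby's Net::HTTP; the endpoint, credentials, and client registration are all hypothetical):

    require 'net/http'
    require 'uri'

    # RFC 6749 section 4.3: exchange the resource owner's username and
    # password directly for an access token.
    res = Net::HTTP.post_form(
      URI('https://provider.example.com/oauth/token'),
      'grant_type'    => 'password',
      'username'      => 'user@example.com',
      'password'      => 's3cret',
      'client_id'     => 'abc123',
      'client_secret' => 'shhh'
    )
    puts res.body # JSON containing access_token on success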


    > According to the OAuth2 spec[1], username and password are REQUIRED
    > fields.  Allowing clients to generate tokens based off of cookies is
    > reckless.
It's possible I'm not understanding you correctly, but the section of the spec to which you linked is describing the "Resource Owner Password Credentials Grant", just one _possible_ flow for requesting an access token. In that same section the spec reads:

    > The authorization server should take special care when enabling this
    > grant type and only allow it when other flows are not viable.
(read that section here: https://tools.ietf.org/html/rfc6749#section-4.3)

If you read the authorization grant overview section (https://tools.ietf.org/html/rfc6749#section-1.3), you'll see that the spec also defines an "Authorization Code" flow (https://tools.ietf.org/html/rfc6749#section-1.3.1) – this is what most sites implement.

Also worth reading is the section of the spec dedicated to security considerations (https://tools.ietf.org/html/rfc6749#section-10). There is an entire subsection regarding the password authentication flow you're referencing. Choice excerpts:

    > This grant type carries a higher risk than other grant types because it
    > maintains the password anti-pattern this protocol seeks to avoid.  The
    > client could abuse the password, or the password could unintentionally be
    > disclosed to an attacker (e.g., via log files or other records kept by
    > the client).

    > The authorization server and client SHOULD minimize use of this grant
    > type and utilize other grant types whenever possible.


I can't name a single OAuth2-enabled website that does this. Facebook, Twitter, Dropbox, Google, LinkedIn, ...

They all let you approve if you are already logged in.


Usually those implementations redirect the user to a separate authentication system. OAuth2 only handles authorization and not authentication. Upon successful authentication, the user gets redirected back to the OAuth2 request which then generates the authorization code.

When the user is already logged in via a cookie set by the authentication system (i.e. an existing valid session), they don't get prompted for a password again; the authentication system will simply redirect to the OAuth2 request url. The typical OAuth2 implementations shouldn't be reading the authentication cookies directly.

The "password flow" in OAuth2 is really a special case for those who want to bypass the separate authentication system and use OAuth2 directly for both authentication and authorization.


GitHub and LinkedIn do this, I believe.


I can say that recently LinkedIn has asked me to reauthenticate on multiple occasions in the same session. I've had the same with Google, but I have not tried recently. I'm aware that Twitter and Facebook allow you to do so, but I suspect that none of the above grant scopes without authentication that would allow you to perform actions that charge an account.

That said, I agree that some of the giants are fine with using cookies for auth in OAuth2. And while that indicates that this is a possible use case, OAuth2 is capable of being used in many ways, and DigitalOcean's usage still doesn't make much sense.


Am I missing something? While there does not seem to be a reason to allow CSRF here, doesn't the fact that you need a client ID and client secret for the OAuth endpoint make this a non-issue?

https://github.com/doorkeeper-gem/doorkeeper/wiki/authorizat...

I don't see a way to get the access token without having the client ID and client secret.


You just need to create a client; then you will have an ID+secret. Most providers allow anyone to get them.


So this only applies to those that use Doorkeeper to act as OAuth providers, not to those that allow logins through other OAuth providers, right?


Sure, Doorkeeper is an OAuth provider gem, not a client.


As long as we keep treating security as a purely technical problem, the state of software isn't going to improve significantly. Responsibility tends to catch up with things. It's mostly a question of whether it's going to benefit software or not.


I disagree. Preventing vulnerabilities is not primarily an "attitude problem." Writing code is hard enough as it is, and writing flawless code that can withstand hostile attacks is really hard. Odds favor vulnerability existence, and when bad ones are inevitably discovered in production systems people always claim the whole process is broken.

That being said, this one is pretty bad.


The demand for "safe"[0] software is going to reach critical mass sooner or later. The question is whether it's going to be solved by good tools, processes, and education, or by app stores, insurance ratings, and regulation. Software isn't much different from other types of infrastructure.


well, except for the rate of change


This is pretty bad, but I believe it is only possible to get an access token if you also allow malicious/non-trusted users to either create new OAuth client registrations (`Doorkeeper::Application` models) or modify the redirect URI of an existing OAuth client registration. The reason is that the access token is delivered over a redirect, not in the response to the form POST. Doorkeeper does check for a valid redirect URI.


Even if it is critical for some, to me this feels exactly like how open source works: someone uses an application in a way that was not intended or thought about, and it turns out there is a problem with that approach. Someone will blog about it, write a bug ticket, or send a pull request. And the problem is gone and the application got better. That's the applied wisdom of the crowd.


[deleted]



Oops, I deleted the comment. I had filed a bug[1], without realizing 2.0.0 was released a few hours ago with the fixes in it.

[1] https://github.com/doorkeeper-gem/doorkeeper/issues/524


Right. When Sergey Belove originally reported this to DigitalOcean, one of our engineers reported and fixed this in upstream Doorkeeper.


Another display of the blatant amateurism of the Ruby community. I can't even count the number of pull requests I've had to make on gems considered cannon since I started coding in that language.


Stop being a dick. Security problems exist in any language, and I've seen no evidence that they're any more prevalent in Ruby projects than elsewhere.


Oh boohoo. Security is difficult, not just in Ruby.


I think you probably mean canon.


No, cannon. Like, really solid and made out of explosion-resistant iron.


Ah, sorry, my mistake.


Pretty sure he meant "canon" in that context.

1) A general law, rule, principle, or criterion by which something is judged 2) A collection or list of sacred books accepted as genuine

(http://www.oxforddictionaries.com/definition/english/canon)


whooosh


"Nothing flies over my head. My reflexes are too fast. I would catch them".



