OAuth 2.0 and the Road to Hell (hueniverse.com)
305 points by dho on July 26, 2012 | 71 comments



It saddens me to see OAuth 2.0 in this state. As someone who's made really minor contributions to 2.0 (and am thus listed as a contributor in the spec), I have been really looking forward to it being finished and ready for production use (where production use means no more drafts). I stopped following the mailing list last year because most of the threads seemed either all too familiar or outside my realm of knowledge (read: enterprisey).

I run a 1.0a service provider and write clients against it. I'm thinking about wading through the current 2.0 draft, picking out the parts relevant to small startups with an API, and publishing a post about how to implement the sane parts of the 2.0 spec.


Yes, that would be great, as I'm not sure there are any good docs on writing an OAuth provider.


Please do so! I'll help with what I can and the little I know.


FWIW, just implementing the server-side version of FB's or Foursquare's implementations will basically get you to the same place.


Please write this up. I gave up trying to figure it out myself.


I'm in as well.


count me in


I am in.


Having written client code for multiple OAuth2 implementations, I can tell you: it's a total clusterf$%k, and for exactly the reasons Eran outlines: the OAuth spec is a giant ball of design-by-committee compromise and feels exactly like the disaster that is XML web services and its technologies.

We would be far better off if a single company/dictator (like, shudder, Facebook) came up with a simple, competently designed, one-page authentication mechanism, provided some libraries in the popular languages, and we all just went with that.


The earlier drafts were much more like that. They were largely a collaboration between a few web companies who had deployed OAuth 1.0 and Dick Hardt, who had written WRAP at Microsoft. One of the major design goals was producing a protocol simple enough that client developers would not have to use libraries.

I was pretty happy with this result, since we could write a simple page like https://developers.facebook.com/docs/authentication/server-s... which conformed to the spec (http://tools.ietf.org/html/draft-ietf-oauth-v2-12#section-4....) and was an easy-to-implement explanation of authenticating a user.

But the OAuth 2.0 spec we were working from is now eighteen months old, and as Eran said, the vast majority of those contributors have drifted away from the effort over the past year :-\
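For anyone who wants the punchline: the whole server-side flow that page described fits in a screenful. A rough sketch in Python (app credentials and redirect URI are placeholders; the urlencoded token response is how Facebook returned it at the time):

    # Rough sketch of the draft-12 server-side (authorization code) flow
    # as the Facebook page described it. Credentials are placeholders.
    import requests
    from urllib.parse import urlencode, parse_qs

    CLIENT_ID = "YOUR_APP_ID"
    CLIENT_SECRET = "YOUR_APP_SECRET"
    REDIRECT_URI = "https://example.com/oauth/callback"  # hypothetical

    # 1. Send the user's browser to the authorization dialog.
    auth_url = "https://www.facebook.com/dialog/oauth?" + urlencode(
        {"client_id": CLIENT_ID, "redirect_uri": REDIRECT_URI})

    # 2. The provider redirects back with ?code=...; exchange it server-side.
    def exchange_code(code):
        resp = requests.get("https://graph.facebook.com/oauth/access_token",
                            params={"client_id": CLIENT_ID,
                                    "client_secret": CLIENT_SECRET,
                                    "redirect_uri": REDIRECT_URI,
                                    "code": code})
        # At the time, the token came back urlencoded rather than as JSON.
        return parse_qs(resp.text)["access_token"][0]

    # 3. Use the token on API calls:
    #    requests.get("https://graph.facebook.com/me",
    #                 params={"access_token": token})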


I think the obvious next step was / is to turn OAuth 1.0A into OAuth 1.1 by mandating TLS/SSL and declaring the SSL-mimicking parts, like the timestamp and nonce, optional (i.e. ignored). Anyone can just do it by fiat, since it will be backwards compatible with OAuth 1.0A clients. They'll still send the proper timestamps and nonces, but you just ignore those fields.

I found those fields were 90% of the problem with OAuth 1.0A implementations. Maybe there's security value in those parts in an SSL environment that I'm missing, but I doubt it, since SSL does the exact same thing.
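For reference, here's roughly what an OAuth 1.0a Authorization header looks like (values adapted from the spec's examples). The proposal amounts to keeping the structure and ignoring the two marked fields once TLS is mandatory:

    Authorization: OAuth oauth_consumer_key="dpf43f3p2l4k3l03",
        oauth_token="nnch734d00sl2jdk",
        oauth_signature_method="HMAC-SHA1",
        oauth_timestamp="1343317600",    <-- ignored under "OAuth 1.1"
        oauth_nonce="kllo9940pd9333jh",  <-- ignored under "OAuth 1.1"
        oauth_version="1.0",
        oauth_signature="tR3%2BTy81lMeYAr%2FFid0kMTYa%2FWM%3D"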


I've worked on standards committees off and on for many years, and his experience seems typical of the issues that crop up. There are many problems with standards groups and the way that they work.

One major one is that, often, the participants come from different areas with different perspectives and visions of the outcome. Since participants rarely go to work with a firm set of agreed upon requirements or use cases, it leaves each member room to craft their own understandings of the goals. I've seen way too many working groups attempt to create a 'master' spec that takes into account all possibilities. Or, alternatively, clusters of people form from similar problem domains, and powerful clusters can take the work off course.

A second major problem is that there is often a lack of real user participants. Standards work is about as dull as engineering work gets. Worse, it seems to attract certain types of engineers who love building specifications. Because of this, the real users usually flee immediately, which leaves a body of over-educated, overly technical people to argue at length over sometimes irrelevant details. Those people are definitely necessary for the standard to work in the end. But, because the real users flee, their influence is usually unchecked.

A third reason is that working groups rarely seem to use iterative and incremental life cycles. There's rarely any working code, often little V&V, and participants and users often can't experiment with their ideas. As we know, what's good in theory, sometimes doesn't work well in practice.

I think there are systematic reasons much standards work fails. The 'design-by-committee' outcomes arise from 1) lack of firm use cases to bound the work, 2) dissimilarity between participants, 3) lack of real user participants, 4) lack of iterative / incremental cycles.


"There's rarely any working code,..."

Once upon a time, the IETF claimed to be interested in "rough consensus and running code". "Rough consensus" went out the window a long time ago; I suppose running code had to follow sometime.


If I were ever involved in a standards process, my gut feeling is that I'd want working code for all serious proposals. Does that seem feasible?

Perhaps the WG would produce a set of use-case prototypes on which proposals are built.


Or at least require each and every voting member to implement the standard in a freely chosen programming language. On second thought... that might not work so well for large standards like HTML5, where you'd be writing your own parser, renderer and layout engine :)


Or maybe only browser makers should be deciding what HTML is?



This is simply brilliant.

It's fascinating.

I sometimes think that the people who push for the "second system" really have no idea what problems they are bringing about. That is, it is innocent.

Hate me for saying so, but there are just a lot of people working in software who lack a sense of wisdom. Folks like Fred Brooks who can see the madness are few and far between. Even rarer are those who both see the stupidity and take action to stop it.


I think there's a "human factors" problem in action with second systems. The first system needs to be allowed to age before the second one becomes worthwhile - simply because more understanding will exist of both the problem domain, and how it was solved before.

And if it's different people doing the second one, watch out.


The first generation of a technology is fueled by necessity. It's often a tiny project with only a few guys, and they portray it as "hey, I made this thing, if you wanna use it too, cool"

The second generation is often fueled by things which were lacking or missing in the first version. At this point, the first generation is widely adopted, and now everybody and their brother wants their say in it.

Design by committee is bad. Design by brainstorm is good. Formality breaks everything.


Don't miss this cool insider look at Netscape's second-system failure from jwz and Brendan Eich: http://gigamonkeys.wordpress.com/2009/09/28/a-tale-of-two-re...


Hang on a minute: from where I'm standing as a client developer, OAuth 2 is much better than OAuth 1.

Firstly, reducing the burden of tricky and unnecessary crypto code on the client is useful.

Secondly, some of the article's points don't even make sense, like saying tokens are necessarily unbounded, which isn't true. The issuer can easily include the client_id in the token and check for its revocation when used, as it did in OAuth 1. The same is true for self-encoding: clients don't have to issue self-encoded tokens and can instead issue unique id-style tokens with long expiry times. As for refresh, that's unfortunate but issuers could easily work around it if the OAuth 1 way was preferable.
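The issuer-side bookkeeping really is only a few lines. A sketch, with an in-memory dict standing in for a real database:

    # Sketch: opaque tokens bound to a client_id, with revocation,
    # instead of self-encoded tokens. Storage is a stand-in.
    import os, time

    TOKENS = {}  # token -> record; use a real database in practice

    def issue_token(client_id, user_id, ttl=3600):
        token = os.urandom(20).hex()  # opaque, unguessable
        TOKENS[token] = {"client_id": client_id, "user_id": user_id,
                         "expires": time.time() + ttl, "revoked": False}
        return token

    def check_token(token):
        rec = TOKENS.get(token)
        if rec is None or rec["revoked"] or rec["expires"] < time.time():
            return None  # unknown, revoked, or expired
        return rec

    def revoke_client(client_id):
        # Revoking a client kills all of its tokens, OAuth 1 style.
        for rec in TOKENS.values():
            if rec["client_id"] == client_id:
                rec["revoked"] = True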

In short, OAuth 2 is simpler to implement for the client in exchange for being slightly harder on the issuer, whilst also being more flexible. Yes, it relies on SSL for security. So does your bank.


Tricky and unnecessary crypto code: you mean HMAC, or something else? I've written code for Amazon EC2 that used HMAC and it wasn't too bad, and I'm now trying to evaluate whether to use OAuth 2, OAuth 1 or something else. Is there other cryptographic coding in OAuth 1 apart from the HMAC signature?


I could have been clearer. It's not the crypto primitives that are tricky; as people say, HMAC is a library. It's the need to assemble an irritating number of parameters and build a signature over the correct ones for each call, and provide nonces, and I don't know what else, but it depends on me not to screw it up, and it's too big for me to be confident of that, and I'm not sure I trust library authors. Granted, it's not rocket surgery and there are libraries, but OAuth 2 for a client is literally 25 lines of code – look at https://developers.facebook.com/docs/authentication/server-s... – so I don't care if someone can implement HMAC in 12 lines of code.

My point was that OAuth 2 improved in a number of ways for clients and is at least as flexible for the issuer as OAuth 1. I think the author is just disturbed by the reliance on SSL for security, and by the crappy, slow standardisation process, and ended up going overboard.


The biggest problem I had with OAuth 1 was that if you messed up anything chances are all you got back from the server was "Signature Invalid".

You're then stuck trying to find out where you went wrong, with no guidance. The last time it was due to an incorrect content type on the POST; another time a coworker accidentally had the key and secret the wrong way around.

Both scenarios produce the same error, and you're often stuck groping around for a solution.
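The root cause is that the server rebuilds the signature base string, recomputes the HMAC, and compares only the final digest, so every mistake collapses into the same failure. A simplified sketch of the RFC 5849 signing (real implementations must also handle duplicate parameters and the strict percent-encoding rules):

    import base64, hashlib, hmac
    from urllib.parse import quote

    def base_string(method, url, params):
        # Query, oauth_* and form-body params, sorted and percent-encoded.
        # A wrong Content-Type can make the framework drop the body params,
        # changing this string -- hence "Signature Invalid".
        norm = "&".join("%s=%s" % (quote(k, safe=""), quote(v, safe=""))
                        for k, v in sorted(params.items()))
        return "&".join([method.upper(), quote(url, safe=""),
                         quote(norm, safe="")])

    def sign(method, url, params, consumer_secret, token_secret=""):
        # Swap key and secret and this key is wrong -- same opaque error.
        key = "%s&%s" % (quote(consumer_secret, safe=""),
                         quote(token_secret, safe=""))
        digest = hmac.new(key.encode(),
                          base_string(method, url, params).encode(),
                          hashlib.sha1).digest()
        return base64.b64encode(digest).decode()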


That is hard to fix with crypto. Good public test cases as part of the spec help, as do worked examples.


No, the HMAC-SHA1 signature is the only crypto needed in OAuth 1.


Literally every programming language should have an implementation of HMAC-SHA1 around, and lacking that, at least a SHA1 implementation. It's pretty trivial to build HMAC-<HASHFUNCTION> if you have <HASHFUNCTION> around. A working Python implementation is literally 12 lines long.

Now if you don't have a SHA1 implementation in your programming language, that's a different problem.
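For the record, here are the dozen-ish lines in question: HMAC built from a bare hash function, per RFC 2104:

    import hashlib

    def hmac_sha1(key, msg, block_size=64):
        if len(key) > block_size:             # long keys are hashed first
            key = hashlib.sha1(key).digest()
        key = key.ljust(block_size, b"\x00")  # then padded to the block size
        ipad = bytes(b ^ 0x36 for b in key)
        opad = bytes(b ^ 0x5C for b in key)
        inner = hashlib.sha1(ipad + msg).digest()
        return hashlib.sha1(opad + inner).digest()

    # Sanity check against the stdlib:
    # import hmac
    # assert hmac_sha1(b"k", b"m") == hmac.new(b"k", b"m", hashlib.sha1).digest()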


Huh, strange then, I really didn't find it that hard. Then again, the EC2 docs are pretty good.

Just decided with my boss we're going to roll with a hybrid approach. Spent too much time reading through all these standards and not enough time thinking and coding.


I don't understand any of this. Microsoft develops and publishes 'protocols' (used lightly) and everyone hates them because they are pushing workable code out on everyone else...

A bunch of people in a committee take three years trying to build the security token system to end all security token systems, still have nothing to show for it, and we're sad?

Why are people trying to do this anyway? OAuth is just an idea: hey, here's a really good way to handle things, and if you do it this way it has some really great benefits.

Why aren't these things like JavaScript frameworks, where everyone has an idea? I don't think it's practical that every SDK and framework will use one security system that was agreed upon. It's just not going to happen. Everyone has unique requirements.

I think he's just upset that more people have concerns and needs and nobody can compromise to solve all of them. Well yeah. Naturally. They wouldn't be needs if people could just overlook them for someone else's idea on how to do it. They would just be problems people are looking for someone else to solve.


"Microsoft develops and publishes 'protocols' (used lightly) and everyone hates them because they are pushing workable code out on everyone else..."

"workable code" is a tangent and really has nothing to do with the reason we avoid Microsoft protocols. More to the point we want to avoid APIs with elements that facilitate vendor-specific implementations i.e., lock-in. Like MS' OOXML, Oauth 2.0 has special interest written all over it.


"I don't understand any of this. Microsoft develops and publishes 'protocols' (used lightly) and everyone hates them because they are pushing workable code out on everyone else..."

Have you ever tried to write an interoperable authentication system using Active Directory? I'm particularly thinking of the UDP LDAP query and the multiple-byte-order (little-endian and big-endian!) response.

"Hey here's a really good way to handle things and if you do it this way it has some really great benefits."

Because it doesn't really work unless everybody does it the same way.


"Have you ever tried to write an interoperable authentication system using Active Directory? I'm particularly thinking of the UDP LDAP query and the multiple-byte-order (little-endian and big-endian!) response."

That doesn't disprove my point. Just because you don't like their approach doesn't mean they don't get points for having an approach. So far OAuth is vaporware, inconsistent across almost every implementation, yet still effective because it's just an idea.

"Because it doesn't really work unless everybody does it the same way."

I disagree. It's not hard to adapt to using oAuth+Twists for a given provider. It's not like it's some secret handshake that nobody knows so you can't get into the cult meeting. It's just signing data and exchanging tokens. We don't need a universal standard. We need a universal understanding of the problem we are trying to solve and various recommendations for how you might solve it. I think the work on OAuth is already complete.


"It's not hard to adapt to using oAuth+Twists for a given provider."

I'm not sure, but I suspect that might actually be my point.


"Why aren't these things like JavaScript frameworks, where everyone has an idea?"

Because OAuth is a protocol designed to let independently developed systems interoperate, and as such it's useless unless there's a high degree of standardization. It's like saying "why can't we all use our own custom version of IP/TCP/HTTP/TLS?" It simply wouldn't work.

"Everyone has unique requirements."

Not really; the reality is more "Not everyone has the same requirements", which still leaves very large groups that do have the same or similar enough requirements; in fact, we've seen that with OAuth 1.0(a).


"Because OAuth is a protocol designed to enable systems developed independently and as such it's useless unless there's an high degree of standardization. It's like saying "why can't we all use our custom version of IP/TCP/HTTP/TLS". It simply wouldn't work."

Yeah, I totally disagree. It could be like any other system: just have a .NET DLL, a Ruby gem, whatever, to facilitate the basics of the protocol. There's nothing amazing about OAuth. It's hardly a protocol in its own right. It's just an agreement on transferring some data (some signed, some not) on top of another protocol. There's no magic sauce. You don't need standardization, because anybody could build a Ruby gem to support any variation of it. Whether people choose to do that is a different question.


You're missing the part where its whole point is to be interoperable. You're exactly right that it's simple, that anyone could write any variation of it in a few lines of Ruby - which is exactly why it needs standardization. Because otherwise every website will have its own authentication system, and if you want to let people log in with five different kinds of accounts, then guess what: you're writing five different sets of code.


We at Zapier have seen it all when it comes to APIs. IMO, the biggest poison in OAuth is optionality: optional grant types, various refresh token options, miscellaneous state and scope options, etc.

What is the point of a standard that cannot be implemented the same way twice? It's insane.

That said, most smaller vendors stick to the sane bits; it's the big guys like Intuit or Microsoft that over-engineer their auth and pull out every fiddly feature in the spec.
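To make that concrete: both of the following are valid token responses under the spec (values adapted from examples in the drafts), and a generic client has to cope with either shape. The first needs nothing but TLS; the second requires the client to sign every request:

    {"access_token": "2YotnFZFEjr1zCsicMWpAA",
     "token_type": "bearer",
     "expires_in": 3600,
     "refresh_token": "tGzv3JOkF0XG5Qx2TlKWIA"}

    {"access_token": "mF_9.B5f-4.1JqM",
     "token_type": "mac",
     "mac_key": "adijq39jdlaska9asud",
     "mac_algorithm": "hmac-sha-256"}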


> With very little effort, pretty much anything can be called OAuth 2.0 compliant.

This was both a sigh of relief and slightly horrifying when I was working on an OAuth2 server in Node. The spec encourages a lazy "implement the parts we care about and that are required, and take shortcuts for everything unspecified" approach.

I thought the separation of access tokens and refresh tokens was wrong: once you're just giving the client an encrypted string to avoid certain DB lookups later, you can put whatever data you want in it (the spec doesn't care), including whatever manages refreshes, revocations, and so on. I like the idea of expiring tokens, of course, but it would simplify the client significantly to just replace the currently used token with a new one whenever the server returns it. I recall the "standard" flow is: request with the access token; fail; request a new access token with the refresh token if you have one; maybe succeed; maybe get a new refresh token; and if that succeeds, retry the request with the new access token. Having the access token carry the data to refresh itself is simpler.

I'd also agree it's bad that token security is reduced to cookie security by default. Really, the whole rant is spot-on.
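That "standard" flow, sketched in Python (the token endpoint and client credentials are hypothetical):

    import requests

    def api_get(url, tokens, client_id, client_secret):
        resp = requests.get(url, headers={
            "Authorization": "Bearer " + tokens["access_token"]})
        if resp.status_code != 401:
            return resp  # access token still good
        # Expired: trade the refresh token for a new access token.
        r = requests.post("https://provider.example.com/oauth/token", data={
            "grant_type": "refresh_token",
            "refresh_token": tokens["refresh_token"],
            "client_id": client_id,
            "client_secret": client_secret})
        tokens.update(r.json())  # may also rotate the refresh token
        return requests.get(url, headers={
            "Authorization": "Bearer " + tokens["access_token"]})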


Might OAuth WRAP make a comeback? Bret Taylor wrote about it years ago as a simpler approach:

http://webcache.googleusercontent.com/search?q=cache:lDQVFky...

(his blog seems to be defunct, hope that's not permanent)

Seems like OAuth WRAP has been officially deprecated in favor of OAuth 2.0, but given these issues...


Yes, this is sad. And it happens all the time.

Exactly the same thing happened to the whole Semantic Web effort at W3C. It basically got overtaken by enterprise and now it is of little interest or use to regular web developers.


No.

The W3C semantic web efforts were run by academics, and were never of interest to enterprise OR to web developers.


Yes, academics too. But I'm also seeing lots of very loud enterprise software people around those standards (granted, they're very small enterprise software companies).


This stuff is used within silos (Watson, bio/medicine research), but expanding it to the web is hard because entities that collect data want to monetise it, and do not see the value in reducing friction.


Somewhat shocking. I guess I'll be moving away from OAuth; it feels like a relief and scary at the same time. What are some good alternatives?


And now we have N+1 standards.


There are no popular alternatives. Either you are super smart and design one from scratch… or you stick with OAuth and hope that some of the problems raised will be resolved.


We need an open open-auth for the proletariat, by the proletariat!


Just stick with 1.0a?


And if you use Python, rauth will make the experience much easier: https://github.com/litl/rauth/#readme.
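For the curious, consuming a 1.0a API with rauth looks roughly like this (Twitter's endpoints shown; I'm going from memory, so check the README for the exact current API):

    from rauth import OAuth1Service

    twitter = OAuth1Service(
        name="twitter",
        consumer_key="YOUR_CONSUMER_KEY",        # placeholders
        consumer_secret="YOUR_CONSUMER_SECRET",
        request_token_url="https://api.twitter.com/oauth/request_token",
        access_token_url="https://api.twitter.com/oauth/access_token",
        authorize_url="https://api.twitter.com/oauth/authorize",
        base_url="https://api.twitter.com/1/")

    request_token, request_token_secret = twitter.get_request_token()
    print("Visit:", twitter.get_authorize_url(request_token))
    verifier = input("PIN: ")

    session = twitter.get_auth_session(request_token, request_token_secret,
                                       data={"oauth_verifier": verifier})
    print(session.get("statuses/home_timeline.json").json())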


If you're consuming an API you use whatever the platform decides on. In that case, this library is helpful. However, I think the real open question is: if you are providing an API, what do you choose?


I think at this point you choose either OAuth 1.0a or rolling your own. If being a public API isn't in your future, I'd go with rolling your own, if you are comfortable writing that kind of code.


> The enterprise community was looking for a framework they can use with minimal changes to their existing systems, and for some, a new source of revenues through customization.

That right there is what killed OAuth 2.0. From day one, these members didn't have the specification as their highest priority. They were only thinking of how the specification could serve their own ends. This isn't unique to the enterprise world, but that mindset has more than its fair share there. The web community represented the group that put the specification as its highest priority. When the specification was perverted, they left.


Interestingly, I am developing a single sign-on provider as we speak, and I chose OAuth 1.0. I did it mainly because the libraries (jersey-oauth in my case) seemed more mature for 1.0, and because 1.0 is a standard whilst 2.0 is, at the moment, a draft.

I do realize that it's slightly more complicated for the client developer, but all things considered I think documenting my API in the best possible way will outweigh the perceived disadvantages.


Something similar is happening to HTML5 in the W3C: http://blog.whatwg.org/html-and-html5


I witnessed one guy implement an OAuth 2.0 provider completely wrong (he was accepting user credentials as client credentials, or something similar). This guy was smart, and he still couldn't understand the spec.

Upon reading the spec, it seemed that OAuth2 is really just some rough guidelines. Pick and choose what you need for the particular flow you're implementing.
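The mix-up is easy to make, because on the wire the grants look superficially alike. Roughly (parameters trimmed; client authentication may also go in an HTTP Basic header):

    POST /token    (resource owner password credentials grant)
    grant_type=password&username=alice&password=s3cret
        &client_id=app123&client_secret=app-secret

    POST /token    (client credentials grant: no end user involved)
    grant_type=client_credentials
        &client_id=app123&client_secret=app-secret

The first trades a user's credentials for a token on that user's behalf; the second authenticates only the application itself. Accepting one where you meant the other is exactly the bug described above.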


I'd love to see OAuth 2.0 forked into a version that takes Eran's comments into account. It'd be nice if the forked version explicitly mentioned that it will make no compromises to support the enterprise, and that the burden is on the enterprise to support the way the internet works.

If the enterprise world wants a standard, let them make their own.


Design by committee fails again.

Maybe next time it'll work!


We should form a committee to discuss how to fix design by committee. Sound good?


Heh, why not form another committee with just the web guys and without the enterprise guys and create your own OAuth 2.0? You'll get something done before the enterprise guys and they'll be inclined to use your working protocol.


What does WS-* mean?


http://en.wikipedia.org/wiki/List_of_web_service_specificati...

In short, it's a collective name for the many Web Services specifications that are still heavily used in the "enterprise" world. It's a mess.


WS-* is big because it solves many, many problems. It's also big in the chatty sense because it's XML-based.

That does not make it a mess.

WS-Trust, WS-Federation, etc. have already solved the problems that OAuth 2.0 attempts to solve. That doesn't make them bad, whatever Eran states; bad is subjective.

Whether someone chooses to use it or not depends entirely on the requirement. Choice is good.


Choice is often not good. Consider these:

- Choice in plug type for electrical devices
- Choice in power adapter connection for cell phones
- Choice of driving on the left side or the right side of the road
- Choice of location for the brake pedal in your car

Almost anything that is used to interface two items, as OAuth is, is better with fewer choices. Choices increase complexity and cost for both sides of the connection and reduce the number of compatible solutions. For some reason, this fact is often lost on technologists.


It's a reference to the WS family of standards, which are a huge group of enterprisey web services standards. A good illustration of the family is http://www.innoq.com/soa/ws-standards/poster/innoQ%20WS-Stan...


I've just been reading about the debacle with XML and the W3C, and this whole OAuth business sounds like it's heading down the same path. It seems like the browser vendors had the ultimate say when it came to HTML. With OAuth, I wonder where the ultimate power lies, because those are the factions that need to do something right now to deal with this mess.


What was the supposed purpose of OAuth 2.0?


I would have been happy with things staying the same, with simply a better "two-legged" version to allow better interoperability with trusted services from native apps. Having OAuth 1 kinda bound to the browser has been a minor headache for my team. But for the most part I think OAuth 1 works well. It just needs an official specification for usage in non-web apps, to put an end to all of the inconsistent solutions that people are creating out of necessity.


Again, it's very hard to read the small grey text on a white background.


The CSS is actually using "dimGrey" for the text. DIM! Ironic.


pluginitis strikes again



