When REST isn't Good Enough (braintreepayments.com)
94 points by williamdix on Nov 20, 2012 | 92 comments



We (Zapier) are in a particularly good place to comment on this. We have implemented about 70 very diverse web APIs in-house. Early on, we used client libraries because they were easy and convenient. That decision bit us, hard. (Most) client libraries are un-maintained, opaque, and difficult to extend.

We spent many months ripping every client library out down to the raw requests. We are in a unique position because of how many APIs we have to support, but the overarching advice we give to people designing APIs is:

If you are deviating from the norm, you're doing it wrong.

REST APIs are nice because you can infer how to communicate with the API out of the gate. APIs are a pain and you only introduce headaches by not doing things in a sane, conventional fashion.


Can you respond directly to some of the points raised in the article? For example, ensuring SSL certs are validated?


Sure. I'll start with this: as far as client libraries go, Braintree is doing it right. Because it's the only thing they support, they are incentivized to make sure it is feature complete and bug free.

SSL: It is not the onus of the vendor to ensure people are properly securing requests. I have seen horrendous things done in the name of "security" but it only adds headaches. Fix this at the client level. SSL everywhere is sufficient. By extension, vendors often introduce mechanics because they want to make the world better: http://xkcd.com/927/
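
For what it's worth, fixing this at the client level is usually a one-liner in modern HTTP libraries. A minimal sketch in Python with the requests library (the endpoint URL and credentials are made up):

  import requests

  # verify=True (the default) checks the server certificate against the
  # system CA bundle and validates the hostname; never set it to False.
  resp = requests.get("https://api.example.com/v1/charges",
                      auth=("api_key", "api_secret"),
                      verify=True,
                      timeout=10)
  resp.raise_for_status()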

Platform Support: I think the counter-argument is community client libraries, and the OP is correct: they are worse. The stability + scaling + support issues mentioned by OP can usually be solved by better API design and better architecture behind the scenes.

Backwards Compatibility: OP claims you never have to update your code if you use their client library. This is likely true. Good API design would dictate not breaking exposed endpoints unless there is new functionality to be had. In this case, to take advantage of any new API features, you'd still have to update your code. No cost savings are had either way.


>SSL: It is not the onus of the vendor to ensure people are properly securing requests.

Two responses: first, Braintree customers may appreciate knowing that Braintree provides a safer implementation than whatever the third-party contractor they hired to build their web app might throw together.

Second, fraud does directly impact their business, regardless of the entry point used by the attacker. Even if they don't get stuck for the money directly, they are going to lose time talking to customers and helping with investigations.

They don't have to do either, but they may find that they can make some types of customer relationships profitable that would not be for a competing business that doesn't do as much hand-holding.


I can understand that a payment gateway takes security more seriously, as it can directly affect their business if clients handle SSL wrongly.

Maybe a kind of chaos-monkey for API connections could help? Every now and then the API returns a wrong SSL certificate. If the client ignores this, they could notify the customer, or directly block further access until the problem is solved? :)


The big problem I see is that SSL requires an unbroken chain of trust all the way along. If I have client-side code connecting to my server, which is then connecting to a third party server, all of those links have to be secured. As someone else pointed out, it's nice to make your payment processing secure, but at some point the developer also has to step up and ensure their half of the bargain is upheld. If I'm sending credit card details to my backend over HTTP for some reason, no API in the world will stop me.

Even looking at Stripe; they provide a JS library so you can do client-side encryption. However, if you include that JS library on an incorrectly SSLed page, a malicious party could replace it without you knowing. Even with extensive hand-holding, if a user is going to implement SSL badly, there will be security risks.

The only sure-fire solution is for the developers of SSL libraries to come together, implement sane default options, and provide clear documentation of what not to do. Looking at the docs for OpenSSL, it's impossible to easily discern what counts as a sane configuration. The same problem propagates itself into language-specific libraries, where the dev wasn't quite sure which options to tick to begin with. And so it goes.
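
Python's standard library is one example of a runtime moving toward sane defaults: ssl.create_default_context() turns on certificate and hostname verification so the developer doesn't have to guess at OpenSSL options. A rough sketch (the host name is made up):

  import socket
  import ssl

  # create_default_context() enables certificate verification, hostname
  # checking, and reasonable protocol/cipher defaults in one call.
  context = ssl.create_default_context()

  with socket.create_connection(("api.example.com", 443)) as sock:
      with context.wrap_socket(sock, server_hostname="api.example.com") as tls:
          print(tls.version())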


What about some sort of client library generator, similar to how .NET languages can generate code from other .NET WSDLs?


Is there a good book/resource on learning REST API ideas? I'm looking for a beginner's resource for someone who doesn't really grok the ideas behind REST and the motivation for REST. But also something that gets technical very quickly - I've been programming for long enough that I pick ideas up very quickly.


I'm sort of with you, though I've read quite a bit at this point and interacted with a number of APIs of varying degrees of "RESTfulness".

The biggest question I wish the various tutorials would address is: why is REST a good idea? Why is this particular way of doing things better than others?

And here's one thing I believe is true, that I have literally never seen in a REST tutorial: sometimes REST is not the best way to go. Sometimes an RPC architecture is better.


"sometimes REST is not the best way to go. Sometimes an RPC architecture is better."

I think that the full versions of these acronyms do a pretty good job of explaining which is best when.

"REpresentational State Transfer": obviously, it transfers state, i.e. information about a certain resource at a given moment. "Remote Procedure Call": obviously, it calls a procedure, which may involve several state changes and other activities. Of course, a procedure may be masked behind a REST endpoint (e.g. when you POST some data and do some procedure before you end up with a certain state) and vice versa (e.g. a simple getter), but you may pretty much view REST as the SQL of the Web, and RPC as stored procedures.


>but you may pretty much view REST as the SQL of the Web, and RPC as stored procedures.

Indeed, but that just raises the question: what would possess you to write an app out of SQL calls rather than general functions? So then why do the Web equivalent thereof?


You don't, of course. What you do is build an app which is using REST to store data somewhere online, and the app itself may be in the user's browser, on a mobile phone or anywhere else for that matter.


Definitely. Just pass that memo along to the "apps should work purely through REST API calls" crowd.


I'd recommend this book http://www.amazon.com/RESTful-Web-Services-Cookbook-Scalabil...

It's amazing, really. It's not just a list of recipes; it explains a lot of very important REST cases, and also when to use or not to use REST.


Not your typical book, but it's very good: http://designinghypermediaapis.com/


Interesting. I wasn't sure about buying it, but ended up purchasing after realizing that it was by Steve Klabnik, a name I recognize, probably from HN. I wonder why there isn't more info on the front page; even a sample/TOC would be nice.

I'll be going through it tomorrow, if I remember I'll come back to post a mini review.


This. After seeing the URL posted above, I clicked the link just to see if it was Steve's book. Disappointed he doesn't put his name prominently on the landing page.


Note that this isn't a critique of the REST architectural style; rather, it documents their decision to prefer supplying clients with prebuilt libraries rather than a public API that their respective language communities can use to build their own. You could read the source to any one of their client libs and infer the rough structure of their services, so it's not all that private; they're simply choosing to leave what's there undocumented. It's an interesting approach, but not without its drawbacks.


At Twilio, our REST API is fully documented but we still encourage people to use our helper libraries as much as possible.

It's not because we'd like to document the API less, but because there are related things - http connections, auth, etc - that need to be handled and it's easier to have the library do them. Further, the library serves to wrap the API in the idioms appropriate for the language. This may include data structures, but also includes things like variable names.

That way you can focus on your app in your chosen language, and only dig into the abstraction if you choose to.


I have no knowledge of what Braintree does, but if I'm picking a service with an API, I'm picking the one that is easiest to learn (preferably there is little that needs to be learned); I'm not picking one that requires me to switch languages, or, just as bad, locks me into one of a few languages I'm currently using.


Braintree does payments. They have concerns about security that most apps do not. Who really cares if bugged timeout/retry logic makes your app post twice to Facebook? Probably no one. Who cares if bugged timeout/retry logic makes your app bill them twice? Probably everyone.

Braintree and you may not make the same decision, and you both could be right.


Braintree isn't the only HTTPS API a developer will consume. Rather than bury the knowledge of HTTPS best practice in the black box of a library, why not exemplify it in language-specific examples of plain-old REST API documentation?

If the boilerplate for manually getting an SSL connection right in a given language is that obtuse, the "just use our library" pitch is even more compelling.

All this said, I have used Braintree's libraries and they are extremely well executed.

Not exposing REST API documentation externally (it surely exists internally, right?) just feels like a cop out.

(An argument I'm surprised was left out: it's easier for support staff to work with customers integrating with a good library than some home rolled REST client. I can imagine this to be true, but maybe a case of premature optimization if the majority of big API players expose and document their REST APIs anyway?)


I don't like REST. Making a call to a server should be like calling a function anywhere else - you have your parameters and return value. In REST the parameters are spread over 3 places, the action, the url and the post parameters themselves. It's a much better design to combine all your parameters in one place and keep things simple, for example -

REST version: UPDATE /course/324234 { description: "This is a level 1 course" }

Improved version: POST /course/UpdateDescription { courseId: 324234, description: "This is a level 2 course" }

In this case POST is always used for API calls. Course is the namespace, UpdateDescription is the method name, and parameters are kept all together as JSON.
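
To make the comparison concrete, here is roughly what each style looks like from a client's point of view, sketched in Python with the requests library (host and paths are hypothetical; PUT is used for the REST update, as that is the usual HTTP verb):

  import requests

  BASE = "https://api.example.com"

  # REST style: the method (PUT) and the resource URL carry part of the meaning.
  requests.put(f"{BASE}/course/324234",
               json={"description": "This is a level 1 course"})

  # RPC style: always POST, with the operation and all parameters in the body.
  requests.post(f"{BASE}/course/UpdateDescription",
                json={"courseId": 324234, "description": "This is a level 2 course"})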


The REST version makes it clear that you are dealing with operations on a resource. The URL identifies a thing in a consistent way, while the method and corresponding data allow you to manipulate it.

The biggest benefit comes when building a cache architecture. RESTful APIs (when properly implemented) come with built-in assumptions about idempotency. You end up in a place where you can easily cache GET requests while reasoning in a very consistent way about where changes to a resource will be made.
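
As a small illustration of that caching benefit: because a GET is safe to repeat, a client or intermediary can revalidate it cheaply with a conditional request. A hedged sketch in Python (the endpoint is made up, and it assumes the server sends an ETag):

  import requests

  url = "https://api.example.com/course/324234"

  first = requests.get(url)
  etag = first.headers.get("ETag")

  # Revalidate: if the resource hasn't changed, the server can answer
  # 304 Not Modified and skip sending the body again.
  second = requests.get(url, headers={"If-None-Match": etag} if etag else {})
  print(second.status_code)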

It's a pattern that comes with a lot of really useful benefits.

Now all of those same qualities can be built into other architectural styles (such as the one you propose). However, I find that it becomes much harder to reason about the role of operations and what they're actually doing when you lose that clear distinction that REST enforces.

I do want to quibble with one thing as well:

> In REST the parameters are spread over 3 places, the action, the url and the post parameters themselves

That's true for any modular system isn't it? You have the object you are operating on, you have the operation, and you have the data. Object-oriented systems and even most structured languages (like Javascript for instance) make this distinction at some level. Why is that not appropriate for web architecture?


Just because something is GET doesn't mean it can be cached. It's not consistent, and every API call needs to be evaluated individually. The thing with REST is that it treats all functions as a variant of a CRUD operation, while functions may do more than just CRUD or a combination of CRUD operations in a single API call. Imagine if all your JavaScript functions had to be prefixed with GET/UPDATE/DELETE/etc.. you would feel constricted, and for what reason? Imagine if your function names had data in the call path MyCourse.32423.Delete(); that doesn't look very good does it? 32423 would look a lot better inside Delete().


GET requests, by definition, should have no side-effects. Which makes them cacheable. Many browsers cache GET requests by default unless cache-control tells them not to.

You can most definitely write a GET request that contains side-effects, but that's because you're doing it wrong (tm).

> Imagine if your function names had data in the call path MyCourse.32423.Delete(); that doesn't look very good does it? 32423 would look a lot better inside Delete().

That's not data, that's an object. "MyCourse.32423" is the actual resource. It would actually look something more like:

MyCourse32423.Delete();

Which seems pretty reasonable to me if we're trying to map this into programming language semantics (which I'm not even sure we should be doing).


I cringe a little bit whenever someone tries to present caching as simple or straightforward.


I don't see that anywhere in the post. All I saw was "GET"s shouldn't modify state and should be cacheable. That seems perfectly in line with the spec.


The only problem I have with your comment is that REST is not specified, it's a style:

http://en.wikipedia.org/wiki/Representational_state_transfer


GET being cacheable/idempotent is part of the HTTP spec, and has nothing to do with REST: http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13...


I meant the HTTP spec, which is where GET is specified.


>It's a pattern that comes with a lot of really useful benefits.

Sure, but people can implement that pattern without buying into the whole clumsy REST edifice.

For example, a strict reading of REST (and RESTers support such readings!) would tell me that if I have an API that takes two cities and returns the distance between them, then I must expose a URI pointing to every combination of cities.

See here: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...

>>A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) ... From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations. ... [Failure here implies that out-of-band information is driving interaction instead of hypertext.] [emphasis mine]

So, 1000 cities, you must send a million links over the network rather than 1000 possible parameters plus the formatting.

Now, some RESTers assure me that, no, you can somehow communicate to the user that [prefix] / [first city] / [second city] will get you that answer; that a server can "instruct clients on how to construct appropriate URIs".

Fine, but then we're right back to custom, author-documented APIs and RPCs: "to do that, format the call this way". Right back where we were before trying to force-fit everything into a CRUD mold with increasingly bizarre tables just to make all the calls work.

Or maybe a REST purist would tell me that I'm supposed to link a URI for the starting city: [root]/distances/[start city]/, and then from there link them to possible choices of the second city: [root]/distances/[start city]/[end city]/ .

Fine, but why jump through two pointless hoops, when I know what I want, and I prefer to just send one request rather than pointlessly spend bandwidth navigating a path I don't care for?


> 1. [send a link for each possible combination]

> 2. instruct clients on how to construct appropriate URIs

> 3. link a URI for the starting city and then from there link them to possible choices of the second city

All three of these are possible RESTful solutions, and I can imagine situations where each of them might make sense.

But what you're probably looking for is something like this:

  <form><select name="city1">...</select><select name="city2">...</select></form>
(Using URL templates is a potentially simpler way to achieve the same thing). This is, of course, your solution 2.
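
For example, a response could carry an RFC 6570-style URI template that the client expands locally, so the server never has to enumerate every city pair. A rough sketch in Python (the template and values are invented for illustration):

  # Template delivered in-band by the server, e.g.
  # {"distance": {"href_template": "/distances/{city1}/{city2}"}}
  template = "/distances/{city1}/{city2}"

  # Naive expansion; a real client would use an RFC 6570 library and
  # percent-encode the values.
  url = template.format(city1="Chicago", city2="Austin")
  print(url)  # /distances/Chicago/Austin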

There are tradeoffs involved in using this style. Are they worth it? That really depends on the larger system and all kinds of details which you've left out.


That is not a RESTful solution, though: what resource is it acting on? What are the four CRUD operations for it?

You have to come up with some added, unnecessary, implementation-exposing abstraction like "CityPair". (In which case "delete" could be ill-defined if, as would be wise, the distance is computed from a lat/long or road table lookup, and so there is no actual database entry that corresponds to the distance between the cities.) That's the problem with REST: it doesn't avoid the complexity of RPC; it just crams it into ever-more-creative resource types.

Most API users (sorry, "consumers") would prefer they just be told the format in which to ask for the data, not have to re-discover it through gradually-exposed paths each time.


> what resource is it acting on?

The distance between the two cities.

> You have to come up with some added, unnecessary, implementation-exposing abstraction like "CityPair".

This is a really weird thing to say. No, you don't. I have no idea why you would think that.

> What are the four CRUD operations for it? [...] In which case "delete" could be ill-defined

Not every method has to be valid for every resource. It's a total non-issue that you can't delete a distance.

(And POST/GET/PUT/DELETE is a very different concept from CRUD.)


>>what resource is it acting on?

>The distance between the two cities.

Don't be cute. What server resource is it acting on? The value of the distances is (potentially) computed by some encapsulated algorithm -- you're not acting on that resource. The server resources that are touched are the two cities, and then whatever it does behind the scenes.

>This is a really weird thing to say. No, you don't. I have no idea why you would think that.

Because I'm not GETing a city, so I can't use the GET operation on the city resource; I have to make another resource to GET. Fine, it doesn't have to be a city pair: but there is a many-to-many mapping, which in REST-favoring frameworks (e.g. Rails), requires a separate table. By reducing everything to CRUD, you must create a new resource (type) for each new operation.

>(And POST/GET/PUT/DELETE is a very different concept from CRUD.)

That's a non-standard definition of "very different", considering that POST is create, GET is retrieve, PUT is update, and DELETE is delete.


GET /cityDistance?city1=Chicago&city2=Austin

Also, while POST can mean "create", it can also mean "append" or "process some arbitrary request."


>GET /cityDistance?city1=Chicago&city2=Austin

Yep! Works like a charm, until you have to expose a URI pointing to every combination of cities (or indeed, combination of any parameter set).

And I know (like I said before) you can fall back on "no, just tell the user where to put the parameters and you won't have to do that!" ... which is just re-inventing the RPC -- and satisfying users that don't want to navigate a long session just to find the URI they want, every time they send a request.

>Also, while POST can mean "create", it can also mean "append" or "process some arbitrary request."

Used correctly, it doesn't mean (that the sender is requesting that you) "process some arbitrary request"; it should only be used for non-idempotent operations. Close enough to summarize as "create" (appending is certainly creating something in this context!), and generally, for something to have different effects when repeated, you have to create something. PUT/update and DELETE/delete are idempotent specifically because the changes they make aren't creations.

In any case it's clearly an abuse of the term "very different concept from" in the GGP's comment "And POST/GET/PUT/DELETE is a very different concept from CRUD".


Nobody ever said that you should use REST if it doesn't suit the problem you're trying to solve.


But there are people that believe REST suits every API problem, even when accumulated wisdom says it doesn't suit the problem (cough Roy Fielding).


No, he makes it quite clear that it's a solution designed for a particular problem space. You're debating a straw man.


Indeed: the problem space being APIs. Feel free to point me to an example of Fielding pointing to an API problem that shouldn't be RESTful.


No, the problem space being "scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components".

This merely happens to overlap significantly with the requirements of public APIs.

I have difficulty imagining your distance calculator needing any of those things, though.


>No, the problem space being "scalability of component interactions, ...

>I have difficulty imagining your distance calculator needing any of those things, though.

Right, because no one's stupid enough to use (or stick to the use of) REST for a distance calculator or any other algorithmically generated information. But make no mistake, scalability of component interaction is an issue, just a solved one (for those that see REST for what it is).

The solution is: specify an input scheme (for a hand calculator: put the first number, then plus, then the second number, then equals) and let the user choose the inputs. This saves you from the (intercomponent-unscalable) combinatorial explosion in which you have to give the user a link to every possible computation as they navigate the interface, and which is the REST method.

So, any exposed function in which you can't feasibly blast every possible input set over the network is REST-incompatible, so I guess the serious RESTers don't think you should do it. Which kinda makes it little more than a footnote.


I'm pretty sure Google's home page doesn't include every possible search term for you to select from, and that's a perfectly RESTful example of an exposed function (search). Forms are a very powerful hypermedia construct.


In the context of a website, maybe you can have a helper like that to avoid the REST bloat. But the architecture is for arbitrary APIs, existing outside of web pages, in which I don't have a neatly visible form. All I'm allowed to do is give the user URIs to choose from.

Some kinds of apps (esp those that can't tolerate the overhead of REST, like for mobile) need to know how to format a Google search request without navigating through a session on Google site, but just knowing what it should look like, and formatting it that way. REST would restrict you to pointing them to google.com and following links; it prohibits you from saying, "hey, you can have your app just point to google.com, then '?q=', then your search terms connected by +'s".


Part of the debate is what problems REST is suited to solve. While most evangelists will never come out and say, "you should use X for everything," they will try to fit it into whatever situation it will "reasonably" fit into, which is probably a bigger domain than the problems it's "best" for.


The thesis lays out in detail what REST provides:

"REST provides a set of architectural constraints that, when applied as a whole, emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems."

The existence of these properties should be fairly evident when you examine the constraints.

I think a large part of the confusion comes from not recognizing the difference between "Any system can be built in a RESTful style" (this is true) and "Every system should be built in a RESTful style" (this is a straw man).


That tagline may explain the benefits of REST, but it doesn't really explain at all which problems REST is well-suited for and which not.

I've found that REST design is great for problems where the most visible abstraction is a document or an object (with operations or functions as secondary). It doesn't really fit problems where the most common abstraction is a function call (with the arguments a secondary concern).


We're talking about two totally different kinds of problems.

You're talking about wire formats and structures that feel natural to use for particular kinds of calculations. I'm talking about recognizing properties that we might need a particular system to have (ease of use being only one of them), and ensuring them.

The line I quoted tells us exactly which problems (in the sense that I'm talking about) REST is suited for: the ones where those benefits are needed (and where the tradeoffs aren't too costly).


Not true.

GET /distance?cities=San+Francisco,Los+Angeles would suffice. city1=modesto&city2=fresno might be preferred if you wanted to support form submissions, too.

Query strings are perfectly restful.


That just shows it's HTTPful, not RESTful. In this case, that protocol involves being told how to construct a URI rather than it being presented from a menu of choices discovered by following URIs in the API.

I agree that it's much more convenient to have well-defined fill-in-the-blank. I disagree that it matches the requirements of REST, whose proponents always tell me "If you document the URIs or how to build them, you're doing it wrong."

http://stackoverflow.com/questions/1164154/is-that-rest-api-...


> involves being told how to construct a URI

The crucial thing that you're missing is that it's being told how to construct that URI in-band. Not in a piece of documentation that has to be hard-coded into the client.


So, once again, it turns out that REST reduces to RPC plus some advisable practices as soon as you point out how the original REST schema is woefully impractical.

Not-REST: You can algorithmically generate requests for the information you want based on documentation.

REST: You can algorithmically generate requests for the information you want based on documentation that's provided in the session. (Oh, and forget all that crap we said about URIs having to be presented to the API consumer as part of session's hyperlink navigation.)

So, now I can turn RPC into REST just by slapping a link to the documentation on every response (and maybe moving verbs to the appropriate HTTP request type, even though that has nothing to do with CRUD[1])? Gee, why didn't you say so?

[1] http://news.ycombinator.com/item?id=4811969


No.

You're not listening, and you're not arguing in good faith.


I could say the same of you. Please tell me how I failed to accurately characterize the difference between RPC and REST with respect to defining requests whose entire working URI set can't feasibly be sent over the network.

Alternatively, explain how REST can simultaneously meet the constraints of "avoid combinatorial explosion of possible URIs to explicitly present" and "every resource is accessible by following server-provided links" and "avoid unnecessary bandwidth usage".


If that's truly required (I don't think it is), the second example could be shown via introspection.


> Making a call to a server should be like calling a function anywhere else - you have your parameters and return value.

What you've described is RPC, which has been around for ages. Why do you think REST became a popular alternative to RPC?


Because the de facto transport for it is HTTP, which a) people know, b) is simple and predictable, c) has had support from every language on the face of the earth for ages, and thus d) takes a minute to get up and running with, as opposed to, say, SOAP, and e) fits the CRUD model fairly nicely, which comprises the majority of web apps.

Nothing to do with one man's PhD thesis or anything (that came YEARS after he wrote the HTTP spec and after most RPC systems). If I had a penny for all the theses that proposed something reasonable which was ignored.


You missed: gets through proxies and firewalls.

Unless that is what you mean by (b).


Congratulations, you just invented JSON-RPC (http://www.jsonrpc.org/).


I'm not saying how the parameters or return values should be formatted, other than the fact that they're in JSON format. JSON-RPC requires you to conform to its specification.


APIs that ignore the realities of distributed computing and that each have different special-snowflake wire formats requiring tedious documentation: the worst of both worlds.


I'm not arguing against implementing an RPC if that's what you need. My original post describes a superset of JSON-RPC.


Your post implies a pseudo-free-for-all and defeats the purpose of having a conversation about a standard exchange format.

You basically have said "I want to make it work the way I want it and I don't care if it's non-standard and unintuitive.". You're welcome to do that, but everyone who needs to work with it will hate you. And when you've been working on it for 2 months, or take a two month break, you'll come back and hate yourself too...


If your course has 20 member variables, now you have 20 update methods? If you want to update multiple, the client then needs to do multiple round-trip calls to your API which is time-consuming: if it's a 350ms round-trip per call, updating all 20 fields takes 7 seconds. And unless you go out of your way to offer per-object locking (which over a stateless connection has its own challenges) you prevent users from doing atomic updates to multiple fields at once. This pushes any rollback mechanism onto the client, exactly where it should not be.

With your design, if your goal is to "combine all the parameters in one place to keep things simple", you could go one step further and have:

POST /course/update {courseId: 1234, field: "description", description: "hello"}

Taking this to its logical extreme, you can truly combine all the parameters in one place:

POST / {object: "course", operation: "update", courseId: 1234, field: "description", description: "hello"}


Like in any other programming language, if you need to update multiple fields, you pass multiple parameters to the function. The URL should be equivalent to namespace/class/method, while the parameters are just that: parameters you can pass in JSON format.


You're putting the burden of learning implementation details of your system onto your customers. That won't be comfortable for either them or you. Should you need to deprecate a method you'll never be able to remove it. So be prepared to write a lot of { "Error": "ErrorCodeThatOnlyMakesSenseInTheContextOfThisAPI", "Description": "DEPRECATED" }


I'm with you on this one. Don't get me wrong, I think REST is great too but mostly just because it's something we can agree on.

In the end a route/url maps to a method with parameters anyway. I wish JSON-RPC was a bit more popular. http://www.jsonrpc.org/specification
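
For anyone who hasn't seen it, a JSON-RPC 2.0 call for the course example upthread would look roughly like this (the endpoint URL and method name are invented; the envelope format is from the spec):

  import requests

  payload = {
      "jsonrpc": "2.0",
      "method": "course.updateDescription",
      "params": {"courseId": 324234, "description": "This is a level 2 course"},
      "id": 1,
  }

  # Everything goes over a single POST endpoint; the method name and
  # parameters live entirely in the JSON body.
  resp = requests.post("https://api.example.com/rpc", json=payload)
  print(resp.json())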


> Making a call to a server should be like calling a function anywhere else

You can make calling a function on a remote machine look and seem (superficially) like calling a local function, but they will never have similar behaviour. A network is very different from a motherboard.

http://www.tbray.org/ongoing/When/200x/2009/05/25/HTTP-and-t...


Here's where I've found REST very useful:

It makes for predictable APIs for consumers and it means API producers have a checklist of things to implement to consider their interface 'complete'. It also provides a consistent way to think about how to expose an interface (or in fact, how to expose entities).


IMO the biggest benefit of REST vs RPC is reduced coupling between the client and the server. With RPC, the client needs to be explicitly aware of all methods and parameters that the server supports. With a well-designed REST interface this is not the case: you can have a generic client that does not need to be explicitly coded to support the specific service. REST interfaces are more difficult to design than RPC interfaces, but they are also more elegant and simpler. Roy Fielding (the father of REST) argues that elegant and simpler != easier.


Rich Hickey (Clojure) has a presentation about the differences between simple and easy. If you have the time, it's worth it.

http://www.infoq.com/presentations/Simple-Made-Easy


REST makes it useful to be able to do:

- UPDATE /course/324234
- GET /course/324234
- DELETE /course/324234

with predictable results.

Of course, it works better when you have a clear hierarchy of resources, and/or it makes sense to do UPDATE/GET/DELETE on the resources with clear results.

I think it works very well for some cases, but I don't think it is always the way to go. That said, there are a considerable number of "RESTful APIs" that are not RESTful at all; they just use HTTP.


Braintree have made an interesting choice in keeping their REST API private and requiring customers use their client libraries. I think they could both have client libraries that they promote as well as a public REST API. Breaking down their reasons:

Security - agree with them on this, the more they can help their users make their systems secure the better. Not sure if it should preclude a public REST API but certainly motivates for having a good client library.

Platform Support - another good reason to have the client library, essentially encoding best practice in the client. I've certainly seen customers abuse features of our APIs. Again, not sure if it should mean keeping the REST API private. Certainly it's a good idea to be defensive on both the client and server (e.g. for queries that request too much data, rate limiting etc.).

Backwards compatibility - Here I disagree with Braintree. I think it should be just as easy to manage backwards compatibility purely on the server side.


I'm a developer at Braintree. The challenge with maintaining backwards compatibility on the server side is handling the wide variety of possible inputs into the system. It's hard to write tests that account for all of them.

I've seen a couple of cases where backwards compatibility can be broken in unexpected ways. For one application that we built at Braintree, we had a client that was sending us an application/x-www-form-urlencoded POST body without the Content-Type request header. We upgraded the version of Rails that this app was using, and it broke that integration because Rails made a change where it wouldn't parse the POST body without the Content-Type header. Unfortunately, we didn't have any test cases in our test suite that made POSTs without a Content-Type. We were able to identify the issue and resolve it quickly, but it was a surprising bug. With client libraries, we can test every version against the upgraded app and know that all clients will continue to work.
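
(For the curious, a regression test for that particular case is easy to write once you know about it. A hedged sketch in Python with the requests library; the URL and form fields are placeholders:)

  import requests

  # Build a form-encoded POST, then strip the Content-Type header that
  # requests added, to mimic the misbehaving client.
  req = requests.Request("POST",
                         "https://api.example.com/transactions",
                         data={"amount": "10.00"}).prepare()
  del req.headers["Content-Type"]

  with requests.Session() as session:
      resp = session.send(req)
      assert resp.status_code != 500, "server should still parse the body"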

Are there interesting request profiling techniques that can be executed on production traffic to analyze requests? I think the challenging part of backwards compatibility is making sure use cases that were never intended to be supported continue to work.


Thanks for responding! And I'll update my response to the backwards compatibility to say that I see the value in having a closed set of clients that you need to test for backwards compatibility.

That said, these are some of the things that I've done to help with backwards compatibility through only the serverside API:

* Track production requests and use them as test cases

* Build a large suite of test cases against the API

* Build the API in a statically typed language (yeah, I know, contentious, I love Rails but there's something about an externally facing API that makes me want to use a statically typed language).

* And then the ultimate -- build the API as an app that talks back to the business logic... essentially, it's your 'client library' but deployed on the server between the actual code and the client. Then, never change the part of it that faces the client, only the mappings onto the real business logic.


If you are going to make the decision and commitment to supply client SDKs instead of a RESTful API, why choose to implement the back-end as an HTTP service? It is now hidden entirely by abstraction from clients, and there are lots of efficiencies to be gained by using a custom protocol.

Did you evaluate whether an RPC pipe or a distributed filesystem was a better fit?


This has been my approach to internal service design as well. It's far easier to make sure people integrate properly by providing a rich, well-documented API client library than it is to rely solely on the REST-based one. REST APIs are hard to document, and require a lot of orthogonal boilerplate to even make the requests properly.

Making a client library is also a great way to "dogfood" your REST API - you know a developer will write code, not HTTP calls, to interact with your service. The client API allows you to test out that code and make sure your API is well designed.


"require a lot of orthogonal boilerplate to even make the requests properly"

What kind of boilerplate would that be? For me one of the big advantages of a RESTful interface is that they are really easy to call from cURL without much need for building up a complex context.


I found the translation between native code and a REST API to be what I'd call boilerplate. You need to build a URL, build the request (usually converting some model object into JSON), make the request in a non-blocking fashion, receive and parse the response into either a successful response, probably converting JSON back into a model object, or a failed response, which needs to be mapped to some native representation and passed back into the business logic.

Contrast that with a client library which can do all of that for you, for example: taking a native model object and calling one of two callback functions provided, letting you concentrate on the business logic of your app.
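
To illustrate the difference, here is roughly what that boilerplate looks like next to a client-library call, sketched in Python (everything here, including the hypothetical client library at the end, is invented for illustration):

  import requests

  # Raw-request version: build the URL, serialize the model, handle errors,
  # and map the response back into something the app understands.
  def update_description_raw(course_id, description):
      resp = requests.put(f"https://api.example.com/course/{course_id}",
                          json={"description": description},
                          timeout=10)
      if resp.status_code >= 400:
          raise RuntimeError(f"API error {resp.status_code}: {resp.text}")
      return resp.json()

  # Client-library version (hypothetical): the same call, with transport,
  # auth, and error mapping hidden behind the library.
  # course = client.courses.update(course_id, description=description)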

I'm not arguing that a client library is the right way to go, but I certainly understand where the grandparent is coming from RE: boilerplate code. Also, this is going to be heavily influenced by the language and libraries you're using.


Except company B uses a single callback, returning an object that contains an error code when not successful. Company C uses a blocking library, and requires you to pass in your model object as a dictionary. See where I'm going with this?


Why not provide a C api? This would make it easy for community language bindings to capture the benefits you describe in languages you aren't natively supporting (such as Go, Erlang or Haskell).


It's a shame there isn't some simple standard (de-facto or otherwise) way of documenting REST APIs such that an idiomatic client library for each language could be generated automatically.

The OPTIONS method is a really underused part of HTTP and would be great for this purpose: http://zacstewart.com/2012/04/14/http-options-method.html
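
A quick sketch of what probing that could look like in Python; whether anything useful comes back depends entirely on the server (the URL is a placeholder):

  import requests

  resp = requests.options("https://api.example.com/course/324234")

  # Servers commonly advertise the permitted methods in the Allow header;
  # the linked article proposes also returning a machine-readable body.
  print(resp.headers.get("Allow"))
  print(resp.text)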


Check out github.com/RestDoc/specification, it's a project that got started after a few people (myself and Zac included) realized we were all solving the same problem. It's been languishing in relative obscurity because most of us don't have the bandwidth to properly promote it, but more contributions are very welcome!


Yes, there should be a WSDL for REST APIs.



There are several good points in the article, particularly the one about HTTPS.

That said, as a Clojure dev I'd much rather work with a nice Clojure HTTP library like clj-http than have to use their provided Java library.


The platform support argument is not very persuasive here. The beauty of open source and the community is that when you combine a popular service with commonly used platforms, someone in the community will likely release the needed API. Why not provide some community support for the platforms, but allow the users to release an API, since they know their needs better than you do?

On the other hand, the security argument is quite strong. Improperly implemented SSL handshaking, especially when dealing with payment transactions like they do, has the potential to be devastating. But this is where interacting with the community that is making the APIs would be key. There's a lot of value to be had when company and community development work together. They could contribute the SSL code themselves to the open source client projects.


I don't really care for this. The promise of web technologies, which has finally become reality over the past few years, is obviating the need for client libraries. This benefits both providers and consumers alike. The benefits cited by the OP are not that compelling. Most (all?) other APIs allow/encourage direct access, so for a large number of developers, that's a strong preference.


The title should add "by itself" at the end, as they are still using REST - just adding client libraries.


This headline is atrocious, this article has so incredibly little to do with REST mechanics.



