Your REST API should come with a client (silota.com)
107 points by gane5h on April 28, 2014 | hide | past | favorite | 80 comments



Relatedly, you'll want to ship first-party or "blessed" libraries for the big programming stacks as quickly as feasible. Start with the one your team uses and then roll out to the ones which are big in your customers' industries. They're vastly easier to consume for many of your users than a standard REST API is. (Compare: Twilio::Sms.send(from, to, "This is a message.") with Httparty.post("https://api.twilio.com/2010-04-01/Accounts/{AccountSid}/Mess..., {:From => from, :To => to, :Message => "This is a message"}). Of course, in an actual application, after the third time writing that I'd start working on a buggy, poorly tested wrapper for half of the Twilio API, or use someone's OSS buggy, poorly tested wrapper for half of the Twilio API. I actually did this prior to Twilio releasing a first-party Ruby API and will be crying tears about that timing for years to come.)


Gah, I have mixed feelings about this.

When you're using their internal language it's often great. Any other and it's often a nightmare. For example, it's very common to see people write bloody awful .Net wrappers that are completely non-idiomatic and a complete pita to use.

Often they're written in a way that makes it clear the author doesn't understand OO, or hasn't kept up with C# and still thinks it's just like Java, so writes extremely old-fashioned code. And namespaces: they want you to use a million namespaces. It's a minor thing, but a completely unnecessary complication. They could stick everything in one namespace and get no conflicts.

And then, because they've put out client libraries, they don't document their API properly.

Google, as usual, are the worst offender, their .Net library is really bad and incredibly overcomplicated. It does make you wonder about all the hype of 'best' engineers.

The other problem is that they think you'll be using their API one way when it actually needs to be another, and their code just gets in the way. Because they don't have a snippet that works without their library, you end up having to ILSpy their library, and then get greeted with shockingly bad code with millions of pointless interfaces that only get used once, because, again, they don't understand .Net.


> Google, as usual, are the worst offender, their .Net library is really bad and incredibly overcomplicated. It does make you wonder about all the hype of 'best' engineers.

Maybe their "'best' engineers" are working on anything but .Net? My impression of the Google culture is that they're more focused on platforms for which .Net is not a factor.


Tend to agree. Many individuals could write the libraries for Ruby, Python, JavaScript, and PHP, and it's likely that would cover 3/4 of your users' needs. .Net, Java, and even some of the functional langs don't usually justify the dev time and experience gap.

* To elaborate: small teams don't always have the experience or desire to support these langs; better left to the community.


You live in cuckoo land if you think ruby/python are more widely used than Java/C#.


in the context of restful apis?

Seems like it's mostly web apps that are building REST APIs. Most web apps are Ruby/Python/PHP. I'm pretty sure, but could be wrong. My hypothesis is that the builder of the web app uses langs they're familiar with when they build out a selection of client libraries. The claim of a lack of .Net libraries seems to support my thinking.


Funnily enough, Mailgun[0] has started to feature code snippets for cURL/Ruby/Python/PHP/Java/C# on their homepage for sending mail.

[0] http://www.mailgun.com/


You're typically going to copy/paste a snippet either way. Doesn't seem like a huge win.


Maybe that was a poor example, since I picked something which is one line either way.

How about this message signature:

Twilio::PhoneNumbers.buy_a_number!(options) #returns the new number or raises an error

I have this implemented in the bowels of Appointment Reminder, because back in the day Twilio did not ship that functionality with the API. The "snippet" to do it requires about 30 lines. They're also probably the wrong 30 lines, because e.g. if Internet gremlins bushwhack one of the HTTP requests, it dies uncleanly. I'd have been mighty obliged if it was one line which worked atomically. (It is, to my understanding, now that there exists a decent first-party Twilio library. I recall showing that method on a slide at Twilio HQ one day while begging for that first party library to get created.)
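A rough sketch of what that wrapper ends up looking like, in Python rather than Ruby. `search_numbers` and `purchase` are hypothetical stand-ins for the two underlying HTTP calls, not Twilio's actual API; the point is the retry and the single clean failure mode:

```python
class NumberPurchaseError(Exception):
    """Raised when the multi-step purchase cannot complete cleanly."""

def buy_a_number(search_numbers, purchase, area_code, retries=3):
    """Search for an available number and buy it, retrying transient failures.

    `search_numbers` and `purchase` are callables standing in for the two
    underlying HTTP requests; either may raise IOError on network trouble.
    Returns the new number or raises NumberPurchaseError.
    """
    last_error = None
    for _ in range(retries):
        try:
            candidates = search_numbers(area_code)
            if not candidates:
                raise NumberPurchaseError("no numbers available in %s" % area_code)
            return purchase(candidates[0])
        except IOError as exc:  # Internet gremlins bushwhacked a request
            last_error = exc
    raise NumberPurchaseError("gave up after %d attempts: %r" % (retries, last_error))
```

Without the retry loop and the single exception type, every caller has to reinvent this, which is exactly the "probably the wrong 30 lines" problem.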


I still feel like clients defeat the whole purpose of using simple technologies like HTTP verbs and JSON. But I can see that a) if your constituents want them you probably have to provide them and b) there might be some marketing benefit. Other than that, I think it's a shame.

And the reasons in the post are not particularly compelling. 1) Batch requests are usually unnecessary or benefit from a call optimized for batching. 2) Caching rarely needed, potentially dangerous and can be done elsewhere. 3) Throttling can/should be performed elsewhere and no way to prevent DOS anyway. 4) Timeouts are usually easy. 5) GZIP rarely necessary. 6) Dangerous to let someone else's code do it.


Whenever I use a third party REST library that doesn't have an SDK, usually the first thing I do is wrap it all up in what would be the SDK client.

This way I can bring into my application the following:

- Parameterised methods with code doc (so when I reference it I can see what's what in my IDE).
- Exception handling.
- My own batch methods in the absence of them in the API. E.g. book delivery date API = get delivery slots for address, select the appropriate delivery slot matching the date, book it. All this can be one client method which has an exception for when things go wrong.
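A minimal sketch of that pattern, with a hypothetical delivery API; `transport` is injected so the HTTP layer stays swappable and testable, and all three steps collapse into one documented method with one exception type:

```python
class DeliveryApiError(Exception):
    """Single exception type callers handle when any step fails."""

class DeliveryClient:
    """Thin wrapper over a hypothetical delivery REST API.

    `transport` is any callable (method, path, params) -> dict.
    """
    def __init__(self, transport):
        self._get = lambda path, **p: transport("GET", path, p)
        self._post = lambda path, **p: transport("POST", path, p)

    def book_delivery(self, address, date):
        """Composed 'batch' method: fetch slots, pick one, book it."""
        slots = self._get("/slots", address=address)
        matching = [s for s in slots["slots"] if s["date"] == date]
        if not matching:
            raise DeliveryApiError("no slot available on %s" % date)
        return self._post("/bookings", slot_id=matching[0]["id"])
```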


"5) GZIP rarely necessary."

If the server already has the bytes gzipped, it is often a pure win to ship the gzipped bytes: consider that the client may be able to finish uncompressing the gzipped response earlier than it could otherwise have received the last byte of the uncompressed response.


That's a neat idea. But in the case that the server doesn't have them zipped, I'm not sure that the blanket statement made in the blog post is really right, either.


I don't think the idea in the post was to perform throttling in the client. The idea is to provide a best practice handling of throttling in the client (e.g. what to do when receiving a 429). Since the 429 is mentioned in the article, it's a reasonable assumption that the throttling happens outside of the client.
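A sketch of what that client-side handling of a 429 might look like, assuming the server sends a Retry-After header (the function names here are illustrative, not from the article):

```python
import time

class RateLimitedError(Exception):
    """Raised when the server keeps returning 429 past our patience."""

def request_with_backoff(do_request, max_attempts=4, sleep=time.sleep):
    """Call `do_request` (a zero-arg callable returning (status, headers, body))
    and honour 429 responses by waiting out the Retry-After header."""
    for attempt in range(max_attempts):
        status, headers, body = do_request()
        if status != 429:
            return status, body
        # Fall back to exponential backoff if the server gave no hint.
        delay = float(headers.get("Retry-After", 2 ** attempt))
        sleep(delay)
    raise RateLimitedError("still throttled after %d attempts" % max_attempts)
```

The throttling itself still lives server-side; the client just encodes the polite response to it.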


If REST client libs didn't suck, this wouldn't be needed.

It's a load of effort to correctly consume from a RESTful webservice currently.

I should be able to get going in my REPL with something like:

    >>> from rest import client
    >>> proxy = client(url)
    >>> print proxy.resources
    ['foo', 'bar']
    >>> help(proxy.foo)
    Help text from the rest service...
    >>> proxy.foo(123)
    321
    # repeat calls transparently handle caching


Have you tried Hammock? It fakes it very well:

  >>> from hammock import Hammock as Github

  >>> # Let's create the first chain of hammock using base api url
  >>> github = Github('https://api.github.com')

  >>> # Ok, let the magic happen: ask github for hammock watchers
  >>> resp = github.repos('kadirpekel', 'hammock').watchers.GET()

  >>> # now you're ready to take a rest for the rest of the code :)
  >>> for watcher in resp.json: print watcher.get('login')
  kadirpekel
  ...
  ..

Your "resources" example only works if there's a /resources endpoint, but client.foo(id).GET() works just fine.

https://pypi.python.org/pypi/hammock


This is the closest i've seen. Excellent link, thanks!


Thanks! I thought I was the only one. The RESTful API is designed to be really simple for clients. If you designed yours well, your users won't need your client API.

For instance, on iOS we have the wonderful RestKit Client[0]. If you create your own client it means I would have to write specific cases for your API and miss all the RestKit features. Don't get me wrong, I could still use RestKit with your API, but when I see an API client available, I always think "this API may be badly designed, it needs specific code".

[0](https://github.com/RestKit/RestKit)


A few things off the top of my head which are different across many RESTful APIs which prevent a unified client from being possible:

- Referring to other data paths in the API in a standard format

- Authentication. Shy of OAuth, it is totally different on each API. Most APIs avoid OAuth because it's a PITA

- Partial PUTs vs PATCH vs sub-resources

All sorts of minor things are different across rest APIs, and that results in client libraries needing to be widely different.

We are in need of a standardized API format which we can build compatible server & client libraries against. Something like SOAP for the JSON era. There are a few out there, but none have really gone anywhere. Are any of these extended-REST wrappers in production by any big companies? I'd love to be corrected.

If not, maybe a high-profile company with a really nice API design could publish a standard on their API structure and refactor out their transport code to provide us with these libraries. If successful, they could be known for introducing a widely-used transport layer for the web industry.


Authentication shenanigans and needing to use PATCH in a sane way are much better reasons for using a custom client than all six reasons listed in the original article...

I don't get what you mean about data paths, though. Even if you had a hypothetical good REST client you'd still want a single point to do the URL string assembly to save typing and do some validation but this doesn't really require a whole different client.


One example I know of is OData[1]. The URI structure is standardized, all query operators are standardized, and the media type is JSON based on Atom. They also have one of the nicest batch request support I've seen (using multipart/mixed content type).

Needless to say, OData is a standard pushed by Microsoft, so the ecosystem for libraries is mostly .NET, although they provide a JavaScript library as well (datajs).

[1] http://www.odata.org/


This kind of thing is why I'm predicting a return to WS-*, or else a reimplementation of it on top of REST. The XML backlash was mostly correct but having a standard, well-specified way of creating HTTP APIs and generating clients for them from a single endpoint is a baby we threw out with the bathwater.


The horribly-named HATEOAS is one possible solution: http://timelessrepo.com/haters-gonna-hateoas


Maybe, if the library support is there. Right now HATEOAS seems to mean replacing your domain-specific action tags with boilerplatey <link rel=...> and not a lot else.


Yes, if HATEOAS were actually done, all you'd need to do is have libraries for handling the content types used in the API, and with a suitable generic client library everything else would come for free.

But virtually no one does that with their APIs, and virtually no one builds client libraries on the assumption that services will do that, so there's something of a chicken-and-egg problem.


I wonder how to implement that. This kind of stuff is generated from WSDLs in C#/VS, but it's an interesting idea to do it in a REST service.


Like WADL?

https://en.wikipedia.org/wiki/Web_Application_Description_La...

I guess I really don't get REST if shipping a REST client for a specific API is a thing. If you're gonna provide a client, who cares if it's really REST after all?


https://pypi.python.org/pypi/wadllib/1.1.4 looks quite complicated. I'd really like to be able to do some_object.some_remote_method_call(parameters) like the parent.

I guess REST is there to make it easier for other people to implement your API. SOAP was hard without libraries, I can hit your REST endpoint with curl.


"I can hit your REST endpoint with curl."

I must admit that was my initial reaction when I saw the title of this article - I tend to regard curl as the universal RESTful API client. I don't think I've written or consumed a RESTful API where I didn't spend a fair amount of time interacting with the system through curl.


cURL works fine for RPC-style or non-RESTful clients too. Instead of "-XPUT" you make your URL end with /foo/addfoo or something. And who cares if the FooID is in the URL or in postdata? In fact, cURL makes it easier to add another data parameter than change the URL.


It's very difficult, if not impossible, to make a client which behaves correctly for all services. A simple example to demonstrate my point: the Facebook FQL API limits the maximum size of a result-set at any point in time to 5000. Therefore, you have to work around that limitation in a very specific way that no generic client can handle properly. That is partly why the advice given in the OP is so good: because different services can and do handle each of those points in a different way.


Agreed; REST clients/frameworks should have ways of handling all six of these problems--none of them are specific to a particular API.

If the client lib has an API-specific way of handling "caching", say, then the number of ways of handling "caching" is O(n) in the number of APIs you consume. If you use the REST APIs directly, then the HTTP's standard, debugged, documented caching mechanism is the only one you ever need to know.


Maybe I'm being a bit dim, but could you explain what "client.foo(123)" does in this client - it appears to be a resource and 'callable' - so is this doing a GET on that resource, what about other methods?


Yeah good question.

I think GET would be a reasonable default here, perhaps 123 is a query string parameter.

For update / POST, create / PUT etc. it's less clear cut in my example language of Python, since all we have is a callable; we don't have constructs such as "new" to map behaviours to (repurposing "del" isn't possible).

Perhaps there's an extra parameter:

    client.account(id=123, name="something", _method="PUT")
Maybe there's a postfix operation:

    client.account(id=123, name="something").PUT()
Maybe the verbs are separate:

    from rest import client, PUT
    PUT(client.account(id=123, name="something"))
I think the postfix system probably makes the most sense. I'm sure there are other ways I could come up with.
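The postfix style can be implemented in surprisingly little code with `__getattr__`, much as Hammock does. A minimal sketch, with the HTTP transport injected as a callable so nothing here is tied to a real network library:

```python
class Resource:
    """Builds a URL path from attribute access and call arguments; a
    trailing .GET()/.PUT()/etc. chooses the HTTP verb (postfix style)."""
    def __init__(self, transport, base, parts=(), params=None):
        self._transport, self._base = transport, base
        self._parts, self._params = parts, params or {}

    def __getattr__(self, name):
        if name in ("GET", "POST", "PUT", "DELETE"):
            url = "/".join((self._base,) + self._parts)
            return lambda: self._transport(name, url, self._params)
        # Any other attribute extends the path.
        return Resource(self._transport, self._base,
                        self._parts + (name,), self._params)

    def __call__(self, *segments, **params):
        # Positional args become path segments, keyword args become parameters.
        merged = dict(self._params, **params)
        return Resource(self._transport, self._base,
                        self._parts + tuple(str(s) for s in segments), merged)
```

With this, `client.account(id=123, name="something").PUT()` resolves the path and parameters lazily and only touches the wire when the verb is called.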



Providers should definitely provide clients. This is one of the things I worked on at Twilio and it's extremely important for onboarding new customers. Support as many languages and frameworks as you can sustain and make them first-class (just as well documented as REST, native to the language, etc).

However, I've also seen the other side of this working at IFTTT where (at the time) we had a gemfile a mile long. That got really hairy. I now try to avoid using clients. I go into it in great detail here: https://www.youtube.com/watch?v=dBO62A3XaSs and we've talked about this many times on trafficandweather.io if you want to learn more.


Is Twilio really rest? Versioning and formatting info in URLs isn't REST-like. Plus the docs suggest you construct URLs, which isn't RESTful.

I love Twilio's API, I just don't understand the REST part. I fail to see how "PATCH /accounts/123/ <postbody>" is better than "POST /accounts/updateaccount <postbody accountid=123 ...>", especially when hidden behind a lovely client.


It seems they try to provide a somewhat hypermedia oriented API.

What I really don't like is they don't accept my Accept headers, and I have to change the extension on the end. Going to /.json feels icky to me.


For some additional insight on Accept-based conneg: the guys who created it really wish they hadn't.

http://www.alvestrand.no/pipermail/ietf-types/2006-April/001...

"Regarding proactive negotiation in HTTP/2, I'll note that Waka strips all negotiation fields. I find the entire feature revolting, from every architectural perspective, and would take the opportunity of 2.x to remove it entirely." Roy Fielding http://lists.w3.org/Archives/Public/ietf-http-wg/2013JanMar/...


Enjoyed the presentation; a link to the slides[0] would have saved some googling.

Is this: https://github.com/hannestyden/hyperspec the HyperSpec you refer to?

[0]: https://speakerdeck.com/johnsheehan/building-api-integration...


At Leftronic we've also started to avoid clients. When you connect to many APIs you start running into issues with poorly supported clients and mile-long pip freezes. We like providing our own clients to our API because it's a nice quick way to get started but I think nothing beats good old python-requests and reading API docs.


I just watched the talk. It's really interesting and raises a few things I haven't thought about before.

Do you have any more information about automatic service/endpoint discovery and also about the smart HTTP client you use?

I would have thought that failing over to another endpoint address would have been done at a load balancer level.


I can't recall the exact episode, but we talked more about smart client on trafficandweather.io

We definitely need to publish more about it


I think https://github.com/pksunkara/alpaca is a good starting point. Given a web API, it generates client libraries in ruby, python, php and node.js


Are there any other alternatives to alpaca? How is the quality of the generated client code?


No, as far as I know, alpaca is the first of its type. As the author of the code, I can guarantee good quality of the generated client code. I have written the templates such that they follow their respective language conventions and ecosystems.


The Google APIs Client Generator is an awesome tool that automates this process. It takes a service description and generates a complete client library:

https://code.google.com/p/google-apis-client-generator/

The service is defined in a platform-neutral discovery document, which can be used by any provider:

https://developers.google.com/discovery/v1/reference/apis

There are generators for Python, Java, .NET, Objective-C, PHP, Go, GWT, Node.js, Ruby, and others.


> The Google APIs Client Generator is an awesome tool that automates this process. It takes a service description and generates a complete client library

So we've gone full circle and arrived back at SOAP. Why didn't we just keep using SOAP in the first place?


Well, at least the underlying API is much simpler - you can do things manually if necessary. Trying to talk to WCF SOAP services from Ruby has been really painful - none of the SOAP libraries actually work with the Microsoft stuff, and it's so overcomplicated that making requests manually is a pain.


Because SOAP is an unnecessary overhead and complication. It also kills performance.


Because everybody's SOAP implementation is subtly different, leading to bugs/pain/things not working.


    Caching, throttling, timeouts, gzip, error handling.
This all seems like something any serious user of a REST API should know how to handle very well. If not otherwise, then by use of a standard library.

Why does every API owner have to write basically identical clients in every language out there?


They wouldn't be substantially identical clients.

For example, I have extensive use of Twilio and Pin Payments (among other APIs) in Appointment Reminder. My use of the Twilio API can potentially spike into the hundreds of requests a second range, but if it gets into thousands of requests a second, that needs to throw a PresidentOfMadagascarException ("Shut. Down. Everything.") Code which does not interact with the Twilio API but instead implements meta-features on top of the Twilio API comprises 80%+ of lines of code implicating the Twilio API in my application.

By comparison, querying the Pin Payments API doesn't need a rate limit at all, but does need a sane caching strategy, because it requires thousands of API calls to answer a simple, common question like "How much did we sell in 2013?" Again, meta-features for the API comprise over 80% of lines of code implicating that API, but they're totally different meta-features.

AR is doing the 90% case with both of these APIs -- sane defaults in first-party clients would have greatly eased my implementation of them, allowing me to focus on features which actually sell AR, to the benefit of both my business and those of the APIs at issue, since their monthly revenue from me scales linearly with my success.
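The "shut down everything" guard from the Twilio example above can be sketched as a sliding-window counter that raises past a hard ceiling instead of hammering the upstream API (the class and exception names here follow the comment's joke; everything else is a hypothetical illustration, not AR's actual code):

```python
import collections
import time

class PresidentOfMadagascarException(Exception):
    """Shut. Down. Everything."""

class SpikeGuard:
    """Allows bursty traffic up to `ceiling` calls per `window` seconds,
    then raises rather than letting a runaway loop hit the API."""
    def __init__(self, ceiling, window=1.0, clock=time.monotonic):
        self._ceiling, self._window, self._clock = ceiling, window, clock
        self._stamps = collections.deque()

    def check(self):
        now = self._clock()
        # Drop timestamps that have aged out of the window.
        while self._stamps and now - self._stamps[0] > self._window:
            self._stamps.popleft()
        if len(self._stamps) >= self._ceiling:
            raise PresidentOfMadagascarException(
                "%d calls in %.1fs" % (len(self._stamps), self._window))
        self._stamps.append(now)
```

The point is that this policy is application-specific: the right ceiling for one API is meaningless for another, which is why the clients wouldn't be substantially identical.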


Another reason is that your API likely sucks/is broken if you haven't made a client for it (and thus figured out the gaps/problems).


I learnt this the hard way. Initially, I designed the API with bigints for ids and found some older versions of PHP didn't support bigints. I had to switch to using strings.



Your integration tests should count as one client?


Your integration test will be more realistic and simpler to write if you use a client.


But if you do that, you're testing your client rather than your API. That's bound to matter: either your client is abstracting behavior and using only a specific subset of how your API could be used, or your client is not really useful.

For example, if your client implements fallback handling and caching behavior "the right way" for your API, and you only test using your client, then you're not testing your API when it is used "the wrong way" by developers who bypass your client. Those are important test cases too, unless you treat the wire protocol as an internal implementation detail and use your client's API as "the API". But if you do that, don't call it RESTful. (Technically it might still be RESTful, but that's a loaded term now and it will confuse developers who expect raw HTTP.)


Fine, but you can't provide clients for all languages and frameworks, so make sure your REST API is simple enough for those who need to make do without an official client library.


I think shipping your own client lib sort of defeats the whole point of REST as the "one true way" to build APIs. I agree that building a client library is useful and makes it easier to integrate with, but it also proves that REST on its own is not completely superior to something as conceptually simple as JSON-RPC.

I fully understand the benefits of REST, but on a lot of projects, a single endpoint and a simple RPC protocol would be easier to integrate without need for a separate client library.

Also, client libraries don't always do the best job of making it clear when you are making api calls over the wire and that can be quite problematic.

For example, I've worked with code where it made an http request to get a price, which was then used in a Model calculation. This code looked completely harmless at the highest level, but each page request was hitting the server 100+ times to do all the calculations needed.

After finding the problem, adding some caching was easy, and now things run faster. However, that level of abstraction and indirection makes it far less obvious if/when/where HTTP requests happen, and that isn't always a good thing.


I wish clients would write themselves based on REST APIs.


Sounds like the good ol' days of SOAP and WSDL.


No, please. No SOAP or WSDL. Over-engineered.


If you want flashbacks to the horrors of WSDL then have a look at the WADL link posted above:

http://en.wikipedia.org/wiki/Web_Application_Description_Lan...


I just used SOAP on an architecture prototype earlier this year. It is pretty much alive in the classic enterprise world.


Now it's Hypermedia.


> You’ll have to mask GET requests with a HEAD request and appropriately handle HTTP status code 304 for resource not modified.

I'm not sure I understand this. Surely the point of if-modified-since and etag headers is that you can send them in the GET request and get back a 304, there is no need to do a HEAD-then-GET?
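Right: the GET itself carries the validator, and a 304 means "use what you already have". A sketch of that client-side cache, with the transport stubbed out as a callable so the conditional logic is visible (the function shapes are assumptions for illustration):

```python
class ConditionalCache:
    """Cache keyed by URL that revalidates with If-None-Match on each GET;
    a 304 response means reuse the cached body. No HEAD request needed."""
    def __init__(self, do_get):
        # do_get(url, headers) -> (status, etag, body)
        self._do_get = do_get
        self._cache = {}  # url -> (etag, body)

    def get(self, url):
        headers = {}
        if url in self._cache:
            headers["If-None-Match"] = self._cache[url][0]
        status, etag, body = self._do_get(url, headers)
        if status == 304:
            return self._cache[url][1]
        self._cache[url] = (etag, body)
        return body
```

The same shape works with If-Modified-Since instead of an ETag.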


In my opinion, much of the described functionality of an API client actually does not belong in a client implementation. An API client cannot be shipped for all platforms. That either blocks platforms from using your API or leaves room for API consumers to get around your policies, like throttling and caching. I think enforcing policies should be done much closer to your API, on the server side. An API proxy gateway could be used for most of the described points and would secure your API much better, without the extra effort of writing client libraries.


I think there's another benefit here: edge cases will show up. I wrote a Python client for a company's REST API which revealed an RFC bug during testing. From the source:

> For POSTs we do NOT encode our data, as CompanyX's REST API expects square brackets which are normally encoded according to RFC 1738. urllib.urlencode encodes square brackets which the API doesn't like.

Retrospectively, I should have asked the company to fix their bug, rather than work around it myself, but at the time I was a less confident programmer.
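The mismatch in question can be shown with the stdlib (modern urllib.parse here; the original would have been Python 2's urllib). The default encoder escapes brackets, and the workaround is to encode piece by piece with brackets marked safe; `encode_keeping_brackets` is a hypothetical helper, not from the client in question:

```python
from urllib.parse import quote_plus

def encode_keeping_brackets(params):
    """Form-encode `params` but leave square brackets literal, for servers
    that (against RFC 1738 / 3986 expectations) insist on raw '[' and ']'."""
    return "&".join(
        "%s=%s" % (quote_plus(str(k), safe="[]"), quote_plus(str(v), safe="[]"))
        for k, v in params.items())
```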


Your API should handle collections in any case. Otherwise this is just reinventing ESB while losing most of the stability and service management benefits by fragmenting and decentralising.


It's nice that this article claims everyone who makes an api should provide a client. But what happens when it uses hypermedia restful constraints? I think that any company which is able to provide multiple implementations for their API has already made it financially to afford those luxuries.

I know of some large companies which do this already, one of them being braintree, which offers an amazing API in many different languages, but they are already profitable and were bought by paypal.


I'm biased, since I'm the founder of Mashape [1], but maybe this could be of some interest. I'd like to point out that one of the features that Mashape offers is auto-generating client libraries in 8 different languages, leveraging Unirest [2] that we open sourced last year.

[1] http://mashape.com

[2] http://unirest.io


Agree with this, even if just done in one reference language. It is much easier for the community to port an existing binding to other languages, than to implement a new client from scratch.

Two notes:

2) No need to mask requests with a HEAD; a GET can also return a 304 directly.

6) De-duplication of calls: Any method except POST should be idempotent already, hence also a retry-on-error is trivial in those cases.
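The idempotent-retry point can be sketched in a few lines; `send` is an injected stand-in for the actual HTTP call, and the method set follows the HTTP spec's idempotent methods:

```python
IDEMPOTENT = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS"}

def send_with_retry(send, method, url, retries=3):
    """Retry `send(method, url)` on IOError, but only for methods that are
    safe to repeat; a failed POST must surface immediately."""
    attempts = retries if method in IDEMPOTENT else 1
    for attempt in range(attempts):
        try:
            return send(method, url)
        except IOError:
            if attempt == attempts - 1:
                raise
```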


While I agree with everything you say, it leads me to the opposite conclusion: that I should not roll my own half-assed REST client.


Wow! We did exactly these things in Q. I was surprised to see exactly the features we tout in our SDK mentioned one by one. Compare to this:

http://platform.qbix.com/guide/patterns

The Q platform was supposed to take care of the things that you have to do anyway when writing social apps.


Most of these things should be (optional) features of a good HTTP client library, which is kind of the point of following conventions like REST.

Maybe that just means writing a thin wrapper around one of these libraries specific for your API. But don't reinvent the wheel every time.


Your API sucks and is not as idiomatic as you think if you can't use curl as a client.


Your docs should be curl examples showing how to use the REST API. If you are providing a client library in multiple languages, not only are you doing it wrong, but you've also missed the entire point of building a REST API in the first place.

Your consumers should know how to handle HTTP requests from within their language. If they don't, then no Client for your API will save them.


Not only should you include a client, you should write the client first.



