I don't think this is messy; this is exactly how REST is intended to work, and it is beautiful. I think the most common misconception is that REST is simply a way to do CRUD over HTTP, when in fact the two are completely orthogonal. You can do CRUD over REST, sure, but REST is capable of much more. That's the realization the author came to with his `/request` and `/audit` resources, and it is the zen of REST. Understanding that will put HTTP to work for you, instead of you fighting against it.
The reason Level 3 HATEOAS REST is hard in Backbone and most other frameworks -- both frontend and backend -- is because they have yet to obtain the same enlightenment.
It's hard for me to submit to a philosophy for reasons like "it's beautiful" and "you will reach zen." Level 3 enlightenment sounds very cultish to me. :) I've dropped the notion of REST and been very happy with simple RPC, instead of contorting my mental model into resources or to align with the HTTP spec.
I personally have found zen in applying simpler concepts to software development: composition over inheritance in my API design, mixing in aspects like content negotiation or caching only when those complexities become necessary; separation of concerns, making sure endpoints don't do too much, and the realization of concerns vs. technology [1]; really thinking about the notion of simplicity as described by Rich Hickey in Simple Made Easy [2]. Or "There are only two hard problems in Computer Science: cache invalidation and naming things" -- putting off caching until an endpoint becomes a problem, and not worrying about whether my URL structure is RESTful.
Here's an example of an API that I find beautiful [3].
I used to be a REST evangelist like that b̶u̶t̶ ̶t̶h̶e̶n̶ ̶I̶ ̶t̶o̶o̶k̶ ̶a̶n̶ ̶a̶r̶r̶o̶w̶ ̶t̶o̶ ̶t̶h̶e̶ ̶k̶n̶e̶e̶.
Now I prefer to simply do RPC over HTTP with a JSON payload.
> I think the most common misconception is that REST is simply a way to do CRUD over HTTP, when in fact the two are completely orthogonal.
I dunno, REST maps to CRUD pretty well -- but it's pretty limited if you think of it as restricted to CRUD operations against the equivalent of base tables in whatever your datastore of choice is. It's more like CRUD operations against views, with arbitrarily complex rules mapping operations on the views to operations against base tables.
In my opinion, thinking like this is how our industry goes down blind alleys. To me this is how we wasted 10 years trying to XML-ify everything.
Specifically, your arguments rest on the vague and indefinable quality of "enlightenment." I think that this use of "enlightenment" means something along the lines of "I got something to work within a thought framework that appeals to me." But if he had started with a different framework (like RPC), he could have gotten that to work too, and perhaps would be feeling the "RPC enlightenment" because all of these problems can be solved within RPC too.
No one is arguing that HTTP-based APIs can only do CRUD. You can tunnel basically anything over HTTP, because ultimately the payload is just bytes. But the question is what using HTTP (and/or REST) has actually bought you, compared to competing approaches.
Graydon Hoare (one of the original authors of Rust) wrote an essay about this a while ago that I really love. Unfortunately he took it off the net, but it's still available on the wayback machine: http://web.archive.org/web/20040308200432/http://www.venge.n...
The essay is written in a very whimsical and wandering style, but the point is dead on:
If you want to store or transmit a message, you can do
cryptography, steganography, normalized database tables,
huffman codes, elliptic curves, web pages, morse code,
semaphores, java bytecode, bit-mapped images, wavelet
coefficients, s-expressions, basically anything you can
possibly dream up which codes for some bits. In all cases,
if you're coding bits and you are using a lossless system,
the *only* thing which matters is how *convenient* the encoding
is. There's nothing which makes one encoding "do it better"
than any other, aside from various external measurements of
convenience such as size, speed of encoding, speed of
decoding, hand-editability, self-consistency, commonality,
etc.
HTTP/REST and XML inhabit slightly different design spaces, so for HTTP/REST some of the external measurements might be a bit different. But if you are trying to argue that REST is better than competing technologies, the question is: what objective benefits does it offer, compared to those other technologies?
Clearly there are benefits to REST/HTTP (being able to use a web browser as an ad-hoc UI in many cases is one), but the idea of "enlightenment" should not be a substitute for an actual compare/contrast of approaches.
So yes, of course the author could find a solution to his problems that still used HTTP. The question is: once he does this, is he better off than if he had chosen a competing technology? (imagining a world where you aren't effectively forced to use HTTP to make it through firewalls).
I think the parent poster should probably have emphasized the importance of HATEOAS more and the power hypertext/linking gives you. I won't spend any time on it in this comment since there are plenty of resources on the web for that, but that's really been the important thing about REST for me. It really is a wonderful thing that clients cannot even attempt invalid state transitions, and I don't think I've seen it done in any other even moderately popular RPC/IPC framework. In other frameworks/architectures for RPC/IPC all operations are just "available" all the time, but if you were to actually attempt to invoke them you'd get "operation not supported" (or the like).
Strangely, very few people (at least that I've spoken to) who are fans of REST seem to have even picked up on the HATEOAS principles. I'm guessing that their fandom is more predicated on a hatred of SOAP/XML and love of the simplicity of JSON -- which is kind of strange since HATEOAS also works beautifully for Content-type=XML over HTTP.
EDIT: An attempt to formalize the Richardson Maturity Model (L3) for REST+JSON/HTTP can be found at: http://stateless.co/hal_specification.html (I'm not affiliated with it in any way, just thought it was interesting.)
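To make the HATEOAS point above concrete, here's a sketch (resource and field values are hypothetical, but the `_links` structure follows the HAL spec linked above) of a response where the available state transitions are expressed as links rather than hardcoded into the client:

```http
HTTP/1.1 200 OK
Content-Type: application/hal+json

{
  "_links": {
    "self":   { "href": "/orders/42" },
    "cancel": { "href": "/orders/42/cancellation" }
  },
  "status": "pending",
  "total": 19.99
}
```

Once the order ships, the server simply stops emitting the "cancel" link, and a well-behaved client can no longer even attempt that state transition.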
> But if you are trying to argue that REST is better than competing technologies, the question is: what objective benefits does it offer, compared to those other technologies?
REST isn't a technology, it's an architectural style. Regarding its benefits, they are all well specified in Fielding's thesis. REST is composed of a few constraints, and abiding by each one gives you certain benefits.
That said, I agree with you vis-à-vis the need to avoid blinders. In my opinion, there's nothing wrong with using RPC if that's the most adequate solution. What bothers me are all those new RPC-over-HTTP APIs which people insist on calling RESTful.
So much specialized hardware and software has been designed to take advantage of the protocol. REST is really just an architectural guide for using HTTP as it was intended, so that those tools can, ideally, provide some efficiency. The entity being transferred isn't really what matters. It's the metadata included about the entity, its related resources, the client and server capabilities, the explicit expression of intent and result, and all the other header info that allows those tools to do their thing.
HTTP's ubiquity, for better or worse, makes it about the only option for any web-based tool where end-to-end control isn't practical. Better protocols can be made, better hardware and software can be made to utilize them, and some have been, but their reach is very limited for now. Understanding the thing we're stuck with is pretty important. Luckily, countless man-hours have gone into studying the protocol and building the tools that take advantage of it, and much of that information has been shared openly. I think that's a pretty big advantage over anything else.
I wonder why there are so few standardization efforts specifying a generalized hypertext format for HATEOAS; Atom gets mentioned whenever this aspect comes up, but I think there should be a more suitable format.
I've been meaning to write something similar about the whole REST craze. REST breaks down pretty rapidly once you get out of the key-value-store paradigm (read: anything involving child objects).
REST lacks the ability to relay full state-change semantics without hackery. As the article pointed out, it forces you to be extra chatty over http, which is far from free over the congested, global network that is the interwebs.
For example, what if a single PUT request creates multiple sub-objects? How does my server reply with multiple Location headers? Do I have to first re-get the created object's child locations and re-request each individually?
How about just sending the full object state back as the response to my PUT request? Well, according to REST, the body of the response just needs to be a description of the error or success status. Basically, REST sucks for reducing round trips if you're going to follow it pedantically. The theory is sound, but it needs to be updated to dictate how the server can send back more detailed info in response to POST/PUT/PATCH requests.
> For example, what if a single PUT request creates multiple sub-objects?
If a PUT, POST, or PATCH does that, then it does that. So what?
> How does my server reply with multiple location headers?
It doesn't respond with multiple Location headers. With PUT or POST, respond with a 201 Created status, the Location: header containing the URI of the parent object (the resource directly created by the PUT/POST), and an entity body, in some appropriate format, containing the URIs of all the created objects. (If there is a representation of the main resource that would do this and is acceptable to the client, that would seem to be an ideal way of communicating it.)
With PATCH, much the same, but with 200 OK.
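A minimal sketch of what such a response might look like (the URIs and field names here are hypothetical, just to illustrate the shape):

```http
HTTP/1.1 201 Created
Location: /invoices/17
Content-Type: application/json

{
  "self": "/invoices/17",
  "lineItems": [
    "/invoices/17/lines/1",
    "/invoices/17/lines/2"
  ]
}
```

The single Location header points at the parent, and the body enumerates every child URI, so the client never needs a follow-up round trip to discover them.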
> Do I have to first re-get the created object's child locations and re-request each individually?
There's no reason for a REST API to require that.
> How about just sending the full object state back as the response to my PUT request?
I see no reason why sending a full resource representation in a 201 response is problematic, provided the client has indicated that it is willing to accept the relevant media type, especially if the representation does what the HTTP/1.1 spec says the 201 response entity should do. (The one thing common resource representations might not do that they should is provide their own location and the direct locations of any embedded subentities that are separately addressable, but there is no reason a resource representation couldn't do that, and I'd argue that in REST it would be desirable for representations to do so.)
> Well, according to REST, the body of the response just needs to be a description of the error or success status.
As stated, that's accurate, but you seem to be reading it as "needs to be just" rather than "just needs to be" (i.e., misunderstanding it as a maximum allowed content rather than a minimum required content).
> Basically, REST sucks for reducing round trips if you're going to follow it pedantically.
Except that none of the problems you've pointed to in that regard have anything to do with REST.
> I'd argue that in REST it would be desirable for resource representations to do that (provide [...] the direct locations of any embedded subentities that are separately addressable).
You could actually go a step further - if you're using "Hypertext as the engine of application state" (HATEOAS), URIs for the locations of direct subentities need to be there, if the client is expected to be able to make those state transitions, for the API to be "fully RESTful". (Though, I'd personally agree with the article that features like discoverability and content negotiation are secondary to those you get from idempotence/properly using HTTP methods, and feel they should be considered bonus features, rather than strict requirements)
> You could actually go a step further - if you're using "Hypertext as the engine of application state" (HATEOAS), URIs for the locations of direct subentities need to be there, if the client is expected to be able to make those state transitions, for the API to be "fully RESTful".
Sure, I'd agree with that.
> Though, I'd personally agree with the article that features like discoverability and content negotiation are secondary to those you get from idempotence/properly using HTTP methods, and feel they should be considered bonus features, rather than strict requirements)
I think properly using HTTP is sufficient for calling something an HTTP API, but HATEOAS is necessary for calling it (accurately, at least) a REST API.
There's too much "REST is a popular idea, so let's call our API REST because we think it's good for our use case, even if it's not REST".
All APIs don't need to be REST APIs, but APIs that call themselves REST APIs should actually be REST APIs, and not just HTTP APIs.
> How about just sending the full object state back as the response to my PUT request?
That is what I do, and I don't see anything wrong with it. PUT effectively replaces the state of the resource, so you get the latest state back right away. That state should be what was sent in the request, barring, for example, a conflict when multiple clients write at once, or some incrementing revision-id scheme.
> For example, what if a single PUT request creates multiple sub-objects?
I'd use POST for creating/adding objects. I think of PUT as idempotent and, as I mentioned above, use it to replace the state of the resource. So if this one POST ends up creating multiple sub-objects, you can return a JSON object that encapsulates URIs to those new objects.
Remember resources don't have to map to internal objects, db rows or other such things. You can have the objects or db rows represented or used in different resources. If it makes sense and if you have this transactional interface, maybe it makes sense to have an explicit transaction resource that is in charge of managing a transaction (where multiple things happen and then it kind of becomes very explicit).
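The explicit transaction resource suggested above might be sketched like this (all URIs and payload fields are hypothetical, just one way to make the transaction a first-class resource):

```http
POST /transactions HTTP/1.1
Content-Type: application/json

{
  "operations": [
    { "method": "POST", "path": "/accounts/a/debits",  "body": { "amount": 100 } },
    { "method": "POST", "path": "/accounts/b/credits", "body": { "amount": 100 } }
  ]
}

HTTP/1.1 201 Created
Location: /transactions/99
```

The transaction itself now has a URI, so its status, its created sub-objects, or even a rollback can be exposed as further resources under /transactions/99.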
> How about just sending the full object state back as the response to my PUT request? Well, according to REST, the body of the response just needs to be a description of the error or success status.
This isn't strictly true:
    If the target resource does not have a current representation and the
    PUT successfully creates one, then the origin server MUST inform the
    user agent by sending a 201 (Created) response. If the target resource
    does have a current representation and that representation is
    successfully modified in accordance with the state of the enclosed
    representation, then the origin server MUST send either a 200 (OK) or
    a 204 (No Content) response to indicate successful completion of the
    request. [0]
Both the 200 and 201 payloads may include anything you want, including a giant representation with many constituent parts (which parts would ideally each contain a link to its respective associated resource).
By merging children into the parent resource, he traded a reduction in requests for a loss of separation of concerns.
How about instead adding a layer whose only concern is reducing the number of requests for the client? It would collect and merge the info from parent and child resources and return it in merged form to the client.
I really do wish HTTP had a mechanism for responding to a single request with multiple combined response bodies as if requests were made for each individually (from the perspective of, e.g., a caching proxy) - the loss of separation of concerns from merging children into the parent isn't just a usability issue, it also means you lose your cache granularity - the cache for the parent object is now stale whenever a child is modified, and a request for a child after getting the parent won't hit the cache at the request level.
There's keep-alive, but that still means that if you want to get a parent and its children, you have to wait for the parent request round-trip.
To add - this is a major issue we've been working through at Slant.co - we've combined child objects into the requests for the parent objects (sometimes even two levels down), but it's significantly complicated caching efforts - we're in the process of building some namespacing into our server-side request cache, so we can transparently make cached Question and Option pages stale whenever, e.g., the title of a Pro/Con gets changed. If HTTP allowed multiple responses, we could treat everything transparently server-side as individual requests, and get much higher hit-rates for proxy and client-side caches.
> I really do wish HTTP had a mechanism for responding to a single request with multiple combined response bodies as if requests were made for each individually
That seems to be a pretty significant feature of SPDY and the in-progress HTTP/2.0 work.
I wasn't aware of that, that's good to hear. I'm under the impression, though, that since SPDY encrypts everything, that you can't get caching at intermediate nodes unless you explicitly MITM yourself, which would reduce the utility of that feature. Then again, I'm not sure how much caching happens outside of places where you'd MITM yourself now anyway, so I guess in practice, that might not be a huge step back.
> I wasn't aware of that, that's good to hear. I'm under the impression, though, that since SPDY encrypts everything, that you can't get caching at intermediate nodes unless you explicitly MITM yourself, which would reduce the utility of that feature.
Last I heard, it was quite a matter of debate in the IETF workgroup on HTTP2 the extent to which SPDY's "TLS is mandatory" approach would be adopted for HTTP/2.0 (IIRC, Microsoft at one point staked out a "we will enable HTTP/2.0 without TLS in our browser so the standard better allow it" position, and numerous parties making strong arguments that there were definite use cases -- especially internal networks -- where users would want the other advantages of HTTP/2.0 and where TLS would be a burden rather than a benefit.)
And if you are worried about caching internal to your own organization, you could, in the worst case, use a (TLS-required) HTTP/2.0 user facing server with HTTP/1.1 (non-TLS) internal servers, eliminating the need to MITM your own TLS traffic. Obviously, that doesn't help external cacheability, but you probably aren't going to usually want to send external content insecurely from an app.
jsonapi looks interesting, but it's really too bad that it requires HTTP. If it were more transport-agnostic, it would work really well with WebSockets.
That's what we've done with our API: we have a meta-resource /multi that takes an ordered array of Request objects, each with a URI, (HTTP) Method, and Parameters. The /multi service then splits these Requests out, sends them to the "real" server in order (GETs, of course, being parallelizable when adjacent in the provided order), and composes the responses back into one array of Response objects which is passed back to the client.
It's honestly amazing to work with, as we can be very strict about our separation of concerns on the backend, while letting the frontend combine bits and pieces as makes sense for a given client interface.
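A sketch of what such a /multi exchange might look like; the field names and URIs here are my guesses at the shape described above, not the actual API:

```http
POST /multi HTTP/1.1
Content-Type: application/json

[
  { "uri": "/questions/7",         "method": "GET" },
  { "uri": "/questions/7/options", "method": "GET" },
  { "uri": "/votes",               "method": "POST", "parameters": { "option": 3 } }
]

HTTP/1.1 200 OK
Content-Type: application/json

[
  { "status": 200, "body": { "id": 7, "title": "What is the best ORM?" } },
  { "status": 200, "body": [ { "id": 3, "name": "SQLAlchemy" } ] },
  { "status": 201, "body": { "location": "/votes/512" } }
]
```

Each element of the response array corresponds positionally to one request in the batch, so the backend can keep treating them as independent resources.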
Why can't I subscribe to a feed for this blog? I was going to blame NewsBlur, but upon viewing source I don't see link rels for anything but the favicons and stylesheet. I thought for a moment that I might just give them my email address, but then they pulled a bizarre "Looks like you have cookies disabled" out of some orifice. A: I don't. B: Why would you need cookies to post a form?
We use CSRF protection across all POSTs/PUTs, so cookies are generally required. I'll look into removing this for certain forms (like blog subscribe, fairly safe me thinks!).
Thanks for being responsive! Now that you point it out I realize I could have just clicked on the shuttlecock icon, whoops.
I'm not certain what could have caused the cookie problem, since I've just gotten that same error message on a different device, neither of which actually have cookies turned off. (For instance, my HN cookies seem to be working fine...)
REST, as Fielding described it, is good for decoupled systems that can (and need to) evolve independently, like web browsers consuming HTML and JavaScript.
If you can relax the decoupling or independent evolution constraints, RPC over HTTP is usually easier to understand and implement. This is where most HTTP APIs fall. (…and that’s OK)
I find using HTTP (a mere transport layer) status codes as part of an API very unnatural and wrong. It feels like bending TCP/UDP packet structure to implement FTP.
And shoehorning your API into some kind of "blessed guidelines" just to earn a badge? That's just a waste of time.
They're not using HTTP as a mere transport layer, since they're using different HTTP methods and URLs to indicate different semantics. If they were passing payloads through a single endpoint and method, like XML-RPC and SOAP do, that would be using HTTP as a mere transport layer.
Seems I phrased my previous comment too vaguely, sorry for that. I'm aware of what REST stands for.
My question is: what advantages does utilizing "obscure" HTTP verbs and headers, with "endpoints" spread all over one's application, provide compared to RPC over HTTP?
Surely, using RPC with a single endpoint is much simpler and thus more maintainable/modifiable?
Which obscure HTTP verbs? I didn't see any in the article.
You should read Fielding's thesis, but the short version is: when you're using a single endpoint, you're not really simplifying the communication (after all, your application still needs to do different things); you're just replacing parts of the standard HTTP methods and server-sent URLs (which the client doesn't need to know beforehand) with custom/proprietary method names which the client needs to be coded against specifically.
Of course, this depends on whether you're actually following REST or not. If you're hardcoding endpoints all over your client applications, your architecture is not really RESTful.
Still, even if you do hardcode them, you still gain some advantages: for example, you can add a layer of caching by just sticking Varnish in the middle, which is not possible with RPC without custom code. That's the advantage of following the Uniform Interface constraint.
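The caching point can be made concrete with a sketch (URIs hypothetical). A resource-oriented GET is cacheable by any off-the-shelf HTTP cache such as Varnish, because the method and URL alone identify what is being fetched; a single-endpoint RPC call is not, because the actual operation is buried in the body of a POST:

```http
GET /users/42 HTTP/1.1

HTTP/1.1 200 OK
Cache-Control: max-age=60
Content-Type: application/json

{ "id": 42, "name": "Alice" }
```

Compare the RPC equivalent, which a generic cache must treat as an opaque, uncacheable write:

```http
POST /rpc HTTP/1.1
Content-Type: application/json

{ "method": "getUser", "params": { "id": 42 } }
```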