Back in the day, I was on a Yahoo group where the REST evangelists argued that REST maps much better to consumable APIs than the enterprise nonsense that SOAP/WSDL/WS-* and all that created.
The SOAP people disagreed because they wanted everything defined up front between parties, out of band, and then the enterprise companies started selling "tooling" to over-engineer and "manage" this nonsense.
Soon there was "governance" introduced so that centralized control of APIs could be configured and managed. Of course, the different implementations couldn't interoperate, so there was classic "supplier lock-in" to your chosen Enterprise-Ready WS-* infrastructure (tm). WS-Addressing, to pick just one example, introduced resource addressing over XML over HTTP that overloaded the equivalent of a URL.
REST in general (HTML/JS/HTTP in particular) has proven to be scalable, understandable, and implementable, without all the associated nonsense that WS-* dragged in.
Want encryption in transit? TLS provides that.
Want caching of results on either client or server or intermediate? HTTP ETags and caching headers provide that.
About the only thing that REST over HTTP doesn't provide is sessions, but JWTs and OAuth now handle most of that.
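To make the caching point concrete, here is a minimal sketch of an ETag-driven conditional GET using Python's requests library (the URL and resource are placeholders, not a real service):

    # First fetch returns the representation plus an ETag; later fetches send the
    # ETag back, and a 304 means the cached copy is still fresh.
    import requests

    url = "https://api.example.com/users/42"

    first = requests.get(url)
    etag = first.headers.get("ETag")

    second = requests.get(url, headers={"If-None-Match": etag} if etag else {})
    if second.status_code == 304:
        body = first.content   # reuse the cached representation
    else:
        body = second.content  # the server sent a newer representation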
> everything defined up front between parties, out of band, and then the enterprise companies started selling "tooling" to over-engineer and "manage" this nonsense.
This pattern has come up again and again. There's a centralized, orderly, Le Corbusier-style planned system. Everyone hates it and avoids using it in favor of a ramshackle, schema-less system re-purposed from something else, which has no advantage other than actually working.
See e.g.: circuit-switched networks, guaranteed QoS systems, centralised PKI, the more esoteric parts of XML vs the simplicity of JSON, one-directional hypertext, and the Internet itself vs telcos.
I'm an "infrastructure guy" that has just spent the better part of the last 8 hours of my working day talking to a developer trying to stand up some microservices, or whatever the kids are calling it these days.
Every few minutes, they started a sentence with "I just want to...", which I now mentally translate to "... ignore everything like security, reliability, or anything else that doesn't get me a checkbox tick for my assigned task."
Their "REST" API? It uses a HTTP GET to post a response, using an intermediate -- unauthenticated -- storage blob to stage the data.
"It worked in dev" came the reply to my probing questions.
Look, back in the "horrible old days of WS-*", when I still used to wear a developer hat, I could crack open Visual Studio in the morning and simply add a web service dependency to my project. In a matter of milliseconds -- seconds at worst -- it would automatically cough up a C# client for whatever it is that I wanted to talk to, and I could immediately start coding some actual business logic.
Today I watched a grown man type in the boilerplate code that the machine used to generate for me.
Apparently, this is better because it is "simple".
So simple that a programmer using HTTP GET for posting a reply can understand it.
Your experience with SOAP is vastly different than mine to the point where I almost laughed out loud at your anecdote about adding a new web service dependency in a few seconds. SOAP is inherently complex, and needlessly so in my opinion. I've got personal battle-scars from trying to make Apache Axis and .NET play nice with all the various WS-* extensions needed for our use case.
There are always going to be people who, for lack of time, effort, education, whatever, use simple tools wrong. That's not an argument against simplicity. It is a recurring and false hubris to think we can fully specify any system and be "done." In the real world there are always confounding factors that violate the assumptions and constraints of any system we design. When that happens, I vastly prefer the more adaptable, less codified, simple system.
The fact that even those people above can still get stuff done even when using the tools "incorrectly" is an argument for the simple approach, in my view.
That's key. If client and server were both .NET or both Axis, things were easier. As soon as they differed -- prepare for long-term pain such as restricting the data types you want to use to string and int. Limited interoperability.
> The fact that even those people above can still get stuff done even when using the tools "incorrectly" is an argument for the simple approach, in my view.
> I could crack open Visual Studio in the morning and simply add a web service dependency to my project. In a matter of milliseconds -- seconds at worst -- it would automatically cough up a C# client for whatever it is that I wanted to talk to, and I could immediately start coding some actual business logic
Visual Studio is a vastly underrated tool. If you're doing something it supports within the Microsoft ecosystem it can just plough through all of it for you. Doubly so if you add JetBrains Rider.
The thing is, whenever we see something like this, it's because it's either easier to reimplement the thing from scratch than work out what the correct already-implemented solution should be, OR the already-implemented solution simply doesn't work in your environment because it's not supported.
It is sometimes depressing to watch the Javascript ecosystem try to build standard workflows only to knock them over again once they've managed to place one brick upon another, but Microsoft are quite capable of doing the same thing of their own accord (I say as someone surveying the wreckage that is WinUI3)
Schema languages for "REST" exist (OpenAPI, RAML), and many people use them, but they certainly haven't taken off to the extent that WSDL did. I find that odd.
The new hotness is gRPC, which is based on protocol buffers, which uses an out-of-band schema file (you can do a certain amount with protobufs without a schema file, but nothing very useful). People appear to be happily adopting that, and aren't complaining about the need for a schema.
The only factor here is that the developer you were talking to here appears to be an idiot. They’d fuck it up in any stack. The technology is immaterial.
There's such a thing as falling into the pit of success.
A successful REST implementation using hand-rolled JSON protocols is in the middle of a vast field with hidden man traps full of sharpened bamboo spikes.
Equally vile is the majority of applications reliant on WS-* based protocols, their appalling lack of semantic consistency, cross-vendor incompatibilities, byzantine failure modes, and grotesquely siloed enterprise runtimes.
Equating certain technologies with incompetence, especially coupled to a blinkered misrepresentation of how to use them, remains a wholly unjustifiable misattribution. What's more, doing so with a reference to "the kids" further indicates a poor grasp of programmer demographics, and especially the denizens of this forum.
To sum up: most software is shit. Any correlations to variations in the uniformity of the shit that you may have observed are globally unsupportable.
Funnily enough I've found the opposite to be true nowadays. When someone offers a WS-* API all you need is the URL of the WSDL and you can start calling it, from whatever language and framework you like. When someone offers a "REST" API you have to spend hours just figuring out what specific stack of stuff they've actually done.
Eh, not really. Python SOAP clients are bad and have always been. In reality, SOAP works well only when using platforms sold by commercial vendors, who invested heavily in producing tools for it in an attempt to generate lock-in for their tools. Which means interop issues between platforms, for anything beyond the trivial case, are rife.
Whereas REST is so basic that you literally need nothing more than a standard HTTP client (and a JSON parser, these days; back then it would be some hand-rolled XML or even just encoded form-like data).
One of my favorite tricks with REST is sending a complete bug reproduction to a vendor as a simple one-line curl command. No arguing back and forth, just tell me when this curl command works properly.
Spose you're right, but I've never tickled a SOAP interface with curl. I fire up a python script when I need to do even the most basic interaction with a SOAP endpoint because, as Dustin Hoffman says in Hook, "Brace yourself lad, because this is really going to hurt."
Python is actually where most of my experience with SOAP was. Import suds, point it at the WSDL, and get on with your program. I had far fewer interop issues with that than with all the different ways people implement "REST".
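For anyone who hasn't used it, that flow really is about as short as it sounds; a rough sketch (the WSDL URL and the GetQuote operation are made up for illustration):

    from suds.client import Client

    # Point suds at the WSDL; it builds a client exposing the service's operations.
    client = Client("http://example.com/StockService?wsdl")
    print(client)  # dumps the discovered operations and types

    # Call an operation as if it were a local method (GetQuote is illustrative).
    result = client.service.GetQuote(symbol="ACME")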
Try doing that with NetSuite's WSDL in any non-Microsoft language and get back to me.
To do it in Java you need a custom-patched obsolete version of Apache Axis which you must compile and run under Java 6, and you must pass a very specific set of options to Axis when you run it. If you can manage all that, it does work.
The available SOAP libraries for Python are, AFAIK, not capable of ingesting the WSDL at all without crashing, though perhaps it would work if you had more than 16GB of RAM.
I really like REST. But, the 3-day PHP hack I built 10 years ago still brings in the cash. I must admit, it's not a service called by arbitrary clients. But the WSDL was textbook and I have only one document that I generally need to refer to.
> About the only thing that REST over HTTP doesn't provide is sessions, but JWTs and OAuth now handle most of that.
I would disagree. HTTP with REST is half a communications protocol. The server can't push data. There are hacks around this but it's not REST as defined by the dissertation. HTTP and REST took sockets and lobotomized them. They work well where you don't need duplex communication but the tragedy is that they have been forced into solutions where they don't fit because lowest common denominator. Think of how much simpler and more flexible a world we would live in if every web server behaved similarly to a message broker, for example. (GraphQL does this, so there is hope).
Yes, and that's REST+..., not REST. If you've got a websocket, you're explicitly making the design choice that client-side application state can live outside the HTTP payload.
The key question is this: given a link and some subsequently-received websocket data, could a second client reconstruct the same application state given only a link? If the answer to that is "no", then what you have is not a system which supports HATEOAS. It's just not something people pay attention to once websockets are in the mix. Granted, you might get there by accident, depending on what you're doing with them, but I don't often see that as a design goal.
HTTP-Kit in Clojure-land has solved this beautifully with a unified interface for handling both WebSocket messages and HTTP requests, so using one function you receive requests/messages that you can act on, and responding to those uses the same interface.
My favorite WS-* is WS-I, a bunch of guidelines about which versions/subsets of the other standards to use, if you actually want to talk to someone else.
(As someone who once had to implement a SOAP server supporting diverse clients, I'll never pass up a chance to get a dig in)
While I agree with your premise that SOAP and WS-* are over-engineered, at the basic level (simple SOAP + WSDL) I quite like it. There's a defined contract and you can generate strongly typed clients. I haven't checked in a while, but how is Swagger/OpenAPI in that regard?
JSON Schema gives you the equivalent of WSDL. There are various libraries that will take a schema and create the strongly typed objects in the language of your choice.
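For instance, a quick sketch with Python's jsonschema package (the schema and payload are invented for illustration); codegen tools consume the same kind of schema document to emit typed classes:

    from jsonschema import ValidationError, validate

    user_schema = {
        "type": "object",
        "properties": {
            "id": {"type": "integer"},
            "name": {"type": "string"},
        },
        "required": ["id", "name"],
    }

    payload = {"id": 42, "name": "Ada"}

    try:
        validate(instance=payload, schema=user_schema)  # raises on mismatch
    except ValidationError as err:
        print(f"payload does not match schema: {err.message}")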
Theoretically, if you have "Code on Demand", your client can ask the server for the code to handle a media type :)
Things like gRPC and GraphQL seem to be the descendants of SOAP/WSDL.
GraphQL is quite nice as it kind of has the best of both worlds: a strongly typed schema for data and operations, while being fairly close to a simple JSON/HTTP API.
It's always a pendulum. When we're lucky it's a "spiral", where when we swing back to the first pole we do it better than the time before. We're not always lucky.
> If the get-shit-done crowd wasn’t going to use SOAP, they still needed some standard way of doing things. Since everyone was using HTTP, and since everyone would keep using HTTP at least as a transport layer because of all the proxying and caching support, the simplest possible thing to do was just rely on HTTP’s existing semantics. So that’s what they did. They could have called their approach Fuck It, Overload HTTP (FIOH)
Having been a developer during that period of history, I'd say OP gets it about right, that's what happened.
For better or worse; it's not obvious to me that it was a poor choice. I do agree it somewhat misappropriated the "REST" term and misrepresented itself as being more "standardized" than it was -- also for more or less the reasons OP suggests: "that would have seemed recklessly blasé next to all the formal specification work that went into SOAP… a veneer of academic respectability".
Although I do recall some people right from the start really were interested in HATEOAS and trying to do it... I think its practical benefits were somewhat elusive. It reminds me of "semantic web" in some ways, not sure if that makes sense...
I think not only was it a reaction against SOAP, but also against other common API styles like shoving everything into a GET query string or a POST body. Some name had to be attached to it. Simply 'HTTP' couldn't be used as both SOAP and query/POST both used HTTP as well (not to mention WebDAV).
As a young engineer implementing an early, prominent 'REST' service, it was kind of a surprise to me to find these lurking, unused verbs like PUT or DELETE in the spec. But they were there & that style beat the living tar out of SOAP or query strings. But, in my memory, RFC 2616 was used far more as inspiration and reference than Fielding's dissertation.
>For better or worse; it's not obvious to me that it was a poor choice.
It wasn't a poor choice. It was the least common denominator, and what put it over the top was JSON instead of XML and using URLs for more than the address of the end point to send your RPC to. The problem with SOAP and other APIs of the era was everything was bespoke once you got past transport (and transport often was hard because different systems implemented transport differently). With REST, transport was settled, and at least the URL forced some kind of hierarchy, and you could reason that GET /user/settings would return a settings object for the user.
I remember consulting for a tech company around '07 that needed to send remote (extremely high compute) tasks to a heavy iron server they dedicated to such work. They were using SOAP and having a very difficult time getting all the pieces and parts to work. At the time, Drupal 6 (a PHP web framework that used to be popular) had just released a "Services" module/plugin that enabled one to add a REST Service to any Drupal site.
Working with them to initially understand what the SOAP tooling required, and then comparing that to what that REST Services Module required was a no brainer. They had a team of 11 people working on the SOAP infrastructure, and they were all super happy to ditch it and get back to their real work. Re-writing as a RESTful API using Drupal Services merely as a bureaucrat to communicate with their heavy iron server only required myself and one other programmer I brought with me. About 3 weeks later we completely replaced their SOAP infrastructure. For years afterwards, I'd check in on them and it just worked.
The mistake of SOAP specs was to believe toolmakers were on their side, and tooling would magically solve all problems so that you would never actually have to look at what went down the wire. Whereas toolmaking companies were only on the side of making money, and looking down the wire was often unavoidable. REST went back to basics, simple protocols that humans could work on with little more than a text editor.
I mean, a tool-maker wants to sell you tools. Why wouldn't they want to sell you tools that obviated the need to look down the wire? If customers wanted those tools.
Maybe it was just harder/more expensive to make those tools than the "just solve it with tools" spec-makers anticipated. Rather than it being any particular misalignment of incentives.
It's just that the cross-vendor interop efforts were often skin-deep. They all tried to lock you in one way or another, by implementing this or that spec draft a bit differently and making specs overly-verbose and burdensome to implement for anyone without a big budget. Microsoft had just won the browser wars that way, after all. But hey, if you stuck to one vendor, it all worked like magic.
I see, the interoperability issue makes sense, and now I'm starting to remember it. (I never did THAT much with SOAP).
But you really think the specs-writers (being some of the same people as the tool-makers?) intentionally made the specs overly-verbose and burdensome to implement, to protect their tools businesses? That's different from what you said in your original comment, about spec-makers "believing toolmakers were on their side" -- now you're saying the spec-makers were the tool-makers and not actually on the end-developer-user's side!
I'm still inclined to believe a lot of it was people (both spec-makers and tool-makers) doing their best with a challenging problem, rather than intentionally sabotaging it for profit. But you think I'm too optimistic?
It really depends on your view of W3C as a standard body.
> That's different from what you said in your original comment
SOAP basically started as an attempt to learn from the mistakes of CORBA and RPC, and I think it started with good intentions all around. When complexity hit, however, the original designers basically went "we'll deal with it in the various sub-specs, but don't worry, tooling will take care of this anyway". I do think most of them were not malicious in this approach. Meanwhile, vendors understood where the wind was blowing, and got themselves heavily involved in the various efforts - hence the proliferation of WS-* and incompatibilities.
> I'm still inclined to believe a lot of it was people (both spec-makers and tool-makers) doing their best with a challenging problem
Considering the amount of money and manpower that went into these efforts, ascribing it all to lack of skills would possibly end up being less optimistic - about the state of corporations, the work they produce, and what this means for human progress.
TBH I think the charitable reading is just that there were too many cooks in the WS kitchen. I feel some of them burnt pans on purpose, maybe they didn't, but at this point it doesn't matter much.
In what way? What is GraphQL competing with? I can't think of another reasonable alternative that allows the client to specify what they want in a single round trip. What preceded GraphQL was REST with a whole lot of inconsistent include_xyz query params.
I'm also guessing, but I suspect that the sharp rise of mobile was a contributing factor. Mobile can have good throughput but not so good latency so packing more per request makes a lot of sense.
At one place I worked we actually made two BFF (backend-for-frontend) servers that took one client request, made many within-datacenter requests, put a single response together and sent it all back. We had one for mobile and one for web. We only had one datacenter region, so it made sense for global users regardless of throughput.
There were a couple of additional overlapping factors that contributed to the rise of the FIOH crowd and the misinterpretation/misuse of REST and HATEOAS:
1. the growing popularity of AJAX and SPA frameworks
2. the rapid growth in mobile clients
We had to make a lot of trade-offs from ideal REST/HATEOAS architectures to support these new clients.
Eventually for some it wasn't enough. Facebook launched GraphQL in response to the problems with REST-based APIs they perceived for mobile clients. Fast forward to today and we're walking back to the RPC protocol era.
The problem I have with this advice of "choosing the right architecture for your application" is that it encourages an ecosystem of competing protocols. Exactly what made the prior RPC era so annoying... not only was SOAP incredibly bloated and hard to deal with, but every service that implemented it also implemented their own snowflake version of it with its own errors and interpretations. Even if you had a "compliant" SOAP client it was likely that it would be incompatible with your service providers' service in some undocumented way.
At least with FIOH-style APIs you just have to deal with their inane reasoning for returning "200 OK: Error" responses and not implementing HATEOAS.
> Since everyone was using HTTP, and since everyone would keep using HTTP at least as a transport layer because of all the proxying and caching support, the simplest possible thing to do was just rely on HTTP’s existing semantics. So that’s what they did. They could have called their approach Fuck It, Overload HTTP (FIOH), and that would have been an accurate name
First of all, I fully agree with this article. I lived through this transition, and my own reasoning, as well as that of basically everyone I talked to, was really just that the existing infrastructure was overly complicated, while "just pass some JSON over HTTP" was easy.
I do, however, want to disagree with one point quoted above. The proxying and caching support were not the reasons why FIOH was based on HTTP. The reason was firewalls. At first, almost all of the things I built and almost all of the RESTful APIs I saw published, were really just some form of remote procedure call. Caching was supported in theory but hardly ever used in practice.
However, it was policy pretty much everywhere to have firewalls that blocked traffic to most ports other than 80 and 443. The more advanced firewalls also verified that traffic over those ports was actually HTTP traffic. So it wasn't possible to build any kind of API that wasn't based on HTTP -- it would have been impossible to deploy and use without getting permission from the firewall and security teams on both the client and server ends of the connection.
I'd say the adoption of HTTP protocols was certainly driven by firewalls. This was just as true for SOAP as it was REST. However, in earlier times caching was also important, and this is one of the places where REST beat SOAP. However, the caching idea seems to have been lost by most developers now.
For high volume sites we would cache at multiple levels. At the load balancer, on the CDN or in the browser - depending on how public/user-centric the data was. You would not expect a browser to return to the server every time for a 1 pixel spacer gif (oh the glory days), and you wouldn't expect it to keep retrieving the user's name for the top of each otherwise static page.
However, things changed - the consumers of APIs were no longer browsers and the internet just got faster and faster. I can understand why developers just decided to forget about caching.
This is probably why most developers I now encounter just can't understand why the thing at the end of a GET or PUT URL should be a "thing", not an operation.
The URL identifies an entity. In the old static website days this meant a file - an HTML file or an image on a file system. With REST, the idea was to create a virtual filesystem and take advantage of the HTTP infrastructure already in place with regards to caching etc.
So often when I get involved in "REST" projects, I find URLs that include words like "update" or "add", or where the GET returns JSON with a completely different shape from what the PUT accepts.
Does it matter? If you don't cache anything - probably not, but it isn't REST as it was understood.
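To illustrate the "thing, not operation" point, here is a minimal sketch in Flask (the in-memory store and field names are invented):

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    users = {42: {"name": "Ada", "email": "ada@example.com"}}

    # One URL names the thing; the verb says what to do with it.
    @app.route("/users/<int:user_id>", methods=["GET", "PUT"])
    def user(user_id):
        if request.method == "PUT":
            users[user_id] = request.get_json()  # replace the representation
        # GET and PUT deal in the same shape, so intermediaries can cache it
        return jsonify(users[user_id])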
This was my experience as well; trying to convince enterprise security folks to let new protocols through firewalls was nearly impossible.
The difficulty was exponentiated for api/service calls inbound to the organization.
SOAP services, even though they went over http, had to go through special inspection devices because XML parsing libraries were so complex and prone to parsing issues and security vulns.
> SOAP services, even though they went over http, had to go through special inspection devices because XML parsing libraries were so complex and prone to parsing issues and security vulns.
So did the person who ordered that inspection device realize that the SOAP streams under inspection would be examined using a set of XML parsing libraries that are complex and prone to vulnerabilities? Seems like this setup just creates another device that must be maintained. Perhaps the solution would be to use another machine to inspect the inspection devices. Turtles all the way down.
The ironic outcome here was that everything moved to run over 80 and 443 and so traditional firewalls, by being so locked down, became basically irrelevant to modern information security. They just became NAT machines, and security practice moved up one layer of the network stack along with everything else.
1. an affirmative statement that HTTP semantics are mature and robust enough to express networked application communication
2. that using those semantics is better than not, when communicating over HTTP
This shouldn’t be controversial! Want GraphQL? Cool, ?foo[bar]=quux. Want a special data transport format? Cool, Content-Type: application/furby. Want a big blob of different content? Multipart has your back.
There’s nothing special about REST. Use the format! The only special thing is you have to… use it. POST / everything ain’t that. Sure it’s convenient for your custom format that doesn’t interop with anything. And then it’s a decade of articles about no one using common standards. Welp, this standard was here all along.
> This is the exact opposite of what people usually do...
Indeed, because the people that actually build stuff figured that none of these constraints are beneficial absent a magical smart client that will never exist. People implement HTTP-RPC APIs. They call the result "REST API" for buzzword compliance, which turns out to be quite important. The nitpickers that insist on bringing more REST concepts like HATEOAS into the API only make things worse to use.
It's a similar situation to Alan Kay lamenting that most "object-oriented" languages don't implement what he had in mind. The people that get stuff done just don't care. The people that do care build exotic systems that the rest of us just don't want to use.
The emphasis on resources as a low-level, foundational idea really sold me here.
URLs - names for resources - are an axiomatic core part of the web. REST definitely emphasized that things have names, and that we are using verbs to interact with individual things. Don't GET a /team, GET /team/42. Now you've transferred that representation to you. I see this core idea of the web being made up of resources as fairly absent from the discussion here, or talked around, but Martin did a good job of really putting a pin in how basic, how essential resources are.
The real world's RESTfulness starts to break down when we look at how many objects have real URLs in them. Often the state has some IDs; a team may have an account ID. But REST suggests this should likely be a URL instead.
Works like Marcel Weiher's In-process ReST make the server side more RESTful, more URL-oriented. As opposed to perhaps altering IDs on the way out the door, a server can now think in terms of complete URLs.
One of the issues with graphql is that it doesn't use HTTP semantics. Errors are 200 status with an error field in the body. Queries are POST /graphql for everything (ok, so they don't have to be, but once you've got anything in your system that needs to go over a POST request, everything else will).
This is problematic in any environment which expects those arguments to have been settled a decade ago and has built its monitoring around them. Ask me how I know.
But how does that cause problems? b/c the monitoring software cannot see the errors?
Well, even if it did, it presumably wouldn't be able to make sense of the error field anyway; so it either has support for GraphQL specifically, or cannot see the errors.
If the resolution of monitoring is literally is-200/not-200 a basic level of customisation should allow for "has error field" as a second error condition. Saying "expects those arguments to have been settled a decade ago" ignores that little in HTTP endpoints is settled, and building brittle software around these assumptions is the problem.
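For example, a minimal sketch of that second error condition, assuming the standard GraphQL response shape with a top-level errors array (the endpoint and query are illustrative):

    import requests

    resp = requests.post("https://api.example.com/graphql",
                         json={"query": "{ viewer { name } }"})

    # "200 but the body carries errors" is still a failure as far as monitoring goes.
    errors = resp.json().get("errors")
    failed = resp.status_code != 200 or bool(errors)
    if failed:
        print("graphql call failed:", resp.status_code, errors)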
I once got into a heated argument about adding HTTP verbs. They said only the classic GET, POST, DELETE are RESTful and you should never define your own. I had to bust out Fielding's paper to show that only GET has specially defined properties.
And that was with a Spring Boot contributor. Misconceptions abound.
Yea I’m fighting a redesign of a REST API where the designer has been out of touch with web dev since the 90s/early 00s. They want everything as GET or POST (for our purposes we don’t allow DELETE). Doing an update? Use POST. I tried to convince them PUT/PATCH should be used but nope, it’s all POST.
Either, neither, or both stances could be correct REST depending on the meaning you want to give the operation.
PUT replaces the entity or thing that has that url. If the entity didn't previously exist then it is created. A cache should be able to use the incoming entity and return it directly for a following GET to the same url (in the same way the entity returned by a GET might be cached). I don't think anyone would configure a cache to work like this any more - but that's the idea.
POST was the dynamic-web afterthought for anything that didn't really fit into the website maintenance behaviour defined for GET, PUT or DELETE. You could send anything to an endpoint and it isn't cached. POST often gets used in REST to create an entity within a collection at the URL, but really (in HTTP terms) anything goes. POST does allow for returning a cacheable entity through use of a Location header.
PATCH came much later and its use in REST is debated. Originally it was intended to efficiently update multiple text documents through use of a single patch (or diff) document in the body. From an HTTP/caching point of view it is effectively the same as a POST (i.e. this is not an entity so don't cache it). People use PATCH today because they want to partially update a larger entity, but HTTP and REST did not provide for this. How the body of a PATCH is to be interpreted in REST is effectively undefined -- it's up to you, just as with POST.
Some prefer to do partial updates using PATCH for semantic reasons. Some like to use POST for syntactic ones. Some like to POST to a URL extended to encode the part of the entity being changed (which is weird but compatible with POST), and some wrongly do this with PUT.
Unless you can see functional issues (e.g. when a CDN somewhere sees a DELETE for a url and could assume it should now always return a 404), then it's probably best not to get too hung up on which verb to use.
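For what it's worth, one common convention for the partial-update case is JSON Merge Patch, where only the fields present in the body change; a sketch with an illustrative URL and field:

    import json
    import requests

    patch = {"email": "new@example.com"}  # only this field changes
    requests.patch("https://api.example.com/users/42",
                   data=json.dumps(patch),
                   headers={"Content-Type": "application/merge-patch+json"})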
These pointless arguments over HTTP semantics are one of the biggest problems with REST. Just use GET/POST for everything and stop thinking about which of these few verbs best fit your use-case. You'll be more productive.
Ten years back I looked into drafting a RESTful API for a mature software server that didn't yet expose a public API.
What I did was read Roy Fielding's thesis then try to apply REST from first principles to our services, sketching out an API pattern that complied with the best-practices he outlined. It quickly became apparent that a couple of things were very weird:
(1) trying to reimagine our very stateful application in a stateless way was a BIG change for "let's add an API" work
(2) trying to make it so that the client could enter the API with no knowledge of the initial URI, and find every other possible interaction using links provided by API responses, was pretty ambitious
In retrospect what I probably should have done was rounded up a dozen potential API developers and interviewed them about the problems they expected to need to solve using the feature, then designed something that met their needs, rather than trying to build to Fielding's spec just because the feature request said "please build a REST API" and that's what a REST API is.
I just moved everything to basic JSON RPC with GET and POST when I'm not using GraphQL. Etags are useful for caching and that's basically it.
Fielding's dissertation is a fantastic academic exercise, but I don't have time to debate the "true meaning of REST" with my coworker, like I did for 15 years.
Remote APIs are no different from libraries: you can use semver to version them, and it's up to the client to detect whether the API breaks at some point or not.
As for the idea of resource based design, HATEOAS and "smart clients", it's as dead as the semantic web.
> Fielding's dissertation is a fantastic academic exercise, but I don't have time to debate the "true meaning of REST" with my coworker, like I did for 15 years.
I think this is a very true point: GraphQL certainly has analogous challenges but doesn’t seem to have developed the quasi-academic cult debating how many angels can dance on the head of a pin[1]. I remember so many pointless arguments that something had to be built in a theologically-correct manner even if it wasn’t a natural fit for the problem and there was no known user who wanted that behavior.
1. As with other things like Agile, this isn’t a criticism of Fielding but rather what came after that success, especially the people who wanted to talk like academics without attempting the rigor.
> I remember so many pointless arguments that something had to be built in a theologically-correct manner even if it wasn’t a natural fit for the problem
I don't think most of the arguments were about building something RPC-like, the people complaining were just saying "stop calling RPC APIs REST", and that's perfectly reasonable. REST is totally not just RPC over HTTP.
The arguments I was referring to were the kind where someone was saying that the API _had_ to behave a certain way because this sentence in Fielding's dissertation meant it was WRONG to do x, even if there was no known immediate benefit and/or sound technical arguments for the original approach.
The reason I mentioned Agile was the common pathology of treating something as a holy cause rather than a tool — there's a certain mindset which assumes the former and resists any attempt at perceived deviation regardless of merit.
Yes, these are my thoughts too. Though I include a DELETE in that small list.
Seriously, it's been years and years and people still don't know what REST actually is. Neither do I, because everyone else gives a different explanation. It's a mess.
There is something very similar between HATEOAS/smart-clients and "semantic web", right? They are both part of the same kind of vision/dream. Or maybe semantic web is actually an attempted implementation of HATEOAS?
This connection just occurred to me today in these comments; maybe because I hadn't thought about HATEOAS in a while.
I still work with a lot of people who don't think semantic web is dead...
I never understood what people could possibly be envisioning with HATEOAS and smart clients. If some code needs to use an external service to achieve its goals, then it must have the knowledge of exactly how to use that service.
Building a proper REST API is pretty hard. I think I speak for many developers when I say that I limit myself to the most important core statements. For most applications, a simple API + HTTP + sensible URL scheme is completely sufficient. Especially when the API doesn't need to be publicly consumable. The author himself says that as a good designer you should select a style that matches the needs of a particular problem being solved. Things like URL Autodiscovery are definitely desirable from an idealistic or academic point of view. However, I have never seriously needed such features.
> Things like URL Autodiscovery are definitely desirable from an idealistic or academic point of view. However, I have never seriously needed such features.
Yeah I'm so tired of this bs from developers who treat their jobs as an idealistic academic exercise instead of doing what makes sense for the organisation. They don't work for Roy Fielding for christ's sake, so why are they spending so much time and effort supporting his research?
I struggle to understand how this happened, because Fielding is so explicit about the pitfalls of not letting form follow function. He warns, almost at the very beginning of the dissertation, that “design-by-buzzword is a common occurrence” brought on by a failure to properly appreciate software architecture. He picks up this theme again several pages later:
Some architectural styles are often portrayed as “silver bullet” solutions for all forms of software. However, a good designer should select a style that matches the needs of a particular problem being solved.
This problem is endemic in software architecture. Every time I read a criticism of Design Patterns most of the complaints about it echo text literally in the book itself. Each design pattern chapter has a long list of reasons why not to use it, but somehow that gets lost in translation.
REST doesn't make much sense outside of the context of its original environment: coarse-grained hypermedia systems. It was cargo-culted first into the SOAP API world and then, more humorously, into the JSON API world. Goes to show how much more institutional momentum matters than common sense in technical questions.
Even granting XML and JSON could have a hypermedia imposed on top of them, if it is code, rather than a human, consuming the API, all the dynamicism of the uniform interface is wasted.
- Avoid this entire conversation if you can (web only, Blazor, Livewire, business constraints, etc.)
- Use JSON if your API is exposed to the general public. Keep the nesting of complex types to a minimum, preferring to reference nested items by identity rather than placing them inline.
- Use out of band mechanisms if your API is very complex and/or shared between business entities only. Example of this being SOAP/XML.
We have been consuming WSDL files for purposes of integrating with a complex banking system. I would argue that JSON would make our jobs much harder in this particular case.
The primary reason I don't like the XML path is because of the up-front schema sharing session. Once you get over this and run your codegen, life is actually really easy. For example, I enjoy having my compiler yell at me when I try to send a string vs a Datetime to a vendor endpoint.
gRPC if you need some sort of static typing over HTTPS and don't want vendor lock-in. SOAP/XML works too. It's been a while since I looked at which open source toolkits support it. The problem with SOAP/XML is that although it's not supposed to be "Java specific", it is mostly "Java specific".
Not least because, despite HN now swung hard against microservices, they exist often for a reason, and it's difficult for one service to emit hypermedia for another.
The inspiration was humans surfing freely through interlinked web pages, but API clients just don't naturally do that.
> The inspiration was humans surfing freely through interlinked web pages, but API clients just don't naturally do that.
There's nothing unnatural about it, it's just one extra step to use a name that the service must guarantee is stable rather than directly accessing a hard-coded URL which can be unstable. You're used to:
    var client = new HttpClient();
    // hard-coded URL with a specific structure the service can never change;
    // requestContent stands in for "pass some data"
    var result = await client.PostAsync("http://somehost.com/api/hardcoded-service-resource", requestContent);
The resource URL is hard-coded with a specific structure and meaning, which means the service can never change it without breaking clients. A simple REST equivalent would be:
    var client = new HttpClient();
    // fetch the entry point and parse its hypermedia description of the API
    // (ParseHypermedia is illustrative, not a real library call)
    var entry = (await client.GetStringAsync("http://somehost.com/api/entry")).ParseHypermedia();
    var result = await client.PostAsync(entry.StableNameForEndpointIWantToCall, requestContent);
There's nothing all that unnatural about this. The key is that the entry point returns a description of the API using hypermedia which maps stable names to unstable URLs. This allows service to change its internal structure as long as the entry point returns a hypermedia result that has a stable structure.
You can think of HATEOAS for APIs like DNS for API URLs. Calling it "unnatural" is like saying that hard-coding IP addresses in your clients is more natural than using DNS names. That's just crazy talk!
> The key is that the entry point returns a description of the API using hypermedia which maps stable names to unstable URLs. This allows service to change its internal structure as long as the entry point returns a hypermedia result that has a stable structure.
But it's much simpler to handle this with basic versioning. If you want to rearrange /foo so it now lives at /bar/foo, you can just put the entire old API under /v1 and then have /v1/foo internally redirect to /v2/bar/foo.
You don't need to maintain a giant hypermedia reference for all your endpoints and have the client invoke it and parse it on every single call to make things "dynamic". They aren't dynamic, it's just a layer of indirection, but the coupling between the client and StableNameForEndpointIWantToCall is still just as tight and brittle as the one between the client and /v1/foo. Same maintenance work for the server, but the second requires less work for the client.
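One way to realize that /v1/foo-to-/v2/bar/foo move, sketched in Flask (an HTTP redirect here for simplicity; an internal rewrite works just as well, and the handlers are stubs):

    from flask import Flask, jsonify, redirect

    app = Flask(__name__)

    @app.route("/v2/bar/foo")
    def foo_v2():
        return jsonify({"foo": "value"})

    # Keep the old path alive by pointing it at the new home.
    @app.route("/v1/foo")
    def foo_v1():
        return redirect("/v2/bar/foo", code=308)  # 308 preserves method and body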
> But it's much simpler to handle this with basic versioning. If you want to rearrange /foo so it now lives at /bar/foo, you can just put the entire old API under /v1 and then have /v1/foo internally redirect to /v2/bar/foo.
I'd say it's equally simple, not simpler. It's also less flexible. What if you don't want an internal redirect, but have to redirect to other locations? For instance, perhaps the location of the entry point is getting DoS'd. With the hypermedia entry point, you could immediately respond by returning URLs for a different geolocated service, or round-robin among a bunch of locations, or any number of other policies without having to change other infrastructure.
The point is that with the choice of a single architectural style, you get all kinds of flexibility at the application level that you'd otherwise have to cobble together using some mishmash of versioning, DNS, and other services.
It's simpler and more flexible on the whole, in a TCO sense, which you won't see if all you do is look at isolated examples of small services operating under optimal conditions. Then of course less flexible options will look simpler. When has that ever not been the case?
> You don't need to maintain a giant hypermedia reference for all your endpoints and have the client invoke it and parse it on every single call to make things "dynamic".
That's not how it works. Firstly, I don't know what a "giant" hypermedia reference is. An API with a thousand URLs, which never happens, would still be parsed in tens of milliseconds at worst on today's CPUs.
Secondly, like any URL, the entry point result has a certain lifetime that the service must respect, so you cache it and only refresh it when it expires.
But DNS allows for redirection wrt IPs. How does HATEOAS do the same?
First, you already have DNS, and can abstract wrt existing URL structure. Second, you still need stable references to services in order to identify them, and they in turn need to have stable structure - hence the only thing you can change is the top-level stuff? Why is that important?
> But DNS allows for redirection wrt IPs. How does HATEOAS do the same?
I gave a code sample above. The service's entry point exports a map of unstable URLs to stable names, just like DNS exports a map of unstable IPs to stable names.
What I described is the most basic form of HATEOAS, but it also allows hierarchical structuring, ie. you can discover more URLs via any path through a hierarchy of hypermedia returned by URLs from the entry point.
> First, you already have DNS, and can abstract wrt existing URL structure
DNS abstracts IPs not URLs. You could come up with an encoding where you map URLs to subdomains, but now you're lifting application logic to influence infrastructure policy, where your devs are now messing with nameservers instead of just sticking to their HTTP application, and you're tied to DNS TTL instead of application-specific caching policies, and DNS insecurities (no TLS).
For instance, with HATEOAS I can make a resource permanent, or live only a few seconds, and these are application-level policies I can define without leaving my coding environment. DNS just can't give you this control, so trying to shoehorn the flexibility that HATEOAS gives you into DNS is putting on a straightjacket.
i.e. why is ref <myFooService> better than www.myDomain.com/service/myFooService ?
Unlike IPs, URLs can be just as abstract as names, no?
> it also allows hierarchical structuring
This is what I meant by "the only thing you can change is the top-level stuff" - is this important enough to require the extra level of abstraction? For whose benefit is the restructuring if the stable names all share the same namespace anyway.
> you can discover more URLs
Is this a manual user hitting these API endpoints, or a client? Why is it useful that a client can be given different entry paths?
> where your devs are now messing with nameservers
current service discovery solutions don't require this. The name server just points to a single proxy.
> i.e. why is ref <myFooService> better than www.myDomain.com/service/myFooService ?
Because the latter is less flexible. I've expanded on this here [1], but for the short version, you can change the reference of <myFooService> without regard for location or based on any number of other application-specific policies, without having to host everything behind a single DNS entry.
> you could immediately respond by returning URLs for a different geolocated service, or round-robin among a bunch of locations
Is this not possible with a proxy and a (non-perm) redirect?
> without having to host everything behind a single DNS entry
But your entry point will still need to be a single entry!
It sounds more like you want extra semantics around DNS - e.g. have some entry point provide latest mirrors, localised servers etc; but this would require extra logic in the client app that could be a new protocol at a lower level.
I suppose that makes sense if you are working at the application level, but as an industry standard I'd sooner have a lot of this stuff lifted to the non-application layer.
That said, I still don't get the HATEOAS aspect; is the reason you'd have to follow references so that there is no single "stateful" collector of the state of (other) backend services?
Probably not clear but I was talking about a situation where an API client (e.g a web application server) needs to manipulate customers (managed in one microservice) and also their orders (managed in another).
Not the two microservices talking to each other.
So the API client will be making separate API calls to each of the microservices.
My premise is that it is too hard for API responses from the customer microservice (for example) to include hypermedia links to resources within the orders microservice.
Got you. I just want to point out the absurdity of it. Java microservices commonly use JavaScript Object Notation for their data transfer. How is that the right tool for the job? With all the translations that are needed, why don't they use RMI/RPC? And why even spend all that development time on something that you can solve in an afternoon with a JDBC connection anyway.
Because the paradigm of RPC is not about data transfer (which is the serialization protocol).
RPC is one architectural style. It's pretty undefined other than "call/response". Marshalling of arguments is just a serialization issue.
RMI/RPC doesn't work well across firewalls or org boundaries. It doesn't support separating internal and external host identification, so when you try to RMI to a host in another DNS domain that has a local name, there's all sorts of nonsense to get the Java runtime to understand it can have multiple DNS names and be associated with multiple network interfaces.
JSON/protobuf/cap'nproto/ONC-XDR/XML are about serializing data (whether it's a resource or a marshalled argument/result for RPC).
So basically: it doesn't work well if you want to reach every IP on the whole internet. Yeah I suppose that's correct, but we are talking about internal service-to-service communication so why would you have those requirements in the first place?
It's only call/response because that's all you ever need if you can call any arbitrary function by its name. That's infinitely more powerful than any other protocol that restricts you to a limited grammar.
>Probably not clear but I was talking about a situation where an API client (e.g a web application server) needs to manipulate customers (managed in one microservice) and also their orders (managed in another).
I would use a microservice gateway with the client only talking to the gateway. It seems weird to me that the client can call any microservice.
I've been working on this for nearly 5 years, and despite not being very popular yet, the technology seems pretty sticky for those that have adopted it.
But there are gaps in the standards, and we need way more tooling for more ecosystems. But when it works, it works great.
I always understood application in HATEOAS to be referring to the browser or whatever type of application sitting on some machine somewhere had been written to consume hypertext, but it sounds like you're considering a service to be the application?
My point was, how is such an API served up when it spans resources that are managed by two separate services / microservices - one for customers, one for orders?
Of course this is a simple example, we are smart tech heads, so obviously we can wedge a third service in front of the other two, which exists solely to provide aggregating behavior like this.
But why? Is hypermedia so useful that it's worth going to this trouble? (No IMHO).
Don’t you just return links that refer to the other service? (ie absolute URLs). That is kind of the point of hyperlinks: the service hosting a resource can change, and you just change the links you serve rather than changing lots of hard-coded clients.
(the example I used is so simple it didn't really highlight the difficulty)
My point is that one microservice cannot easily generate links for another.
How does the customer microservice (that wants to be hypermediaful), generate a deep link into the order microservice, e.g perhaps to obtain the latest order for a customer?
It can't, unless it is uncomfortably tightly coupled to the other microservice.
The most pragmatic solution is not even to try. Better to pass on the hand wavey HATEOAS stuff, and just tell the developers to work from the API documentation.
> How does the customer microservice (that wants to be hypermediaful), generate a deep link into the order microservice, e.g perhaps to obtain the latest order for a customer?
In short, you're asking how to implement service discovery.
Also, in REST there is no such thing as a "deep link". There are only resources, and links to said resources. HATEOAS returns responses that represent application state by providing links to related resources, and that's pretty much all there is to it.
I completely agree it is in part a service discovery problem - my original point is that HATEOAS is not a workable service discovery mechanism in a microservices environment.
Instead, use some service discovery technology. Not hypermedia.
It's too hard for one service to generate links (yes, let's call them complex links instead of deep links for enhanced correctness) into another.
If it can do that then they were never really independent microservices in the first place, they are so tightly coupled.
> I completely agree it is in part a service discovery problem - my original point is that HATEOAS is not a workable service discovery mechanism in a microservices environment.
What leads you to believe that? You want a related resource, and you get it by checking its location. It's service discovery moved to the resource level. What's hard about it?
> Instead, use some service discovery technology. Not hypermedia.
I don't understand what's your point. Where do you see any relevant difference? HATEOAS is already service discovery at the resource level.
> It's too hard for one service to generate links (yes, let's call them complex links instead of deep links for enhanced correctness) into another.
Not really. Tedious? Yes. Too hard? Absolutely not. Not only are there modules and packages that either do that for you or do most of the work, but it's also no different from just putting together a response to a request.
> If it can do that then they were never really independent microservices in the first place, they are so tightly coupled.
You seem very confused about this subject as you're not only mixing up unrelated concepts but also imagining problems where there are none.
From the start, REST is just an architectural style, and HATEOAS is just an element of said style. HATEOAS basically boils down to a technique to allow clients to not have hardcoded references to URLs pointing to resources. Instead, when you get a response to a request, the response also returns where you can find related resources. That's it. It doesn't matter if said links never change at all during the life of a service. What matters is that if you're developing a service that's consumed by third parties that support HATEOAS, you do not have any problem whatsoever peeling responsibilities out to other services or even redeploying them anywhere else, because your clients will simply know where to look without requiring any code change at all.
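To put a wire format on that, a rough sketch of a customer response carrying HAL-style _links (hosts and field names are invented; only the link values would change if the orders service moved):

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/customers/<int:customer_id>")
    def customer(customer_id):
        # The client follows the "orders" link by name instead of hard-coding
        # the other service's URL structure.
        return jsonify({
            "id": customer_id,
            "name": "Ada",
            "_links": {
                "self": {"href": f"https://customers.internal/customers/{customer_id}"},
                "orders": {"href": f"https://orders.internal/customers/{customer_id}/orders"},
            },
        })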
> My point is that one microservice cannot easily generate links for another.
Right, if these are all REST services, then one service should not be generating links for another, it should be obtaining those links from the service itself.
> How does the customer microservice (that wants to be hypermediaful), generate a deep link into the order microservice, e.g perhaps to obtain the latest order for a customer?
It asks the service for a link via a call to a link it obtained from the service's stable entry point. Pretty standard stuff when you're reasoning about encapsulated systems.
Why do you think it’s hard for one microservice to generate links to another microservice? The alternative is that all the clients are tightly coupled to both of them.
The alternative may not be palatable to you but IMO if two microservices have an encyclopedic knowledge of each other's url structures, then they aren't really separate microservices at all.
Well indeed, but the hyperlinks here are just surfacing the coupling that already exists between those services. Making explicit an already implicit coupling.
Two other points:
- you can use HATEOAS to let one microservice discover the links it needs from another, exactly as a client would do.
- by not using hyperlinks you have not solved the problem, you’ve just moved the requirement to know the URL structure from another microservice (that you control) to dozens or hundreds of clients, that you likely don’t control. This makes your services much more brittle, not less.
Respectfully I believe you have constructed a straw man.
In this contrived example, assume there is no need for the customers microservice to even know about the existence of orders to do its job.
So forcing it to understand all the various URL formats and query params that the orders microservice supports - just so it can populate hypermedia elements in its own APIs - which in turn is being done only to support discoverable clients or perhaps to appease the REST gods - seems an ugly architectural choice to me.
Perhaps I would feel differently if I had ever seen a project where HATEOAS had really moved the needle instead of being more a catalyst for ideological battles.
Surely the Orders service would support a single entrypoint - "index for customer X", or something - which would return a list of links into the resources that it owns? You're not forcing the Customers service to know anything more than that single per-customer entrypoint.
Exactly the same pattern applies for any other domain. All the Customers service needs to know is that there is a link to another domain; the structure is entirely down to the service for the other domain to manage.
Why does the knowledge have to be encyclopaedic in the first place?
If a customer microservice has to link to another, the only information it needs is the link format, for example: `https://another-microservice/customers/{customer_id}/orders`. This URL pattern can be in an environment variable, or configuration file.
However, I'd consider needing a customer-related microservice to link to the "last order" in another microservice a bit excessive. Why is this requirement in place? Why can't the client go to the first URL (`/customers/{customer_id}/orders`) and figure out the last order from a link there? The link to the last order can be provided by the second microservice, without needing to involve the customers microservice.
If this is all an optimisation to save the client a roundtrip, then yeah, there are gonna be downsides. But that's the nature of optimisations: there are downsides. It would be the same in any other architecture.
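As a rough sketch (the environment variable and function names here are invented for illustration), that single piece of cross-service knowledge can live in configuration:

```python
# Illustrative sketch: the env var name and helper are invented.
import os

# e.g. ORDERS_LINK_TEMPLATE="https://another-microservice/customers/{customer_id}/orders"
ORDERS_LINK_TEMPLATE = os.environ["ORDERS_LINK_TEMPLATE"]

def orders_link(customer_id: int) -> str:
    """Build the one cross-service link the customers service embeds in its responses."""
    return ORDERS_LINK_TEMPLATE.format(customer_id=customer_id)
```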
The parent commenter's point is that the URL structure has to live somewhere. If it's not in the microservices, it's in the frontends. There's no magic silver bullet here; something's gotta give...
> Is hypermedia so useful that it's worth going to this trouble?
No. It's a pointless indirection. These APIs are universally awful to use. I'd also rather you provide a library that abstracts your poor taste in HTTP API design. If I have to navigate your HATEOAS swamp, you better make up for it by being really important (PayPal).
Not familiar with Django sadly but it sounds like it would act as something of a proxy that sits in front of your microservices.
My point is that if you are starting with just e.g. 2 microservices, one for customers and one for orders, then hypermedia on its own is not useful enough to justify adding a new proxy/layer just so that an API produced by one of those microservices can include hypermedia for the other.
>Not least because, despite HN now swung hard against microservices, they exist often for a reason, and it's difficult for one service to emit hypermedia for another.
Why would microservices even need HTTP to talk with each other?
SOAP, REST, Big Data, microservices - it's funny how, at the time, everyone seems to be unbelievably vociferous that <technology_of_the_moment> is the only way to do things and that we should all be thankful that the great magnet in the sky gave us this sacred cow. Then, 5 years later, it's obviously the worst thing ever next to the <new_hotness>. <new_hotness> invariably being better in some ways and worse in others.
We need to, in our exuberance, ask ourselves: what are the current false Gods? Are we technologists that reason or a headless flock?
REST was never about "technology", it's about an architectural style that is particularly suited to a networked separation of API consumers and producers.
SOAP was an overengineered set of "standards" that had enough wiggle room that none of the SOAP implementation would interoperate correctly.
"microservices" are just "services". The technology that allows them, also allows for horizontal scaling, especially if you ensure that your API is stateless (not the resources, by definition, they have state, that's what you're "transferring").
I've been doing this stuff for 30 years now, and the more things change, the more they stay the same.
IMNSHO, the current false Gods are all the "cloud" items. The infrastructure stuff in AWS/GCP/Azure is just... networking and servers with APIs on top. There's nothing particularly special, except that the APIs let you avoid thinking about the underlying infrastructure.
As the cloud climbs the software stack, from IaaS, to PaaS (eg RDS), to SaaS (eg Stripe), it's interesting that the APIs remain primarily HTTP/JSON (or XML) and "RESTful" for the most part.
I wonder if the reason HATEOAS is not taken more seriously / developed further is simply the economic / cultural context of the dominant style of software development during the past decade (centralization, winner-takes-all, command-and-control etc) rather than something intrinsic about the architecture
A decentralized web (as originally imagined) involves countless diverse servers and clients exchanging data of constantly evolving (as in: growing in complexity) semantic structures. What is needed is not inventing a "language to rule all languages" but inventing the dictionary.
If there is an alternative pattern that can somehow enable a truly functional decentralized web please step forward.
I think you're right, but I also think the cause-and-effect is pretty bilateral. Designing, implementing and maintaining that kind of decentralized HATEOAS/"semantic web" (there's a similarity, right?) infrastructure, in a way that actually works well... is very expensive. More expensive than its (non-"political"[1]) benefits. People do things where the immediate benefit is worth the immediate cost.
That is, there is something intrinsic about the architecture that is both suitable to decentralization, and more complicated/expensive for not enough obvious benefit to those who pay the expense. And these are related.
[1] "political" in the sense of "where the power lies". And they are real benefits there to decentralized infrastructure (at least for those who aren't going to be in the center). But the way our economy and society works, they don't produce funding to create.
I agree, that's why the role of the public sector is important in overcoming the lack of "immediate" benefit, or more precisely, the lack of immediate pecuniary benefit. Quote: "The Web was originally conceived and developed to meet the demand for automated information-sharing between scientists in universities and institutes around the world."
XHTML. Seriously. If what you want looks like an extensible document type with custom embedded data that you want to reliably parse out, that's what it's there for.
the "X"(ML/HMTL/...) universe seems to have been deprecated but lots of boring people out there that need to manage real life complexity seem to wonder the wisdom of the "in crowd"
progress has sometimes a strange "helix" like structure: revisiting and re-using valuable old patterns after fixing their pain points
> Only a single chapter of the dissertation is devoted to REST itself; much of the word count is spent on a taxonomy of alternative architectural styles…
Ah, how science used to be done (actually by 2000 this was rare too).
If you look up the original (computer) mouse paper you'll see that the mouse was just invented on the fly because they were trying to see whether a trackball or a pen digitizer (light pen — hold it up to the CRT) was the better pointing device. But an n of two was too small, so they made a couple of others (I liked the broomstick hanging under the table) and the mouse was a throwaway example that actually turned out to be the best.
I asked Engelbart about this once and he said this was in fact the case. He invented quite a few unusual input devices.
(Perhaps cursor keys were in there too? I can’t remember)
The implicit and generally overlooked actor of the REST based systems is the client. REST requires a semantic client that reads the provided resource lists. The germinal idea behind REST was an appreciation of WWW as an ad hoc distributed system resilient to various fault modalities where unknown clients collaborated with known servers, using only a hypertext protocol. How did this work and how can it be generalized? That is the REST dissertation.
A REST system for domain X naturally requires designing a finite-state-machine client that can infer semantics for the resources in domain X. In the general WWW, the client is a browser + a human being who reads links. Once you remove the human semantic processor from this equation while retaining capability, the client rapidly increases in complexity, if it is possible at all.
Is this a natural way to think about (mutating, stateful!) systems? It is difficult and somewhat out of phase to be server-centric. But if you think in terms of a smart client for domain X, map out the app's capabilities, and then map the resources in domain X you need to fulfill capability C, then REST is a perfectly natural way to consider systems - an end-user-centric approach to system design for a specific domain.
> This is the deep, deep irony of REST’s ubiquity today. REST gets blindly used for all sorts of networked applications now [...]
This. I don't even know where to start if I were to talk about the idiocy behind stateless networked apps on my phone.
My favourite example is car/scooter sharing apps. Some even 'forget' the map tiles between screen locks.
But pretty much all of them 'forget' your active booking.
And the route to the vehicle.
Which is just great when you are walking from a party (where you had the host's WiFi to do the booking) in a neighborhood with shitty mobile network, in Winter.
For something like ShareNow it can take up to three minutes on an 'E' connection to load missing map data, active booking and route to car.
If your phone locks screen during that time ... just unlock and ... wait another three minutes.
You may have guessed that I'm talking from first-hand experience here. Frequently so, ofc. I've been using these apps heavily here in Berlin since 2014.
It's the reason I bought gloves that let you use a touchscreen. Not kidding.
Even though the booking is valid for 20 minutes, and the likelihood of the vehicle becoming unavailable for reasons other than the user canceling the booking or changing the phone's system time is pretty much non-existent, even this info is not retained on the phone. Because that would entail state.
The idea that the network interface of an app can be RESTful yet the app itself can carry state and cache stuff on the phone – and that is A-OK – seems absent from the minds of whoever develops this.
I'm not saying REST is the right approach to the apps you describe, but nothing you're describing is a REST issue per se. You're describing incompetent handling of network disconnects, cache invalidation and more. These are issues that plague all distributed systems.
True, but we have generations of developers whose mental model for this stuff is based on stateless protocols, and that naturally bleeds through into the software architectures they produce. I definitely notice the difference when talking to engineers who have experience developing stateful software (e.g., desktop apps, client/server, etc.), vs. those whose experience is primarily with web apps (not that web apps can't be stateful, I guess just different levels).
I've been thinking about this a lot lately, in the context of offline-first mobile dev. That's a tough nut to crack, and I find myself re-evaluating some classic distributed system stuff I haven't really thought about in years. Gotta dust some old gears off!
I think you are reading something between the lines
I quoted from the article regarding the idea of using REST or similar ideas for 'everything'.
But my own text talked about statelessness used to worst effect, not REST.
The point was that there is a whole generation of developers who never wrote a desktop app.
A lot of them come from web. They treat the cloud as an extension of the device with no regard for the case where it can't be connected to at all (for some time) or the latency induced by the connection wrecks UX.
And I see one reason for that in the statement I quoted. But again, what I wrote myself was not saying anything about REST.
These apps are poorly written, but this has nothing to do with REST, or HTTP being stateless. Indeed, HTTP being stateless forces apps to implement state management which can naturally survive network interruptions. If the apps you are complaining about used a stateful protocol, they would be even worse!
I said that the protocol can be stateless but the app itself, towards the user, should carry state – where this makes sense. E.g. in the cases I described.
I worked on one of these. We had giant problems with caching.
You're assuming the vehicle is unlikely to become unavailable, but there are so many ways that isn't true. The edge cases of car sharing can be downright weird. (For example, some services take bookings both in an app and on a mobile site, and can also cancel via a call to customer service. Say a user books on the mobile site and cancels in the app, or even by calling customer service. They are refunded the money. What happens when they then try to unlock the car? What happens if the signal to the car is poor?)
The tension between good UX and bad actors is amplified by $20,000 assets, and this had some of the most difficult problems I've worked on from a requirements standpoint. The easiest thing was to avoid caching where possible.
I'm learning this stuff at the minute, and based on the description provided of REST I couldn't marry it up with what the course was later describing as REST. This article has made sense of that for me.
"..today it would not be at all surprising to find that an engineering team has built a backend using REST even though the backend only talks to clients that the engineering team has full control over."
This bit is particularly important to me personally because I have been that guy in the past. As a new grad fresh out of college, someone taught me REST and I was.. impressed. I thought it was the only way to design web backends. It was my hammer and everything else I saw was a nail.
A few years on, I can look back and understand how stupid I was.
Frankly, years ago, when reading it, I got the same idea, but promptly dismissed it because it couldn't be the case that all those people got it wrong and I was right.
With the popularity of gRPC and Thrift I've not seen many cases where REST is needed for internal communication. I understand it's required for external communication though. gRPC and Thrift are kind of the opposite of the direction APIs are taking (or have already taken) - as in, don't we hate RPCs? We do, but speed and efficiency are much more important for internal comms (and binary protocols are just one of the factors at play here).
Yeah, it's designed for discoverable APIs on the open internet, why use it for internal communication I have no idea. All the time I've wasted making apps slower and worse, and arguing over it, just because it's the "correct" RESTful way, and creating and maintaining documentation, when there's only one single client, and it's also managed in the same team. It just becomes a line drawn in the code in a random place where everything becomes more difficult, dogmatic and impractical.
REST is far too narrow to fully encapsulate the features necessary in even the most basic of web apps. A truly RESTful API is essentially just a database. So application backends always end up being a JSON RPC API over HTTP. Better to formalize that with an RPC framework than zealously attempt to stick to something that is inadequate.
I disagree. The idea with REST is to push state to the backend, and add links to substates, so that the protocol doesn't have to be stateful, and logic is pushed to the edges of the network.
SPAs break this principle by making webpages into fully fledged apps which do their own state tracking and navigation. This was never the intention of REST. The idea was to make the webpages themselves linkable RESTful substates of the application, and have the browser do the rest.
From the SPA angle, RPC might look like a good solution. But RPC in itself doesn't solve any problems except literal remoting.
A RESTful API makes states linkable and removes ordering dependencies from the API.
Look at web frameworks which tried to make web sessions stateful. They are dealing with all the pains that REST solves.
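As a hypothetical illustration of "linkable substates" (every name and URL below is invented): instead of a server-side session that remembers where you are in, say, a checkout, each step is itself an addressable resource.

```python
# Hypothetical: each step of a checkout is an addressable substate rather
# than a position in a server-side session.
checkout_step = {
    "state": "awaiting_payment",
    "links": {
        "self":    {"href": "https://shop.example.com/checkouts/af12/payment"},
        "back":    {"href": "https://shop.example.com/checkouts/af12/address"},
        "confirm": {"href": "https://shop.example.com/checkouts/af12/confirmation"},
    },
}
# Because the state lives in the resource (and its links), the URL can be
# bookmarked, retried, or resumed elsewhere; there's no hidden ordering
# dependency on what the client happened to call before.
```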
Not true. People are trying to use the mental model of a huge stateful desktop app and apply it to the web. That's not to say it's a good idea to use that model, far from it.
The web was designed to avoid it for multiple very good reasons. In fact, the success of the web has partially been attributed to those very design decisions, so it might make sense to pay attention to them.
Google made public their own guidelines on how they design all of their new cloud APIs a few years ago. They came to a very different conclusion while still acknowledging the same points.
Their position seems to be to use the REST standard methods as much as possible (even when technically delivering an RPC API) and to fall back to custom methods when it makes sense to do so.
> A truly RESTful API is essentially just a database.
Not sure where you get that from. A truly RESTful API describes resources and their state changes via media types and URLs and the use of a limited number of verbs for transferring those state changes over the API.
People like to map REST to CRUD because GET looks like a SELECT, POST looks like an INSERT, PUT looks like an UPDATE and DELETE looks like a DELETE.
But there's nothing set in stone about those mappings.
An API can represent their resources however they like and use the appropriate links to URLs to represent the state changes. What the underlying persistent storage is (if any) is irrelevant.
YAGNI most of the time. Custom verbs change the focus to the verb. The verb is not visible because it's ephemeral during the "call", whereas the state of the noun is clear.
Verbs are also not idempotent for the most part, whereas it's relatively easy to make the non-idempotent HTTP verbs (POST and PATCH; PUT and DELETE are idempotent by definition) "repeatable" with ETags or other unique identifiers.
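For what it's worth, a minimal sketch of that "repeatable" pattern might look like the following. The endpoints are made up; the If-Match mechanics are standard HTTP conditional requests, while the Idempotency-Key header is a common convention rather than part of HTTP itself.

```python
# Illustrative sketch only: hostnames and paths are made up.
import uuid
import requests

BASE = "https://api.example.com"

# Conditional update: send the ETag we last saw. If the resource changed
# underneath us (or an earlier retry already applied), the server answers
# 412 Precondition Failed instead of silently clobbering or double-applying.
current = requests.get(f"{BASE}/accounts/42")
etag = current.headers["ETag"]

update = requests.patch(
    f"{BASE}/accounts/42",
    json={"status": "frozen"},
    headers={"If-Match": etag},
)
if update.status_code == 412:
    pass  # re-fetch, reconcile, and retry with the fresh ETag

# Repeatable POST: generate one key per logical operation and reuse it on
# every retry, so the server can deduplicate. ("Idempotency-Key" is a common
# convention, e.g. in payment APIs, not part of the HTTP spec.)
transfer_key = str(uuid.uuid4())
requests.post(
    f"{BASE}/accounts/42/transfers",
    json={"amount": 100},
    headers={"Idempotency-Key": transfer_key},
)
```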
Which is often what you want - a lot of things in the real world are more easily understood through verbs than nouns, which is what the essay is all about.
REST says that everything has to be an entity, which is why so many "REST" APIs end up just exposing their database as CRUD rather than offering a vocabulary that actually makes sense for the domain. When nouns are the only important things in your domain then it works. When verbs are important in your domain it doesn't.
> it's relatively easy to make the non-idempotent HTTP verbs (POST and PATCH; PUT and DELETE are idempotent by definition) "repeatable" with ETags or other unique identifiers.
Most "REST" APIs "in the wild" don't actually do that (or, worse, do it incorrectly), so I'd question the idea that it's actually easy in practice.
It eschews custom verbs in favour of a small set of standard verbs that are universal, independent of the media type. That way, the operation of each verb (idempotency, caching, etc.) can be fully specified and applies across the board.
If I do a GET, I know that I can do it again without affecting the resource. I'll GET the current state of the resource identified by the URL.
The RFCs define those verbs and their operation. I can rely on that spec for the way they operate.
Maybe you can do it with REST, but it would be more complicated and wouldn't be the right tool for the job.
RESTful, or FIOH, seems appropriate only if you are designing a public API. If your API isn't public, you don't need it to be RESTful. If you just need microservice communication, you don't even need REST or HTTP.
We shouldn't try to deform the whole logic and actual needs to fit a few HTTP verbs.
With RPC you can call any function by arbitrary name. That is by definition the most powerful protocol ever. Any restricted grammar will take away from that.
I agree, it's too narrow to be used for general remote procedure calls, and an incredibly primitive database compared to the alternatives. Imagine having a web socket directly to an RDBMS and being able to call any stored procedure; you would do backflips around teams arguing over REST APIs.
>Imagine having a web socket directly to an RDBMS and being able to call any stored procedure; you would do backflips around teams arguing over REST APIs.
Doing that will make REST zealots judge you for heresy.
> Doing that will make REST zealots judge you for heresy.
Yeah, but why people are REST zealots I don't understand. Why are developers allowed to have "strong opinions" that align with the goals of people outside the organisation and conflict with the organisation's goals? Isn't that misconduct?
I wouldn't mind using "FIOH API" instead of "well the API is RESTful, I mean, it isn't really RESTful, but we use HTTP methods on resource-based endpoints and a couple of HTTP status codes to represent our mostly CRUD-ish actions".
Working on a big microservice solution right now, it amazes me how inefficient we are: we are making HTTP calls, replying to HTTP calls, serializing data to JSON, and deserializing data from JSON in a long chain of calls.
I'd rather see a distributed application the same way I see a monolithic application: I don't want to make HTTP calls, I don't want to serialize JSON; what I want is to call remote functions with my local data in a reliable, fast and efficient way. It seems weird that we can't just do that, and instead we pack the data in layers upon layers, thus decreasing performance and increasing complexity a bit.
Following "REST" or "FIOH" we actually follow the principles of Execution in the Kingdom of Nouns[1]. REST is to APIs what OOP is to programming: something that is unnecessary and forcefully used by anyone in all those places it really shouldn't and for the wrong reasons.
>Anyway, Fielding makes clear that REST is intended as a solution for the scalability and consistency problems that arise when trying to connect hypermedia across the internet, not as an architectural model for distributed applications in general.
Are there known architectural models better suited to building microservices? GraphQL and gRPC are constructed on top of HTTP, which according to the article used REST as inspiration.
I dreaded SOAP, and how it seemed to spawn an industry. I dreaded having to answer SOAP-related questions in job interviews, but more than that I dreaded using any form of SOAP in my day-to-day work. It's nice that our industry consigns certain technologies to their place on the sidelines. That's not to say we don't still have many others that are a burden on the developer, and on the organization in general.
What's funny is that while REST is so badly conceptualized, the underlying concept of using simple HTTP with headers and a JSON payload to implement RPC is a very good one.
That said, I think XDR-based approaches (everything from the original Sun XDR to protobuf/gRPC) are far better, because the idea of passing API calls through HTTP proxy layers is kinda crazy.
> Fielding also felt that cookies are not RESTful because they add state to what should be a stateless system, but their usage was already entrenched.2
I'd be curious to see a web without cookies.
I understand the desire for statelessness, but given how many things need logins and a user "session", how would that work without cookies?
>We remember Fielding’s dissertation now as the dissertation that introduced REST, but really the dissertation is about how much one-size-fits-all software architectures suck
That's somewhat ironic given that everybody is using "RESTful" APIs now.
There were two key discoveries that Fielding proposed regarding URI, one of which is REST. The other is that URI may serve as a universal unique identifier that works regardless of whether a resource is resolved to a given address.
Yes, Fielding's dissertation backronymed the HTTP protocol into REST, but the concept of resource driven design also exists.
The benefits of applying the REST style are:
1. Limiting the verbs makes you focus on the nouns (URIs) and the description of them.
2. REST verbs do NOT "map" to CRUD in a database. That is overloading the REST style to match a server's implementation. How a server (or client) maintains the state of a resource is up to that server/client.
3. HATEOAS is not really achievable for naive clients. Browsers "expose" the links to the human user to enable the user to navigate. HTML provides the particular media type to expose that navigation.
4. If a server and clients agree on a media type, then part of that agreement is the interpretation of attributes, like "link" elements with "rel" and "href" attributes.
5. RPC (whether as ONC-RPC, CORBA, XML-RPC, JSON-RPC) etc is brittle. It usually requires both the client and server to upgrade their understanding of the interface simultaneously and makes it hard to version or separate the system elements. This lesson has to be learned over and over as programmers realize that "remote" procedure calls are not local and can't use the same semantics.
6. The process I've followed (successfully) is:
* Define a "well known" URL. I use "/versions" that returns a media type that describes the available API versions, attributes of them like "not before" and "not after" and a versioned endpoint for subsequent usage. This allows clients and servers to evolve independently. The versions follow semver, so it's clear when there's breaking and non-breaking changes.
* Define resources in terms of their state (via a FSM) and transitions. Map the transitions to HTTP verbs, along with media types that describe the change in state.
* The transitions should be expressed in terms of the resource, not CRUD. Don't talk about "Create Account"; you "open" an account. In business terms, it is very rare that you "delete" something. In addition, HTTP gives a request body on DELETE no defined semantics, and in practice DELETE responses often carry no body anyway.
* Using PUT vs PATCH can be avoided by exposing the things that you want to "PATCH" via sub-resources in the URL pattern (or appropriate HREFs in the parent resource). Including the sub-resource attributes in the parent (when you GET it for example) should be selectable via a query parameter on the GET. However, where possible, use GETs of the sub-resources and cache them appropriately.
The net result is that you have an API that is consumable mostly automatically, with the agreed media types defining the necessary navigation.
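Here's a rough sketch of the process above. It is purely illustrative: the URLs, fields and state names are invented, not prescribed by anything.

```python
# Purely illustrative: URLs, fields and state names are invented.

# 1. Well-known entry point: GET /versions describes the available API
#    versions so clients and servers can evolve independently (semver makes
#    breaking vs non-breaking changes explicit).
versions_response = {
    "versions": [
        {
            "version": "2.3.0",
            "not_before": "2023-01-01",
            "not_after": None,
            "links": {"root": {"href": "https://api.example.com/v2/"}},
        },
    ],
}

# 2. Transitions are expressed in terms of the resource's state machine and
#    mapped onto the standard verbs, e.g. for an account:
#      "open" an account -> POST https://api.example.com/v2/accounts
#                           (the media type describes the opening request)
#      inspect its state -> GET  .../v2/accounts/{id}  (cacheable, ETag'd)
#      "freeze" it       -> PUT  .../v2/accounts/{id}/status
#                           with a body such as {"status": "frozen"}
#    rather than a generic "update account" CRUD call.
```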
I've never used downloadable code as it hasn't been necessary, but it's relatively easy to see how to extend the media types to include links to the code (ie the equivalent of <script> tags).
Unfortunately, protobufs have re-introduced the concept of RPC-style APIs, and they absolutely suck, even if protobuf does allow you to version the messages. You still need to exchange the .proto definitions and recompile code to update.
It's still brittle RPC underneath, because programmers can't be assed dealing with network failures or other issues or with proper internal versioning of the underlying implementations.
There's really no need, for most use-cases, to use anything other than HTTP, the standard verbs and their interpretation, URLs (potentially with patterns, though avoidable with proper links), and JSON media types/schemas. Compressed JSON is mostly equivalent to protobuf on the wire, and HTTP/2 etc. avoids the head-of-line blocking of a single HTTP/1.1 TCP connection.
> RPC (whether as ONC-RPC, CORBA, XML-RPC, JSON-RPC) etc is brittle. It usually requires both the client and server to upgrade their understanding of the interface simultaneously and makes it hard to version or separate the system elements. This lesson has to be learned over and over as programmers realize that "remote" procedure calls are not local and can't use the same semantics.
Using "REST" for communication can be brittle too, since you have to update the contracts in both "client" and "server".
Sounds to me like you're listing the benefits of the FIOH (as used in the article) style, i.e. using HTTP verbs to manipulate (reasonably named, nested) resources (which often, but not always, maps to CRUD actions on your database models), as opposed to the RPC style.
The main issue is that we're doing FIOH and calling it REST, which makes internet arguments about RPC vs FIOH harder and confuses the hell out of newbies who are told to go and build a "REST API".
Agreed, "REST" as we generally use it is actually "RDD" or "ROA" (Resource Driven/Oriented Design). Use of HTTP as the protocol is a "side effect" that happens to map nicely. [1]
But it's more than that, especially if you design using resources, finite state machines and transitions. That's independent of protocols or anything else. Once you have that defined, the infrastructure often "falls out" and you have APIs that are reasonable, as well as working well into CQRS and Event Sourced implementations.
A lot of this stuff was thrashed out on that yahoo group, which happened around the same time as "AJAX" became the latest buzzword.
I think of RPC as "The verb is important, not the noun", whereas REST/ROA is "The noun is important, not the verb".
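As a tiny, purely illustrative contrast (the names and URL are invented):

```python
# RPC  - the verb carries the meaning:
#          get_latest_order_for_customer(customer_id=42)
# REST - the noun carries the meaning; the verb is one of a fixed set:
#          GET https://orders.example.com/customers/42/orders/latest
```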