Post-REST (tbray.org)
288 points by mpweiher on Nov 19, 2018 | 109 comments



REST for the control plane works because there are few enough "resources" that you can have a URI for each and CRUD works. Much more importantly, the pressure for complex multi-resource transactions is low: the control plane tends to have simple schemas, and there is rarely a need to mutate multiple resources with any kind of atomicity.

The data plane is not at all like that. There can be many many resources, and very complex transactions.

PostgREST, a REST interface to PostgreSQL, shows you can have REST for the data plane, but it achieves that by forcing one to have simple transactions with any and all complexity wrapped into PG functions so that the REST interface can remain simple.

The place where REST falls down, really, is transactions. For simple enough schemas and transactions involving only one relation, it's easy enough to have URIs for resources and collections. But for anything much more complex than that REST simply fails. Instead one ends up writing an end-point that accepts POSTs whose request bodies encode complex transactions, and this ends up resembling SOAP (basically RPC over HTTP) rather than REST.

Imagine an HTTP/1.1 verb named TRANSACT whose body consists of a concatenation of the requests that you really want to do, and with a header controlling / reflecting transaction control options (e.g., atomicity). If HTTP had such a thing, then it would be a lot easier to build a proper RESTful interface to the data plane. But HTTP has no such thing, and probably never will.
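
A rough sketch of how such a hypothetical request might look (the verb, the header, and the body format are all invented here for illustration; none of this exists in any HTTP spec):

  TRANSACT /db HTTP/1.1
  Host: api.example.org
  Transaction-Atomicity: all-or-nothing

  DELETE /sometable/somerowid
  PUT /sometable/someotherrowid {"col1": "new value"}
  POST /othertable {"col1": 42}

The server would apply the embedded operations as a unit and report a per-operation result in the response.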

Incidentally, it's useful to consider how NFSv3 had RPCs for every little thing, while NFSv4 has only one RPC (COMPOUND) that is... a lot like the TRANSACT I imagine above -- so it's not that crazy a thing to imagine. The transaction complexity problem is not unique to HTTP!


> The place where REST falls down, really, is transactions. For simple enough schemas and transactions involving only one relation, it's easy enough to have URIs for resources and collections. But for anything much more complex than that REST simply fails. Instead one ends up writing an end-point that accepts POSTs whose request bodies encode complex transactions, and this ends up resembling SOAP (basically RPC over HTTP) rather than REST.

It's perfectly RESTful; resources that represent actions are just as valid as any other resources. I feel like the mistake here extends a common mistake of object-oriented analysis, where the valid idea that objects should correspond to nouns somehow gets combined with forgetting that actions are nouns too.

If you have an API with complex transactions (especially if you need to expose state of those transactions), why wouldn't they be resources? Why wouldn't you post a representation to a collection endpoint? How is that not exactly REST?


> It's perfectly RESTful; ...

Yeah, so this is where the question of just what REST means comes in. There's no representation of the "transaction" resource that you can GET, so transactions are not like the other resources (the ones you can HEAD/GET). You can only POST representations of transactions that internally contain ad-hoc encodings of CRUD operations on multiple other resources. This really feels like it should have been a first-class operation in HTTP rather than constructed in an ad-hoc way as needed.


> There's no representation of the "transaction" resource that you can GET

Except in the most trivial cases, it's probably essential that there be such a representation and that you 201 the transaction with a URL for the resulting resource. The nature of aggregate transactions is that you probably don't want to do a 200 and done, especially if you are supporting arbitrary combinations of CRUD actions as you suggest.
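
A hedged sketch of that shape (URIs invented for illustration): the transaction is created as a resource, and its representation can then be fetched like any other:

  POST /transactions            {"operations": [ ... ]}
    -> 201 Created, Location: /transactions/7f3a
  GET  /transactions/7f3a
    -> 200 OK, {"status": "committed", "results": [ ... ]}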

> You can only POST representations of transactions that internally contain ad-hoc encodings of CRUD operations on multiple other resources.

“Ad hoc”? No more than any other resource representation. Not that I think most applications that need a transaction resource of some kind need an arbitrary combination of lower level actions; most need well defined classes of domain transactions that are each more constrained.


If you design in this model, 'GET transaction-uri' should get, y'know, the payload of the transaction, or a status resource, or whichever you asked for based on proactive or reactive content negotiation, or whichever the server wishes to make available.

It's just such a mental departure from the last 20 years of 'do-the-verb-to-the-noun' object oriented design that it's not often done.


Sure, but it's still not like a resource, and you still have to write code to implement it. You could never have a user-agent construct a transaction because the construction is completely ad-hoc and will differ by application. It's just not generic.


> Sure, but it's still not like a resource

It is exactly a resource.

> and you still have to write code to implement it.

Well, someone, somewhere has to, just like any other resource.

> You could never have a user-agent construct a transaction because the construction is completely ad-hoc and will differ by application.

This is not true in any sense where the same is not true of lots of resources.

> It's just not generic.

While REST may favor using suitable generic representations where available, being generic isn't a requirement of REST.

REST is DRY, not “Don’t write code for anything.”


It seems like the GP is arguing that REST is insufficiently generic and you’re responding with “That’s not true, because REST isn’t generic”. Why is it desirable to write the same boilerplate code for every resource and now (presumably) every combination of resources that are operated on by every kind of transaction? Or are you making a different argument?


It's not clear what 'generic' means here, or in the parent, or anywhere in this thread. But in my view, this post [1] is saying that payloads that define a complex transformation or atomic manipulation of multiple independently-accessible resources require an extremely intelligent user-agent to construct case-by-case, because there's no widely-accepted terminology to describe arbitrary transformations of arbitrary data, or to introspect raw, semantically non-marked-up data ahead of time and figure out what needs to be done.

In other words, constructing a resource you could POST to '/bank-transfer' that removes x amount from account y and adds it to account z requires the client to know what these things mean. But the point of this post [2] is that that's precisely true, and it shouldn't be surprising, because this is exactly how every resource works.

The disconnect between the expectations in this post [1] and this post [2] stems from the fact that most of these CRUD-over-HTTP APIs are actually implemented as thin middleware that parses info from the URI and hands it to the database to retrieve. Resulting data items are then serialized, using simple conventions, into some commonly used, non-hypermedia structural format like JSON (no links, no rels), and served as 'application/json', devoid of any programmatic semantic meaning. A human reads this, and uses a combination of common sense, natural language skills, documentation, and trial-and-error to extract meaning from such a payload (e.g. "customerId" probably matches our customer number). Both for retrieval and for its inverse, storage, this is simple and works: fields are matched by name, and the values are taken at face value.

To the author of this post [1], this probably feels 'generic'. It takes low effort to implement on the server, it can be templated to work on any resource it can find in the database, and assuming an intelligent (i.e. human) client, these resources are practically self-describing! But to the author of the other post [2], this approach seems like a hardcoding of a particular behavior into the client based on the structure of a resource (e.g. a Customer resource), which doesn't seem meaningfully different from the same burden that would be required to implement a resource that describes a domain verb rather than a domain noun (e.g. a Bank Transfer resource).

[1] https://news.ycombinator.com/item?id=18488976 [2] https://news.ycombinator.com/item?id=18489402


Generic refers to whether the existing user-agents can construct the request without the help of some JavaScript code. For example, your browsers know how to construct HEAD, GET, POST, and other requests, but they don't know how to construct a POST to a transaction end-point that encodes a bunch of POSTs on individual resources.


Assuming you accept things like links and forms, you can get a user-agent to collect input and ad-hoc "job" descriptions from the user and fill in the required structure in your specific application model. It's no more exotic than building up a shopping cart and submitting an order. Once you reify the process, you can add asynchronous monitoring and lifecycle controls as REST operations on these resources.

Don't mistake a custom resource model with server-side processing for a requirement for a custom user-agent. All of the above could be done with pure, form-driven interactions or augmented with client-side processing to provide a slightly nicer UX. Of course, that doesn't mean you will have a generic web service that can perform all different transactions.


I'm not sure what you're thinking of. What I've seen is a transaction resource expressed as a collection of modified documents. E.g., in pseudo JSON:

  [ doc1, doc2, doc3 ]
Usually each doc consists of some metadata (at least the doc URI, probably a revision number of the base document, maybe more depending) and the modified document content.

(actually, it might be better like this: { "uri1": doc1, "uri2": doc2, "uri3": doc3 })

I'm not sure what your objection is, but this seems pretty generic to me.

So the client GETs documents. It modifies some, puts them in a collection to form the transaction and POSTs them back to the store.


The reason you can't POST a transaction [1] is primarily ~25 years of Web inertia that has hardwired people's brains into thinking that the request lifecycle is: a request comes in, you deal with it, you emit your response, and then you drop all state on the floor. For a good chunk of those 25 years nothing else was possible due to memory constraints, but today things are a bit more fluid, yet the old ideas persist, often still unexamined [2].

I could build you a transactional API in Erlang, Haskell, or Go fairly easily, where it is easy and not necessarily unidiomatic to have a thread persist for a long period of time. It would do the obvious thing: You POST to get a transaction ID, the thread starts a DB transaction [3]. You do your various other POSTs, passing the transaction ID. When you're done, you POST to the transaction itself the commit request and get the response back. If you take too long, the coordinating thread will automatically roll back. It's not terribly different from other REST-based workflows that exist in the world; a REST interface [4] is not particularly required to not have workflows in it. There are various technical issues, some of which I footnoted even, but it's conceptually perfectly possible.
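
As a sketch of the shape of that interaction (the URL layout here is invented, not prescribed by anything):

  POST /transactions                        -> 201, Location: /transactions/42
  POST /transactions/42/operations  {...}   -> 200  (repeat for each step)
  POST /transactions/42/commit              -> 200 on success, 409 on conflict;
                                               the server rolls back on its own if you take too long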

There isn't a representation of any sort of resource until a developer creates it. There's no particular reason why a transaction can't be such a resource, any more than any number of other very abstract resources that exist in the world.

[1]: Properly speaking, not a GET because it changes the state of the world.

[2]: Unexamined != wrong, by the way. Some of the old issues still obtain. Some, however, do not. Dropping all state is still often a great idea, but there's a lot of APIs and code bent around a repeated cycle of "drop all state -> immediately reconstruct it all in the next request" that wastes a lot of developer and computer time for something that is not always necessary anymore if you're not scaling to the moon. Scaling code is great, but don't pay the price for 10^8 when you're never going to pass 10^3.

[3]: Possibly it could build up a list of desired changes you want, or a number of other ways around holding open DB transactions for long periods of time, but let's go ahead and do it the hard way for demonstration purposes.

[4]: At least, the modern colloquial sense of the term. I'm not particularly concerned about the details of The Thesis, which I consider a nice piece of work, but not something particularly binding on the rest of the world, and not necessarily that much more interesting than an extended blog post, since I do not believe it to have been based on anything like extensive real-world experience or anything else that would give it more heft than any other thesis.


> There's no particular reason why a transaction can't be such a resource, any more than any number of other very abstract resources that exist in the world.

Well, true, but also kinda beside the point? I mean, what you are saying is essentially "there is no particular reason why a transaction can not be represented as a sequence of bytes". But by that measure any sort of protocol or transport that can move sequences of bytes is equally good/useful.

The fact that you can in principle tunnel anything over anything is not a useful insight when trying to determine the usefulness of a particular abstraction, at best it tells you that you can always add another layer of abstraction to add missing functionality.


> You POST to get a transaction ID, the thread starts a DB transaction [3].

I don't disagree with the logic here, but this is going to get old fast in a modern load-balanced, multi-region/zone topology, since it means pegging the client's state interaction to a particular backend process.

Sticky sessions are the classical answer to this, but they are very much something you want to avoid these days, because they require that every proxy along the path to your app is aware of the stickiness. If you end up deploying your app to, say, Google Kubernetes Engine, you don't even get a load balancer that supports stickiness.

The only way I would implement this would be with a level of indirection. When a request comes in, start the thread, open a listener socket (with, say, gRPC), then store the socket address in some reliable shared data store (Etcd, ZooKeeper, Postgres, etc.). Any requests from the same client can then be routed to that socket internally. If you're writing this in Erlang, you can replace "socket" with "process" and be done in half the time.

That might work okay for latency-sensitive OLTP stuff, but a good pattern I've used before is to model transactions as explicit state. The client uses CRUD actions to build up a "batch" of data (e.g. in a CMS-type app, create a folder, then create a document in the folder), which is stored separately from live data and completely invisible. Once the client is ready to commit, it sends a commit request, and the backend takes the stored state and tries to apply it to the main database.

There are some benefits to this asynchronous, batch-oriented view of transactions: For example, you get audit logging and debuggability "for free", and the backend can be smart about conflict merging and retrying without involving the client. It also allows the client to post incomplete data; it can create a document before the folder exists, even though the document at this point is not yet valid according to business rules, and later in the same transaction it can "patch" the document with the right parent folder. This maps well to the kinds of ETL that apps tend to want to do, where it's nice to write a thin, dumb import process that doesn't have to keep lots of local state.
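
A minimal sketch of that staging pattern, with invented resource names:

  POST  /batches                                    -> 201, Location: /batches/b19
  PUT   /batches/b19/items/doc-1    {"parent": null, ...}      (incomplete is allowed here)
  PUT   /batches/b19/items/folder-1 {...}
  PATCH /batches/b19/items/doc-1    {"parent": "folder-1"}
  POST  /batches/b19/commit                         -> 202 Accepted, Location: /batches/b19/status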


"Scaling code is great, but don't pay the price for 10^8 when you're never going to pass 10^3."

I am responsible for applications both where holding open a DB transaction as I describe would be suicidal levels of stupid, as I expect hundreds of events per second as the application hits maturity, and where it is perfectly acceptable because the load of the application is measured in single-digit write sessions per week. (They're high-value sessions on a per-session basis, having massive effects, but there aren't a lot of them.) It's obviously a fail to engineer the 10^8 system to 10^-2 standards; it's less obviously a fail, but still a fail, to make the opposite error.


I was talking about the operational aspect, not scalability. Read my first paragraph again. :-)

I don't see any huge issues with the design in terms of scalability.


> The reason you can't POST a transaction [1] is primarily that due to ~25 years of Web inertia that has hardwired people's brains into thinking that the request lifecycle is that a request comes in, you deal with it, you emit your response, and then you drop all state on the floor.

I think that's still true if we're talking ephemeral state, like session data. Caching such data is an optimization, but conceptually we should be able to toss it at any point.

If you need to keep state, then it needs to be durable with a well-defined lifetime, and so this state should itself be a resource.


> If you need to keep state, then it needs to be durable with a well-defined lifetime

Why? Why can't you drop it at any time and have the client repeat the interaction on any kind of problem?

I'm not saying that it shouldn't be a resource either. Those things do not exclude each other.


I meant server side state. Client provided data isn't state from the server's perspective.


We are talking about server transactions. It's server-side data sent by a client.


Just FYI, there's a pending enhancement in PostgREST[1] that would allow us to take advantage of HTTP/2 multiplexing and have multiple requests executed atomically in a single transaction.

[1]: https://github.com/PostgREST/postgrest/issues/286


Neat!


Another thing, regarding eventing, is that you can use HTTP for that: just use chunked transfer encoding in never-ending responses. I've written a tailfhttpd that does this for tailing files a-la tail(1) -f, as you'd expect. You want to have some sort of application-level heartbeat though because HTTP/1.1 chunked transfer encoding sure doesn't have that, but otherwise you can absolutely sit there waiting for events that come along over HTTP, and this can be extremely fast (my tailfhttpd is C10K).
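
For example (hostname invented; curl's -N just turns off output buffering so events show up as they arrive):

  $ curl -N https://tailf.example.net/var/log/app.log
  2018-11-19T10:00:01 worker started
  2018-11-19T10:00:31 heartbeat
  2018-11-19T10:00:42 job 1234 finished
  ...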


> The place where REST falls down, really, is transactions.

I don't agree. The standard way of supporting transactions with REST is to provide a transactions endpoint where clients start a transaction by POSTing a new transaction resource, POST their DTOs to build up a unit of work, and when they are done just commit the transaction. That's pretty much a one-size-fits-all solution as that's the standard way to implement transactions at the database level.


I was going to say the same thing and was pleased to see that someone beat me to it. If you combine the unit of work pattern with continuation tokens, then you can handle bulk transactions involving very large numbers of records with a simple REST interface.


Yes, and I've said as much. But the user-agents can't construct this without JS.


> Imagine an HTTP/1.1 verb named TRANSACT

Why not just use RPC at that point? If one of the concatenated messages has a failure, REST can only report back an error code for one of them. Also, the client can't really stop sending once it has started. The whole connection would have to be torn down to cancel out early. Also, transactions usually involve both reads and writes. Doing a stream of writes that are not conditioned on some reads is often not that useful (e.g. check if this account has money, and then transfer to some other account).


For one thing because the CRUD structure enables a bunch of things. For example, suppose you wanted to front a service with a reverse proxy that performs authorization: if the verbs and resources encode all you need to authorize supplicants, then you could do it generically, otherwise your proxy would have to understand the application's resources in much more detail.

Of course, you can do this with RPC as well. But HTTP is so... widely used, while RPC frameworks are a) a dime a dozen, b) none so widely used that you can find reverse proxies for them.

  > > Imagine an HTTP/1.1 verb named TRANSACT
  > 
  > [...]  If one of the concatenated messages has a failure, REST can only report back an error code for one of them.  [...]
REST as it is can report whatever you want because the encoding of transactions is ad-hoc, so you just have to make a provision for conveying all the error information that you want.

A TRANSACT verb does not exist, but if it did, it could easily return a status code, message, and error response body for each operation in the transaction.


> But HTTP is so... widely used, while RPC frameworks are a) a dime a dozen, b) none so widely used that you can find reverse proxies for them.

The beauty of gRPC is that it is HTTP. All proxies that support HTTP/2 implicitly support gRPC.

> REST as it is can report whatever you want because the encoding of transactions is ad-hoc, so you just have to make a provision for conveying all the error information that you want.

But at that point it isn't really REST. The single request-single response semantics are lost. The error codes don't map cleanly to the operations. In some cases they do, but not generally.


> The beauty of gRPC is that it is HTTP. All proxies that support HTTP/2 implicitly support gRPC.

s/gRPC/SOAP/g

And what were the complaints about SOAP? Right, that using POST for everything was not RESTful, which among other things means things you could have gotten with GET can't be cached by middleboxes, and so on.

  > The beauty of gRPC is that it is HTTP. All proxies that support HTTP/2 implicitly support gRPC.
  > 
  > But at that point it isn't really REST. The single request-single response semantics are lost. The error codes don't map cleanly to the operations. In some cases they do, but not generally. 
No, you'd still have that: you'd be doing more than one request/response in a linked way.


> And what were the complaints about SOAP? Right, that using POST for everything was not RESTful, which among other things means things you could have gotten with GET can't be cached by middleboxes, and so on.

Honestly that wasn't much of an issue. If you're doing something transactional then the GET parts still need to participate in that transaction and so you don't actually want to be caching them. The big problems with SOAP were XML and especially XML schema, the fact that there was no good way to have a single source of truth about what your data model looked like, and especially the corresponding absence of clarity about what kind of changes to the data model were forward compatible.

And even with all that, SOAP applied intelligently (rather than autogenerated by one-size-fits-all tools) was often nicer to work with than REST is. Part of me thinks the only reason REST has a better reputation is that developers are forced to hand-craft REST interfaces whether they want to or not, whereas with SOAP it's very easy to have a tool generate an API that's crappy but "good enough".


In my mind, SOAP is terrible because of all of the tools that implement the standard but do not interoperate. The fact that a .NET and a Perl SOAP implementation can't communicate without a very specific configuration implies that the standard isn't clear enough on the points that actually matter.

The other issue I have with SOAP is really an issue with "generic RPC mechanisms" in general. In trying to generically solve every single possible RPC situation with the same tool, they end up creating an opaque tool with a ton of configuration options that expose the entire problem domain back to the user. So if that's necessary, then why bother trying to hide the details of the solution? Just let the developer decide how best to create the RPC abstraction in his or her own application.


> REST for the control plane works because there are few enough "resources" that you can have a URI for each and CRUD works.

Isn't the issue also that entities on the control plane often have relatively simple state machines? You can reduce REST complexity by designing APIs that manipulate the state machine rather than thinking in terms of a pure CRUD model.

For instance, the need to hold client-side state disappears in transaction models where you request a desired state and then have the option to wait for the actual state to correspond or just go away and check later to see how it's doing. The poll request (if there is one) translates from looking for transaction completion to waiting for the desired state.

Edit: for clarity
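
A small illustration of the desired-state pattern described above (resource names invented):

  PUT /instances/i-42/desired-state   {"state": "running"}
    -> 202 Accepted
  GET /instances/i-42
    -> {"desired": "running", "actual": "booting"}    (poll, or just come back later)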


I referred to schema simplicity elsewhere in that comment, which is not the same thing, but similar enough to state machine simplicity.


Combining multiple changes in one request is actually part of the OData standard (which is REST): http://docs.oasis-open.org/odata/odata/v4.0/errata02/os/comp...

You can send multiple retrieves and changes in one multipart request.
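
An abridged sketch of what an OData batch with a single changeset looks like on the wire (see the spec for the exact multipart details; e.g. the Content-Transfer-Encoding headers are omitted here):

  POST /service/$batch HTTP/1.1
  Content-Type: multipart/mixed; boundary=batch_1

  --batch_1
  Content-Type: multipart/mixed; boundary=changeset_1

  --changeset_1
  Content-Type: application/http

  PATCH Customers('ALFKI') HTTP/1.1
  Content-Type: application/json

  {"Phone": "555-0100"}
  --changeset_1--
  --batch_1--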


Would you go through a concrete example?


Sure. I'll use PostgREST, which has a RESTful data plane interface to PostgreSQL. There's no UI as such, though you can build one with, e.g., Admin-on-REST. There's only trivial transaction support, even though there's some non-trivial query support. All transactions consist of a POST to update one row or insert many rows all on the same table, or a DELETE to delete rows. All non-triviality has to be done via functions or views with INSTEAD OF triggers.

But now imagine you could construct a transaction like this using curl as an example:

  $ curl -X DELETE https://pgrest.foo.example/sometable/somerowid -- -X PUT https://pgrest.foo.example/sometable/someotherrowid -d @file_containing_row_data -- ...
If you use one curl(1) invocation per operation then you get no transactional capabilities. But you can't do it in one request because HTTP itself has no transactional capabilities.

Instead you have to construct a single request whose body encodes all those requests in some PostgREST-specific manner (as opposed to some generic and standard manner that all applications could use).


I think one big win could be to just hide the underlying APIs with better tooling.

For example, with GRPC your client bindings are automatically compiled and you just use them.

Is it using protocol buffers? Do you care?

To a certain extent we do care. A lot of these things "just work" out of the box but there are some practical real world implications here.

At Datastreamer we push about 100GB of JSON to our customers per day. This is all news and blog content.

Sometimes our customers complain that using JSON is a waste. Realistically though it costs them about $150 extra per month to parse the JSON vs using protocol buffers.

We made this decision because implementing JSON is a LOT easier than implementing protocol buffers.

Most of our customers end up putting a junior or intermediate engineer on the implementation and they're already familiar with JSON.

However, if the tooling hides the encoding and even the wire protocol this means we can upgrade faster.

We're still not at a point yet where the tooling is free. You tend to get stuck around something like GRPC or GraphQL anyway.

Hopefully this is resolved in the future and we can get all this infrastructure optimization for free.


> I think one big win could be to just hide the underlying APIs with better tooling.

This is what happened with SOAP because SOAP is too complicated to use without tooling. When it works, it's great. When it doesn't work, it's a nightmare.

The reason we use things like JSON and REST is because of the transparency and simplicity that it provides. There isn't much mystery when you need to debug. That's the trade off against things like bandwidth and performance.

I'm not really disagreeing with your point; it's easy to look at all this and wonder why we don't do it better and faster.


And then one quickly misses the tooling when building requests by hand.


"Is it using protocol buffers? Do you care?"

This is exactly why "RPC" solutions have always failed in the past.

The abstraction is too leaky. They quickly become almost impossible to debug, because the underlying implementation is needlessly opaque and complex (cough SOAP).

For the foreseeable future, there will always be more and better tools for analyzing and debugging HTTP and making ad hoc HTTP requests.


> Is it using protocol buffers? Do you care?

I don't, but other people do. For anything low latency, the cost of encoding/decoding starts to become extremely difficult to avoid.

But that said, I'm pretty sure GRPC can use JSON internally, so it's up to you which encoding to use.


> For anything low latency, the cost of encoding/decoding starts to become extremely difficult to avoid.

Whether this is an argument against JSON and in favor of Protobuf or vice-versa depends on which language you're operating in. Most benchmarks I've seen show that Protobuf is faster to encode and decode for Java and Golang while JSON is faster for Python and Javascript.

IMO, if you're worried about low latency, you're also likely using a compiled language where Protobuf will be faster anyway.


> For anything low latency, the cost of encoding/decoding starts to become extremely difficult to avoid.

Consider this: At the scale Google operates, fractions of a % of improvement are worth doing for saving the company (tens/hundreds/more) thousands of dollars in CPU/energy/anything, and they have the people to throw at the problem.

Does an alternative solution really improve any/all of throughput/latency/reliability?


> Setting up and tearing down an HTTP connection for every little operation you want to do is not free.

"Doctor, it hurts when I do this." So don't do that; many HTTP libraries come with connection pooling / persistence built in, so it's quite common to not even need to handle this yourself. Should we also stop using relational databases because setting up a connection takes time?

As for the rest of it, sure, you might have a queue, or some orchestration layer. Is that your consumer-facing API? Just b/c I push a message into some queue implementation does not mean I want to expose that implementation to my caller.


He has an entire section on "Post-REST: Persistent connections" which you seem to have missed.

"As for the rest of it, sure, you might have a queue, or some orchestration layer. Is that your consumer-facing API? Just b/c I push a message into some queue implementation does not mean I want to expose that implementation to my caller."

REST for external APIs, and queues for internal, more tightly coupled micro services seems like a good way to architect things.


> He has an entire section on "Post-REST: Persistent connections" which you seem to have missed.

That section is really stupid. He goes on about HTTP/2 and QUIC while seemingly being ignorant of HTTP/1.1's existence.

Keep-alive has existed for decades. Use it, it solves the problem definitively without adding any new ones.
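
With HTTP/1.1, persistent connections are the default; once the first response has arrived, the next request reuses the same TCP connection:

  GET /orders/1 HTTP/1.1     -> response
  GET /orders/2 HTTP/1.1     -> response, on the same TCP connection, no new handshake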

(People have yet to figure out what the point of HTTP/2 and QUIC was. Except for Google's not-invented-here syndrome, of course.)


For web applications that use "traditional" RESTful APIs with AJAX, most of the overhead is from setting up an AJAX call for every single call instead of using an "always-on" listener like websockets.


I get the points about why people don't like REST, and the latency argument in particular is quite true, though somewhat mitigated by HTTP/2. But I like that REST is quite easy for developers to reason about.

I actually built a websockets library (more a proof of concept) [1] that uses a REST-like syntax, implementing something similar to an Express API. It exposes both a WebSocket server and HTTP. It's FAR from anywhere near complete and just a concept. I have no doubt there are better solutions out there, but I just don't think REST is all that bad.

But I do also love @burtonator's idea of maybe not using JSON, but hiding the complexity of something like RPC.

[1] https://github.com/markwylde/restocket


REST is a good fit if the operations you want to do are CRUD, or if the data model you have is hierarchical. This works for most people, but it breaks down when operations involve multiple steps. A bank transaction is a good example, because it involves multiple updates that need to be done atomically. Mapping this to REST is complex. It's possible of course, but more complicated than the equivalent RPC counterpart. Hence, REST is not better in some common cases.


REST resources don't have to map 1-to-1 with your db tables

The transaction itself can be a resource

This seems almost like an argument against taking microservices so far that a single service can't own the whole transaction


This is a great point, and I think this is a major point of confusion with REST.

People think of resources as a table or row with CRUD operations rather than in the wider sense as an object with a generic set of operations to trigger transitions to a new state.

It's worth reiterating that REST operations are conceptually much more generic than CRUD. Nothing says that what you GET or PUT/POST need correspond directly to attributes of an object or columns in a database - in the widest sense they simply represent the current state of a resource and a state transition you wish to invoke respectively.

The point is not to (only) encode persistent state, but any state. In a sense representing state that is not persisted on the server is more essential - resources that are persistent on the server can be operated on without always sending the full state. But information that is not persisted on the server needs to be encoded and transmitted with the request/response in some way.


So, all of this new tech exists because people have issues with using a resource as an abstraction?


Elaborate on your bank transaction example. Have you written one, professionally? Banks have done fine with REST.


Yes, I implemented a karma system on my website, where users could give karma from one person to another. It involved a bunch of things, like creating notifications, persisting the transaction to a table, etc. Doing it with REST involves making a /transaction/ endpoint, where all the operations are done, and a final /transaction/commit POST. It's ugly and not obviously correct.


REST is good mostly because of the ecosystem. You get to build responsive web apps and also non-interactive automation. You get to have all manner of proxies because the transport is very amenable to them. Lots and lots of tooling is available. It doesn't suck, really. But it's not like REST is so superior to RPC -- yeah, there's caching, but that could be built into an RPC layer as well, and the only reason it hasn't been is that HTTP won.


Reposting here. At Global Day of Code Retreat this past weekend I was surprised that our Game of Life implementation in SQL was the easiest.

Why are we doing data schema type erasure at the Application layer? The "flexibility" and "encapsulation" aren't worth the manual hell of hand coding it back in all the way to the front end.

Why not expose SQL schemas all the way to the front end? Even if the Application layer makes a view that the data store doesn't know about, everything downstream could benefit through automation.

From there you get into issues mostly having to do with APIs that can be paginated, and how to optimize those connections to balance caching with network latency.


"We" do sometimes put declarative languages at the client level, whether SQL or something else like Prolog, Datalog, MDX... It's a long tradition. As for why "we" don't do it more often, many times you don't need to, or I think a lot of the barrier is a combination of lack of tooling plus lack of expressiveness in the client programming language. The alternative of "hand coding" actually has a lot of automated tooling already, too, so there's not much handwork involved at all if you don't want there to be. Feed the tool nothing but your schema, out pops a de-sql'd MVC-esque set of program files in whatever languages you care for. For some things it's nicest to work with these autogen'd artifacts over anything else as easily available.

A somewhat related question that's fun to think about is why we don't put databases directly in our clients very often (especially SPAs). The ClojureScript community has been doing some good work exploring what it's like to put your client state in a client DB type of structure. There was also a compilation of SQLite to JavaScript done, I wonder how much usage that has?


> Why not expose SQL schemas all the way to the front end?

Your API is a promise and if you break your promise bad things happen.

If your API is SQL against some tables, you're promising you won't make any breaking changes to that -- or else you're promising you can roll out an update to all the servers and clients at the same time.

SQL against tables has a very large surface area, and there are many details that are different depending on the version of your database software. So, you'll find you can't update your database software without also updating (and testing) all of your clients. The more complex and more widely distributed your software is, the more time-consuming and expensive it will be to update anything. If you have clients outside your direct control and not running on the same release cycle, you'll find you can't update your database software at all without breaking your clients.


> SQL against tables has a very large surface area, and there are many details that are different depending on the version of your database software

So you only offer an ANSI SQL compatible parser, not specific to any database (and it would then be compatible / compile-able to postgres, sqlite, whatever), and you only expose specific views into the database.

Once you create such a public view, you treat it as immutable, and all API changes require new views. Each old view must function forever.

This is not actually all that different from REST apis today; we version them, old versions must exist forever (or all clients must be updated), you can't update your server software that serves the rest API without being darn sure it's backwards compatible...

The only real difference is we encode our CRUD operations in a non-sql language right now (ad-hoc json schemas usually) vs encoding them as the ANSI subset of SQL against some fictional database views.

All of your complaints apply equally well to REST apis I think.


You’re ignoring the massive surface area of SQL, ANSI or no. You’re going to have to maintain that — for each of your API versions, including any/all semantics, quirks, etc clients may rely on.

It’s a much larger commitment than you need to be making for many, many APIs.


"Why not expose SQL schemas all the way to the front end?"

Because providing a SQL interface to all of your external consumers is a good way to quickly bring your service to its knees.


> To start with, it’s not as though client developers can compose arbitrary queries, limited only by the semantics of GraphQL, and expect to get uniformly decent performance. (To be fair, the same is true of SQL.)

Maybe I'm missing something, but isn't that exactly what you can expect from a GraphQL API as a sweet spot between SQL and REST? Semantics of GraphQL enable a schema-dependent set of queries, not arbitrary queries.

> Anyhow, I see GraphQL as a convenience feature designed to make synchronous APIs run more efficiently.

For asynchronicity, GraphQL has subscriptions.

That being said, I agree GraphQL (of today at least) won't be a solution to every problem and I bet transports such as HTTP/2 and HTTP/3 will cause new things to be built which don't look like just a faster REST, GraphQL or MQTT. I would mention event sourcing and persistent logs in addition to just stream/queue processing.

Tooling (like API management gateways and API consoles) is still not all there for GraphQL and MQTT either.


> Semantics of GraphQL enable a schema-dependent set of queries, not arbitrary queries.

I mean, in SQL useful queries are generally schema-dependent. Sure, you could have XPath/JSONPath/whatever extensions that allow you to query something closer to free-form documents, but your queries will still be schema-dependent in some way.


SQL schema defines data and you cannot limit which queries are available on that data, whereas GraphQL schema defines both data and which queries are available.


I'm surprised we haven't seen more development already with HTTP/2 REST APIs. It lets you do your API at whatever logical level makes sense and then do your orchestration independently.


I stumbled on this the other day http://rsocket.io/

Does anyone have an opinion if it's a good idea and likely to gain adoption?

It sounds to me like it implements the sort of thing being discussed in this post


RSocket is gaining use at Facebook and is beginning to get traction at Alibaba. It appears Spring 5.2 will also have first-class support for RSocket. There is also a company, founded by some of the creators of RSocket, that is building an open-source application development platform for it called Proteus.


The article is named Post-REST, but the summary says: "...REST still provides a good clean way to decompose complicated problems, and its extreme simplicity and resilience..."

Is there a reason not to use REST then? I didn't quite understand.


Yes and no. Should you keep using HTTP calls for adding an item to a cart? Possibly. Should you use HTTP calls to communicate between services, let's say, to poll a resource every second and know if a worker has finished, when there are better systems and there's good enough technology available to support a higher throughput? Not so much.


> Should you use HTTP calls to communicate between services

Yes, it's mostly fine for that. Not always ideal, sure.

> let's say, to poll a resource every second and know if a worker has finished

No, you generally shouldn't poll every second. You should POST a subscription and then the other end should POST you a notification.
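
A minimal sketch of that subscribe-and-notify shape (all URLs are placeholders):

  POST /jobs/1234/subscriptions   {"callback": "https://client.example/job-events"}
    -> 201 Created

  ...and when the worker finishes, the service calls the client back:

  POST https://client.example/job-events   {"job": "/jobs/1234", "state": "finished"}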

If you are in a situation where you need to poll instead of subscribing and waiting, then, maybe use HTTP long polling, maybe poll less frequently than every second, maybe use something other than HTTP, but in any case you are in a suboptimal circumstance.

(Of course, whether you should use HTTP as a transport protocol is orthogonal to whether or not you should use a REST architectural style.)


Polling every second would be wasteful; you can just make an HTTP request and hold the connection open until it's finished. Or just pass a URL, and let the other side call you.

(That's assuming HTTP, which btw is not the only protocol which can implement REST constraints)


Then you are dealing with client timeouts or the fact that a lot of endpoints do not support url callbacks (for good reasons) or maybe clients that cannot be called back (for good reasons too).

Cannot say much about REST constraints outside of HTTP, so excuse my ignorance on that. But I think it was clear the parent was referring to HTTP, or most probably, GET and POST HTTP calls returning JSON.


You always have to deal with timeouts and other network issues, regardless of protocol.

Rather than retry every second, you retry on timeout.


And yet most HTTP APIs are going to give you a response with a status instead of holding it. I guess because it's much easier to implement and also assumes less on the consumer part. Then there was Comet and what resulted from that: WebSockets and server-sent events.


So in summary - HTTP calls for front-end, but not back-end to back-end services?


Not exactly.

Transactional CRUD (post a form, submit an email, add to cart) is pretty well adopted and standardized as HTTP, so probably use that (even backend to backend, if the contract fits CRUD resource semantics).

RPC (non-CRUD interactions) or streaming updates (e.g. a spinner in an app representing a pending job) or low latency (chat messages / presence status, etc.) don't really fit HTTP requests so well (requiring e.g. polling), so you should look at alternatives (probably something proxied over a websocket in the case of a browser front-end app).


I wouldn't be super quick on drawing the front-end and back-end distinction. A front facing interface using sockets could (depending on how flexible we are with terminology of course) be considered front-end too. And I think that's the entire point of the article.

I have seen huge benefits, performance being one, communicating using ZMQ (at the cost of complexity and reliability). A worker, running on your computer, consuming data fed through a socket from a broker, running on your server.


After reading the article, it seems the combination of REST APIs with durable, persistent queues (Kafka) covers most architectural needs. Queues for managing internal low-latency and high-throughput use cases. REST for external-facing APIs with good documentation and well-defined semantics.


> Is there a reason not to use REST then? I didn't quite understand

There can be. For example in the world of satellite internet where every byte is precious and every tcp connection is a massive performance penalty (3 way handshake * distance to satellite at speed of light = huge latencies), a REST API can be very wasteful. Something like GraphQL is much more suited to that case where you can get only exactly what you need, nothing more, and on a single connection.


Nothing in REST prevents you from asking and getting only what you need. Assuming you're using HTTP, that's a common use of URL parameters; alternatively, you can pass a more detailed request in the body (yes, even in GET).
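
For example, many APIs accept some kind of field-selection parameter (the parameter name varies by API; PostgREST spells it select=, others use fields= or similar):

  GET /customers/42?fields=name,email
    -> {"name": "Ada", "email": "ada@example.org"}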


> Nothing in REST prevents you from asking and getting only what you need. Assuming you're using HTTP, that's a common use of URL parameters; alternatively, you can pass a more detailed request in the body (yes, even in GET).

GET is specified as not having a body and conformant HTTP implementations ignore the body of a GET, so in addition to violating RESTs general proscription on violating the underlying protocol (HTTP, in this example), you break compatibility with standard toolchain components if you do this.

What you seem to want is SEARCH pulled out from under the weight of WebDAV, i.e.: https://tools.ietf.org/html/draft-snell-search-method-01


Right but... you're basically suggesting I implement a query language into my REST backend. Why would I do that if a similar QL already exists in the form of GraphQL?


Or use OData, which is a REST query language.


Where are the tech manager blog posts that discuss time, money, and effort in addition to tech pros and cons?

GraphQL and SPAs are unconstrained tech solutions with questionable upside. Only well funded organizations can afford to throw money at projects that are more about attracting engineers than improving the bottom line.

You don't need a huge marketing campaign and a non-profit institution to drive adoption of technology that has a compelling value proposition.


I was recently working on an API where we decided to use protobuf. I remember the first time I saw the message as encoded by protobuf; it really made me aware of how wasteful JSON can be. I know that usually that all comes out in the gzipping but there is nothing quite like seeing half a page of JSON become 1 line of gibberish to demonstrate the point.
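
A toy illustration of the difference (sizes are approximate, and the schema is invented):

  {"customerId": 421, "active": true, "name": "Ada"}    <- ~50 bytes of JSON

  message Customer {           // equivalent protobuf schema; the same record
    int64  customer_id = 1;    // encodes to roughly ten bytes of binary
    bool   active      = 2;
    string name        = 3;
  }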


Protobuf is really slow with encoding/decoding and also does unnecessary validation of the data. If you are just starting out with binary data serialization I recommend FlatBuffers instead of protobuf.


Really slow compared to what? JSON? A handwritten binary encoding? For most use cases protobuf should be fast enough. I also don't understand the unnecessary validation part. If you receive data over the network you should always validate it, even if it's from within the same company network.


Maybe things have improved since, but the last time I tried FlatBuffers (about 2 years ago), I found the API (generated C#) to create the FlatBuffers cumbersome, inelegant and quite error prone.


Problems with these assumptions:

- QUIC. Yes, it's UDP, so its connections don't behave the way TCP ones do, but it's still based on IP and UDP connections, so it still breaks down under a number of circumstances (rotating backends, mobile clients changing connection types, firewalls/proxies, etc)

- HTTP-based. If you use HTTP, you must accept its warts, like being stateless. Even considering a transport protocol that can multiplex connections, you still need to track all these individual stateless calls. Perhaps a protocol that took care of state might give you some benefits (such as workflowing?)

- Workflow orchestration is just distributed centralized state machines. It shouldn't have much to do with a replacement for REST, because the API isn't necessarily controlling a state machine, and the API (in this case HTTP) isn't stateful anyway...

Why don't we evolve APIs into a sort of language? Why make 4 separate requests (PUT this, GET that, munge it, POST a result, GET to confirm) when you know exactly what you want to do? We can instead build a language or spec where we can shove a variety of instructions (even a "workflow") into a request, let the backend handle the method of accomplishing it for us, and retrieve the result. This can be extended throughout microservice architectures.
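
Something along these lines, purely as a sketch of the idea (the format is invented):

  POST /execute HTTP/1.1
  Content-Type: application/json

  {"steps": [
    {"op": "get",  "href": "/accounts/42", "as": "src"},
    {"op": "put",  "href": "/accounts/42", "body": {"balance": "${src.balance - 100}"}},
    {"op": "post", "href": "/transfers",   "body": {"from": 42, "to": 7, "amount": 100}}
  ]}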

The transport protocol is a separate issue. So you want lower latency, and you want fewer round trips, and you want to survive network interruptions. Design a transport protocol for that (QUIC is a good base). If you want it to survive even more interruption and allow clients to split networks, start looking at intelligent tunnels with advanced routing mechanisms and spread them along the path. But that's a lot more funky than simply replacing TCP.


> - QUIC. Yes, it's UDP, so its connections don't behave the way TCP ones do, but it's still based on IP and UDP connections, so it still breaks down under a number of circumstances (rotating backends, mobile clients changing connection types, firewalls/proxies, etc)

Firewall support is a big concern but the other points you mentioned are design goals for QUIC — see section 9 of https://datatracker.ietf.org/doc/draft-ietf-quic-transport/?... (“Connection Migration”):

> The use of a connection ID allows connections to survive changes to endpoint addresses (that is, IP address and/or port), such as those caused by an endpoint migrating to a new network. … Not all changes of peer address are intentional migrations. The peer could experience NAT rebinding: a change of address due to a middlebox, usually a NAT, allocating a new outgoing port or even a new outgoing IP address for a flow.


"If you use HTTP, you must accept its warts, like being stateless."

Being stateless is one of the best things about HTTP, I wouldn't consider it a wart at all.

State should be very, very isolated. Ideally, stored in a database somewhere.

"Why make 4 separate requests (PUT this, GET that, munge it, POST a result, GET to confirm) when you know exactly what you want to do?"

At that point, you are implementing a database with transactions. So expose it over a SQL interface and be done with it (see Cockroach DB). This is solving a very different problem from most REST APIs. Exposing the ability for external customers to run arbitrary transactions against your servers is very dangerous in terms of provisioning, latency, and availability requirements.

Unless your business is providing a database as a service.


State isn't always isolated, and for good reason. Design and operational parameters may dictate state as part of the expected functionality. Sometimes, because of the stateless nature of a protocol, extra work has to be done to constantly re-evaluate an operation in progress to see if it should be terminated early or changed. Since the operation isn't stateful, some other separate component has to act as a sort of babysitter. Then you either have to have the inefficiency of repeated calls, or implement your own pseudo-stateful protocol to wait for some kind of event to be passed up to alert it of a state change. Using the same hammer for every kind of nail is dumb.

And no, you don't need a database and transactions just to perform four operations and return a result. Not every transaction will be affected by a change of state in the middle of the transaction. Sometimes transactions are duplicated, or replace a result entirely, and the latest one can win. A transaction in progress can also provide data to another transaction later, and the first transaction can be paused or terminated. We don't see complex interactions like this because the databases and protocols we use have strict separations between their parts, but we don't have to do things this way. The more functionality you surrender to a database, the less flexibility you have in the future, because now your big database is the bottleneck for operations.

We need more design paradigms that allow flexible operations, and protocols that can work with such operations to manage transactions by also working with state. I say we "need" this because it seems the state of the art of these systems has languished under proclamations about what we can and can't do, due to assumptions about how we must go about them, and what technology is readily available for use. But we're talking about a new protocol, so we don't have to carry the baggage of those assumptions with us.


"And no, you don't need a database and transactions just to perform four operations and return a result."

The actual request was for a composable language, with 4 operations as one example. But a language isn't limited to just four operations.

By the way, this exists in the FHIR spec as a transaction Bundle:

https://www.hl7.org/fhir/bundle-transaction.json.html

But that, of course, now means you are implementing transactions. Which is one of the hardest things to implement in a distributed environment in a correct, scalable, and performant manner. Frankly, I would be very hesitant to trust such an implementation, if it wasn't the core competency of the developers who wrote it.

(Or unless they are just calling something else that implements the transactions, but at that point again you are just a very thin layer on top of a database.)


This is very important. It is absolutely critical to be able to push state out to the client (e.g., using encrypted state cookies, or avoiding state altogether).


Plugging a personal project: I’ve been working on a project called Lightbus [1] which has been very useful for me in creating evented systems. It is still early days, but so far myself and a couple of others have found it useful.

[1]: https://lightbus.org


If REST ultimately results in heavy use of message queues and async URL responses, the question is why not use e.g. AMQP or another messaging protocol straight away rather than bother using HTTP facades/services as unnecessary SPoFs, at least between backends.


Depends on how tightly coupled you want to be.

Directly exposing your queue to external consumers seems very dangerous.

Having a well defined and well documented REST API for external consumers, and using queues for communicating between tightly coupled services seems like a much better solution.


Well-defined and well-documented isn't exactly what I'd use to describe "REST" APIs.


Why not?


Turns out most people are perfectly happy with their little SUV even if you can't transport a couple of tons of steel or win a Formula 1 race.

It's ridiculous to think that anything can cover all use cases.


> The idea is you get a request, you validate it, maybe you do some computation on it, then you drop it on a queue and forget about it.

> The next stage of request handling is implemented by services that read the queue and either route an answer back to the original requester or passes it on to another service stage.

In this scenario, what are the mechanisms by which a service could route an answer back to the original requester?


My question exactly. Can’t exactly use long polling for this.


Looks like event-driven systems are all the rage now!

I feel there's still too much competition/innovation in the message queue space though. Too many choices, with maddeningly different characteristics.

And seemingly you can tweak most of those to behave somewhat similarly between them, which makes judgement even harder.


As it should be.

The question is: can you use HTTP to transport event notifications?

Actually, can you use TCP to transport event notifications?

Yes, you can, but it's a bit unwieldy because you need some sort of keepalive / heartbeat with which to distinguish inactivity from transient network problems (and then attempt to reconnect). Then again, even if you were using UDP you'd have to do some sort of keepalive / heartbeat.

I use an HTTP server that supports "tail -f" via chunked transfer-encoding responses to indeterminate byte range requests (Range: bytes=${offset}-) that don't end until the file is removed, renamed away, or replaced (renamed over). This server also supports weak ETags constructed from (device ID, inode number, generation number), If-Match, and If-None-Match.

This allows clients to GET a resource whose body is the event stream and hang in there waiting for events.

Again, you need to add a heartbeat in the event stream as HTTP chunked transfer-encoding does not have this, and you need to detect network problems. But otherwise this works fantastically well and it is HTTP and RESTful.


Isn't competition good for consumers though? Eventually, one of these network architectures will become more popular and we don't have to try them all out personally ourselves. I benefit greatly from people in the 90s testing out HTTP over other competing protocols, which, at the time may not have been obviously better.


"I feel there's still too much competition/innovation in the message queue space though."

Interesting, to me Kafka seems like the clear winner and the obvious choice for all of my queuing needs.

What am I missing out on from other queuing systems?


I recently evaluated a bunch of different messaging systems, and found NATS to have the widest support for different client languages, plus those clients are also maintained by the same parent company. With other MQs we have used, we have found much variety in the quality of the client libs. YMMV.



