
REST is for noobs; JSON RPC is the silent pro's choice :)

Make all requests POST and enjoy an easy life without useless debates about whether creating a resource should be POST or PUT, or whether you should return HTTP 404 or 200 when a resource/document on the server is not found (of course it should be 200, because the request itself was a success; 404 should only be used when the API method is not found).

I 100% agree with Troy Griffitts' beautiful take: https://vmrcre.org/web/scribe/home/-/blogs/why-rest-sucks



JSON RPC:

- Everything is a POST, so normal HTTP caching is out of the question.

- JSON RPC code generators are non-existent or badly maintained depending on the language. Same with doc generators.

- Batching is redundant with HTTP/2; it just complicates things.

- Because everything is a POST, normal logging isn't effective (e.g. seeing the URL in logs, easy filtering, etc.). You'll have to write something yourself.

- Not binary like Protobufs or similar
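On the logging point: since every call hits the same URL, the method name has to be dug out of the request body. A sketch of the kind of middleware you end up writing yourself (the method name and log format here are invented for illustration):

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_rpc(body: bytes) -> str:
    """Recover the logical 'URL' (the JSON-RPC method name) from a request body."""
    try:
        req = json.loads(body)
        method = req.get("method", "<unknown>") if isinstance(req, dict) else "<malformed>"
    except ValueError:
        method = "<malformed>"
    # Access logs would otherwise just show "POST /rpc" for every single call.
    logging.info("POST /rpc method=%s", method)
    return method

log_rpc(b'{"jsonrpc": "2.0", "id": 1, "method": "user.get"}')
```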

But yeah, "the silent pro's choice"... Let's keep it silent.

JSON RPC is pretty much dead at this point and superseded by better alternatives if you're designing an RPC service.


> - JSON RPC code generators are non-existent or badly maintained depending on the language.

Very much so. It’s in a terrible state where I’ve looked. Most of the tooling is built around OpenAPI or similar, which comes with a bloatload of crap and is only marginally better than, say, SOAP. It needs to be much simpler.

> - Not binary like Protobufs or similar

Agreed. This is not an issue for small things that can be base64-encoded, but once you need large blob transfers you don’t have any reasonable option. This is a problem in e.g. GraphQL, which also misses the mark: you have to step outside it for things like file uploads.

It feels like the whole standardization effort around json rpc is weak. It doesn’t address the needs of modern RPC-like systems. Which is unfortunate because there’s a real opportunity to improve upon the chaos of REST.


It's not ideal, but in practice GZIP base64 is only marginally larger than GZIP binary


Indeed, good point, and worth clarifying. A lot of people think the size overhead is the problem, which usually it isn't, like you say, because of fairly cheap compression.

However, the main issue with big base64 blobs is that you should never assume JSON parsers are streaming. So you may need to load the whole thing into memory, which of course isn't good.

Note that I'm not necessarily blaming JSON for this. My gut feeling is that crusading for streaming parsers is a Bad Idea. Instead, this is something that should probably be a higher-level protocol, either by streaming chunks (a la gRPC) or by having separate logical data streams (see e.g. QUIC). JSON RPC does not, afaict, solve these issues.


Base64 multiplies the GZIP size by 1.33x (4/3, or a 33.3% increase in size)

SO (https://stackoverflow.com/questions/4715415/base64-what-is-t...)


That's for Base64-encoded GZIP, not GZIP-encoded Base64 :)
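A quick sanity check of the claim, sketched with Python's stdlib (the 1 MiB random payload is an arbitrary choice; random bytes are the worst case for compression, so the numbers aren't flattered by a repetitive input):

```python
import base64
import gzip
import os

# 1 MiB of random bytes: incompressible on its own.
blob = os.urandom(1 << 20)

gz_binary = gzip.compress(blob)                    # GZIP of the raw binary
gz_base64 = gzip.compress(base64.b64encode(blob))  # GZIP of the base64 text

# Base64 inflates the input by 4/3, but DEFLATE's Huffman stage squeezes the
# 64-symbol alphabet back toward 6 bits per character, so the two compressed
# outputs end up close in size.
print(len(gz_binary), len(gz_base64), len(gz_base64) / len(gz_binary))
```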


Have you tried zstd, now widely supported?


Thanks for this. I felt I was going crazy, decrying many professional and smart engineers' work as not being 'expert' enough, as if they hadn't weighed up and considered other options. Yes, there can be a bit of cargo culting, but to claim that only experts use JSON RPC is ridiculous.


I always fail to understand what kind of services there are that aren’t at least RPC-ish.

Thin CRUD wrappers, obviously, but usually when you are piping data from one source/format to another, you typically want to do something that is ever so slightly “not-CRUD” (call another API/service, etc.)


I'm with you on this one.

Probably the confusion comes from the fact that a lot of people think having a verb in their URI makes the API RPC, while having only nouns is proper REST.

But the whole verbs vs nouns debate in the context of REST sounds a bit like... arguing whether building a round or square control tower out of straw will attract more cargo.

HATEOAS is the cornerstone of REST, this is what sets it apart from RPC-style applications, not the absence or presence of verbs in URIs.

Think of a regular (that is non-SPA) Django, RoR, etc application.

The user points their browser to the app's home page. The backend receives the HTTP request, renders the HTML, and sends it back to the browser.

The browser renders the HTML and lets the user interact with all the control elements on the page. When the user clicks a button or follows a link, the browser sends the corresponding HTTP request to the backend, which inspects it and decides what next HTML page (or maybe the same one), representing the state of the app, should be transferred to the user.

This is basically REST. The key thing to notice here is that at no point in this example does the browser get to decide what the app's "flow" is supposed to be -- that is the sole responsibility of the backend.

A consequence of this is the entire structure of pages (aka resources) can undergo a drastic change at any moment, but as long as the home page URI stays the same, the user doesn't suddenly need another browser to access the app.

If changing a resource's URI, or removing a resource altogether can break an existing client, if an existing client cannot make use of a new resource without changes to the client's sources -- that's RPC even if there's not a single verb in the API URIs.

Most likely this architectural style isn't something that first comes to mind when we think of today's mobile apps or SPAs as API clients. And in my opinion it's just not a good fit for most of them: the server isn't expected to drive their flow, it just exposes an API and lets each client come up with its own UX/UI.
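To make the "server drives the flow" idea concrete, here is a sketch of a hypermedia-style response (URIs and field names are hypothetical, loosely in the HAL style): the server embeds the allowed next steps as links, and the client follows them instead of hard-coding URIs.

```python
# Hypothetical hypermedia representation of an order resource. Only the
# entry-point URI is hard-coded in the client; everything else is discovered.
order = {
    "id": 42,
    "status": "pending",
    "_links": {
        "self":   {"href": "/orders/42"},
        "cancel": {"href": "/orders/42/cancel"},  # offered only while cancellable
        "pay":    {"href": "/orders/42/payment"},
    },
}

# A HATEOAS client asks "which transitions did the server offer?" rather than
# constructing URIs from out-of-band knowledge of the API.
actions = sorted(rel for rel in order["_links"] if rel != "self")
print(actions)
```

If the server stops offering "cancel", a client built this way simply stops showing that action; nothing breaks.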


Noob question, why is batching redundant in HTTP2?


It isn't.

Batching means combining multiple logical operations in a single physical request. HTTP/2 muxes N logical requests over 1 physical connection, which is good, but the application will still process requests independently. You always want to batch workloads into single requests if you can, HTTP/2 doesn't change this.
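For reference, a JSON-RPC 2.0 batch is just an array of request objects in a single POST body (the method names below are invented for illustration):

```python
import json

# One physical request carrying three logical calls. Per the JSON-RPC 2.0
# spec, an entry without an "id" is a notification and gets no response.
batch = [
    {"jsonrpc": "2.0", "id": 1, "method": "user.get", "params": {"id": 42}},
    {"jsonrpc": "2.0", "id": 2, "method": "orders.list", "params": {"user": 42}},
    {"jsonrpc": "2.0", "method": "metrics.ping"},
]
body = json.dumps(batch)

# HTTP/2 would instead multiplex three separate POSTs over one connection:
# fewer head-of-line stalls, but still three requests for the server to
# authenticate, parse, and route independently.
print(len(json.loads(body)))
```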


Doesn't seem redundant to me. Even if you can multiplex requests, batches still have certain advantages, e.g.

- compression is often more efficient with larger payloads

- can reduce per-request overheads, e.g. do authentication once rather than X times

- easier to coalesce multiple queries, e.g. merge similar requests to enable data retrieval via a bulk query, instead of X individual queries


HTTP/2 supports multiplexing, so you can send multiple requests at once on the same connection.


I don't like REST either, but JSON RPC is similarly hamstrung in some scenarios (examples: streaming, CDN caching, binary encoding).

I mostly dislike REST because nobody can agree on what it is and there are too many zealots who love to bikeshed. If you stick with the simple parts of REST and ignore the zealots, it's decent enough for many scenarios.

I've yet to find an RPC protocol that fills all requirements I've encountered, they all have tradeoffs and at this point you're better off learning the tradeoffs and how to deal with them (REST, JSON RPC, gRPC, WebSockets, etc.) and how they interact with their transports (HTTP/1.1, H2, QUIC, etc.), and then play the unfortunate game of balancing tradeoffs.


ReST makes sense in certain cases, where resources are a tree (like a typical web site is a tree), with collections of leaves, and these leaves make sense by themselves. Then you can go full HATEOAS and reap some actual benefits from that.

Most of the time (like 99.9%) what you happen to need is JSON RPC. Even if some parts of your API surface look like they would fit the ReST model, the bulk does not. Ignore that, build a protocol along the lines of your subject area. Always return 200 if your server did not fail or reject the request, use internal status signaling for details. Limit yourself to GET and POST. Use HTTP as a mere transport.
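A sketch of the "always 200 + internal status signaling" envelope described above (the field names are my own invention, not any standard):

```python
import json

def envelope(result=None, error=None):
    """Application-level outcome; the HTTP status stays 200 either way."""
    if error is not None:
        return {"ok": False, "error": error}
    return {"ok": True, "result": result}

# A missing document is still a successfully handled request; the detail
# travels inside the payload instead of as an HTTP 404.
not_found = envelope(error={"code": "DOC_NOT_FOUND", "message": "no such document"})
print(json.dumps(not_found))
```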


These seem like arbitrary rules.

"Use internal status signaling" for example doesn't seem any better than deciding what status codes mean what; it's just a second layer of codes where the first one is now useless.

"Limit yourself to GET and POST." - DELETE and PATCH are pretty useful for documentation simplicity too. If there were a LIST verb that would be even handier, but nothing's perfect.

"build a protocol along the lines of your subject area" - I think you can do this (and well or badly) using REST or RPC forms.


+1 and I'll bump it up a notch... not only should you ignore REST you should ignore URLs. You want to write protocols, not APIs. Redis, for example, has a better "API" than any web API I've used. Easy to use, easy to wrap, easy to extend and version. HTTP is the other obvious example that I shouldn't have to go into.

If you'd like a good back and forth on the idea the classic c2 page is a great resource. http://wiki.c2.com/?ApiVsProtocol


Don't ignore URLs completely! They are great for namespacing and versioning.


Why add the additional complexity of multiple connection points? Protocols support both of those operations perfectly well and it seems that adding URLs would just confuse things.


Because at some point you will need to deprecate ciphers and when you do you don't want old clients to explode. The domain is the way you version connection requirements so you can support old clients with crappy ssl options without screwing up the security of new clients.


HTTP is itself a protocol, and URLs are part of that protocol. They're not really "connection points" in any meaningful sense.


Sometimes all you got is port 443, and adding subdomains is a non-zero hassle, especially if you serve all off the APIs from the same code anyway.


You don't need subdomains or other ports because you encapsulate everything in the protocol. A system that works on a protocol only really needs a data socket which can be simulated pretty easily via any URL with the POSTs working as a bursty stream.


Ahh, the 2000's called. They want their SOAP back.


I don't think the parent was referring to an XML-based protocol.


This article defines REST incorrectly, and doesn't seem to understand the concept of HTTP methods, calling them verbs (arguably fine) and types (huh?) seemingly arbitrarily. Methods are a core part of HTTP -- just because you can't specify them explicitly in a browser as a user doesn't mean they're "cryptic curl arguments" or worth ignoring. I'm not sure I'd put too much stock into this perspective.


Thank you all for the great comments.

I want to emphasize that I was not thinking of JSON RPC as a specific protocol, but more as a JSON format to transfer data, similar to how REST APIs usually do, plus some kind of "HTTP method agnostic remote procedure call"; it does not have to be the JSON-RPC standard.

Personally, I am a fan of just having API classes + methods that automatically map to API calls, with automatic API interface and doc builders. I find it would be super strange if I had to prefix my internal methods with DELETE or PUT based on whether they remove from or add to some array. Using that logic, why do it in APIs?

I just find it super strange that people want to mirror their app logic + error response codes onto some protocol like HTTP – ridiculous :) Why not go even lower, to TCP, and use some of that spec for our client <> server API conn? Many people will laugh, but if you think about it, where is the difference?
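A minimal sketch of the "API class whose methods map to calls" idea (all names here are hypothetical; a real framework would add validation, auth, and the interface/doc generation mentioned above):

```python
class TodoApi:
    """Each public method becomes a callable API method -- no HTTP verbs needed."""

    def add(self, text: str) -> dict:
        return {"added": text}

    def remove(self, todo_id: int) -> dict:
        return {"removed": todo_id}

def dispatch(api: object, method: str, params: dict) -> dict:
    """Route {"method": ..., "params": ...} to the matching class method."""
    if method.startswith("_") or not hasattr(api, method):
        return {"error": "method not found"}
    return getattr(api, method)(**params)

print(dispatch(TodoApi(), "remove", {"todo_id": 7}))
```

The dispatcher is also the natural place to hang the automatic doc builder: the class's method signatures are the API surface.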


> I find it would be super strange if I had to prefix my internal methods with DELETE or PUT based on whether they remove from or add to some array. Using that logic, why do it in APIs?

It's true that POST ends up being a bit of a grab bag for all the non-CRUD API calls.

But I find it very useful when looking over someone's API to find them using PUT or DELETE. PUT in particular provides really useful signals about the nature of the resource we are dealing with.

And let's not get started on the built-in caching etc. you throw away by not using GET.


> I just find it super strange that people want to mirror their app logic + error response codes onto some protocol like HTTP – ridiculous :)

Why is this ridiculous?

HTTP is the default protocol for network services, so it seems to me that it is perfectly sensible to design your API to be compatible with HTTP semantics.

> Why not go even lower, to TCP, and use some of that spec for our client <> server API conn? Many people will laugh, but if you think about it, where is the difference?

Because HTTP is the only protocol that can reliably transit arbitrary networks (middle-boxes, NAT, etc.) in practice.


REST conventions only make sense for externally consumed APIs. Even for those, there's GraphQL.


The Venn diagram overlap between REST and GraphQL is pretty small.


I've been a REST API developer for a few years now. For whatever reason, I've never bothered dipping my toes in the RPC realm. This article resonated with me. Looks like I'll be building an RPC API in the near future.



