An Introduction to APIs (zapier.com)
244 points by teleforce on July 30, 2023 | 118 comments



I still like REST. Most web applications are CRUD and don't need RPC. It also provides a standard, expected interface for 3rd-party developers integrating with your code. If you're a small SaaS startup, nobody is going to waste their time learning the particularities of your protocol. It also makes the code very easy to read if you follow best practices for MVC-style web APIs with dependency injection. In my view, ASP.NET Core is the apex of all RESTful frameworks.


Precisely right. Standards work because everyone understands them. I know a PUT request is almost certainly an update of some kind. I know a POST creates something.

For most shit you wanna do, it's view, edit, delete; it's really not that complicated.


One team got a stream of 404’s not because the requested objects did not exist in the DB, but because the service moved. Web server was saying: resource does not exist.

Another time, the server had the resource but didn’t like the state of the data, so refused to serve it. Debate ensued as to whether this was a 400 or 500 class error. People got religious.

Yes, there’s an answer, but it should be so obvious that we don’t have the debate. This isn’t about sophisticated verbs; both happened with GET.


TL;DR - API developers should make it so consumers have the necessary information and tools to know what's happening. If you're just returning a 404 with no other info, you have a bad API.

There's basic error handling/reporting that seems to transcend technology and architecture, and a big part of that is that errors should have unique error codes. In the context of a web API, both "route not found" and "resource does not exist" should return a 404, but each should have a unique `code` in the body:

    {"code": 0, "err": "route not found"}
    {"code": 1, "err": "user not found"}
For HTTP, the status code is often for general application development, and the error code is for debugging, though there can be overlap and it's completely fine if a client wants to implement custom logic based on `body.code`.

A validation error should look like:

    {"code": 2, "err": "invalid", "data": {
       "username": [{"code": 100, "err": "required"}],
       "password": [{"code": 101, "err": "must be at least 6 characters", "data": {"min": 6}}]
    }}

The `code` always indicates what other data, if any, exists. Above, a code of 2 means there'll be a `data` field of errors. A validation error of `101` means there'll be a `min` field in `data`. `err` is a user-safe (developer-friendly) description of the error which can always be regenerated from `code` + `data`.

For errors that aren't known ahead of time (e.g. a server error), there should also be a distinct code, say 500, and the `data` field should contain an `error_id` which can be used to look up the error.
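A minimal sketch of how a client might consume an envelope like this; the constants and the `describe_error` helper are illustrative, not part of any real API:

```python
# Sketch of client-side dispatch on the error envelope above.
# The numeric codes mirror the examples; real values are app-specific.
ROUTE_NOT_FOUND = 0
USER_NOT_FOUND = 1
VALIDATION_ERROR = 2

def describe_error(body):
    """Turn an error envelope into a developer-friendly summary."""
    code = body["code"]
    if code == ROUTE_NOT_FOUND:
        return "route not found"
    if code == USER_NOT_FOUND:
        return "user not found"
    if code == VALIDATION_ERROR:
        # code 2 guarantees a `data` field mapping field names to sub-errors
        return "validation failed: " + ", ".join(sorted(body["data"]))
    return f"unknown error {code}"

print(describe_error({"code": 2, "err": "invalid",
                      "data": {"username": [{"code": 100, "err": "required"}]}}))
```

The point is that the client branches on `code`, never on the human-readable `err` string.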


> In the context of a web API, both "route not found" and "resource does not exist" should return a 404, but each should have a unique `code` in the body

I forget if it was 404 or something else, but you should check if it actually works first. One of our sites did exactly as you suggest here, and it worked totally fine in development (django "runserver"), but didn't work in production (wsgi behind apache). Turned out with that HTTP code, apache was discarding the body.


There’s a standard format for errors in JSON described in RFC 7807:

https://datatracker.ietf.org/doc/html/rfc7807

People shouldn’t invent their own custom error JSON when a standardised format will work.
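For reference, an RFC 7807 problem-details body carries `type`, `title`, `status`, `detail`, and `instance`, plus arbitrary extension members, and is served as `application/problem+json`. A small sketch (the URI and the extension fields here are made up):

```python
import json

def problem(type_uri, title, status, detail=None, **extensions):
    """Build an RFC 7807 problem-details body (serve it with
    Content-Type: application/problem+json)."""
    body = {"type": type_uri, "title": title, "status": status}
    if detail is not None:
        body["detail"] = detail
    body.update(extensions)  # RFC 7807 permits extension members
    return json.dumps(body)

print(problem("https://example.com/probs/user-not-found",
              "User not found", 404,
              detail="No user with id 42", user_id=42))
```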


Absolutely. In fact, you're almost perfectly describing the JsonAPI standard for returning errors: https://jsonapi.org/format/#errors

One improvement I'd steal from theirs and drop in yours - constant (or enum) string codes. It's a lot more scannable when debugging/reviewing/maintaining than having to look up integer codes in a table.


Just pointing out that codes make it easier to provide translations, and the ability to switch between error types with more confidence than string matching.

Error codes should be accompanied by helpful messages so you don't have to look up the table.


This makes sense if your web API asserts that all responses are JSON responses, and if it provides a schema for responses somewhere as a contract.

But, in general, it's totally fine to return a 404 with no other info. That's a totally acceptable API.


Obviously a bad server state should be 500


Yeah, from what I understand 400 means "fix it at the client's end" and 500 means "fix it at the server's end" which seems to be what the RFC thinks, too:

https://httpwg.org/specs/rfc9110.html#overview.of.status.cod...

> The 4xx (Client Error) class of status code indicates that the client seems to have erred.

> The 5xx (Server Error) class of status code indicates that the server is aware that it has erred or is incapable of performing the requested method.


Basically, 4xx is telling the client you did something wrong (such as visit a non-existent URL), 5xx is when the server errors and can't respond appropriately.


Obvious to you and me. The server team disagreed and tried 422 Unprocessable Entity, which is not correct. My point is that these issues should be clear enough that there are not thousands of StackOverflow questions on them, and debates between teams. “It’s so easy” is not true. There are layers of understanding.


They are clear, the server team is objectively wrong


Am I the only one around here who hates the whole PUT PATCH nonsense? When I write APIs, you’re either querying data with a GET, or mutating data with a POST. Everything else is a waste of time.


Yea, I also frigging hate it.

Every time I need to plan out an API, I stumble into a lengthy PUT/PATCH analysis and read-up session that shouldn't really be necessary.

POST should be the only thing needed: if the object already exists, mutate it; if not, create it. 99% of the time you don't need the “idempotency” argument.
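The upsert-on-POST style described above can be sketched in a few lines; `db` and `handle_post` are stand-ins, not any particular framework:

```python
# In-memory sketch of "POST does everything": create if missing, else mutate.
db = {}

def handle_post(resource_id, payload):
    created = resource_id not in db
    db.setdefault(resource_id, {}).update(payload)
    return ("created" if created else "updated"), db[resource_id]

print(handle_post("user:1", {"name": "Ada"}))
print(handle_post("user:1", {"email": "ada@example.com"}))  # second call merges fields
```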


This suggests you just don't grok HTTP semantics.


No no, this just means I think the HTTP semantics are overkill, and a simple GET/POST gets across everything that I need to.


If your application models a remote API in terms of endpoints which are either read-only/GET or read-write/POST, then you're subverting your own interests. There's no reason to constrain yourself in this way.


Web dev has so thoroughly revised the definition of 'API' it's not even funny.

Desktop, embedded, video games, HPC suddenly cried out in terror and were suddenly silenced.


It's become just a word that means "interface endpoints". And frankly that's fine, it's what it is in the end. Whether in an OS, a website, or another platform.


It is even weirder to me how business has latched on to the term "APIs" like some kind of rabid dog biting into your ass

Like, this is a fundamental aspect of how I interact with my job and business has taken the term and elevated it into its own distinct thing


I died a little inside the day a new hire, some kid fresh out of who knows where, shook my hand and asked "so are you an API developer?"


The same way they latched onto agile, Agile, Scrum, DORA, and a bunch of other keywords.


> Web dev has so thoroughly revised the definition of 'API' it's not even funny.

What leads you to believe that an HTTP API does not meet the definition of an API?


The other way around: the standalone meaning of API has been narrowed to mean HTTP APIs, as if the qualifier were redundant. Especially tedious if you're searching for non-HTTP API design notes.


> The other way around, the meaning of API standalone has been narrowed to mean HTTP APIs (...)

I don't think so. The pervasiveness of web apps and applications consuming web services means there's a lot of work involving web APIs. This doesn't mean the definition of API was narrowed. It's all about context. If your bread and butter is developing web services, you don't waste time with context-free definitions of your API. You talk about the API and everyone is on the same page.


If all anyone ever talks about is APIs in the web services sense then, yes, the definition of API has been narrowed. When everyone will assume you mean web services API when you refer to an API and have to use extra words to describe non-web APIs then the definition has narrowed. That's how English works: the definitions of words depend on their usage and dictionaries describe how words are used.

You even admitted "The pervasiveness of web apps and applications consuming web services means there's a lot of work involving web APIs" and "If your bread and butter is developing web services, you don't waste time with context-free definitions of your API", which is acknowledging that the common-use definition of API has morphed into web API. It's only in technical documents, or documents explicitly referring to non-web APIs, that "API" is understood to mean something other than a web API.


In the trade we call this being "API go lucky". It doesn't work well in conversation.


Almost all my development has been in the fields I mentioned. I prefer to think of HTTP 'APIs' as end-points instead: you query a URL for some fields, and you receive some data—whether it be plaintext (JSON), or binary. That's essentially it. I feel narrowing the definition of an API to just 'HTTP end-points' is very reductive. The word stands for 'application programming interface', but I don't really need to program anything to use a HTTP end-point: I can navigate to it using a Web browser, if I want.

For me, an API is a lot more concrete: a set of data structures (whether they be classes, objects, pairs, tuples, lambda functions, etc.) with extensible functionality (member functions, pure functions accepting those data structures, etc.) that developers can use independently, or together, to produce some new functionality, i.e. program using interfaces.

So what I'd traditionally think of APIs are the .NET API, the C++ STL, the Java API, the Windows API, the Linux headers, Qt, GTK, even React.


I always thought of API in the sense of POSIX or NFS etc. A common interface that existed across different libraries regardless of vendor. Designed for interoperability.


One place I worked was unable to differentiate between libraries, SDKs and APIs - they just called all of them "an API". Infuriating.


Well, to be fair, libraries and SDKs do have APIs, it's what you mainly interact with when you use them, the defined interface that you programmatically use in your application.


So like a kind of Application Programming Interface?


When you create a "web-service" and then describe its "interface", you are in fact creating that interface. That is how it is on the web, because web tech does not give us a language in which to describe service endpoints.

In non-web development the APIs can be described formally as function-signatures in any statically typed programming language.

Or is there some simple language in which to write descriptions of web-services?


Of course, there are tons, they're generally called IDLs (interface description languages), common examples are Protobufs, Avro, Thrift, etc.

They're not hugely popular for web services largely because the web was designed to avoid exactly this kind of a priori contractual requirements between communicating parties. But that's a much bigger conversation.


REST is for noobs, JSON RPC is silent pro's choice :)

Make all requests POST and enjoy an easy life without useless debates about whether creating a resource should be POST or PUT, or whether you should return HTTP status 404 or 200 when a resource/document on the server is not found (of course it should be 200, because the request itself was a success; 404 should only be used if the API method is not found).

I 100% agree with Troy Griffitts beautiful take https://vmrcre.org/web/scribe/home/-/blogs/why-rest-sucks
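For concreteness, this is roughly what the single-POST style looks like with a JSON-RPC 2.0 envelope; the `document.get` method and the error code here are invented, while the envelope fields come from the spec:

```python
import json

# A request is always an HTTP POST whose body carries the envelope.
request = {"jsonrpc": "2.0", "method": "document.get",
           "params": {"id": 42}, "id": 1}

def handle(req, documents):
    """Server-side dispatch: the HTTP status is always 200;
    success or failure lives inside the JSON-RPC envelope."""
    doc = documents.get(req["params"]["id"])
    if doc is None:
        return {"jsonrpc": "2.0", "id": req["id"],
                "error": {"code": -32001, "message": "document not found"}}
    return {"jsonrpc": "2.0", "id": req["id"], "result": doc}

print(json.dumps(handle(request, {42: {"title": "hello"}})))
```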


JSON RPC:

- Everything is a POST, so normal HTTP caching is out of the question.

- JSON RPC code generators are non-existent or badly maintained depending on the language. Same with doc generators.

- Batching is redundant with HTTP2, just complicates things.

- Because everything is a POST normal logging isn't effective (i.e. see the url in logs, easy to filter etc). You'll have to write something yourself.

- Not binary like Protobufs or similar

But yeah, "the silent pro's choice"... Let's keep it silent.

JSON RPC is pretty much dead at this point and superseded by better alternatives if you're designing an RPC service.


> - JSON RPC code generators are non-existent or badly maintained depending on the language.

Very much so. It’s in a terrible state everywhere I’ve looked. Most of the tooling is built for OpenAPI or similar, which comes with a bloatload of crap and is only marginally better than, say, SOAP. It needs to be much simpler.

> - Not binary like Protobufs or similar

Agreed. This is not an issue for small things that can be base64 encoded but once you need large blob transfers you don’t have any reasonable option. This is a problem in eg graphql which also misses the mark and you have to step outside for things like file uploads.

It feels like the whole standardization effort around json rpc is weak. It doesn’t address the needs of modern RPC-like systems. Which is unfortunate because there’s a real opportunity to improve upon the chaos of REST.


It's not ideal, but in practice GZIP base64 is only marginally larger than GZIP binary


Indeed, good point, and worth clarifying. A lot of people think the size overhead is the problem, which usually it isn't, like you say, because of fairly cheap compression.

However, the main issue with big base64 blobs is that you should never assume that JSON parsers are streaming. So you may need to load the whole thing into memory, which of course isn't good.

Note that I'm not necessarily blaming JSON for this. My gut feeling is that crusading for streaming parsers is a Bad Idea. Instead, this is something that should probably be a higher-level protocol, either by streaming chunks (a la gRPC) or by having separate logical data streams (see e.g. QUIC). JSON RPC does not, afaict, solve these issues.


Base64 multiplies the GZIP size by 1.33x (4/3, or a 33.3% increase in size)

SO (https://stackoverflow.com/questions/4715415/base64-what-is-t...)


That's for Base64-encoded GZIP, not GZIP-encoded Base64 :)
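The ordering is easy to check empirically; a stdlib-only sketch with synthetic data (exact sizes will vary):

```python
import base64, gzip, random

random.seed(0)
data = bytes(random.randrange(256) for _ in range(100_000))  # incompressible payload

b64_of_gzip = base64.b64encode(gzip.compress(data))  # compress, then base64: ~4/3 bigger
gzip_of_b64 = gzip.compress(base64.b64encode(data))  # base64, then compress: most of the
                                                     # 4/3 overhead is squeezed back out
print("raw:", len(data))
print("base64(gzip(data)):", len(b64_of_gzip))
print("gzip(base64(data)):", len(gzip_of_b64))
```

Base64-after-gzip lands right at the 1.33x overhead; gzip-after-base64 comes out much closer to the raw size, because gzip's entropy coding squeezes the 6-bits-per-byte base64 alphabet back down.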


Have you tried zstd, now widely supported?


Thanks for this. I felt I was going crazy, decrying many professional and smart engineers' work as not being 'expert' enough, as if they didn't weigh up and consider other options. Yes, there can be a bit of cargo culting, but to claim that only experts use JSON RPC is ridiculous.


I always fail to understand what kind of services there are that aren’t at least RPC-ish.

thin CRUD wrappers obviously but usually when you are piping data from one source/format to another, you typically want to do something that is ever so slightly “not-CRUD” (call another API/service, etc.)


I'm with you on this one.

Probably the confusion comes from the fact that a lot of people think having a verb in their URI makes the API RPC, while only having nouns is proper REST.

But the whole verbs vs nouns debate in the context of REST sounds a bit like... arguing whether building a round or square control tower out of straw will attract more cargo.

HATEOAS is the cornerstone of REST, this is what sets it apart from RPC-style applications, not the absence or presence of verbs in URIs.

Think of a regular (that is non-SPA) Django, RoR, etc application.

The user points their browser to the app's home page. The backend receives the HTTP request, renders the HTML, and sends it back to the browser.

The browser renders the HTML and lets the user interact with all the control elements on the page. When the user clicks a button or follows a link, the browser sends the corresponding HTTP request to the backend, which inspects it and decides what next HTML page (maybe the same one) representing the state of the app should be transferred to the user.

This is basically REST. The key thing to notice here is that at no point in this example does the browser get to decide what the app's "flow" is supposed to be; this is the sole responsibility of the backend.

A consequence of this is the entire structure of pages (aka resources) can undergo a drastic change at any moment, but as long as the home page URI stays the same, the user doesn't suddenly need another browser to access the app.

If changing a resource's URI, or removing a resource altogether can break an existing client, if an existing client cannot make use of a new resource without changes to the client's sources -- that's RPC even if there's not a single verb in the API URIs.

Most likely this architectural style isn't something that first comes to mind when we think of today's mobile apps or SPAs as API clients. And in my opinion it's just not a good fit for most of them: the server isn't expected to drive their flow, it just exposes an API and lets each client come up with its own UX/UI.


Noob question, why is batching redundant in HTTP2?


It isn't.

Batching means combining multiple logical operations in a single physical request. HTTP/2 muxes N logical requests over 1 physical connection, which is good, but the application will still process requests independently. You always want to batch workloads into single requests if you can, HTTP/2 doesn't change this.


Doesn't seem redundant to me. Even if you can multiplex requests, batches still have certain advantages, e.g.

- compression is often more efficient with larger payloads

- can reduce per-request overheads, e.g. do authentication once rather than X times

- easier to coalesce multiple queries, e.g. merge similar requests to enable data retrieval via a bulk query, instead of X individual queries
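The query-coalescing point can be sketched like this; `FakeStore` stands in for a real datastore:

```python
# N logical lookups travel in one physical request and hit the
# datastore once; FakeStore counts queries to make that visible.
class FakeStore:
    def __init__(self, rows):
        self.rows, self.queries = rows, 0
    def fetch_many(self, ids):
        self.queries += 1
        return {i: self.rows[i] for i in ids if i in self.rows}

def handle_batch(ids, datastore):
    rows = datastore.fetch_many(ids)   # one bulk query instead of len(ids) queries
    return [rows.get(i) for i in ids]  # results in request order, None if missing

store = FakeStore({1: "a", 2: "b"})
print(handle_batch([2, 1, 3], store), "queries:", store.queries)
```

Multiplexing gets the requests to the server concurrently, but only batching lets the application merge them into one datastore query like this.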


HTTP/2 Supports multiplexing, so you can send multiple requests at once on the same connection


I don't like REST either, but JSON RPC is similarly hamstrung in some scenarios (examples: streaming, CDN caching, binary encoding).

I mostly dislike REST because nobody can agree on what it is and there are too many zealots who love to bikeshed. If you stick with the simple parts of REST and ignore the zealots, it's decent enough for many scenarios.

I've yet to find an RPC protocol that fills all requirements I've encountered, they all have tradeoffs and at this point you're better off learning the tradeoffs and how to deal with them (REST, JSON RPC, gRPC, WebSockets, etc.) and how they interact with their transports (HTTP/1.1, H2, QUIC, etc.), and then play the unfortunate game of balancing tradeoffs.


ReST makes sense in certain cases, where resources are a tree (like a typical web site is a tree), with collections of leaves, and these leaves make sense by themselves. Then you can go full HATEOAS and reap some actual benefits from that.

Most of the time (like 99.9%) what you happen to need is JSON RPC. Even if some parts of your API surface look like they would fit the ReST model, the bulk does not. Ignore that, build a protocol along the lines of your subject area. Always return 200 if your server did not fail or reject the request, use internal status signaling for details. Limit yourself to GET and POST. Use HTTP as a mere transport.


These seem arbitrary rules.

"Use internal status signaling" for example doesn't seem any better than deciding what status codes mean what; it's just a second layer of codes where the first one is now useless.

"Limit yourself to GET and POST." - delete and patch are pretty useful for documentation simplicity too. If there were a LIST verb that would be even handier, but nothing's perfect.

"build a protocol along the lines of your subject area" - I think you can do this (and well or badly) using REST or RPC forms.


+1 and I'll bump it up a notch... not only should you ignore REST you should ignore URLs. You want to write protocols, not APIs. Redis, for example, has a better "API" than any web API I've used. Easy to use, easy to wrap, easy to extend and version. HTTP is the other obvious example that I shouldn't have to go into.

If you'd like a good back and forth on the idea the classic c2 page is a great resource. http://wiki.c2.com/?ApiVsProtocol


Don't ignore URLs completely! They are great for namespacing and versioning.


Why add the additional complexity of multiple connection points? Protocols support both of those operations perfectly well and it seems that adding URLs would just confuse things.


Because at some point you will need to deprecate ciphers and when you do you don't want old clients to explode. The domain is the way you version connection requirements so you can support old clients with crappy ssl options without screwing up the security of new clients.


HTTP is itself a protocol, and URLs are part of that protocol. They're not really "connection points" in any meaningful sense.


Sometimes all you got is port 443, and adding subdomains is a non-zero hassle, especially if you serve all of the APIs from the same code anyway.


You don't need subdomains or other ports because you encapsulate everything in the protocol. A system that works on a protocol only really needs a data socket which can be simulated pretty easily via any URL with the POSTs working as a bursty stream.


Ahh, the 2000's called. They want their SOAP back.


I don't think the parent was referring to an XML-based protocol.


This article defines REST incorrectly, and doesn't seem to understand the concept of HTTP methods, calling them verbs (arguably fine) and types (huh?) seemingly arbitrarily. Methods are a core part of HTTP -- just because you can't specify them explicitly in a browser as a user doesn't mean they're "cryptic curl arguments" or worth ignoring. I'm not sure I'd put too much stock into this perspective.


Thank you all for the great comments.

I want to emphasize that I was not thinking about JSON RPC as a specific protocol, but more as a JSON format to transfer data, similar to how REST APIs usually do, and some kind of "HTTP-method-agnostic remote procedure call"; it does not have to be the JSON RPC standard.

Personally, I am a fan of just having API classes + methods that automatically map to API calls, with automatic API interface and doc builders. I find it would be super strange if I had to prefix my internal methods with DELETE or PUT based on whether they remove from or add to some Array. By that logic, why do it in APIs?

I just find it super strange that people want to mirror their app logic + error response codes onto some protocol like HTTP – ridiculous :) Why not go even lower, to TCP, and use some of that spec for our client <> server API connection? Many people will laugh, but if you think about it, where is the difference?


> I find that it would be super strange if I had to prefix my internal methods with DELETE or PUT based on do they remove or add to some Array. Using that logic, why do that in APIs.

It's true that POST ends up being a bit of a grab bag for all the non-CRUD API calls.

But I find it very useful when looking over someone's API to find them using PUT or DELETE. PUT in particular provides really useful signals about the nature of the resource we are dealing with.

And let's not get started on the built-in caching etc. you throw away by not using GET.


> I just find it super strange that people want to mirror their app logic + error response codes to some protocol like HTTP – ridiculous :)

Why is this ridiculous?

HTTP is the default protocol for network services, so it seems to me that it is perfectly sensible to design your API to be compatible with HTTP semantics.

> Why not go even lower as TCP and use some of that spec for our client <> server API conn. Many people will laugh, but if you think about it, where is the difference?

Because HTTP is the only protocol that can reliably transit arbitrary networks (middle-boxes, NAT, etc.) in practice.


REST conventions only make sense for externally consumed APIs. Even for those, there's GraphQL.


The Venn diagram overlap between REST and GraphQL is pretty small.


I've been a REST API developer for a few years now. For whatever reason, I've never bothered dipping my toes in the RPC realm. This article resonated with me. Looks like I'll be building an RPC API in the near future.


People are complimenting great technical writing here, but the first definition they provide is this:

> Technically, an API is just a set of rules (interface) that the two sides agree to follow. The company publishing the API then implements their side by writing a program and putting it on a server. In practice, lumping the interface in with the implementation is an easier way to think about it.

which is neither technically correct, nor easy to understand for a layperson. Compare to Wikipedia:

> An application programming interface (API) is a way for two or more computer programs to communicate with each other. It is a type of software interface, offering a service to other pieces of software.


Ch6 “Linking resources together”

This is the most difficult topic for me. I struggle to discover, understand and implement in my code highly linked resources. How an API is organized or designed should be its own chapter IMHO

“We'll skip the details… […] REST practitioners are split on how to solve the problem of associating resources.”

UUUG


I think JSON [tree] data [with no fixed field for unique identifier, and no fkey referencing] is wrong. The lack of a proper type system+schema for data is wrong. The need for server-side query management makes any API supremely rigid.

Still some concerns to solve, for me, with REST APIs.


JSON covers syntax not semantics. This means you can build the formats you want on top of JSON by only describing the semantics without having to make decisions about syntax or write parsers.


Not taking care of the semantics and letting [mostly] code take care of it is what I don’t like about that format. [standardization of @id and @type special attributes by JSON-LD sound like a good step forward, imho]


My point is that there’s no point in complaining that a JSON API doesn’t have any particular semantics like the ones you want. If an API says that the format it returns is “JSON”, then what it actually means is that it’s some ad hoc format cooked up on the spot that uses JSON syntax. The problem is not that JSON doesn’t include semantics – that’s out of scope for JSON – the problem is that the API in question half-assed its format design.

> standardization of @id and @type special attributes by JSON-LD sound like a good step forward

But those were standardised. In the JSON-LD spec.

If you want a format that has specified semantics, use a format that has specified semantics. Like JSON-LD. JSON only solves the syntax part of the problem, by design. It’s not there to define semantics, that’s what formats built on top of JSON do.
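For illustration, this is roughly what those standardised members look like on an otherwise plain JSON object (the schema.org context is real; the `@id` IRI is an example):

```python
import json

# `@id` names the node, `@type` names its class, and `@context` maps the
# remaining keys onto a shared vocabulary.
doc = {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": "https://example.com/users/42",
    "name": "Ada Lovelace",
}
print(json.dumps(doc))
```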


This article advocates for a traditional REST API design. At Seam, we use an API design that is like a RESTful RPC API, inspired by the API at Slack. I think that HTTP RPC and Slack-like APIs are much better than traditional REST because most consumers of an API use an SDK, and RPC-style HTTP APIs optimize for the SDK usage.

We also built a framework like trpc but for Next REST APIs[1] to get all the nice benefits of shared types but also the nice benefits of OpenAPI generation that typically come with RESTful frameworks https://github.com/seamapi/nextlove


Side point - has anyone got a better way to refer to a non-technical person than "non-technical"? I see they use that term in the intro and I use it too, but it seems a bit condescending.


The phrase you seek is called “Layman’s Terms” as defined: https://www.merriam-webster.com/dictionary/layman%27s%20term...

This avoids classification of the reader. Someone who lacks knowledge of a domain is a “layman”.


Layperson (layman in old money), though I don't think non-technical is normally condescending.

https://dictionary.cambridge.org/dictionary/english/layperso...

someone who is not an expert in or does not have a detailed knowledge of a particular subject

"bluffer" would be a humorous alternative.


… And the entire world is in this state. Stop trying to create victims out of people for no reason.

Non-technical = not technical = does not have technical expertise.

Seems pretty logical to me


I’m fine with that, but seeing the term “normies” here sets my teeth on edge.


How about the diminutive normoid?


I usually use "non-developer", as that's what it means to me most of the time.

I find it hard to call a data analyst (for example), who can be highly technical, as a non-technical person.


Muggle


Business user


noob


When an accurate term seems condescending, sometimes that tells us more about ourselves than the word.


Very well written. However, I have never thought of the word 'endpoint' as used in API documentation in the way they describe it. I have no idea what the true origin is but I've always thought of it as the end of the journey the api request makes over the network.

> These are called endpoints simply because they go at the end of the URL, as in http://example.com/<endpoint_goes_here>.


Basic REST and JSON RPC are very simple to start with, but have common problems when application gets bigger. How do you represent relations, pagination, filtering etc? My go-to specification for structuring JSON documents is https://jsonapi.org/ It covers most basic needs of a standard API.


> POST - Asks the server to create a new resource

> PUT - Asks the server to edit/update an existing resource

Maybe I've been doing it wrong all these years, but it seems to me that the guide flip-flops the responsibilities of POST and PUT. My understanding is that POST should edit/modify while PUT creates/replaces a resource.


> My understanding is that POST should edit/modify while PUT creates/replaces a resource

The way I've been segmenting them is based on idempotency.

If you repeat the same call multiple times, do you get the same result as if you had run it just once? Then PUT is appropriate.

But if you have side effects like creating new resources, so that each call results in a different action, then POST it is.

Idempotent methods include GET, HEAD, PUT and DELETE; the resource should always end up in the same state after calling them N times (barring errors/exceptions and such, of course). I'm fairly sure I got this from when I initially read the specification; it's probably stated with a bit more grace in the HTTP/1.1 spec.
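That split can be sketched directly; the in-memory `resources` dict stands in for server state:

```python
import itertools

resources = {}
_ids = itertools.count(1)

def put(resource_id, body):
    """Idempotent: N identical calls leave the same end state as one."""
    resources[resource_id] = body
    return resource_id

def post(body):
    """Not idempotent: every call creates a fresh resource."""
    new_id = f"r{next(_ids)}"
    resources[new_id] = body
    return new_id

put("a", {"v": 1}); put("a", {"v": 1})  # state identical after one call or many
post({"v": 1}); post({"v": 1})          # two distinct resources now exist
print(sorted(resources))
```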


Are you mistaking POST for PATCH? What I've been working with is:

- POST creates

- PUT replaces (i.e. edit, but you need to provide the whole resource)

- PATCH edits (i.e. you can only provide some fields)

APIs rarely implement all these properly in practice but that's my understanding of the theory.
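A tiny sketch of that replace-vs-merge distinction (plain dicts stand in for resources; a real API would add validation and concurrency control):

```python
def put(resource, body):
    """Full replacement: the old resource is discarded entirely."""
    return dict(body)

def patch(resource, body):
    """Partial update: only the provided fields change."""
    return {**resource, **body}

user = {"name": "Ada", "email": "ada@example.com"}
print(patch(user, {"email": "new@example.com"}))  # name survives
print(put(user, {"email": "new@example.com"}))    # name is gone
```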


I remember getting an interview question wrong when I said "yeah, a GET is supposed to just respond with data, but you're writing it; you can make it do whatever you want".


I mean most GET requests have at least one side effect: one or more cache writes.

I’ve also implemented some GET endpoints that are a GET but have the side effect of marking something as read. (Normally as a variant of an existing endpoint for a sessioned user.)

I would expect at a minimum though if you are doing writes during a GET it should be idempotent.


Well, caching a response to a GET request is always going to be subject to variables like Etag and other hashes of the request, time limits, etc. which all ensure that responses, even old responses, are never _wrong_, they're at worst _stale_.

That's different, and safer, than something like a "read" bit on an entity, presumably tracked in an application data layer. I don't think you can mark something as "read" in your application from a GET request. Even if your server sees the response to that GET request as successful, it doesn't necessarily mean that the requesting client actually successfully received the response. As one of infinitely many possible counter-examples, consider a middlebox between a client and your server, where one client request to the server may generate N requests from the middlebox to the server.


While you might be technically correct about not using a GET to mark a “read” bit on some activity, in reality there’s a trade-off to doing it in a PUT.

Let’s say you have some notification resource which is a link redirecting to the thing that triggered the notification. Ideally the notification will automatically be marked read after the user sees the thing they clicked.

By setting the read bit in the GET that performs the redirect, you open up 2 negative possibilities:

- If someone could guess the GUIDs of the notifications, they could CSRF a user's notifications as marked read. (Unlikely, and low impact if it does occur.)

- It adds the potential that the client may not have loaded the page after the redirect and seen the resource.

There is a UX tradeoff now though if we make this a separate PUT after the page loads:

- in a web application context the user will have to either enable JavaScript so the app can automatically mark this as read or have a separate form on every landing page to mark it as read.

Another alternative would be to make this a POST form to view the notification that redirects but you have in effect the same issue of the user maybe not loading the page after the redirect.

At the end of the day for something as minor as a notification being marked read (as a result of a user clicking directly on it), some idempotent modification can work out and be easy to implement.

Now to be clear I am referring to a purpose built endpoint for a web application.

We expose 1000s of truly restful endpoints that are used outside of a web context and something like this doesn’t really make sense for them.


You probably wouldn't use a PUT for anything like this, true. But if you're going to mark a message as "seen" in a way that would impact a UI widget like an "unread notifications" red dot, then you almost certainly want to make sure that the state-changing request for that message is a POST, not a GET.

There are just so many ways for GET requests to be delivered to a server (or load balancer, or IP, or domain, or...) multiple times for a given client request. That capability is built in to HTTP and exploited in more places than you can ever hope to account for, or even detect.


Doing any serious side effects with a GET is a bad idea because of XSRF anyways


This is indeed the standard, AFAIK. Not sure what resources mention otherwise, but it seems like a lot, judging by the comments around here.


POST is a kitchen sink. It can do anything. If it creates it must return a 201 with the new resource's location, otherwise if it succeeds but does not create a new resource (just modifies one) then it must return 200.
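A sketch of that create-vs-modify status-code split, as a hypothetical dispatcher (function and path names are made up for illustration):

```python
def handle_post(store, body, resource_id=None):
    # Hypothetical POST handler: create -> 201 + Location header,
    # modify an existing resource -> 200.
    if resource_id is None:
        new_id = str(len(store) + 1)
        store[new_id] = dict(body)
        return 201, {"Location": f"/things/{new_id}"}
    store[resource_id].update(body)
    return 200, {}

store = {}
status, headers = handle_post(store, {"v": 1})
assert status == 201 and headers["Location"] == "/things/1"

status, _ = handle_post(store, {"v": 2}, resource_id="1")
assert status == 200 and store["1"] == {"v": 2}
```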


PUT can create - depending on whether the resource name is client or server determined.


Somewhat agreed, I see them as:

PUT - is effectively an "upsert" at a specific URL. Doesn't exist? Create it. Does exist? Replace it.

PATCH - update a resource with a diff, at a specific url.

POST - this is an RPC; in the case of a REST API it can be used to create a new resource where the "id" is not provided and is set by the server, which then redirects to the new URL.

POST can be used for any RPC endpoints, even as part of a REST api.
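The "upsert" reading of PUT can be sketched as follows (in-memory store, hypothetical names), including the status-code convention of 201 for a newly created resource vs 200 for a replacement:

```python
def put(store, resource_id, body):
    # Upsert: 201 if the URL named a resource that didn't exist (created),
    # 200 if it named an existing one (replaced).
    created = resource_id not in store
    store[resource_id] = dict(body)
    return 201 if created else 200

store = {}
assert put(store, "a", {"v": 1}) == 201   # didn't exist: created
assert put(store, "a", {"v": 2}) == 200   # existed: replaced
assert store == {"a": {"v": 2}}
```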


I thought the same, but apparently the article is correct

https://www.ietf.org/rfc/rfc2616.txt


Interesting. I guess I've been going off of RFC 7231

> The PUT method requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload.

https://www.ietf.org/rfc/rfc7231.txt


My rule of thumb: if you know the ID of the resource you’re creating, it’s a PUT. If the system generates the ID, then it’s a POST.


More generally the URI instead of the ID.


The I in URI stands for Identifier...


I’ve always known it as stated in the article, and I’m pretty sure that’s right, though I’ve never noticed any functional difference between the two (aside from what any given API may enforce).


And also, what if you have a huge nested query which only reads the database but is difficult to pack into URL parameters (too long, for example, hitting a character limit)? POST with a JSON body instead of GET, even though it's against RESTful principles?


This is one acceptable workaround. Caching is generally unnecessary at that point anyway, and it's probably a good idea for developers to pay special attention when doing this.

If you have a longer-running query you could alternatively construct a Query resource of some sort and then GET it until it's ready.
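The query-resource pattern described above can be sketched like this (endpoint names and data are hypothetical; the "worker" is a stand-in for whatever executes the query in the background):

```python
import uuid

queries = {}

def post_query(big_query_body):
    # POST /queries -> create a query resource, return its id.
    qid = str(uuid.uuid4())
    queries[qid] = {"query": big_query_body, "status": "pending", "result": None}
    return qid

def run_pending():
    # Stand-in for a background worker executing queued queries.
    for q in queries.values():
        if q["status"] == "pending":
            q["status"] = "done"
            q["result"] = ["row1", "row2"]

def get_query(qid):
    # GET /queries/{id} -> the client polls this until status is "done".
    q = queries[qid]
    return q["status"], q["result"]

qid = post_query({"filter": {"deeply": {"nested": True}}})
assert get_query(qid) == ("pending", None)
run_pending()
assert get_query(qid) == ("done", ["row1", "row2"])
```

The large query body travels in the POST, while each subsequent poll is a plain, cacheable GET on the query resource's URL.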


There’s nothing like good technical writing.


Title should say "Web APIs".


As should the contents.


(2014)


was talking to someone about how product managers should have an understanding of engineering and they said you should just know what apis are haha


(2014)


maybe call it 'cloud APIs'

It was written in 2014; not sure if it's still up to date, but the articles seem well written, concise, and to the point.



