Incrementing or decrementing a counter is neither side-effect-free nor idempotent, and GET should generally be both, so you really shouldn't use GET for those operations. [1]
POST would probably be the usual choice for these operations. (Though, since you aren't really creating a subresource but are instead performing a defined transformation on an existing resource, you could probably make a case for PATCH.)
Web APIs are protocols implemented on top of HTTP, and should respect the semantics of HTTP unless there is a clearly-identified compelling reason not to.
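For instance, a minimal sketch of what an increment call might look like (the endpoint path here is hypothetical, not ArbitraryCounter's actual scheme):

```javascript
// Hypothetical increment endpoint; the real URL scheme may differ.
// POST marks the request as state-changing and non-idempotent, so
// caches, prefetchers, and well-behaved crawlers will leave it alone.
const req = {
  method: "POST",
  url: "https://arbitrarycounter.com/demo/signups/increment",
};

// In a browser (or Node >= 18) you would send it with:
//   fetch(req.url, { method: req.method });
```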
Yeah, ok. I agree with that. I changed incrementing and decrementing operations to POST requests only, and updated the documentation. Thanks for the feedback!
I believe most analytics platforms use/support GET, e.g. Google Analytics, Mixpanel, etc. If you limit yourself to POST only, you will prevent users from using your tool in areas like email pixel tracking, where only GET is possible.
Doing a cross domain POST is a pain. Doing a cross domain GET is what you do everyday. If you want to increment a counter based on a user action on a web page, then a GET makes a lot of sense.
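The classic pattern here is the image beacon: an img fetch is always a GET, which works cross-domain and even in HTML email. A sketch (the URL and parameters are hypothetical):

```javascript
// Build a tracking-pixel URL; any <img> load of it is a cross-domain GET.
const params = new URLSearchParams({ group: "demo", counter: "signup" });
const pixelUrl = `https://arbitrarycounter.com/hit?${params}`;

// In a browser, firing the hit is one line:
//   new Image().src = pixelUrl;
```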
Yeah, see I'm kind of torn, because GET just makes it so dang simple. POST requests, though semantically correct, add a small amount of overhead to implementing the counter. I'm not really concerned with spiders, as the URLs are intended to be obscure + robots.txt. In addition, future versions could include simple basic auth for private URLs. Though I intend to keep public URLs around for fun and drive by counting.
Yes ... And to address one of the comments above, PATCH is used to merge data into an existing record while PUT is used to replace the existing contents. This is a bit of an odd case since you wouldn't be sending data in either case.
Web crawlers index, and browsers load pages, using GET; if GET affects your server state, then so do page views and indexing.
GET is safe and idempotent. That's not outdated, and it's not arbitrary: it is literally stated in the protocol specification. It's a useful distinction created and used by professionals to maintain sanity in complex schemes of communication.
Please do not ignore technical details because you think you're smarter than everyone else on the planet, or that the problems they solved a decade ago somehow went away in the face of node.js.
edit: Just in case you (or anyone reading this) is unaware of it, here is what the spec itself (RFC 2616, section 9.1.1) says:

> Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
I think that, depending on what his API is trying to do, GETting a counter URL to increment it could be acceptable if his service's main purpose is to be accessible via client-side JavaScript.
That being said, once you go down that road you have to begin taking into account everything that may depend on GET idempotence: send back proper caching headers as well as busting caches from the client side, decide whether or not you want to block javascript-parsing web crawlers, etc.
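A sketch of both halves of that (the header values are standard HTTP; the helper is illustrative):

```javascript
// Server side: tell every cache layer not to store or reuse the response
// to the counting URL, since each GET must actually reach the server.
const responseHeaders = {
  "Cache-Control": "no-store, no-cache, must-revalidate",
  "Pragma": "no-cache", // legacy HTTP/1.0 caches
  "Expires": "0",
};

// Client side: bust any cache that ignores the headers by appending a
// throwaway query parameter to each request.
function bustCache(url) {
  const sep = url.includes("?") ? "&" : "?";
  return url + sep + "_=" + Date.now();
}
```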
Because they are going to be used in an environment where there are lots of other things built based on the specifications and which are likely to rely on (or, in some cases, have inspired) the semantics described in the HTTP spec -- including lots of the infrastructure of the web.
> Those were arbitrary and outdated.
There may be some issues with them, but those would fall under the umbrella of "clearly-identified compelling reasons".
I was doing some usability testing in one of my webapps and needed a simple way to count conversions. There are lots of A/B testing solutions out there, but I just wanted a persistent count I could increment on specific user actions. So I created ArbitraryCounter.com.
I find it useful, but I'm wondering if anyone else might. What do y'all think?
PS. This is WAY beta, and mainly a proof of concept. But I'll continue developing it if there's interest in it.
It's probably not that they're hard, but rather that they might require some boilerplate code and bloat that somebody might want to offload to an external service like this one.
The idea was to make something easier than booting up a database. Personally, I just needed a semi-persistent count that I could discard later. This fit the bill for me, and being able to retrieve the counts via JSON, I can easily add them to my Geckoboard dashboard.
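Pulling a count into a dashboard can then be a few lines; the JSON shape below is a guess for illustration, not the service's documented format:

```javascript
// Hypothetical response body from a counter's JSON endpoint.
const body = '{"group": "demo", "counters": {"signup": 42, "cancel": 3}}';

const data = JSON.parse(body);
const signups = data.counters.signup; // feed this to a dashboard widget
```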
I love simple APIs like this. It's kind of the same mental model I keep around http://jsonip.com. A simple service that does one thing really well. It's been working well and has grown to millions of requests a day.
Keep working on this, I can see several uses already.
It's a node.js app that has until recently been a single-process app. I just integrated clustering to take advantage of the extra cores Linode gave me. It's a simple but high traffic service and node has handled it like a champ. I have a relatively inexpensive account with Linode that so far has suited the site's traffic just fine.
I'm exploring the idea of offering a higher-tier service. The general feature list is outlined on getjsonip.com but pricing isn't finalized. I'm still collecting signups and feedback right now.
The idea was to launch the proof of concept with public URLs, then, if I discovered people were interested, create private URLs. Thing is, private URLs require an authentication system, which is fine, but I didn't want to overdevelop the proof of concept.
I like the flexibility, but I would suggest you add a "generate GUID" button when choosing the group name in case the user does want a random URL, which seems quite likely.
Yes and no. You could argue that use of passwords or private keys is security through obscurity, but that definition is pretty meaningless. Security through obscurity doesn't really apply when you can make the search space arbitrarily (often exponentially) big very cheaply, as is the case of URLs here.
Many times you need a counter just to get a unique ID. If the counter is incremented by someone else, that's not important. On the other hand, if the counter can be reset because it hasn't been used for 48 hours, that's a big problem...
Yep. I understand this is just a little toy thing, but people really are trying to start businesses off of "APIs" that do little more than OP's.
The whole point of an API is that you defer a significant amount of serverside processing and logic to a remote host, and receive a nicely formatted and machine-readable response. You have to deal with the overhead of a TCP connection (handshaking / RTT), an HTTP request, and an HTTP response. If you're doing that every time a user visits your web page (or app view or whatever), especially multiple times, then you better have a good reason for doing so.
[1] http://tools.ietf.org/html/rfc2616#section-9.1