
why should they respect the semantics? Those were arbitrary and outdated. I don't see any need for that.



Web crawlers index pages, and browsers load them, using GET; if GET affects your server state, then ordinary page views and indexing change that state too.

GET is defined as safe and idempotent. That's not outdated, and it's not arbitrary: it is stated literally in the protocol specification. It's a useful distinction created and used by professionals to maintain sanity in complex communication schemes.

Please do not ignore technical details because you think you're smarter than everyone else on the planet, or because you think the problems they solved a decade ago somehow went away in the face of node.js.
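
To make the failure mode concrete, here's a minimal sketch in TypeScript with Express. The route names and the in-memory counter are made up for illustration, not taken from anyone's actual API:

    import express from "express";

    const app = express();
    let counter = 0;

    // Broken: GET mutates state, so every crawler hit, link prefetch,
    // or cache warm-up silently increments the counter.
    app.get("/counter/increment", (_req, res) => {
      counter += 1;
      res.json({ counter });
    });

    // Safer: put the mutation behind POST and keep GET read-only.
    app.post("/counter", (_req, res) => {
      counter += 1;
      res.json({ counter });
    });

    app.get("/counter", (_req, res) => {
      res.json({ counter });
    });

    app.listen(3000);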

edit: Just in case you (or anyone reading this) are unaware of it,

http://www.ietf.org/rfc/rfc2616.txt (or pretty: http://pretty-rfc.herokuapp.com/RFC2616)

Find Section '9.1 Safe and Idempotent Methods'.


> Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.

I think that, depending on what his API is trying to do, GETting a counter URL to increment it could be acceptable if the service's main purpose is to be accessed via client-side JavaScript.

That being said, once you go down that road you have to take into account everything that may depend on GET being safe and idempotent: sending back proper caching headers, busting caches from the client side, deciding whether or not you want to block JavaScript-executing web crawlers, etc.
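
For example, a sketch under the same assumptions as the snippet above (Express, a hypothetical in-memory counter; not the poster's actual API):

    import express from "express";

    const app = express();
    let counter = 0;

    // If GET must mutate, at least mark the response uncacheable so
    // proxies and browsers neither cache the count nor skip the hit.
    app.get("/counter/increment", (_req, res) => {
      res.set("Cache-Control", "no-store");
      counter += 1;
      res.json({ counter });
    });

    app.listen(3000);

    // Client side, bust any cache that ignores the header by varying the URL:
    //   fetch(`http://localhost:3000/counter/increment?_=${Date.now()}`);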


> why should they respect the semantics?

Because they are going to be used in an environment where lots of other things are built on those specifications -- things that rely on (or, in some cases, inspired) the semantics described in the HTTP spec, including much of the infrastructure of the web.

> Those were arbitrary and outdated.

There may be some issues with them, but those would fall under the umbrella of "clearly-identified compelling reasons".


They were arbitrarily decided a long time ago by people who put much more thought into it than you or I, most likely.


Especially in the case of HTTP/1.1, it wasn't "arbitrary"; it was based on quite a bit of experience with HTTP in the wild.


(I don't actually believe nearly any of these things are "arbitrary" -- I was being facetious in response to the tone of the poster.)



