One of the complexities of the polling approach on the consumer side is having a long-running poller. This is trivial in Java apps - just start a polling thread - but not so straightforward in the case of PHP apps, for example. There you'd have to set up a cron job, or a separate polling script under some sort of process supervisor like systemd, to poll periodically/continuously.
I wonder if the two approaches could be combined to simplify things for consumer apps, at the cost of slightly more complexity on the producer side? Instead of POSTing the actual event data to the webhook, the producer just uses the consumer's webhook to "poke" it - to tell the consumer app "hey, you have new events waiting for you". On receiving the poke, the consumer endpoint handler/PHP script can just turn around and do a GET to "/event" with an "anything newer than the last downloaded event id" query. That way you don't have to support long polling on the producer's servers, and it's not a big problem if the consumer misses a couple of webhook "pokes": the next time it does receive a poke successfully, it will download all the pending events and be all caught up. If real-time notifications are not strictly required, the producer side can even run the webhook-dispatching code on a schedule to coalesce multiple events into a single "poke" per consumer, to be more efficient.
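Just to make the consumer side concrete, here is a minimal sketch in Python (Flask + requests) rather than PHP - the /webhook and "after" parameter names, the producer URL, and the JSON shape of the event list are all invented for illustration:

    import json
    import requests
    from flask import Flask

    app = Flask(__name__)

    PRODUCER_EVENTS_URL = "https://producer.example.com/events"   # invented URL
    STATE_FILE = "last_event_id.json"

    def load_last_id():
        try:
            with open(STATE_FILE) as f:
                return json.load(f)["last_id"]
        except FileNotFoundError:
            return 0

    def save_last_id(last_id):
        with open(STATE_FILE, "w") as f:
            json.dump({"last_id": last_id}, f)

    def handle_event(event):
        print("got event", event["id"])            # application-specific processing

    @app.route("/webhook", methods=["POST"])
    def poke():
        # The poke carries no event data; it just tells us to go pull.
        last_id = load_last_id()
        resp = requests.get(PRODUCER_EVENTS_URL,
                            params={"after": last_id}, timeout=10)
        resp.raise_for_status()
        for event in resp.json():                  # assume a JSON list of events with ids
            handle_event(event)
            save_last_id(event["id"])              # checkpoint after each event
        return "", 204

    if __name__ == "__main__":
        app.run(port=8000)

The checkpoint is what makes missed pokes harmless: the next successful poke pulls everything after the last recorded id.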
This seems like a solid idea, but it seems to me that it needs to be marketed on a per-restaurant basis, and that it ultimately needs to be part of a more comprehensive suite that helps restaurateurs get customers into the restaurant.
Shameless plug: http://zipl.ink is the bookmarklet I use for the exact same purpose; as a nice side effect, it also keeps a local record of the interesting links I've zipped to my phone.
Each pod has its own IP address that is routable anywhere in the cluster. This makes life much easier because you don't have to do port forwarding on the host node.
In all current k8s set-ups, each Minion/Worker node has a subnet out of which it allocates these Pod IP addresses. That isn't strictly a hard requirement, but it tends to make things much easier, since you only have O(Workers) routes to configure instead of O(Pods). Long term, though, I think we would rather do away with per-node subnets and simply allocate an IP address for each Pod individually.
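To make the O(Workers) vs O(Pods) point concrete, here's what the extra routing might look like for a hypothetical three-worker cluster (all addresses invented): one route per worker's pod subnet, regardless of how many Pods each worker runs.

    10.244.1.0/24 via 192.168.0.11   # worker-1's pod subnet
    10.244.2.0/24 via 192.168.0.12   # worker-2's pod subnet
    10.244.3.0/24 via 192.168.0.13   # worker-3's pod subnet

Allocating addresses per Pod individually would instead require a route (or some equivalent mapping) per Pod.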
I have used this idiom to great effect in my project (https://github.com/raksoras/luaw). Basically, request:read() hooks into the event loop of libuv (node.js's excellent async IO library) and then yields. The server is now free to run the next coroutine. When the socket underlying the first request has data ready to read, libuv's event-loop callback fires and resumes the original coroutine.
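For anyone who hasn't seen the idiom before, here's roughly the same pattern sketched with Python's asyncio instead of Lua coroutines and libuv (this is not Luaw's API; the echo server and socket details are just illustrative): the coroutine registers read interest with the event loop and suspends, and the loop resumes it only once the socket actually has data.

    import asyncio
    import socket

    async def readable(sock):
        # Register read interest with the event loop, then suspend this coroutine.
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        loop.add_reader(sock.fileno(), lambda: fut.done() or fut.set_result(None))
        try:
            await fut                        # yields here; other coroutines run
        finally:
            loop.remove_reader(sock.fileno())

    async def handle(conn):
        with conn:
            while True:
                await readable(conn)         # resumed when the socket has data
                data = conn.recv(4096)       # won't block at this point
                if not data:
                    break
                await asyncio.get_running_loop().sock_sendall(conn, data)

    async def main(port=8080):
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen()
        srv.setblocking(False)
        loop = asyncio.get_running_loop()
        while True:
            conn, _ = await loop.sock_accept(srv)
            conn.setblocking(False)
            asyncio.create_task(handle(conn))

    asyncio.run(main())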
That's awesome. You've got me hooked. For postgres database access, what library do you suggest? Also, I remember lua-nginx can also do non-blocking IO in blocking-style code (http://wiki.nginx.org/HttpLuaModule, http://openresty.org). How does Luaw compare to that?
Right now it's just an HTTP server and REST framework. It's a very first release, and I don't have any DB drivers for it yet - they would need to be non-blocking, as you mentioned.
I have plans to write a dbslayer-like "access the DB over REST" service as a companion to Luaw, so that it can use any and all databases that have JDBC drivers available, without having to write a non-blocking driver specially for each new database. This kind of arrangement, where DB connection pooling is abstracted out of the application server itself, has other advantages related to auto-scaling in the cloud and red/black or "flip" code pushes, at the cost of a slightly more complex deployment.
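To illustrate the idea (this is not an existing Luaw API - the endpoint, port, and payload shape below are all made up): the application server would talk to such a companion service over plain HTTP, something like:

    import requests

    # Hypothetical "DB over REST" companion service: the app server POSTs a
    # query and gets rows back as JSON. Connection pooling, JDBC drivers and
    # credentials live in the companion service, not in the app server.
    resp = requests.post(
        "http://db-proxy.internal:7070/query",   # invented endpoint
        json={
            "db": "orders",
            "sql": "SELECT id, total FROM orders WHERE customer_id = ?",
            "params": [42],
        },
        timeout=5,
    )
    resp.raise_for_status()
    for row in resp.json()["rows"]:
        print(row)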
All depends on how much spare time I actually get :(