For those who want to explore the DNS-based services there are a few options. These services seem to be very similar, and most of them are $5 a month.
I have been using http://playmo.tv for just over a year and I get my daily dose of Netflix and Hulu through my Apple TV.
Prior to discovering "geounblocking" I must admit how oblivious I was about the advances in streaming media :s I am happy to pay for the convenience of streaming Netflix, etc.
It's also worth noting that these services have to "manually" enable support for sites. Playmo, for example, doesn't seem to support any of the BBC/iPlayer content. Unblock US does (and a few more). The only downside is that the Playmo website looks way snazzier :)
There are a few options when it comes to geo-unblocking DNS services, and I agree with harisenbon, although I am using http://playmo.tv (happily, for over a year now).
Restricted access is all too familiar for me, although the situation feels slightly better than in the rest of Europe.
A recent trend in bypassing geo-restricted content seems to be DNS-based solutions. I signed up for an account with playmoTV (http://playmo.tv/) a couple of months ago, which has worked quite well for streaming Hulu Plus, Netflix, Pandora and Spotify here in Iceland. It's also quite ironic to see the DNS layer being used to grant access to restricted content while SOPA/PIPA plan to use it to restrict access even further.
Their focus is on enabling access to US content, but their help desk has confirmed that UK support (BBC iPlayer, Sky Go and ITV) is soon to be announced. Might be something to look into for the Olympics.
Thanks for some very spot-on points. I do mention multiple processes in the blog: "Fortunately not all problems require concurrent solutions as they are either IO-bound or can be scaled by forking multiple processes." And this can be a solution to many of the scaling needs. Some of the challenges I'm facing are, however, more CPU-oriented, and it gives me a warm feeling to have a concurrency model like STM to lean on.
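To make that concrete, here's a minimal STM sketch (not from my actual codebase; the refs and the record-vote! function are made up) showing the kind of coordinated update that's painless in Clojure: two refs changed atomically inside a dosync transaction, so concurrent workers never observe a half-applied update.

(def scores (ref {}))   ; item -> accumulated score
(def counts (ref {}))   ; item -> number of votes seen

(defn record-vote! [item score]
  ;; Both refs change in one transaction; readers see either the
  ;; old state or the new state, never a mix of the two.
  (dosync
    (alter scores update-in [item] (fnil + 0) score)
    (alter counts update-in [item] (fnil inc 0))))

;; Safe to call from many threads at once:
(dorun (pmap #(record-vote! :item-42 %) (range 100)))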
I totally agree with you on the point regarding worrying about delivering a product instead of rewriting the work. That's why the services already written will remain in place and I'll therefore be running Python alongside the Clojure/JVM stack.
> I'll therefore be running Python alongside the Clojure/JVM stack.
Yikes, really? That sounds even worse than a clean break. You don't shed your perceived problems with Python and instead get to manage two codebases and stacks. Your operations team will absolutely love you in the future.
Based upon a casual read of what your service does, I'm really struggling to think of a CPU-bound situation that can't be handled by a multiprocessing pool. All of the heavy work I can think of your service doing -- particularly crawling -- is going to end up network- or I/O-bound, isn't it? I'm coming up empty and just working from a theory here. Perhaps you can share what your CPU-bound process is?
The Python services will probably end up getting rewritten at some point, but I want to focus on delivering a product. Right now the Python services can easily be deployed as they don't have that many dependencies on other libs. All further development will be done in Clojure though.
Of course most things can be pooled into a multiprocessing pool. It's just a matter of the pain you have to endure while doing so. For some tasks it's a straightforward process, while for other tasks it can be pretty painful. The crawler is IO-bound, but most of the collaborative filtering algorithms are CPU-bound, and having nice concurrency constructs in the language is a bonus. A hypothetical sketch of what I mean follows below.
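For illustration only (the data shape and function names are made up, not from the actual service): a CPU-bound pairwise similarity step, the kind of thing a collaborative filter does a lot of, parallelizes by doing nothing more than swapping map for pmap.

(defn cosine-sim [a b]
  ;; Pure, CPU-bound work on two rating vectors.
  (let [dot (reduce + (map * a b))
        mag (fn [v] (Math/sqrt (reduce + (map * v v))))]
    (/ dot (* (mag a) (mag b)))))

(defn similarities [pairs]
  ;; pairs is a seq of [id-a vec-a id-b vec-b] tuples.
  ;; pmap spreads the scoring across cores; no explicit pool to manage.
  (pmap (fn [[id-a va id-b vb]]
          [[id-a id-b] (cosine-sim va vb)])
        pairs))

Compare that with the pickling and queue plumbing a multiprocessing pool forces on you once the work units stop being trivially serializable.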
It's wise of them to keep the old solution around while they're experimenting with a new technology. At some point they'll have an urgent need to deploy something that they can't figure out how to do on the Java stack, and they'll be able to fall back on their Python expertise to get it done by the deadline.
Just to elaborate a little bit on this point: with Leiningen things are not too bad.
A typical project definition file (project.clj) looks like this, and with "lein deps" you're all set up in a few seconds.
(defproject leiningen "0.5.0-SNAPSHOT"
  :description "A build tool designed to not set your hair on fire."
  :url "http://github.com/technomancy/leiningen"
  :dependencies [[org.clojure/clojure "1.1.0"]
                 [org.clojure/clojure-contrib "1.1.0"]]
  :dev-dependencies [[swank-clojure "1.2.1"]])
I am not familiar with Leiningen or Clojure, but it looks like you are just adding a list of requirements + versions, which is exactly what you get with pip + a requirements.txt file?
I believe that's what's going on, yes. Leiningen runs on top of Maven, so it downloads the jars you need and stores them in ~/.m2/, where they're picked up on the classpath at compile time.
A "lein uberjar" will roll your whole program up into one .jar file, ready for deployment.
https://github.com/zeromq/curvezmq
CurveZMQ was recently announced by Pieter Hintjens (one of the 0mq contributors) on his blog:
http://hintjens.com/blog:45