
He's talking about exactly what I am currently doing:

> It's now feasible to build a new virtualenv on every deploy. The virtualenv can be considered immutable. That is, once it is created, it will never be modified. No more concerns about legacy cruft causing issues with the build.

> This also opens the door to saving previous builds for quick rollbacks in the event of a bad deploy. Rolling back could be as simple as moving a symlink and reloading the Python services.

This is exactly my current setup: a fresh virtualenv built from scratch on each deploy, living alongside all the other build artifacts, so that each deploy is a self-contained, timestamped directory swapped in via a 'current' symlink. I just bite the bullet on the extra deploy time.
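For the curious, the whole pattern fits in a short script. This is a minimal sketch, not my actual deploy tooling; the `releases` layout, `requirements.txt` path, and timestamp format are assumptions for illustration:

```shell
#!/bin/sh
set -eu

# Hypothetical layout: every deploy gets its own timestamped release
# directory under $DEPLOY_ROOT, and 'current' is just a symlink.
DEPLOY_ROOT="${DEPLOY_ROOT:-./releases}"
STAMP="$(date +%Y%m%d%H%M%S)"
RELEASE="$DEPLOY_ROOT/$STAMP"

mkdir -p "$RELEASE"

# Fresh virtualenv for this release only; it is never modified afterwards.
python3 -m venv "$RELEASE/venv"
if [ -f requirements.txt ]; then
    "$RELEASE/venv/bin/pip" install -r requirements.txt
fi

# Cut over by repointing the symlink (-n replaces an existing link to a
# directory instead of descending into it), then reload the services.
ln -sfn "$STAMP" "$DEPLOY_ROOT/current"
```

Rolling back is then exactly what the post describes: repoint 'current' at the previous timestamped directory and reload.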

The immediately useful part of this blog post for me is that upgrading to pip 7 should speed up those deploys, since it caches the wheels it builds instead of recompiling everything on every install.

This part seems interesting:

> Another possibility is building your wheels in a central location prior to deployment. As long as your build server (or container) matches the OS and architecture of the application servers, you can build the wheels once and distribute them as a tarball (see Armin Ronacher's platter project) or using your own PyPI server. In this scenario, you are guaranteed the packages are an exact match across all your servers. You can also avoid installing build tools and development headers on all your servers because the wheels are pre-compiled.

I've looked at platter a bit, but I haven't fully digested what migrating to it would involve, and the post doesn't really expand on that.
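As far as I can tell, the build-once approach boils down to `pip wheel` on the build box and `pip install --no-index` on the app servers. A rough sketch under my own assumptions (the `demo-pkg` package and `wheelhouse` directory are stand-ins; a real deploy would build from the project's requirements file and ship via platter or a private PyPI):

```shell
#!/bin/sh
set -eu

# Build server side: a toy local package so this sketch builds offline;
# normally you would run `pip wheel -r requirements.txt` here instead.
mkdir -p demo_pkg
cat > demo_pkg/setup.py <<'EOF'
from setuptools import setup
setup(name="demo-pkg", version="0.1")
EOF

# Compile wheels once into a local wheelhouse directory.
python3 -m pip wheel --no-build-isolation -w wheelhouse ./demo_pkg

# Ship the wheelhouse to app servers matching the build box's OS/arch.
tar czf wheelhouse.tar.gz wheelhouse

# App server side: fresh virtualenv, installed strictly from the
# pre-built wheels -- --no-index guarantees nothing comes from PyPI.
python3 -m venv app-venv
app-venv/bin/pip install --no-index --find-links wheelhouse demo-pkg
```

The appeal is the guarantee in the quote: every server installs bit-identical packages, and none of them need compilers or development headers.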



