Docker does solve those problems, even if you and I prefer to use other solutions.
And by the way, you don't actually need virtualenv :) we run multiple webapps on a machine by just installing dependencies into a directory inside the git repo (pip install -t), then setting the appropriate PYTHONPATH in each service's startup config. As long as you don't need multiple versions of the Python interpreter, this works fine.
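A minimal sketch of that setup, assuming a systemd-managed service (the paths, the unit fields, and the myapp name are made up for illustration):

    # vendor dependencies into a directory inside the repo
    pip install -r requirements.txt -t ./deps

    # hypothetical systemd unit for one of the webapps
    [Service]
    WorkingDirectory=/srv/myapp
    Environment=PYTHONPATH=/srv/myapp/deps
    ExecStart=/usr/bin/python3 -m myapp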
Is it really computationally expensive? Containers/jails/zones are essentially chroots "on steroids", isolating things other than the filesystem (networking, hostname, process IDs, users, etc.)
Once built, a Docker image is not expensive, but the idea with Docker is to create immutable, layered images and rebuild them every time there is a change. Unlike with Node.js, it's harder to volume-mount Python packages installed inside the container, so in practice nobody does it.
A change in requirements means rebuilding the image, which is a computationally and I/O expensive process. So if you can get away with it, virtualenv is better; but putting everything in a container so you can simply ship one artifact is also useful. It's a compromise.
The layer that copies in your pip requirements and installs them is usually the most expensive part of building a Python image, so if you change your requirements you invalidate the cache for that layer (and every layer after it).
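This is why the common Dockerfile pattern copies the requirements file in before the rest of the source; a sketch, assuming a requirements.txt and a hypothetical myapp entry point:

    FROM python:3.11-slim
    WORKDIR /app

    # Copy only the requirements first: the expensive pip install layer
    # below is rebuilt only when requirements.txt itself changes.
    COPY requirements.txt .
    RUN pip install -r requirements.txt

    # Source changes invalidate only this cheap layer, not the install above.
    COPY . .
    CMD ["python", "-m", "myapp"]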