...yet the guy from Google explaining the reasoning claimed higher memory usage as one reason Python was discouraged.
I don't know myself, as I've never written a full non-trivial, scalable business application side-by-side in Java and Python. But I'm definitely hesitant to disregard what a Google engineer says about it: I'd imagine they have more experience with that sort of thing than most of the rest of us, and I can't believe they'd make technology decisions like that without measurements to back them up.
Is it possible that the benchmarks are not giving realistic estimates of how large apps scale in memory usage?
I thought that in any modern OS you would have the libraries loaded only once and the only thing that is multiplied across processes (or threads) is the working data for that specific thread or process. The overhead should be minimal.
If not, it's an OS problem outside the domain of the Java and Python maintainers.
What we are apparently seeing is that the working data is larger in Python.
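For what it's worth, here's a minimal sketch of what that sharing looks like on a POSIX system (os.fork isn't available on Windows), including the CPython wrinkle that even reads dirty pages:

    import os

    # Build a large, read-mostly structure before forking; under copy-on-write
    # the child shares these pages with the parent until somebody writes to them.
    table = [str(i) for i in range(1_000_000)]

    pid = os.fork()  # POSIX only; the child starts with the parent's pages COW-shared
    if pid == 0:
        # Child: even "read-only" access bumps CPython reference counts, which
        # dirties the pages holding object headers and erodes some of the sharing.
        print("child sees", len(table), "entries")
        os._exit(0)
    else:
        os.waitpid(pid, 0)
        print("parent done")

That refcount write is one reason the "only working data gets duplicated" intuition holds less cleanly for CPython than it does for C programs.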
> I thought that in any modern OS you would have the libraries loaded only once and the only thing that is multiplied across processes (or threads) is the working data for that specific thread or process.
You'd think that, but apparently mmapping bytecode would be too easy for the JVM people, so each process copies all of its bytecode into its heap (modulo the class data sharing kludge). I think Ruby also enjoys this misfeature, and I wouldn't be surprised if CPython does the same.
> What we are apparently seeing is that the working data is larger in Python.
Not surprising since a Java object is a struct but a Python object is more like a hash table.
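You can see the gap from the Python side with a rough sketch like this (exact sizes vary by CPython version and platform); __slots__ trades the per-instance dict for something closer to a fixed struct layout:

    import sys

    class Boxed:
        def __init__(self, x, y):
            self.x = x          # attributes live in a per-instance __dict__
            self.y = y

    class Slotted:
        __slots__ = ("x", "y")  # fixed layout, no per-instance __dict__
        def __init__(self, x, y):
            self.x = x
            self.y = y

    b, s = Boxed(1, 2), Slotted(1, 2)
    # sys.getsizeof is shallow, so count the boxed instance's dict explicitly.
    print(sys.getsizeof(b) + sys.getsizeof(b.__dict__))  # larger
    print(sys.getsizeof(s))                              # smaller, struct-like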
Sorry, but it is very uncommon for Java to use 10-100x as much memory.
Those benchmarks are pretty irrelevant because all the algorithms in there are short-lived.
As far as I know CPython uses reference counting as its primary GC. While this has definite advantages, reference counts alone can never reclaim cyclic data structures, so CPython backs them up with a cycle detector (the gc module); cycles are still collected late, and in older versions a cycle whose objects defined __del__ could leak outright. Also, the heap can get pretty fragmented, which for long-running processes can leave the heap looking like swiss cheese.
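A quick sketch of the cycle case (any recent CPython):

    import gc

    class Node:
        def __init__(self):
            self.other = None

    a, b = Node(), Node()
    a.other, b.other = b, a   # reference cycle: refcounts can never reach zero

    del a, b                  # refcounting alone cannot free the pair now
    print(gc.collect())       # the cycle detector finds them; prints a nonzero count
    print(gc.garbage)         # [] on Python 3; truly uncollectable cycles land here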
This doesn't happen with Java, but as a side effect a compacting GC usually reserves about twice the heap space actually needed, since it requires free space to defragment into. The JVM's GC is also generational, separating objects by age into multiple regions, so that short-lived objects can be collected quickly and new objects can be allocated at speeds comparable to stack allocation.
In Python, reference counting has the advantage of being cache-friendly and fairly deterministic. And when it comes to web servers that fork worker processes to handle requests, heap fragmentation is alleviated by the fact that each Python process has a short life.
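Gunicorn, for example, exposes exactly this knob; a config-file sketch (the numbers are arbitrary and the app name is hypothetical):

    # gunicorn.conf.py -- recycle workers so heap fragmentation can't accumulate
    workers = 4                 # forked worker processes
    max_requests = 1000         # restart each worker after ~1000 requests,
    max_requests_jitter = 100   # with jitter so workers don't recycle in lockstep
    # run with: gunicorn -c gunicorn.conf.py myapp:app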
On memory consumption, yes, Java might have the heap doubled, and its garbage collection is less deterministic. But it depends on your application ... the JVM ends up using memory a lot more efficiently for long-running processes, although the upfront cost is higher.
The Java 6 -server memory use reported on the benchmarks game site (around 12,000KB-14,000KB) is base JVM use at default settings, so it probably isn't telling you much that's interesting.
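If you want the comparable baseline for CPython, a rough way to get it is to ask a near-idle interpreter for its peak RSS (note ru_maxrss is kilobytes on Linux but bytes on macOS):

    import resource

    # Peak resident set size of this (nearly idle) interpreter process.
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print("baseline interpreter RSS:", peak)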
Although you might see a couple of examples where CPython memory use is higher, because output from multiple processes is buffered before it can be synced -
Or you can look at the numbers for yourself on the shootout site. Java using 10-100x as much memory is not uncommon.