Everything in Python is late-bound, dynamically typed, and mutable by default. Straight off the bat, that’s three key design decisions that make for a slow interpreter. Representing numbers as heap objects, backing object attributes with hash tables, and poor parallelization (<koff> GIL, dumb POSIX threading) are three more.
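If you doubt any of that, a few lines at a stock CPython prompt will show it. (Rough sketch only; exact sizes vary by build, and the Point class is just an illustrative stand-in of mine.)

import sys

# Every number is a full heap object with a refcount and a type pointer,
# not a bare machine word: the integer 1 weighs in at roughly 28 bytes
# on a 64-bit build.
print(sys.getsizeof(1))

# Ordinary objects keep their attributes in a per-instance dict,
# so attribute access is, conceptually, a hash-table lookup.
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(3, 4)
print(p.__dict__)        # {'x': 3, 'y': 4}

# And everything is late-bound and mutable: rebind a method on the live
# class and every existing instance picks it up immediately, which is
# exactly why the interpreter can assume so little ahead of time.
Point.magnitude = lambda self: (self.x ** 2 + self.y ** 2) ** 0.5
print(p.magnitude())     # 5.0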
Sure, you can throw huge amounts of brains and resources at code analysis, opportunistic JIT compilation, and the rest, à la V8, but at some point you have to ask: “Is this an effective use of those valuable resources?” Especially when every such optimization could potentially break existing, stable user code running in production.
..
But let’s get back to the larger perspective:
“Faster overall” takes into account not just the time it takes to run a user program, but also the time it takes to learn, implement, debug, and deploy. Only one of these is machine time, which these days is cheap as chips and nearly inexhaustible; all the rest are human labor, which is both expensive and limited.
Which is not to say that Python is faster at all those human tasks than other languages. Determining that would require real-world practical testing, plus a willingness to accept what those tests tell you (which might not be what you wanted to hear). But I’m willing to bet that, for the vast majority of use cases, the time spent on all those human tasks vastly outweighs the time saved by a 20% faster runtime.
So at some point you have to stop and ask: Are these fundamental changes adding genuine, measurable value for real-world users solving real-world problems? Or is it just code masturbation by basement nerds whose idea of productivity is playing with the interpreter’s guts in pursuit of some trivial abstract benchmarks?
Because, honestly, “20% faster” is an absolute joke. If I can’t make my program 100% faster just by adding a second hardware box, then I’ll want to know why. And if the answer is no more complex than “because the language isn’t very good at parallelism”, then all other arguments are completely moot.
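Here’s a rough sketch of what I mean, if you want to try it yourself. (Timings depend entirely on your machine; burn and timed are throwaway helpers of mine, not anything standard.)

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # Pure-Python CPU work: the GIL is held for the whole loop.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(pool_cls, label, n=2_000_000, workers=4):
    start = time.perf_counter()
    with pool_cls(max_workers=workers) as pool:
        list(pool.map(burn, [n] * workers))
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    timed(ThreadPoolExecutor, "4 threads  ")   # barely better than serial: only one thread runs Python at a time
    timed(ProcessPoolExecutor, "4 processes")  # roughly 4x on 4 cores, paid for with process spawn and IPC overhead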
..
Look, I’ve written slow interpreters. Implemented in Python, no less. A not-very-complex program might take 2 minutes to run. But 80% of that painfully long run-time is spent in IO-bound operations, and even that is totally irrelevant when those 2 minutes of machine time have replaced 20 minutes of manual work.
That’s 20 minutes of paid human labor, eliminated by a really-slow custom interpreter written in pretty-slow CPython. You can easily put a dollar cost on that human time (salary, overheads, etc.), multiply it by the number of work units in a year, and you’ve calculated its real-world benefit.
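Here’s that whole calculation, back-of-the-envelope style. (Every figure below is a made-up placeholder; plug in your own rates and volumes.)

# Value of replacing 20 minutes of manual work with a 2-minute script run.
hourly_rate = 60.0       # fully loaded cost of the human, dollars per hour
minutes_saved = 20       # manual labor eliminated per run
runs_per_year = 250      # once per working day, say

saving_per_run = hourly_rate * minutes_saved / 60
annual_saving = saving_per_run * runs_per_year
print(f"${saving_per_run:.2f} per run, ${annual_saving:,.2f} per year")
# -> $20.00 per run, $5,000.00 per year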
Let us know when you can calculate the real-world benefit of a 20%-quicker, proprietary, Python-like interpreter that may or may not execute user programs exactly the same as CPython does. Otherwise, as I say, anything less than an order-of-magnitude improvement isn’t even worth getting out of bed for.