
This is asking for Guido to have predicted the multicore era and the success of Python in 1991, at a time when SMP architectures had only been commercially available on extremely expensive systems for 5 years. Much of what contributed to that success was in fact the rich C API and the ease with which it could be embedded into larger applications; I'm not sure what an alternative would look like that could still produce efficient implementations of things like NumPy, etc.



This is what I find most interesting about the Erlang/Elixir world. The programming language design predated SMP by about 20 years - the actor model was chosen because it was a good way to reason about programs, not for parallelism. Then multicore processors arrived, and it was 'just' a VM enhancement to make Erlang programs fully parallelised.
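To make that concrete, here's a rough actor-style sketch (in Python, to keep the thread's examples in one language; a loose analogy, not how the BEAM is actually implemented). Share-nothing workers that communicate only by messages are exactly what lets a runtime schedule them across cores without changing the program:

    from multiprocessing import Process, Queue

    def actor(mailbox: Queue, results: Queue) -> None:
        # Each actor shares nothing and communicates only via messages,
        # which is what frees the runtime to schedule it on any core.
        while True:
            msg = mailbox.get()
            if msg is None:  # poison pill: shut down
                break
            results.put(msg * msg)

    if __name__ == "__main__":
        mailbox, results = Queue(), Queue()
        workers = [Process(target=actor, args=(mailbox, results)) for _ in range(4)]
        for w in workers:
            w.start()
        for i in range(8):
            mailbox.put(i)
        for _ in workers:
            mailbox.put(None)  # one pill per worker
        squares = sorted(results.get() for _ in range(8))
        for w in workers:
            w.join()
        print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]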

Sometimes you just get lucky.


Just a nitpick -- the actor model was not chosen; the BEAM just happens to look like an actor model. The BEAM architecture was chosen for fault tolerance, and the features you want for fault tolerance naturally guide you to something actor-ish. Also, the actor model says very little about fault tolerance (its concern is efficient concurrent computation).


> This is asking for Guido to have predicted the multicore era and the success of Python in 1991

No, it’s not. My comment applies to single-threaded performance as well (I wasn’t even thinking about the GIL when I made my comment)—why isn’t Python as fast as modern JS implementations, for example? The answer isn’t “JS has multithreading support”.

Moreover, a slim C extension interface is about flexibility, so you don’t paint yourself into a corner when you don’t know what the world might look like tomorrow. Further, you can have a very rich C extension API without exposing the entire interpreter (e.g., HPy). Further still, Python has broken compatibility several times since its inception, so the idea that this was cemented in ’91 is nonsense.
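cffi is an existing example of that philosophy: a deliberately narrow interface that avoids CPython internals entirely, which is why code using it runs unmodified on PyPy. A minimal ABI-mode sketch (assumes a POSIX libc):

    from cffi import FFI  # pip install cffi; works on CPython and PyPy

    ffi = FFI()
    ffi.cdef("int abs(int x);")  # declare the C function we want to call
    libc = ffi.dlopen(None)      # load the standard C library (POSIX)
    print(libc.abs(-7))          # 7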


> why isn’t Python as fast as modern JS implementations, for example?

Because the project has explicitly targeted implementation simplicity, largely successfully, for almost 30 years. The internals are a joy to work on, and unsurprisingly the CPython Git repository has 4x as many contributors as V8, despite CPython contribution being largely voluntary and V8 contribution being largely commercial.

Even if performance were an explicit goal, it's important to remember V8 required absolutely massive funding and top-tier engineering support to make it happen at all. The most comparable equivalent in the Python world, PyPy, was the product of an extremely dedicated group of mostly doctoral researchers working against incredible odds. V8 only has 2x as many contributors as PyPy. I hope by now you are recognizing a theme: the reason the language is so successful is also the reason we are here complaining about it.

There have been teams, at Google and Dropbox at least, who proposed major upheavals of the interpreter in the past. Both failed in large part due to the complexity of their proposals compared to the performance gains on offer.


You're setting up a dichotomy between a small C-extension interface and implementation simplicity, but this doesn't make sense. A smaller interface is inherently simpler--it's less for developers to work around, and they can deliver more user value (whether performance or otherwise) per unit of complexity.

> The most comparable equivalent in the Python world, PyPy, was the product of an extremely dedicated group of mostly doctoral researchers working against incredible odds

"incredible odds" refers to compatibility with the CPython C-extension interface, which is exactly what I'm talking about.

> There have been teams, at Google and Dropbox at least, who proposed major upheavals of the interpreter in the past. Both failed in large part due to the complexity of their proposals compared to the performance gains on offer.

No, they failed because they had to work within the considerable constraints imposed by historically bad decisions (such as the C-extension interface). The proposals needed to be complex because they couldn't break compatibility.

> I hope by now you are recognizing a theme: the reason the language is so successful is also the reason we are here complaining about it.

Not at all! A narrower C-extension interface doesn't imply that C-extensions would be more difficult to write. There are no downsides to a narrower interface (apart from breaking compatibility, but we're positing a world in which this decision was made in 2008 or earlier).

The real theme here is that historical bad decisions + compatibility guarantees add significant complexity to every single improvement, when they don't preclude it altogether.


I think a more reasonable timeframe for this discussion is the release of Python 3 in 2008. That was much more recent, clearly in the multicore era, and a major opportunity for a breaking change.


There were multiple discussions. The main issue was that removing the GIL made single-threaded Python slower. Experiments in 1999 (Greg Stein's free-threading patch) found single-threaded performance was about 50% slower - you would need two cores just to break even, assuming you had parallel code in the first place.
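To make the tradeoff concrete, here is a minimal sketch (illustrative only, not Greg's patch): a CPU-bound pure-Python workload gains nothing from a second thread under the GIL, which is why any single-threaded slowdown from removing it had to be weighed against parallel gains that most code wouldn't see.

    import threading
    import time

    def count(n: int) -> None:
        # CPU-bound pure-Python loop; threads running it take turns
        # holding the GIL, so only one executes bytecode at a time.
        while n > 0:
            n -= 1

    N = 10_000_000

    start = time.perf_counter()
    count(N)
    count(N)
    print(f"sequential:  {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    threads = [threading.Thread(target=count, args=(N,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"two threads: {time.perf_counter() - start:.2f}s")  # roughly the same, or slower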

Here's a 2007 essay by van Rossum on the topic: https://www.artima.com/weblogs/viewpost.jsp?thread=214235

> I'd welcome it if someone did another experiment along the lines of Greg's patch (which I haven't found online), and I'd welcome a set of patches into Py3k only if the performance for a single-threaded program (and for a multi-threaded but I/O-bound program) does not decrease.


Arguably, Guido was in a great position to be thinking about concurrency in 1991: he created Python while working on the Amoeba distributed operating system. https://en.wikipedia.org/wiki/Amoeba_(operating_system)



