Me too! SciPy has been my main reason for not switching for a lot of work I do. I'll see if I can switch soon now :) I just need Django to work and I'll be set to completely leave 2.x behind!
I read on reddit that they are working on it. They will actually do a code rush to release a Python 3 enabled version next weekend (http://www.ctpug.org.za/wiki/Meeting20110305).
CherryPy too. I think the wave of updates might be starting (a little bit at least). I know the larger projects are at least talking about going to P3. So I take this all as a good sign.
Sure. It doesn't fix the core problems. The Global Interpreter Lock is still there. The semantic issues that make it very hard to optimize with a JIT are still there. It makes no effort to move towards the multicore era -- very minor changes would make the language potentially scale much better. Still no tail-call elimination. The C API still locks you into a bunch of non-scalable data structures. Those are the kinds of improvements worth breaking backwards-compatibility for.
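To make the GIL complaint concrete, here's a minimal sketch (the `count` function, the workload size, and the timings are illustrative, not benchmarks): a pure-Python CPU-bound job run on two threads takes about as long as running it twice sequentially, because only one thread can hold the interpreter lock at a time.

```python
import threading
import time

def count(n):
    # Pure-Python CPU-bound loop; the GIL is held the whole time
    while n:
        n -= 1

N = 10000000  # illustrative workload size

start = time.time()
count(N)
count(N)
print("sequential:", time.time() - start)

start = time.time()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
# Roughly the same wall-clock time (often worse), despite two threads
print("threaded:  ", time.time() - start)
```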
On the other hand, it makes a bunch of syntax tweaks that don't make things much better, but break backwards compatibility. You no longer have the same kind of tuple expansion in function calls. Why? No good reason. Print syntax changed. It's a little more consistent, and a little less readable. Better? Maybe, maybe not. Just an arbitrary little change. Personally, I would have just made subroutines (in addition to functions) language constructs. C-style string formatting was deprecated. Why not just leave it around? It wasn't causing any harm.
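For anyone who hasn't hit these, a quick sketch of both changes (Python 3 syntax shown; the removed Python 2 forms appear in comments):

```python
# Python 2 allowed tuple unpacking right in the signature:
#     def norm((x, y)): ...        # SyntaxError in Python 3
# Python 3 makes you unpack in the body instead:
def norm(point):
    x, y = point
    return (x ** 2 + y ** 2) ** 0.5

# Python 2's print statement:
#     print "norm:", norm((3, 4))
# becomes a function call in Python 3:
print("norm:", norm((3, 4)))
```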
Iterators and views replace lists. I can see why you'd want the performance -- but the way it is done makes the language a lot more complex for newbies. It also breaks backwards-compatibility. In many places, the break is arbitrary, capricious, and just lazy. E.g. dict.keys() now returns a view, so calling .sort() on it no longer works. It'd be easy to make a .sort() that changes the object type and just works. Instead, you need to make tweaks in thousands of lines of code to switch to sorted().
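A minimal sketch of that break (the dict contents are made up):

```python
d = {"b": 2, "a": 1}

# Python 2: d.keys() returned a plain list, so this worked:
#     keys = d.keys()
#     keys.sort()
# Python 3: d.keys() is a view object with no .sort() method,
# so every call site has to be rewritten around sorted():
keys = sorted(d)
print(keys)  # ['a', 'b']
```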
And for each speedup, you've got a slowdown. Ints are now arbitrary precision. I think the old way (no iterators, machine-word ints) was probably both faster and more readable than the new one (iterators, long ints), but hey. We gotta swap things around. The old way is just so... old.
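For reference, a tiny illustration of the unified int (value chosen arbitrarily):

```python
# Python 2 split ints into a machine-word 'int' and an arbitrary-precision
# 'long'; Python 3 has a single arbitrary-precision 'int', which costs some
# speed on small values but never overflows.
n = 2 ** 100
print(n)        # 1267650600228229401496703205376
print(type(n))  # <class 'int'>
```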
It's nice to have Unicode, but the way Unicode is implemented is seriously broken. You run into deep issues dealing with files. It's also seriously not backwards-compatible. It'd be easy to come up with schemes that are only slightly backwards-incompatible.
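A small sketch of the kind of file-handling trap I mean (filename and bytes invented for illustration):

```python
# Write two bytes that are not valid UTF-8
with open("data.bin", "wb") as f:
    f.write(b"\xff\xfe")

# Text mode decodes on read, so raw binary data blows up:
try:
    with open("data.bin", encoding="utf-8") as f:
        f.read()
except UnicodeDecodeError as e:
    print("text mode fails:", e)

# Binary mode hands back the bytes untouched:
with open("data.bin", "rb") as f:
    print(f.read())  # b'\xff\xfe'
```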
Oh. And just to be ass-backwards, we'll call the executable "python" just like in Python 2. If you install Python 3, and scripts begin with #!/usr/bin/python, suddenly your system breaks.
I could keep going for a while, but personally, I prefer Python 2 to Python 3. Python 3 doesn't fix many things that were wrong (printing of floating points being the exception), but seriously breaks backwards compatibility. Most of the changes are just that -- changes, and not improvements. You can make an equally plausible argument that the old way was better. I fail to see the point.
While each of your individual complaints may make sense, as a whole they are amusingly self-contradictory.
You start by arguing for groundbreaking changes in the semantics and internal API, which would practically render every single binary extension obsolete. Then you complain about minor semantic changes that forced you to do simple mechanical conversion.
Don't you think C extension authors would be equally or even more upset at having to rewrite their extensions from scratch? Even with the minimal incompatibilities at the C API level, it took ~2 years to port numpy/scipy; do you believe it would ever have happened if the changes you advocate had been introduced? Had the scope of Python 3 included removal of the GIL and a JIT compiler, I doubt we would have a single release yet, while usage of Python 2.x declined (see the sad story of Perl 6).
Concerning your individual complaints:
> It makes no effort to move towards the multicore era -- very minor changes would make the language potentially scale much better.
"Very minor changes"? I seriously doubt it. Besides, I'm sure you didn't miss the announcement: we are already in the "cloud" era. ;)
> Print syntax changed.
It was probably done to remove two ugly special forms:
    print foo,
    print >>file, foo
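For comparison, the Python 3 spellings of those two forms:

```python
import sys

# print foo,          ->  suppress the trailing newline:
print("foo", end="")

# print >>file, foo   ->  pass the file object explicitly:
print("foo", file=sys.stderr)
```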
> You no longer have the same kind of tuple expansion in function calls.
Doesn't strike me as a good example. It was a problem both for introspection tools and the C API, it was discouraged for a long time, and it was hardly ever used. I can't believe there's a single tear shed for the loss of it.
> Oh. And just to be ass-backwards, we'll call the executable "python" just like in Python 2. If you install Python 3, and scripts begin with #!/usr/bin/python, suddenly your system breaks.
That's what `make altinstall` is for, which is thoroughly documented in the "Installing multiple versions" section of the README. If somebody doesn't read installation instructions, perhaps it's their fault after all?
"very hard to optimize", "very minor changes", "seriously broken"
Seriously, it's not very specific. A lot of heat, not enough sense.
> While each of your individual complaints may make sense, as a whole they are amusingly self-contradictory.
I don't think they are. My basic take is that you break compatibility only when you really, really have to, and when it buys something significant. I don't see anything significant that Python 3 buys. I see a bunch of minor changes and tweaks to syntax, some of which are arguably better, and some of which are arguably worse. It's annoying and pointless. We'll have a split community for a decade so that print syntax can be slightly more consistent. Many corporate projects will just never move. On the other hand, it misses the whole multicore/GPGPU era coming up.
> Even with the minimal incompatibilities at the C API level, it took ~2 years to port numpy/scipy; do you believe it would ever have happened if the changes you advocate had been introduced? Had the scope of Python 3 included removal of the GIL and a JIT compiler, I doubt we would have a single release yet, while usage of Python 2.x declined (see the sad story of Perl 6).
Perl 6's problem is that it was never released. It's a research project in language design. Larry didn't design a target for what he was building, so it suffered feature creep and never shipped. It's got bits and pieces, but even now it is unfinished. If Perl 6 had been the same language, but designed and shipped quickly, I doubt you would see the same problems (especially given source-level backwards compatibility -- Perl 6 can load Perl 5 modules).
Here, virtually all the development could be done incrementally, with a target design spec (we have models for changes from existing languages like Fortress), and in parallel in Python 2 and 3. I think it would be very possible to bound it in scope.
With regard to NumPy/SciPy, the answer is a resounding hell yes. Right now, the only reason they have for porting to Python 3 is that somebody tells them it is the future. The major problem with NumPy/SciPy for a very large number of users is speed. It is at least an order of magnitude slower than C. If a Python 3 port allowed me to write:
    d = [(cos(x), sin(y)) for (x, y) in d]
and Python 3 could either JIT this to run at C speed, or better yet, fork it out to the GPU, I can assure you the porting process would be NumPy's/SciPy's top priority.
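For comparison, here's roughly what you have to write today to push that loop into C by hand with NumPy's vectorized form (assuming d is a sequence of (x, y) pairs; the sample data is made up):

```python
import numpy as np

d = [(0.0, 1.0), (0.5, 2.0), (1.0, 3.0)]  # made-up sample data

xs, ys = np.asarray(d).T                   # split into x and y columns
d = np.column_stack((np.cos(xs), np.sin(ys)))
print(d)
```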
> If somebody doesn't read installation instructions, perhaps it's their fault after all?
Not when it's the recommended way of doing it to the point where distributions do it. Right now, you've got a situation where end-users have system installs where Python programs just don't work.
> Seriously, it's not very specific. A lot of heat, not enough sense.
Because each of those would be an essay in itself, and has been beaten to death elsewhere. See the Reddit threads, LWN articles, etc.
I would be interested too, but I think there is no answer, because the claim is simply incorrect. P3 corrects a lot of things from P2. I doubt you could defend "poorly designed" unless you are talking about Python as a whole.
Well, I don't entirely agree here; there are some valid concerns about Py3-specific design decisions. Take, for instance, the conversion of `str` to Unicode. That:
* significantly increases the memory footprint as every character requires 2-4 bytes to encode.
* does not provide a 1-to-1 correspondence with certain Asian encodings.
* creates an impedance mismatch between Python and byte-oriented filesystem/internet protocols.
Personally, I believe total Unicodification in Py3 was worthwhile, but from some perspective, it could be seen as a regression.
That said, I'd love to hear what specifically in Py3 could cause such a burst of hate.
Python 3 has the 'bytes' type, which can be used for the same things that 'str' was used for in Python 2.x. So you don't lose any functionality; you just rename your types.
As an added bonus, the name 'bytes' makes it very clear that you're dealing with raw data.
Strings have become a data type solely for human-readable text. For these, it makes perfect sense to only support Unicode.
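A minimal sketch of the split (sample data invented):

```python
raw = b"caf\xc3\xa9"          # bytes: raw data, what Python 2's str held
text = raw.decode("utf-8")    # str: human-readable text

print(type(raw))     # <class 'bytes'>
print(type(text))    # <class 'str'>
print(text.upper())  # CAFÉ -- text operations belong on str
print(raw[:3])       # b'caf' -- slicing bytes stays at the byte level
```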