"JavaScript is becoming more and more an ubiquitous scripting language that challenges Python"... The Python dev team cannot change these conditions; even if they came up with the perfect programming language tomorrow.
Python 3 has some nice features and some that could have been better designed, but personally I don't think it's as bad as this author makes it out to be. It's pretty much a logical progression of the 2.x series. Python 3 is being adopted, slowly. I still think it's simply a matter of time, as Linux distributions have plans to move on. No one expected it to go quickly.
And I like that Python 3 makes Unicode versus bytes explicit. There's working with sequences of symbols (for humans) and working with bytes (for machines). I regularly wished for this when working with binary data and building hardware interfaces in Python, as there is a lot of confusion regarding bytes/unicode in Python 2, even in libraries...
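That explicit split can be sketched in a few lines (Python 3; a minimal illustration, not code from the thread):

```python
# Python 3 keeps text and bytes apart: str is for humans, bytes for the wire.
text = "héllo"               # str: a sequence of Unicode code points
data = text.encode("utf-8")  # bytes: the machine-level representation

assert isinstance(data, bytes)
assert data.decode("utf-8") == text

# Mixing the two is a loud TypeError instead of Python 2's silent coercion:
try:
    "a" + b"b"
except TypeError:
    pass  # this is the explicitness the parent comment is praising
```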
It was interesting to read some discussion and arguments for/against 3.0, but it could have done with a little less "Python is now doomed" attitude...
Well, the author has direct experience porting and maintaining (very popular, well-written, well-tested) Python libraries, so I think it bears more weight than the platitudes I hear from "the other side" of "it will all work out in the end, it's just a matter of time." I think the latter line of reasoning is going to fail without any specifics of how it's really meant to "all work out" if issues like the ones he's given examples of in the blog post aren't treated seriously (particularly the straddling-2-and-3-issues).
2to3 isn't perfect but I've made it work for SQLAlchemy and Mako - the Mako port was very easy. SQLAlchemy was trickier, largely because we still support all the way back to Python 2.4 (we're moving to 2.5 for 0.8).
When I wrote Alembic just recently, I ran 2to3 on it and it literally needed two "file()" calls to be changed to "open()" and that's it.
The contradiction in Armin's post is that he starts off with "we don't like Python because it's perfect, we like it because it always does a pretty good job, etc. etc.". Aside from Armin's very specific issues with codecs (I've never used codecs that way) and with the filesystem (OK, he's convinced me work is needed there), I don't quite see how 2to3 is any different from "not perfect, but does the job at the end of the day".
Also, we really shouldn't be fixating on 2to3 very much - it is already deprecated. We should be thinking/worrying very, very much about 3to2, since we really should be shooting to have our source all in Python 3 within a couple more years.
It may just be anecdotal at this point, but several high-profile libraries have given up on 2to3 entirely and use the 2/3 common-codebase approach: Jinja, Django, PLY. For a library, that's the approach I find more practical.
I have a hard time seeing how one can use 2to3 on an everyday basis: it makes testing, distribution, etc. that much more painful because it is so slow (2to3 takes as much time as compiling all the C code from numpy, for example). It also removes one of the big pluses of a language like Python (modify code - see the result right away).
Amen, 2to3 in practice for anything but the smallest library is effectively unusable due to speed. Even if it weren't, I'm not much of a fan of maintaining generated code. That's an imposition I'm not really willing to put up with.
Just to further this sentiment, and perhaps even more worrisome, is how easily a language, even one as well established as Python, might fall out of favor. The fact that JavaScript usage is becoming ubiquitous as a scripting language was one example mitsuhiko used of the challenges Python faces. Sure, the Python team can't stop that from happening, but they can stop pretending Python 3 is going to work just as soon as everyone gets on board with it. Hint: developers will adopt it when there are good reasons to use it, and the argument here is that doing things more correctly (and consequently making life harder) is not a good enough reason when it comes to real-world applications (Jinja2, Werkzeug, Flask, etc.).
My objection to the idea that JS will so easily replace Python is that if you apply the same degree of analysis to JS that is causing you to reject Python, JS fares far worse. At least Python addresses the encoding issue; JS strings are far stupider, more dangerous, and require you to do a lot more of the paperwork if you can't assume an external environment that takes care of it for you (the browser, in this case). The next generation of JS fares somewhat better, if it is implemented according to the spec (and there's quite a gulf betwixt a spec and an implementation), but when will you start being able to use that?
Try implementing even a medium-sized project in both Python and JS and tell me JS is a serious replacement, let alone for the large ones.
Armin mentioned JS as an example of a language that could come and be favorable to developers contra Python.
Like Clojure, or something else. Indeed, as a newcomer to Python I wanted to pick up Python 3 from the start, but went back to Python 2 because of poor library support.
If in the future people still decide to go with what works, they might as well go with a completely different language that just works, not the one that is merely "the correct way".
I doubt JS will ever replace Python and I doubt that is what Armin believes. That doesn't mean it cannot hurt Python significantly in terms of popularity and market share.
If JS gets to improve in the future, so does Python. Again, apply the same standards in both cases, which is my real point. Ranting about "accounting only the positives of one alternative and only the negatives of the other" is a bit of a hobby of mine. You can make anything look good or bad that way, but that appearance has no relationship to truth. You can't make good decisions that way, but people do it surprisingly often.
Also, encoding was one example. I could rant for hours about the ways in which Python is more suitable than JS(-of-today) for medium-or-greater sized projects.
I am certainly not denying that there are issues or saying that his issues don't need to be addressed.
However I'm a big proponent of 'breaking with the past' once in a while, to fix issues that have snuck into the language/library/system, and to clean up the cruft. Yes it will bring some frustrating porting, but the end result will be a cleaner more focused language.
Your original post (and to an extent your reply above) was a bit dismissive, I think. Dismissiveness is not really a unique sentiment here; I see a lot of posts along the lines of "well yes he's complaining, but it's a break with the past and it's for the better, it's just a matter of time, and it will all work out in the end, yada yada"...
My point is that for this to actually be true (as opposed to just being wishful thinking, which there's a lot of right now), people are going to have to not only port their libraries but they will also need to maintain the resulting code. If the library is at all popular already, the resulting code will, for some open-ended period of time (perhaps "forever") need to work on both Python 2 and Python 3. And right now, though producing a port isn't monstrously hard, maintaining the resulting code across the 2/3 straddle is just no fun. The code looks bad, it's harder to read, it's harder to maintain, it's just less fun overall. Basically, maintaining a Python 3 port just takes some of the fun and aesthetic out of maintaining an open source library. It's an imposition to those folks who want their code to be popular, forward-compatible, and beautiful due to the need to straddle.
IMO, I'm not sure that a Python 2.8 helps much here, but a more backwards compatible Python 3.3 would. For example: match py3 bytes behavior with py2 string behavior, add u'' literals back in, maybe readd iteritems/iterkeys/itervalues on dict as aliases so we can use a common API for dicts, add the print keyword back in maybe, and other minor things that are really easy to do and don't have much defensibility other than "it's cleaner to not have to have bw compat here".
Yes, a backward compatibility mode that could re-add those things would be nice (which could be enabled by adding a pragma "from __past__ import old_unicode_literals", for example). It would allow for a more gradual transition.
The pragma wouldn't really work. It'd have to work in all old versions of Python to make any sense, and it doesn't. For unicode literals in particular, it should just be true in Python 3.3+ that u'' = '' out of the box.
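Absent such re-added literals, libraries straddling 2 and 3 typically centralize the differences in a tiny compat module. A minimal sketch of that pattern (the names here are illustrative, not taken from any particular library):

```python
import sys

PY2 = sys.version_info[0] == 2

# Branch once, at import time, instead of at every call site:
if PY2:
    text_type = unicode          # noqa: F821 -- only defined on Python 2
    def iteritems(d):
        return d.iteritems()
else:
    text_type = str
    def iteritems(d):
        return iter(d.items())

# Call sites then use the shim and stay identical on both versions:
assert dict(iteritems({"a": 1})) == {"a": 1}
```

This is exactly the kind of boilerplate the parent comments argue a more backward-compatible 3.3 (u'' literals, dict aliases) would make unnecessary.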
Breaking stuff is worthwhile if it brings other things in exchange. Cleaner is almost never a good enough reason in my experience. New features are what make people willing to upgrade. People will evaluate whether porting is worth the pain along this feature-vs-effort line. Armin is not the only one who wonders whether it really is worth it.
But as more and more new features are being added to 3.x and not to 2.x (and assuming 3.x won't break compatibility with itself), won't there automatically be a point in which the features are worth the pain?
Even if not, the python devs are not trying to define this point for anyone, you can stay with 2.7 the rest of your life if you want to. I don't think a "2.x will no longer be maintained even for bugfixes" date has been established.
It's also worth mentioning that Python ships with a nice "batteries included" type setup, where installing the basic CPython interpreter and the bundled libraries give you a quite nice collection of tools for common computing tasks (e.g. making HTTP requests).
In contrast, JavaScript does not have a "standard" implementation and doesn't ship with any libraries. Sure, there's Node.js and probably others too.
Once you have a serious project, with dependencies and build systems and source control, it's not a very big issue to install a few libs. But for small projects, Python's batteries come in handy. And some of those small projects turn into big projects.
When it comes to Python 2 vs. Python 3, I feel it's a "can't make an omelette without breaking eggs" kind of issue. Unfortunately many people are running business-critical applications with Python 2 and are not willing to put in the effort to migrate their code to Python 3. This "don't fix it if it ain't broken" attitude has slowed down Python 3 adoption.
I didn't get the "Python is doomed" attitude from it. To me, it's more of a rant that maintaining code that works on both 2.x and 3.x is far from optimal, and that we can and should fix this.
I think he's right in that there needs to be less of a gap, possibly with a Python 2.8. Migrating to 3 isn't the problem, it's maintaining the versions until 3 becomes dominant and 2.x support can be dropped. The Python core team is essentially deferring the difficulty of compatibility to library maintainers. And since Python 3 is essentially a new language, why not just use a py2js if/when it emerges? It'd be just as difficult. And with the additional benefit of entering a more mainstream community. So yes, let's lessen the gap.
I also agree with you in that Python 3 just needs time. And once 2.x is no longer supported, let's also remember this lesson: don't make such big leaps in language evolution.
Key line from the article: "Python 3 [...] changed just too much that it broke all our code and not nearly enough that it would warrant upgrading immediately."
To my thinking, Python, Ruby, and Perl make people productive primarily because of the availability of tons of high-quality packages that "just work". The Python Package Index (http://pypi.python.org) lists 18 thousand packages now. Many are very high quality and require essentially no "impedance matching" to use with Python 2.7 except "import package". If there's a genuine issue with a package, you can usually use a several-line monkey-patch and leave the package source completely untouched. Beauty.
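The "several-line monkey-patch" style mentioned above can be sketched like this: wrap one function at runtime and leave the package source untouched. (`json` here is just a convenient stdlib stand-in for some third-party package you want to adjust.)

```python
import json

# Keep a reference to the original, then rebind the name in the module:
_original_dumps = json.dumps

def dumps_sorted(obj, **kwargs):
    # Impose deterministic key order without touching the package source.
    kwargs.setdefault("sort_keys", True)
    return _original_dumps(obj, **kwargs)

json.dumps = dumps_sorted

# Every caller that goes through json.dumps now gets the patched behavior:
assert json.dumps({"b": 1, "a": 2}) == '{"a": 2, "b": 1}'
```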
Put simply: there's no way for a language design to make writing code easier than not writing code. IMO, this is why, despite the warts, these languages are winning. JavaScript doesn't have a standardized module/import system, so its packages are fragmented across a dozen frameworks. But this may change if the world settles on "one framework to rule them all" (or maybe two: jQuery for UI and node.js server side).
But Python 3 breaks many of the available Python 2.x packages, in exchange for improvements that in most cases seem more like tweaks than major design fixes. Things that should be fixed in both branches (e.g., OpenSSL cert validation support) are now relegated to ad hoc patches to Python 2.x, because all the development effort is going into the 3.x series now.
Finally, the biggest improvement to Python IMO hasn't come from the core team at all: it's the absolutely brilliant work being done by the PyPy team. I would love to see "Python 4" merge some of the ideas from the 3.X branch in a fully compatible way with Python 2, and move the standard implementation to PyPY. Among many other benefits, this would allow the Python community to start seriously exploring adding static type-checking facilities to the language, which would make it far more suitable for larger projects. (I'm not saying make Python into Java, but it would be nice to be able to declare types as one can in modern Lisp implementations, and have the compiler both check correctness and optimize using such hints.)
I think the lesson is: if you are breaking backwards compatibility, better to compress all the deprecation cycles you would otherwise need over the years into one release.
Perl 6 is doing that. It's going to be one major incompatible release. But that's like compressing two decades of deprecation cycles into one release.
Yes! This is great. I also haven't switched to Python3 because there aren't any "killer features". 2.7 works fine for me, and it often feels that the reason for switching to python3 is mildly religious.
I've only just started looking at Python, but I wasn't aware that it has true CLOS-style multimethods (or multiple dispatch). I know that there are ways you can add multiple dispatch to Python - but is it really accurate to say that the entire language has a design that is based on multiple dispatch?
Note that I'd be rather pleased to find that multimethods are an integral part of Python - they were one of my favourite features of CLOS and I still miss them.
I think "multiple dispatch" is the wrong word. "Dynamic dispatch" is a much better word for what Python does. It's a ridiculously powerful feature. It lets you wrap and replace functions at runtime, which gives you incredible monkey-patching power, which is important to people programming in the real world.
My favorite example: Suppose you have a naive O(2^n) recursive fib() function. I can write "fib = memoize(fib)" and suddenly your recursive calls go to my new wrapped function, which references your original implementation inside its closure. This is only possible because your recursive implementation dynamically dispatches by name. I have turned your O(2^n) implementation into an O(n) implementation.
In an unsafe systems language like C, I would only be able to accomplish the same thing with some severe memory hacks, and in a language like Java or C# I don't know how I would be able to do the same thing without some serious involvement in runtime reflection tools, and maybe even some decompilation.
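A minimal sketch of the wrapping trick described above, using Fibonacci as the classic O(2^n) recursion:

```python
from functools import wraps

def memoize(func):
    """Cache results by argument. Recursive calls hit the cache because
    they dispatch through the module-level name at call time."""
    cache = {}
    @wraps(func)
    def wrapper(n):
        if n not in cache:
            cache[n] = func(n)
        return cache[n]
    return wrapper

def fib(n):
    # Naive O(2^n) recursion; each inner call re-resolves the name `fib`.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# Rebinding the name changes what the recursive calls resolve to:
fib = memoize(fib)
assert fib(60) == 1548008755920  # O(n) now; hopeless without the cache
```

The key point is the last line of `fib`: because `fib(n - 1)` is looked up by name at call time, the rebinding retroactively redirects the recursion through the cache.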
Sounds like the author of the article has no idea what a multimethod actually is then. :-)
Dynamic binding (in various forms) is arguably a pretty common language feature and generally nothing like as powerful as full multimethod implementations (let alone what is possible in CLOS).
> Sounds like the author of the article has no idea what a multimethod actually is then. :-)
The author (me) knows what multimethods are. I did however not find a better term to refer to the `iter(x)` -> `x.__iter__()` / `x.__getitem__()` concept.
I'm genuinely interested (not being snarky, honest) - could you explain why you think that expression has anything to do with multimethods? Like I said previously, I'm new to Python and I'm curious what that expression actually means.
> could you explain why you think that expression has anything to do with multimethods?
You can override the behavior of a function based on the types involved. For instance ``coerce(a, b)`` calls ``a.__coerce__(b)``. By overriding ``__coerce__`` for one type you can customize this.
It seems to me there is a big difference between having a language that is truly based on multimethods and one where it is possible to implement multimethods fairly easily - from what I can see Python looks more like the latter.
That's called "operator overloading". In this case the operator happens to look like a function, but it's not really any different from a + b -> a.__add__(b).
That's not just simple operator overloading. What mitsuhiko refers to is that iter() can iterate over an object not just when you implement __iter__() but also when you implement __getitem__() and __len__(), bool() has similar semantics as do several other functions and operators.
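Those fallback semantics can be demonstrated in a few lines (a sketch, not code from the thread):

```python
class Squares:
    """No __iter__ defined -- iter() falls back to the old sequence
    protocol, calling __getitem__ with 0, 1, 2, ... until IndexError."""
    def __getitem__(self, i):
        if i >= 5:
            raise IndexError(i)
        return i * i

assert list(iter(Squares())) == [0, 1, 4, 9, 16]

class Box:
    """No __bool__ defined -- bool() falls back to __len__."""
    def __init__(self, items):
        self.items = items
    def __len__(self):
        return len(self.items)

assert not Box([])        # empty -> falsy via __len__
assert bool(Box([1, 2]))  # non-empty -> truthy via __len__
```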
I've always heard them referred to as "Generic Functions", though the Wikipedia article is annoyingly opaque.
The important quality (in my practical understanding) is that a generic function's specialization is decoupled from the object system -- delegating to `x.__iter__()` doesn't require `x` inheriting a method from a parent; the process `iter(x)` uses to delegate might not even require any type information about `x` at all.
A key difference is that Haskell functions can only have one type signature, so you need to explicitly make "Bunny" and "Lion" data constructors for "Species". Multimethods operate on multiple types. You can fake multimethods in Haskell with multi-parameter type classes (with the language extensions to allow undecidable and overlapping instances).
Does Clojure do method combinations? I was very fond of :before, :around and :after methods of the standard method combination and loved the fact that, if you really wanted to, you could have your own method combinations.
I know that "Aspect Oriented" tools support some of this stuff, but the last time I looked at those (admittedly a while ago) they seemed, at least for Java, to be pretty hacky. Not that I use Java anymore...
> in a language like Java or C# I don't know how I would
> be able to do the same thing without some serious
> involvement in runtime reflection tools, and maybe
> even some decompilation
You code to an interface and use an IoC container or dependency injection framework to provide an instance of an object containing the required functionality.
To some extent, Python's methods are a lot closer to multimethods than methods in other single-dispatch languages (largely because they have an explicit self).
I must add, risking downvotes in this thread, that it is fun to see Python people argue the practicality of Python 2 over the idealism/correctness of Python 3.
It is very similar to when Perl people argue that CPAN (better OO with Moose, better infrastructure for modules, etc) outweigh the emphasis on simple syntax in Python 2... :-)
"In fact if you go back in time and look at some of the first versions of Python it's a very, very ugly language and it does not come as a surprise that not too many people took notice of Python in the early days."
This is why I've always found it difficult to love Python. It just didn't seem to me that Guido was familiar enough with previous language designs or had a sufficiently refined sense of language esthetics to be a world-class PL designer. Over time the community has built Python into an extremely practical and useful tool, but I don't think I'll ever derive the same sense of pleasure from writing Python code that I do from languages with a stronger unifying concept like Ruby or Lisp or even OCaml.
I don't need to go back in time as I remember the 1990s. Even in 1995, Python was one of the top choices for high-level "scripting" or embedded languages. Perl was the hot language, and the other main choice was Tcl.
I used all three on non-trivial projects, and liked Python the best. It was better at handling complex data structures than the other two. Tcl was an easier language for my target audience (scientists/non-professional programmers) and it was easier to embed and extend Tcl, but Python's module and object system made up for it.
By the late 1990s, others in my field were already shipping Python-based applications, using Python bindings to Motif.
IMO, people didn't take notice of Python because of the "strange indentation", because high-level languages are seen as being too slow for real work, and because people coming from a statically compiled language often want the assurance that compile-time type checking gives.
I started using Python in 1997 and found it profoundly mind-expanding since my previous experience with languages was limited to C/C++ and Pascal. But over the years I learned other languages and came to see missed opportunities and outright mistakes in the original design that I don't think a more experienced student of programming language design would have made. Many of the original mistakes have now been corrected but the churn in the language this has required means that it no longer "fits in my head" the way Ruby or C do.
Wasn't it the same with Ruby? It looked much more like Perl[1] in the early days, but with time the language evolved and programmers developed best practices.
The original concessions Matz made to Perl have mostly fallen out of fashion now but the core ideas of Ruby, both its deep object orientation and ubiquitous use of blocks, have a simple elegance that have stood the test of time.
Right, but Ruby has its own baggage of design problems, like the lack of namespaces, which are a core idea in Python. I don't argue that one is better than the other, I just see a lot of similarities.
Sure. Ruby has its warts too. Conceptually I find it cleaner than Python overall but I don't think it's possible to design a useful language without making some mistakes along the way. There seems to be two kinds of programming languages: the beautiful ones nobody uses and the messy ones people use to get things done.
In the 9 years I've been using Ruby, I've not once missed having namespaces. A lack of namespaces is a valid complaint against C and Objective-C, not against Ruby—since you can use either nested modules or classes to achieve a similar effect, and often with better design results.
I thought Python 3 was already D.O.A. I haven't seen anybody using it nor have I seen any compelling reason to start using it myself, or any reason at all to even keep it on my radar.
When v3 was announced, IIRC even the Python folks themselves suggested that people just continue with v2.x until v3 became mainstream, which it never did. In fact, I was surprised to see negative criticism about Python 3. It seems to me that nobody has been using Python 3, and therefore nobody has been complaining about it either.
If it was dead on arrival, it has risen from the grave quite nicely. I previously linked to two sources showing it has pretty good signs of life: PyPI packages and download numbers (posted here: http://news.ycombinator.com/item?id=3323908).
When it was announced, people did suggest everyone continue on with 2.x, but not just until everyone waits for it to be mainstream (that clearly wouldn't work). No one expected users to drop everything and port right away. As your dependencies come up to speed with 3, try your project with them. Create an experimental branch. Do something to keep up. You don't need to halt your own progress for it, but you shouldn't sit on your hands.
I've been using Python 3 at work for around 2 years now, writing test frameworks and tools in a C++ environment (working on a historical tick database). While a lot of the web people are stuck on 2, and that has been changing for a while and it's only getting better there, a lot of other areas have been available to and have been using Python 3.
There's a 5-year roadmap, and we're about halfway through it. Many of the most popular libs are already available in Py3. For example Django just released a version supporting it. https://news.ycombinator.com/item?id=3305021
They didn't cut a release with it, yet. The work has been done and it is or will be in a branch, but it still needs to be reviewed and "accepted" in order to move on.
For what it's worth, my startup (Mobile Web Up) uses Python 3 almost exclusively. Our core product is basically a very specialized web server built on Python3+WSGI.
There are some rough edges still. Our (marketing) website runs on Django, using Python 2.7. There's been enough progress on Py3k support for Django recently that I'm hopeful we can migrate that by mid-2012. And I'd love to have solid Py3k support for a couple more libraries, like boto for EC2/S3/AWS.
All in all, for the particular things we need, Python 3 is practical now. We have paying customers whose services run on software written in Python 3.
"In fact if you go back in time and look at some of the first versions of Python it's a very, very ugly language and it does not come as a surprise that not too many people took notice of Python in the early days."
I don't know... Python in the early 90s looked pretty much the same as it does now. Unless you mean that some features (or lack of them) required inelegant workarounds?
I think most machines were just not powerful enough yet in the 90s to make Python a viable solution for many problems. As computers got faster, that became less of an issue. Also, there was already a scripting language with a large following back then (Perl, naturally). Whether it was "ugly" probably had little to do with it. (Quite the contrary in fact, I recall that Python was often perceived as clean, elegant, concise and very readable.)
I'm actually pretty new to Python, using it daily for the past few years, but I do have to say I have a real uneasy feeling about Py3.
Adoption seems very slow from the various libraries, and without those people just won't move over. And if that's the case, then the language will stagnate, along with the myriad of great libraries that make it so excellent.
Python 2.x suits me just fine right now, it is a pragmatic language that lets me get things done quickly and predictably. But I would be lying if I didn't admit to gazing over Ruby's way now and then and thinking that the grass sure looks green over there.
> Adoption seems very slow from the various libraries, and without those people just won't move over.
It started slow like we expected, but I think its acceleration lately has outpaced what a lot of people thought would happen. The number of Python 3 packages on PyPI is steadily rising [0], the number of Python 3 installers downloaded from python.org is rising with each version [1], and the number of projects announcing Python 3 support in places like reddit.com/r/Python is rising every day.
[1] http://i.imgur.com/SLFDL.png - monthly download numbers for Windows installers for all downloaded versions over the last year (it's a rough draft, I just threw the download numbers in Excel quickly one day).
Yes, porting to Python 3 is more cumbersome than it should be. Yes, some of the decisions (like crippling the byte types, or implicitly changing behavior based on environment variables) turn out to be bad decisions, but it still sounds like there's already some work towards fixing these. As more and more people get to work with Python 3, that seems normal to me. As we know, "There are only two kinds of programming languages: those people always bitch about and those nobody uses." It seems to me that Python 3 has started to get its healthy dose of bashing, and that's a good thing.
As for my anecdotal experience with 2to3: I've recently been working on porting rpclib to Python 3. After skimming the diffs it produced for a simple `2to3 src/rpclib` call, I chose to ignore most of the transformations it applies.
Replacing commas in except statements with the "as" keyword, or adding parentheses where missing, works just fine. But wrapping every call to dict.keys() inside a list() call? That's bold.
Once 2to3 is tamed[1], I think the code it generates can be maintained. Certainly beats having to get the current exception from sys.exc_info.
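For reference, the keys() transformation complained about above, sketched: 2to3 conservatively rewrites `d.keys()` as `list(d.keys())` because Python 3 returns a view object rather than a list, but for plain iteration the wrapper is noise.

```python
d = {"a": 1, "b": 2}

# Iterating a view works fine -- no list() needed:
for k in d.keys():
    assert k in d

# The wrapper only matters where 2.x code relied on list behavior,
# e.g. indexing, or deleting keys while iterating:
keys = list(d.keys())
del d[keys[0]]       # safe: we iterate the snapshot, not the live view
assert len(d) == 1
```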
Are you upvoting it because you agree with the rant, because you think it's time for a new debate over Python 2.8, because you think Python is losing ground and turning into the future Pascal, because you hate 2to3, or just because you think it is nice to have some news about Python?
It would be very interesting if some of you elaborate a little bit on what parts of this article you agree with.
Not everyone up-votes because they agree with something. It's perfectly reasonable to up-vote something with the hopes that it hits the front page because you feel it will spawn an interesting discussion that you want to read and/or participate in.
Exactly. That is why, when I saw it with more than 20 votes but not a single comment, I dared to ask, honestly, why people were upvoting; I wanted to know the reasons. As you say, it can be for many different reasons, and I was interested in what the community thought about this.
Sadly, that meant I could be misunderstood, and in fact I was downvoted just for asking, in the first comment on the post, to know the REASONS others were upvoting.
I upvoted, because I thought about this very issue yesterday -- my primary project is a very large Python2 project that started after the declaration of 2.7.x as the end of the line. All of our dependencies were Python2, and many of them were too exotic or niche to have Python3 competitors.
I feel that my position is actually a majority in the Python community -- language users who are stuck with the branch that is considered unfashionable by the core developers and adding more resistance against migration to Python3.
The Python core developers are not against you. Your case is perfectly valid and understood. The core developers do, however, try to prepare the ground that will make it possible even for projects like yours to eventually make the move. It is hoped that in some time most popular libraries will have Python 3 versions, and many Linux distributions will come with Python 3 pre-installed (Ubuntu is making good steps in this direction). Eventually, a time will come when it will make sense for you to make the switch. We just hope it won't be too long.
All along, Python 2.7 is going to be maintained and bugs are going to be fixed. It is perfectly understood that the 2.x branch is currently by far the more used and deployed, and there's no plans to abandon it in terms of support. It just won't get new features.
> Now this all would not be a problem if the bytestring type would still exist on Python 3, but it does not. It was replaced by the byte type which does not behave like a string.
I was under the impression that bytes is just an array of bytes and provides pretty much what `str` provided. What big thing is missing from that interface?
I agree with the `estr` idea, and I agree that Python 3 "punishes you" when you want to deal with byte strings. It really gets in the way of handling decoding and encoding of email.
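In practice the differences from Python 2's `str` are small but pervasive; a quick sketch of the ones that bite most when straddling versions (Python 3):

```python
b = b"spam"

# Indexing yields an int, not a length-1 bytes -- the change that breaks
# the most string-handling code ported from Python 2:
assert b[0] == 115
assert b[0:1] == b"s"    # slicing, by contrast, preserves the type

# Most str-like methods do exist on bytes:
assert b.upper() == b"SPAM"

# But at the time of this thread, %-style formatting did not work on bytes;
# bytes.__mod__ only returned to the language in Python 3.5.
```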
The context was XHTML. It was more correct, but it was the wrong (now dead) path. XHTML created only problems, but was "more correct".
To be precise, XHTML promised that pages would render faster, which turned out to be the browsers' fault, not the markup's. PyPy solves the "render fast" problem for Python 2. What problem does Python 3 solve? Unicode? No...
> To be precise, XHTML promised that pages would render faster, which turned out to be the browsers' fault, not the markup's.
I'm not sure XHTML ever promised faster rendering; I'm sure many people (including me) _assumed_ it would render faster. The truth is that XHTML rendered considerably more slowly! These days XML parsers have improved, but for a long time XHTML made partial rendering while loading (and other things) harder, which meant loading an XHTML page took much longer than the plain-old-HTML version.
I doubt rendering XHTML will ever be faster, at best it is/will-be not measurably slower than HTML.
XHTML was poised to offer much more than "correctness". Reliable validation, custom DTDs, extensions, better interoperability. It just wasn't meant for the general web, where you have a massive number of non-technical authors. I don't think this comparison has any place in this discussion.
Really debatable: XHTML only added "extensions" in the "XML dialects, RDF!!!" sense; in the practical sense, HTML is extended all the time.
> better interoperability
At the cost of interop with the real world. And that better interop was restricted to markup (an issue being much better solved through HTML5 as browsers are moving towards spec-compliant HTML5 parsers), it left the "new" interop issues (CSS, JS, DOM, ...) in place.
And XHTML offered this better interop by... mandating source correctness...
> It just wasn't meant for the general web, where you have a massive number of non-technical authors.
Which, interestingly, is also an issue with Python: it's used a lot in scientific fields and as an extension language (though less so than it used to be), which I'd guess would qualify as "non-technical authors" as they're not computer technicians.
"""XHTML was poised to offer much more than "correctedness". Reliable validation, custom DTDs, extensions, better interoperability. It just wasn't meant for the general web, where you have a massive number of non-technical authors."""
You just retold the whole of his Python 3 argument in terms of XHTML. P3 was also poised to offer more than "correctedness", and also "wasn't meant for the general real world where you have a massive number of different systems".
"""I don't think this comparison has any place in this discussion."""
Actually it's the perfect analogy.
XHTML -> add correctness, some new features, idealistic, unsuitable for the real world, didn't catch on.
Python 3 -> add correctness, some new features, idealistic, unsuitable for the real world, didn't catch on.
"I don't think this comparison has any place in this discussion"
I agree. Comparing markup languages to programming languages is even worse than comparing JS to assembly. But since we are on the topic... I am not quite sure I understand what "XHTML is unsuitable for the real world" means. XHTML is a contract between a content author and a browser. Why do we call the browser's failure to implement the contract "unsuitable for the real world"?
Python 3 has one really significant problem for me -- many of my dependencies don't support Python 3 well or at all. That keeps me and my own modules locked in Python 2. Python used to be the language that bragged about coming with "batteries included", but it is slowly becoming the language that requires new batteries.
I'm also not sure your battery analogy works. The batteries that came with Python 2 (the standard library) are still charged up in Python 3. Third-party projects not porting certainly does affect you and others, but it's not the same thing.
For me, the strongest point is about which version of Python everyone uses at work. When you have many commercial users, it's really difficult to get everyone to move. Python 3 is not currently a target for my code at work because just writing the features is a full-time job. The difficulties in porting would not currently be worth the effort, and I would have an extremely hard time justifying the port's business value.
Lazy evaluation, however, has the benefit that, at least in theory, if conversion turns out to be unnecessary, one can skip it entirely and never pay the price of conversion.
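A minimal sketch of that idea (the `LazyText` name and shape are my own invention, not from any library): wrap raw bytes and decode only on first access, so the conversion cost is never paid if the text is never inspected.

```python
class LazyText:
    """Holds raw bytes; decodes lazily and caches the result."""
    def __init__(self, raw: bytes, encoding: str = "utf-8"):
        self._raw = raw
        self._encoding = encoding
        self._text = None  # nothing decoded yet

    @property
    def text(self) -> str:
        if self._text is None:          # decode at most once
            self._text = self._raw.decode(self._encoding)
        return self._text

payload = LazyText(b"caf\xc3\xa9")
# No decoding has happened yet; accessing payload.text triggers it.
```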
One could have an abstract 'String' type with concrete subclasses (ANSIString, UTF8String, UTF16String, EBCDICString, etc)
Assuming that any to-be-handled character strings can be round-tripped through UTF-8 (and that probably is a workable assumption), any function working with strings could initially be implemented as:
- convert input strings to some encoding that is known to be able to encode all strings (UTF8 or UTF16 are obvious candidates)
- do its work on the converted strings
- return strings in any format it finds most suitable
With profiling, one would soon discover that certain operations (for example, computing the length of a string) can be sped up by working on the native formats. One could then provide specific implementations for the functions with the largest memory/time overhead.
The end result _could_ be that one can write, say, a grep that can work with EBCDIC, UTF8 or ISO8859-1, without ever converting strings internally. For systems working with lots of text, that could decrease memory usage significantly.
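As a rough sketch of that design (the class names are illustrative, not taken from any real system): an abstract base class supplies generic operations by round-tripping through a common encoding, and subclasses override only the operations where their native format allows a fast path.

```python
from abc import ABC, abstractmethod

class String(ABC):
    """Abstract string; subclasses keep their native encoding."""
    @abstractmethod
    def to_utf8(self) -> bytes: ...

    def __len__(self) -> int:
        # Generic fallback: convert to UTF-8 and count code points.
        return len(self.to_utf8().decode("utf-8"))

class Utf8String(String):
    def __init__(self, data: bytes):
        self.data = data
    def to_utf8(self) -> bytes:
        return self.data            # already in the common encoding

class Latin1String(String):
    def __init__(self, data: bytes):
        self.data = data
    def to_utf8(self) -> bytes:
        return self.data.decode("latin-1").encode("utf-8")
    def __len__(self) -> int:
        # Fast path: Latin-1 is exactly one byte per character,
        # so the length needs no conversion at all.
        return len(self.data)
```

Operations spanning two encodings (concatenation, comparison) would similarly fall back to a common encoding unless a specialized pairwise implementation exists, which is exactly where the obscure-bug risk mentioned among the disadvantages comes in.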
Among the disadvantages of such an approach are:
- supporting multiple encodings efficiently will take significant time that, perhaps, is better spent elsewhere.
- the risk of obscure bugs increases ('string concatenation does not quite work if string a is EBCDIC, and string b is ISO8859-7, and a ends with rare character #x; somehow, the first character of b loses its diacritics in the result')
- a program/library that has that support will be larger. If a program works with multiple encodings internally, its working set will be larger.
- depending on the environment, the work (CPU time and/or programmer time) needed to call the 'correct for the character encoding' variant of a function can be too large (in particular, for functions that take multiple strings, it may be hard to choose the 'best' encoding to work with; if one takes function chains into account, the problem gets harder)
- it would not make text handling any easier, as programmers would, forever, have to keep specifying the encodings for the texts they read from, and write to, files and the network.
[That last one probably is not that significant, as I doubt we will get at the ideal world where all text is Unicode soon (and even there, one still has to choose between UTF8 and UTF16, at the least)]
I am not aware of any system that has attempted to take this approach, but would like to be educated on them.
Funny how people claim Py3k was DOA. I don't know anyone who uses Perl 6 for anything serious. I'd bet that there are 100 times as many Python 3 programmers as there are Perl 6 programmers.
I think Perl 6 still has a chance, though. No reason to dismiss it just because it's developing slowly.
The compatibility mode doesn't do Perl 6 much good, if you ask me. There are already too many ways to write Perl, if you ask a Python programmer. It's like writing C in C++.
Python 3 has been available and usable for quite some time. Not so Perl 6.
However, CPAN compatibility would be hugely important to a usable Perl 6, by simple fact that no other language has the breadth and quality and availability of libraries to rival the CPAN.
Perl 6 is a major incompatible release. It's like compressing years of deprecation cycles into one major release. Larry Wall realized long ago that only a few of the problems with Perl 5 could be fixed with incremental releases.
I think there is a CPAN compatibility mode, and Perl 5 programs are expected to run on Perl 6 compilers. Also, Perl 6 is a total redesign, but one that preserves the 'Perl spirit' and its original design principles.
OTOH, Python is taking the path of incrementally correcting its problems at the expense of breaking compatibility as and when needed. The problem is that every time you break backwards compatibility, you force an upgrade timeline on users, and during that time you are allowing rival languages and communities to flourish.
If people are using Python because it just 'works', then in its absence they will use something else too if it 'works'.
Personally, if I knew that a particular tool was going to keep breaking my code base every now and then, I would avoid it at all costs.
Perl 5 (or, more accurately, CPAN) compatibility mode is only really interesting in two cases: when converting a project piecemeal from Perl 5 to Perl 6, and in the time between when Perl 6 gets generally usable and when it gets sufficient library support with native Perl 6 code.
All of this depends on Perl 6 being generally usable and having a working Perl 5 compatibility mode.
In a comment here on HN, the author clarified: what he means is that a whole bunch of builtin functions operate by calling methods on the objects they are passed.
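That dispatch mechanism is Python's data model: builtins like `len()` and `iter()` delegate to the corresponding dunder methods of whatever object they are handed. A toy example (the `Playlist` class is made up for illustration):

```python
class Playlist:
    def __init__(self, tracks):
        self.tracks = tracks
    def __len__(self):       # the len() builtin delegates here
        return len(self.tracks)
    def __iter__(self):      # iter() and for-loops delegate here
        return iter(self.tracks)

p = Playlist(["intro", "verse", "chorus"])
print(len(p))        # 3, via Playlist.__len__
print(list(p))       # ['intro', 'verse', 'chorus'], via Playlist.__iter__
```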