
Python 2 vs 3 is a mess too.



?! Maybe it was. A long transition... but it has mostly happened already.

Nowadays I'm mildly annoyed if a project is on one of those "ancient" Python 3 versions (≤3.6) and I can't use the nice features from 3.7+...
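
For example (picking just one of those 3.7 additions as an illustration), dataclasses landed in 3.7:

    from dataclasses import dataclass

    @dataclass
    class Point:
        x: float
        y: float

    # __init__, __repr__, and __eq__ are generated automatically
    p = Point(1.0, 2.0)
    print(p)  # Point(x=1.0, y=2.0)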


I keep encountering backwards incompatibility between various flavors of Py3. It's a mess and if you have a large codebase, staying on top of the latest version is a nightmare. Or if you ship a module and want to support the diverse ecosystem, it's a nightmare.
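
One concrete example of that Py3-to-Py3 breakage (assuming code that imports the old collections aliases, which were removed in 3.10):

    try:
        # Worked from Python 3.0 through 3.9; the alias was removed in 3.10.
        from collections import Mapping
    except ImportError:
        # The portable spelling, available since Python 3.3:
        from collections.abc import Mapping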


> Nowadays I'm mildly annoyed if a project is on one of those "ancient" Python 3 versions (≤3.6) and I can't use the nice features from 3.7+...

No, it is a mess and it was a failure.

It took more than 10 years to happen, and even so... many tools will die with Python 2 and never be migrated.

If your codebase is not usable anymore because your language evolved, that's a failure.

I can still compile C code from the '80s today if I want to. Same for Fortran, same for C++.


> If your codebase is not usable anymore because your language evolved, that's a failure.

> I can still compile C code from the '80s today if I want to. Same for Fortran, same for C++.

I couldn't disagree more. It's a cute trick, but not a useful one, that code from the '80s is still compilable with a compiler from 40 years later.

There are so many better ways to handle legacy code like that, such as just freezing the toolchain. Which you've almost certainly long since done if you're actually working with code that old, as the platform it runs on is also frozen.

You can't actually run code from the '80s out of the box today (16-bit legacy support has long since stopped being a thing), so why does it matter if it still compiles? And, critically, why does it matter if it compiles with the latest toolchain?

There needs to be a supported sliding window of a generously long time, but it doesn't need to be ~infinite support. That's entirely unreasonable & unnecessary. We don't expect that of any OS, library, etc... so why should we expect it from a compiler & language?


> I couldn't disagree more. It's a cute trick, but not a useful one, that code from the '80s is still compilable with a compiler from 40 years later.

Still, a good number of the packages you are using right now on your Linux distribution have a core that is more than 15-20 years old, and they still compile with only minor modifications under the latest GCC.

Compatibility does matter. It should, however, not prevent evolution. If handled properly, nothing blocks us from having a proper versioning and evolution path for the toolchains we use.

And in many respects, C++ has so far been very successful at doing exactly that.

> You can't actually run code from the '80s out of the box today

Good C is close to immortal. Doom is from '93, and you can still compile and run it with the latest compiler in 2020, with minor modifications.


Of course compatibility matters, but as you noted (twice!), it's not perfect and doesn't need to be. Minor modifications are necessary to keep up. That's all that's being proposed by the paper.

So now we're just debating the size of ongoing breakages instead of just their existence.

And no, good C is not close to immortal. Good luck compiling K&R C with modern compilers, for example.


> You can't actually run code from the '80s out of the box today (16-bit legacy support has long since stopped being a thing)

If you wrote your 1980s C code:

1. Using the proper C standard.

2. Using the standard library (and perhaps even common Unix libraries).

3. Being careful not to make assumptions about sizes (which, by the language standard, you shouldn't make anyway).

Then your code will compile and work just fine. If you used Makefiles to build it, which you very well may have, and you were on a standard (= Unix) system, there's not a bad chance your build system might work too. :-)


> Using the proper C standard.

You mean the standard that didn't exist until 1989?

> Using the standard library (and perhaps even common Unix libraries)

That's also very late 80s (realistically early 90s) stuff.

Which, yes, was still a very long time ago for C to have made all these breaking changes, but it was also a good ~20 years after its release.


> You mean the standard that didn't exist until 1989?

Fair point. But K&R C with System V or BSD libraries will probably-kinda-sorta work.


> If your codebase is not usable anymore because your language evolved, that's a failure.

Only if it claimed to be eternally BW compatible. Some languages never made that claim.

To me a language is a dependency like any API, just with a much bigger surface touching my code. I'd say it's the hardest API to replace in any code. "BW compatibility" is just as much a feature as "breaking BW compatibility"; I just like languages (or any API, really) to be upfront about their policy toward BW compatibility.

And I've observed a lot of langs change their attitude towards it over time. Especially considering pre-1.0 times, I find a lot of langs happily break BW compatibility in their early days.


Why wasn't more effort put into 2to3? That is the real failure.


Because it's completely utopian to think that a conversion tool, even a complex and well-made one, could fix things like Unicode handling.

The Unicode changes between 2 and 3 cause behaviour changes in your input/output and any serialisation.

Even if your code could have been transpiled properly, its behaviour would still have changed, still introducing bugs.
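
A minimal sketch of that kind of behaviour change (assuming a hypothetical data.txt containing non-ASCII bytes):

    # The same line of code behaves differently across the 2 -> 3 boundary.
    with open("data.txt") as f:
        content = f.read()

    # Python 2: `content` is a byte string (str); len() counts bytes.
    # Python 3: `content` is text (str), decoded with a locale-dependent
    # default encoding; len() counts code points, and the read itself can
    # raise UnicodeDecodeError where Python 2 silently succeeded.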


The point of a conversion tool isn't to fix anything. The point is preserving behavior, even if that behavior is buggy.


GP's point is that strings are handled differently between versions. Essentially, Py3 introduced two types (text, bytes) where only one existed before (str). You can't preserve the behavior of the old type, which magically coerced between them, if the new APIs expect different types.
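
A minimal illustration of that coercion gap:

    # Python 2's single-type world: `str` held bytes and coerced
    # implicitly to/from unicode via ASCII:
    #   u'abc' + 'def'   ->  u'abcdef'
    #   u'abc' + '\xe9'  ->  UnicodeDecodeError, but only at runtime

    # Python 3: text (str) and bytes are separate types, and mixing
    # them fails loudly instead of coercing:
    try:
        'abc' + b'def'
    except TypeError as e:
        print(e)  # can only concatenate str (not "bytes") to str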



