Replacing Python (roscidus.com)
173 points by antics on Sept 29, 2013 | 139 comments



I think the important insight from these articles is not which language the author ends up using (OCaml[1]), but rather which languages he's managed to rule out by now. In particular, I think it's good advice to avoid both ATS (however much I like dependent types) and Go, both for completely different reasons. You probably wouldn't want to use Rust in the short term either.

[1]: http://roscidus.com/blog/blog/2013/09/28/ocaml-objects/

ATS makes everything more difficult than it's worth except for some very specific use cases. Unlike some of the other languages like Haskell and OCaml, I don't think this is just because it's different; rather, it's because ATS is simply so much more demanding. It gives you extremely solid static guarantees and high performance, but it's a tool that really sacrifices expressiveness and programmability to get there. Haskell and OCaml may be difficult to learn, but ATS is actually difficult to use--a very different concept.

Go, on the other hand, is neither difficult to learn nor difficult to use. But, more generally, Go has very little to commend itself and--compared to the other options--quite a few shortcomings. If you're willing to put in a little bit of effort to learn something new, there are a whole bunch of options which are simply better. And you should be willing to put in the effort: your programming language is your single most important tool; it affects not only how you write your code and how you maintain it but even how you think. So it seems extremely shortsighted to choose a language because it's easy to learn and similar to what you already know!

Rust is awesome, but it's simply not ready yet. This is widely acknowledged by everyone in the project, and is part of their policy of open development. Once it is ready, it will be a very good choice for certain domains.


I think you may be overestimating the importance of the programming language over the whole programming environment. I think it is the programming environment that is the most important tool – not just the language. There are often many requirements that might lead you to choose a possibly inferior language because it lives in an altogether superior environment for your needs.

I have been writing large, server side software for many years. These are long-running applications that require superb performance as well as first-class monitoring tools. So, for me, one requirement for a language is that it runs on the JVM. The JVM gives you superb performance as well as unparalleled monitoring tools (made even better by the inclusion of the flight-recorder and Java Mission Control in the latest version of the JVM); you also get a vast ecosystem with a huge selection of very high quality libraries (and some other important features like dynamic linking) – all that before you even choose a particular language. You can also buy commercial support for every single component you're using if you need it.

On the other hand, if you're writing a command-line tool that needs a very fast startup time, or if your RAM is very constrained, then the JVM might be a bad choice for you, which would rule out the JVM languages no matter how good they are.

My point is that there are more important issues to consider than the mere merits of the language itself.

Also, you're putting a lot of emphasis on expressivity, while downplaying the importance of a short learning curve. I think the two are of similar importance. It's been my experience that programs written in some of the more expressive languages are actually harder to maintain than code written in the easy-to-learn ones. The reason seems to be that programmers use the expressivity of the former (which seems to be of a particular nature or natures: functional constructs, higher-order types, meta-programming) to model their own thought process, which may be hard to replicate for someone else maintaining the code. The easier-to-learn languages usually work at a lower level of abstraction, which can sometimes be easier for others to follow because it's a well-understood common denominator.

Now, I'm not saying that worse is better, or that more "primitive" languages are better, or even that one should pick them over more expressive ones. I'm just saying that, especially when working in a large team or writing code that would need to be maintained for a long time, there are other issues to consider.


Programming will always be a matter of modeling your own thought process. That's what domain knowledge is: a model of the problem domain so precise that a machine can follow it.

The important part is how easy that model is to communicate to the other programmers on your team, because that's what source code is. And that's going to depend upon their backgrounds. Hire a bunch of hackers that are all familiar with idiomatic Python, and Python is a great tool for communicating. Hire folks who like things spelled out in more detail, and Java is a great tool for communicating. Hire people who all think at the level of the machine and C is a great tool for communicating.

I do think that one of the major benefits of choosing a language is that you pick the people who will be willing to work with you. If you pick Haskell you will end up with a lot of folks with mathematical backgrounds and an interest in programming languages for programming languages' sake, which may or may not be a good thing depending upon problem domain. If you pick Python you will end up with a lot of folks that have command of a large toolbox of scripting functions, although that toolbox may not perform great when adapted outside its original domain. If you pick Java you will end up with a lot of folks that don't want to think too hard about their language.

Of possible interest: http://www.nhplace.com/kent/PS/Lambda.html


>The reason seems to be that programmers use the expressivity of the former (which seems to be of a particular nature or natures: functional constructs, higher-order types, meta-programming) to model their own thought process, which may be hard to replicate for someone else maintaining the code.

I agree with that. Working on code in a large and ever-changing team is very difficult, and it seems to benefit from less abstraction, even if that leads to more verbose code. That's the whole secret of Java. I've been implementing a small part of my current project in Go and it has similar benefits.

That said, when working alone or with one or two others, I feel kind of dumb repeating some things over and over at the same level of abstraction. There are lots of things like this little piece of C++ code that I tried to translate into reasonably efficient Go:

  template<typename V, typename R>
  R defaulted(const V& val, const R& default_val) {
    return val == V() ? default_val : val;
  }
I came away scratching my head a little, probably because I'm a Go newbie.


As an aside, Go is also "a Java" in the same sense that Clojure is a lisp (similar syntax and concepts, same level of abstraction, same philosophy). In fact, Google seems to be turning out many of these "Javas" lately (Android, Go, Dart).

While the appeal of Java and its variants at most large organizations is considered by some a sign of conservatism, Google is anything but conservative when it comes to picking the best tools. Google's fondness for Java is evidence of the merits of the Java philosophy when it comes to maintaining large codebases with large teams.

Creating a language that follows this philosophy while still boosting productivity (and maybe providing other benefits as well) is an interesting challenge. I think Go falls short. Kotlin looks interesting.


I always thought the Java love at Google came from needing speed more than a startup -- they automatically will get factors of ten more users directly for most everything they do. (Edit: Point is, "Java-like" is Google's optimum for other reasons than being conservative.)


Sure, performance is a big requirement for them, but there are other languages with good performance and better expressivity (like Haskell and Scala), yet they are not used at Google, and Google's new languages did not adopt their philosophies.


Google uses Java because they were able to pick up a lot of really skilled Java developers during the 2001-2004 recession, and those devs built many of the products that were introduced in 2004-2007. Once a product's been built and adopted by the marketplace it's very difficult to change the implementation language.

Most of the devs who were hired at Google from 1999 - 2002 still prefer C++, and products built in that era (Search and much of the infrastructure) are still in C++. In general rewrites from C++ -> Java have not gone well; I know at least one such frontend that was rewritten back in C++ a year later.


I'm not sure I get your point. Are you saying that Google's new programming languages adopt the Java philosophy because of developers they hired over ten years ago? Also, I'm not sure I understood what you were trying to say about C++ at Google.


I think his point is that the choice of language at Google isn't exactly a managerial or strategic decision. They just happen to have a lot of Java devs (probably in senior positions now, since they were hired a long time ago). Even if those devs have since left, when most of your stuff is written in a language, it's really difficult for newcomers to shift to some other language.

The part about C++ was to describe a similar scene. When they had mostly C++ devs, their language of choice was C++. And now they have mostly Java devs, so Java it is!


pron's point was mainly about the new languages from Google. nostrademons wrote mostly regarding what I wrote (and why it was wrong/simplified).


Wouldn't it be better to avoid calling the V() default constructor every time the function is called? You never know, it might have some long-winded side effect...

    template<typename V, typename R>
    R defaulted(const V& val, const R& default_val) {
      const static V v0;
      return val == v0 ? default_val : val;
    }


There was a time (before C++11) when not even local static variable initialization was guaranteed to be thread safe. And there are a few other reasons why it is discouraged: http://google-styleguide.googlecode.com/svn/trunk/cppguide.x...

So it's basically defensive programming, but I agree with you that it's probably too defensive nowadays.


With GCC, at least, you can guarantee that it's thread-safe. (Of course, in some pathological cases you can then get deadlocks during global static initialisation. C++ is such a lovely language!)

Also, I don't think Google's style guide is the be-all-and-end-all of good C++ style. Virtually none of Boost's code would comply with it. "We do not use C++ exceptions" indeed!


Google's styleguide makes it clear that the reason they don't use exceptions is because all code that calls a function that may throw an exception needs to be exception-safe, and Google has a large body of code that was written before exceptions were reliably supported by many compilers and so isn't exception-safe. It's a historical accident, in other words. The styleguide is also clear that if they were designing it in the present, without having to deal with a large body of legacy code, they might make a very different choice.


Google's style guide is written so that mediocre programmers don't get too trapped by C++ nuances.


You're starting from a false premise, which is that Boost is good. In reality, Boost is stuff that either couldn't make the cut to get into the C++ standard library, or is so new that nobody knows whether it's any good or not.


Boost is stuff that everyone wants to use but can't, because they're forced to use an old or broken compiler that isn't capable of building real C++0x/C++11 code. It's a staging ground. How else are you going to use std::bind or std::thread et al. with Visual C++ 2005?

Also, there are plenty of nice modules in Boost that just don't belong in the standard library. ASIO is quite nice, but it has io_service implementations that are platform dependent. Statechart is pretty good, but it isn't the only state machine implementation in Boost and neither is clearly better than the other. Spirit has its uses, but does it belong in the standard library?


You would never catch me saying that Boost is good. Heaven forfend! Some of it is good, and other bits are awful. The Boost "style" is, however, quite popular in commercial C++ projects.


Agreed. For example, if your environment is Google App Engine, your choices are more limited, and Go suddenly becomes a very attractive performance option.


> But, more generally, Go has very little to commend itself and--compared to the other options--quite a few shortcomings.

By what standard of measure? I personally appreciate the simplicity that Go brings to the table, usually at the cost of features we've come to expect in other programming language environments. Do I miss those features? Sure. But do I like simplicity and straightforwardness? Definitely. Do I believe there is a way to combine them? I'm not sure. Rust is probably the closest contender. But certainly, languages like Haskell or Java don't do it for me (as frequently as Go does, anyway).

What I'm trying to say is that I won't tell you which side of the trade-off you should take in every instance, and maybe you shouldn't either.


> So it seems extremely shortsighted to choose a language because it's easy to learn and similar to what you already know!

But the benefits are that many new people will be able to pick it up quickly and easily. Other people's code will be easy to understand and maintain. So it's good for everyone in the long run. Being able to express complicated ideas in simple ways is very valuable.

If you think it's an advantage that it'd take someone else years of learning a language before they can understand the code you wrote, Go is not for you. [1]

[1] http://commandcenter.blogspot.com.au/2012/06/less-is-exponen...


In other words, Go is yet another "lowest common denominator" language, a spiritual successor to Java.

If you're not working with the lowest common denominator, Go is not for you.


Yes, in terms of concepts, Go is pretty much a Java 1 with a native compiler as the default implementation.

However, it may still be quite good to move developers not doing kernel/drivers/embedded stuff, away from C into a more secure language.


>> However, it may still be quite good to move developers not doing kernel/drivers/embedded stuff, away from C into a more secure language.

Yeah, except that you won't do this sort of thing in a memory-managed language like Go.

They make you think it's low level (PR?), only it isn't THAT low level ;)

Go should be able to switch off the GC, and allow people to do manual memory management, if it really wants to be that low-level..

Also, a way to strip out the runtime, and run as lean as any C program could..

Only then could you do real low-level programming like OSes, device drivers and the like..

The use cases for Go right now, in its current incarnation, are things like: cloud/distributed stuff (against C++), or command-line tools (against Python)

To make it a contender against C, there's a long road.. and I'm sure it's one route they don't want to go..


No one forbids you from calling into the OS and manually allocating memory.

The Oberon family and Modula-3 are two examples of languages with OSes that were used in real-life situations, albeit in an academic context, for several years.

Native Oberon System 3 and AOS were quite powerful desktop OSes for the 90's.

Although I must concede that their unsafe packages are more powerful than Go's unsafe package.

Anyway, I personally prefer Rust or D nowadays.


The problem is not that you can't manually allocate memory, it's that you cannot not automatically allocate memory.


That's debatable, as you can write the code in such a way that it does not trigger allocations in a specific situation where that might be critical.

Having used Native Oberon in the 90's, I became convinced that system programming languages with GC are possible.

Latest versions of AOS even had video players with good framerate written in Active Oberon, with some Assembly snippets.

However, in the Oberon family, besides NEW, the other implicit memory allocations are only triggered by string manipulation and starting tasks, if I recall correctly.


That helps, but in some domains, it's 100% unacceptable. For example, kernels or more intense, high-end video games. And can you tell Go's GC not to run for a period of time? I thought that it could stop the world at any time.

Don't get me wrong, I do 90% of my coding in a garbage-collected language. But any mandatory GC does limit your domain.


> That helps, but in some domains, it's 100% unacceptable. For example, kernels or more intense, high-end video games.

However, such systems exist.

Kernels => Native Oberon, Blue Bottle, EthOS. All desktop oriented systems used at ETHZ in Switzerland during the 90's. Used for teaching OS design, and a few teachers even used them as their main system.

High-end games => The Witcher 2 for Xbox 360. They made use of a GC runtime in the game engine.

> But any mandatory GC does limit your domain.

I tend to think the limit is more religious than practical.

Please note that I defend GC enabled systems programming languages, not necessarily Go for such domains.


Seems to be a common sentiment, but from where I'm standing there are at least a few huge differences between Go and Java:

* duck typing
* goroutines
* channels

And some smaller but still important differences:

* Very lightweight public/private distinction
* Slices
* Objects that embed directly in other objects, not as references
* Return-value error handling; only a very restricted form of exceptions


> goroutines, channels

java.util.concurrent offers tasks and queues.

As for the rest, they are not unique to Go; better languages offer similar features.


That is a huge mischaracterization and borderline flamebait.


Not even borderline! "Flamebait" was practically invented for that exact type of comment.


You didn't really say why you think Go is a bad choice. I think there are a lot of things Go has going for it, including clarity, productivity, native cross-platform support, and a best-of-both-worlds static typing system with implicit interfaces.


> best of both worlds

I'm just curious, how much time have you spent with a really strongly statically typed language like Haskell or OCaml?


I've done a decent amount of work in both - I've written a compiler in OCaml and a few applications in Haskell.

I do find Go to have the "best of both worlds" when it comes to the type system (ignoring all other features of the languages).

As you know, one of the advantages of writing in dynamic languages like Ruby and Python is that you don't have to think much before writing down the first few lines of (working, runnable) code. This makes it really easy to sit down at a keyboard and bang out a prototype, or even a one-off utility script.

With Haskell, I have to spend a lot of time thinking up-front about how to design my types, because the arrangement of the datatypes largely determines the functionality of your program. The upside is that, once you've sketched out what you need your types to look like in Haskell, the entire structure of your application has essentially been stubbed out - you simply need to fill in the gaps. This is really nice when making incredibly polymorphic functions - as has been pointed out, you can implement an entire Haskell library without understanding anything about the logic, operating solely off of the type annotations[0].

The downside is that it takes longer to get those first few lines of (working, runnable) code down. Maybe that's not a big loss, or maybe it is. But Go does provide a lot more type safety than C or Java (and certainly a lot more than dynamic languages like Ruby or Python).

I'm a functional programmer at heart, and given my druthers, I'd probably prefer a little more type safety in Go, rather than less. But having written a lot of Go over the last year, I don't find myself commonly running into the sorts of errors that a stronger type system would have prevented.

In short, (IMHO), the biggest additional benefit to strengthening a type system like Go's to resemble Haskell's would be in how it forces you to change the code-formulating process, not in how it prevents more errors. Unfortunately, this is also the biggest drawback to strengthening the type system - it means one can no longer use the same programming workflow that one uses for Python, Ruby, Java, etc.

[0] http://r6.ca/blog/20120708T122219Z.html


Thanks! Another data point...

I personally tend to see it as the worst of both worlds. I have to deal with types _and_ I don't really get strong guarantees. I don't run into problems in Ruby that Go's type checker could help with. But that's also my personality: I like extremes.


I'm curious what guarantees you're looking for that you don't get with Go that you'd get in other languages. I haven't worked with the more esoteric languages like OCaml and Haskell, so I might be missing something simple.


First of all, I just want to say that I don't think that Haskell or Rust are competing with Go, we're just talking type systems here. I'll use those two because they're my favorite. Three examples, one from each and one they share:

In Haskell, the type system ensures that side effects only happen in one place: things inside a monad of some kind. For example, if I'm given a function:

    foo :: Int -> Int
I _know_ for a fact that this function doesn't do any IO. It doesn't maintain any state. It won't launch the missiles. And I also know that everything needed to understand what goes on in `foo` will happen via the one parameter. Because it's annoying to pass around code that interacts with the outside world, you end up with a small shell of imperative, stateful code, and a large amount of stateless, pure functional code. Since Go (to my knowledge) doesn't enforce referential transparency, it won't do this.

Haskell and Rust both don't have the concept of null. This is fantastic, as even the inventor of null thinks it's a bad idea. It's borderline irresponsible to write a new programming language with nulls today. So how do they handle a computation that may fail? Higher order types:

    use std::option::Option;

    fn call_me_maybe(x: int) -> Option<int> {
        if x > 5 {
            Some(x)
        } else {
            None
        }
    }

    fn main() {
        let i = 6;
        match call_me_maybe(i) {
            Some(x) => println!("Yes! {:i}", x),
            None    => println!("nope"),
        }
    }
This will print "Yes! 6". If you change it to 5, it will print "nope". The wrapper type tells us whether we've succeeded or failed. Here's the kicker: what happens if we leave off the error case?

    rust.rs:13:8: 15:9 error: non-exhaustive patterns: None not covered
    rust.rs:13         match call_me_maybe(i) {
    rust.rs:14             Some(x) => println!("Yes! {:i}", x),
    rust.rs:15         }
It'll make us handle it. And since it has a different type, we can't pass it to something that takes an `int` either, because it's an `Option<int>`. Both Rust and Haskell can do this, and have tools to make it easier, too. For example, maybe you want an error message, so Rust's Result type carries a message on the failure case. Or if you need three or more states, you can build your own.

Finally, in Rust, data is immutable by default:

    let i = 6;
    i = i + 5;

    rust.rs:3:8: 3:9 error: re-assignment of immutable variable `i`
    rust.rs:3         i = i + 5;
                  ^
    rust.rs:2:12: 2:13 note: prior assignment occurs here
    rust.rs:2         let i = 6;
And pointers have explicitly one owner or many owners:

    fn main() {
        let i: ~int = ~6; // read: i is an owned pointer to an int.
        let j: @int = @6; // read: j is a managed pointer to an int.
        let k = i;
        let l = j;
        println(i.to_str()); // error: use of moved value: `i`
        println(j.to_str()); // this is fine
    }
So in Rust, you cannot share data that has multiple owners across threads:

    fn main() {
        let i: ~int = ~6; // read: i is an owned pointer to an int.
        let j: @int = @6; // read: j is a managed pointer to an int.

        do spawn {
            println(i.to_str()); // fine, prints
        }
        println(i.to_str()); // you can't use it after you've given it away, either!
        // error: use of moved value: `i`
        // println(i.to_str()); // 
        //        ^
        //note: `i` moved into closure environment here because it has type `~fn:Send()`, which is non-copyable (perhaps you meant to use clone()?)
        //do spawn {
        //    println(i.to_str()); // fine, prints
        //} 

        do spawn {
            println(j.to_str()); // error: cannot capture variable of
                                 // type `@int`, which does not fulfill
                                 // `Send`, in a bounded closure
        }
    }
Therefore, race conditions are a compile-time error. Go has a race detector, which can help, but it can't always help.

Anyway, hopefully that explains some of my beef with Go's types. I think Go gets a lot of things right, I think the type system is one place where it gets things really wrong.


Can you think of some niches/circumstances where ATS might be worth using?

EDIT: might it be possible to develop ATS programs that are easily called from other languages? ATS seems to have strong ties to C, so maybe one could call ATS code like one calls C code, or call C code that is a thin wrapper around the ATS code? Then you could hopefully have efficient, statically verified (to a high degree) code that can be used by other languages.


I've always thought it would be a perfect language for things like aerospace. NASA, for example, uses unsafe languages like C and compensates for that with extremely rigorous processes. Of course, this means that it takes forever to get anything done.

I can't help thinking that using a better tool like ATS would actually increase productivity without compromising safety. If anything, the types would provide an extra sanity check! Of course, just using ATS won't be enough--most of the existing checks and reviews will still be necessary, but I do think it would be some improvement.


If the mission is critical, choosing programmers matters more than choosing languages. If ATS is hard to use, then it's the right choice, because the users who mastered such a difficult technique must be the smartest guys...


Or the dumbest, because you don't pick something that is more difficult than it needs to be.. well-designed languages should help you focus on the problem domain with as little effort as possible.. not on the language itself and its caveats..

Languages, as means to create programs that express some sort of logic, must get out of your way.. So I don't think it's smart to pick a language because it's hard, unless it's for creating new sorts of mind games and puzzles, and not for the sake of the final objective, which is coding itself.. to create programs


Perhaps the title should be "Why Python is good enough." Between this post and the preceding one (http://roscidus.com/blog/blog/2013/06/09/choosing-a-python-r...), the takeaway for me is that the author didn't find sufficient benefit in any of the candidate languages to justify switching an existing product--perhaps not even a new product.

It's true that Python can leave you with hidden crash bugs in unusual code paths, but the language brings so many benefits that it's easy to forgive this. And there seemed to be a vague concern about performance at the beginning of the article, but I didn't see anything substantive in that regard.

So yay, let's all just keep using Python. Back to work.


Except in the end, the author switches to OCaml[1]. So Python might be good enough, but he certainly found OCaml better.

And since when are we willing to settle for good enough?

[1]: http://roscidus.com/blog/blog/2013/09/28/ocaml-objects/

(In fact, the author writes that he's basically willing to move to OCaml in the conclusion of this post.)


"Good enough" the way I'm using it is nothing to be ashamed of. It is very difficult to find computing systems that are "good enough."

Here are some of the motivating factors identified by the author at the outset of the search for a better language:

- Canonical’s Colin Watson is worried about Python’s performance on mobile phones.

- Marco Jez and Dave Abrahams proposed a C++ version.

- Bastian Eicher would like a .NET version (though IronPython might work here).

It doesn't seem like OCaml addresses the above very well. On the other hand, here are some items the author liked about Python which may now be lost:

- Widely known and easy to learn.

- A large standard library.

- You only need to ship source code (interpreted).

- Can run inside a Java or .NET VM (using Jython/IronPython).

- All current 0install contributors know it.

- The current code is all Python and is well-tested.

Of course the last item is the most alarming: see http://www.joelonsoftware.com/articles/fog0000000069.html (2000), Brooks' Mythical Man Month (1975) for the Second System Effect, etc.


Although F# isn't OCaml it is fairly similar, and compatible if you skip certain features of each: http://stackoverflow.com/questions/179492/f-changes-to-ocaml


> since when are we willing to settle for good enough?

Ever since we decided to get shit done, ship, finish our weekend project, etc. Ever since we first read "premature optimization is the root of all evil" and understood it as a wide, abstract truism.


I seriously doubt OCaml's benefits over Python are so great that they outweigh not being able to hire someone proficient in it.


In practice, people have found that hiring good programmers is actually easier with certain more obscure languages. Paul Graham, amusingly enough, called this the "Python paradox" because, back in 2004, Python was one of the obscure choices! Oh how things change.

This certainly holds for OCaml as well. The main OCaml company I'm familiar with is Jane Street, and they certainly have no issues finding OCaml programmers. More importantly, many of their programmers come in with no OCaml experience or even no functional programming experience, and yet they have no trouble getting up to speed. In fact, these days they even teach their traders--often complete non-programmers--OCaml. And it works. I don't think I'd ever want to hire somebody incapable of learning a language like OCaml in a reasonable time frame.

So yeah, this is not an issue in the least. If anything, it seems to work out more favorably for the companies using OCaml, Haskell and the like!


I'm pretty sure that at Jane Street you get to solve engaging problems and get paid 250K+. OCaml or not, they're not going to have problems with hiring.


The other thing is his only complaint (slowness) isn't really a problem. If you have a critical hot spot in your server code, extract it, implement it in C / C++, and use ctypes to call it.


Having recently started doing Python full-time, I would say that by far the biggest problem with Python is its propensity to explode in your face. Code written in Python seems so fragile; except for the most blatant bugs like syntax errors or too few arguments, nothing gets caught. You end up having to write tons of unit tests, which often expose bugs that would be trivially caught with any kind of typing. Performance is great, but code reliability is much more important. With Python you get neither - just rapid development (slowed down by having to write a ton of tests) and expressiveness (which is excellent).


Don't let its ease of use fool you, there are a lot of ways to shoot your foot off in Python. Typically, it's kind of like pulling the trigger (make the mistakes), only to see your foot get blown off a few months/years down the road. I find that your luck with larger Python projects is largely determined by two things:

* Experience with Python, and general experience in duck typed environments.

* Discipline. Not just your own, but anyone else working on your project.

There are certain mistakes that will make your life miserable down the road. These could be bad organizational conventions, inconsistent exception raising/handling policies, lacking a strategy for documentation, etc. If you make mistakes like this, you probably won't get bitten by it until later.

Writing good Python can be harder than writing good <X Language with more rigidity>. With experience and time, this gets a lot easier.


This was exactly my sentiment when I started doing Ruby full-time.

I appreciate all the niceties that a dynamically typed language like Ruby provides. But I am not sure the trade-offs are worth it (for all the reasons you already described).


CFFI is the latest trendy way to call C, compatible with PyPy ;)


"slowness" is not the problem here - it's "slow program startup". If your process only lasts for tens of milliseconds, you're not going to make it any faster by embedding C code because 99% of time is still going to be spent initializing the python VM.
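You can see that fixed cost by timing a do-nothing interpreter run (rough sketch; the absolute number obviously varies by machine):

```python
import subprocess
import sys
import time

# Launch a subprocess that does nothing: the measured time is almost
# entirely interpreter startup and teardown, the overhead every
# short-lived Python process pays before any user code runs.
start = time.time()
subprocess.check_call([sys.executable, "-c", "pass"])
elapsed_ms = (time.time() - start) * 1000
print("interpreter startup+exit: %.0f ms" % elapsed_ms)
```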


So, effectively, Python is not enough - you and your team have to be experts in C/C++ too?


If your team can't write a few hundred lines of critical-section code in another language, or at least pick it up in a week -- especially when the syntaxes are very similar (this isn't going from Clojure to Java here) -- you have bigger issues with your team than their language choice.


Yep, no single language except C(/C++) is enough by itself (a potential exception here is Java, but even there C is still useful).

C has become the lingua franca of computing. You can call everything (except Go) from C and you can call C from everything. So if you want to actually reuse standard libraries, you will want to know C. And if you're a good C hacker you'll know C++. I'm not saying you'll necessarily use it, but you'll know it.

The vast majority of python programmers do not know how to call C/C++ code and are therefore limited to pure python libraries exclusively. They usually use writing to files and os.system() as an alternative. Needless to say, the performance of this approach can be questioned.


The first Python example throws up a big red flag: manually parsing command line arguments instead of using argparse. Why reinvent the wheel when there is a standard library to handle it?

Likewise, asserting that "For storing general records, Python provides a choice of classes and tuples" completely ignores one of Python's fastest and most powerful data types: dictionaries.

Lastly, in get_value, the nest of ifs is unnecessary and makes the code seem more complex than it really is.
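For reference, a `program <command> [args]` shape is only a few lines with argparse subparsers (a sketch; the command and argument names are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(prog="0install")
sub = parser.add_subparsers(dest="command")

# One subparser per command; each can define its own arguments.
run = sub.add_parser("run")
run.add_argument("name")
run.add_argument("args", nargs=argparse.REMAINDER)  # pass-through args

ns = parser.parse_args(["run", "firefox", "--private"])
print(ns.command, ns.name, ns.args)  # → run firefox ['--private']
```

And you get `--help`, error messages, and usage text for free.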


Over the last decade, there's probably been half a dozen "standard" command line parsing tools. getopt being the classic, then we had optparse and that got deprecated, then we got argparse. And there there's e.g. twisted.usage and the pretty cool docopt.

In this case the author wants to parse "program <foo | bar | baz> [optional args]" and does that in a few lines of code. You seriously think that's a "big red flag" ?

Do you also believe the author does not (despite having created the zero install tool 10 years ago) know Python has dictionaries?

This is a blog post where the author evaluates a number of languages/environment to replace his particular Python needs -- do you think that him not using a new standard library module invalidates his finding?


We've had optparse since at least 2003; programs that don't use standard tools to parse their command lines usually have lame bugs such as not responding properly to "--help" (must do nothing but write to stdout and return 0 on *nix).

I'm not the one who first mentioned that hand-written option parsing is a red flag. But it is one--especially in Python, which has nice things built in.


The actual 0install Python code does use optparse.

It wasn't stated in the post, but the next step after writing this code was to use it as a front-end to the real Python version. If invoked as "0install run NAME ARGS..." exactly then we handle it (the fast path), otherwise we fall back to the Python version. In particular, that means that the Haskell/OCaml version must NOT try to handle --help, etc. I wrote the comparison Python code to be similar to the other languages.

This OCaml front-end appeared in 0install 2.3. For 0install 2.4 there is a full option parser written in OCaml. I didn't use a library for this because a) it needs to be 100% compatible with the Python parsing and b) it needs to handle tab-completion too.


I like this article on that subject, which as it happens uses argument-parsing as the running example: http://www.yosefk.com/blog/redundancy-vs-dependencies-which-...


In his defense, the author makes the following disclaimer:

"As before, note that I’m a beginner in these languages."


He was referring to all of the other languages. In his previous article, he writes:

"I’m not an expert in these languages (except Python)."


There are many examples of archaic/non-idiomatic/gross Python. Old-style classes, unnecessary cyclomatic complexity, and so forth. Using Python effectively (i.e. expertly) means wantonly abusing hash tables, generators and its functional paradigms.
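E.g., a trivial sketch of the dict/generator style being described, versus manual loops and index bookkeeping:

```python
words = ["spam", "egg", "spam", "ham", "spam"]

# Hash table as the workhorse record/counter type.
counts = {}
for w in words:
    counts[w] = counts.get(w, 0) + 1

# Generator expression: lazily filter without building a list.
longest = max((w for w in counts), key=len)
print(counts["spam"], longest)  # → 3 spam
```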


I was under the impression that using a class (with `__slots__` set) or a tuple is generally more performant than using dictionaries. At least, I assume that's why he didn't mention dictionaries.


The attributes of an object (and its class) are generally stored in a literal dictionary, so it's unlikely that a class will be more performant.

What a class buys you is behavior and a semi-defined interface.


Uhh... Did you not see my parenthetical reference to `__slots__`? See this SO post[1] for more details.

[1] - http://stackoverflow.com/questions/472000/python-slots


Whoops! I could've sworn that said "without", sorry :)

Though I'm not sure `__slots__` actually saves any time, as it's really a space optimization. I've tried it before for hot code and even seen very slight (probably not significant) slowdowns.
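A quick sketch of what `__slots__` actually changes: it drops the per-instance `__dict__` (the space saving) and fixes the attribute set; it doesn't reliably speed anything up.

```python
class Point:
    __slots__ = ("x", "y")  # fixed attribute set, no per-instance __dict__

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
try:
    p.z = 3  # no __dict__, so dynamic attributes are rejected
except AttributeError:
    print("slots classes refuse new attributes")
```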


I do not understand at all the author's statement that OCaml data structures are a big win over Python's

  let {cache; config} = b in
  print_endline (String.concat "," cache);
  print_endline (String.concat "," config)
  ;;
vs., actually I'm not even sure what the OCaml is attempting; some variation of

  print '%s,' % b.cache
  print '%s,' % b.config
Which, in Python, if you're printing more than a few, should be

  print ',\n'.join((b.config, b.cache, b.data))
Or if want all fields and they are in correct order

  print ',\n'.join(b)
>> The syntax [of NamedTuples] isn’t great, though, and you can’t do pattern matching on the names, only by remembering the order:

Other than the misuse of the term "pattern matching", that statement is true and is trivially overcome with a two-line function, a one-line lambda, or once and for all by subclassing the namedtuple. Here's the function variant:

  def GimmieThing(cache=None, config=None):   # keyword args in any order
      return ThingNamedTuple(cache, config)   # positional, in field order
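Though for what it's worth, namedtuples already accept keyword arguments at construction time; it's only destructuring by name that's missing (field names here are illustrative):

```python
from collections import namedtuple

Thing = namedtuple("Thing", ["cache", "config"])

# Keyword construction works in any order out of the box.
b = Thing(config=["/etc/0install"], cache=["/var/cache"])
print(",".join(b.cache))   # → /var/cache
print(",".join(b.config))  # → /etc/0install
```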


How is it a misuse of the term "pattern matching" ?


Generically http://en.wikipedia.org/wiki/Pattern_matching

And as for a language / call semantic look at Erlang for what actual pattern matching looks like.


That doesn't contradict the use of pattern matching in OCaml.


Both the OA and I are talking about Python.


> A major benefit of OCaml and Haskell is the ease of refactoring. In Python, once the code is working it’s best not to change things, in case you break something that isn’t covered by the unit-tests. In OCaml and Haskell, you can rename a function, delete old code or change a data structure and rely on the compiler to check that everything’s still OK.

These sort of statement always bothers me when I encounter it. If you're not testing the code you are refactoring how do you know it still works or even worked in the first place?

Static typing gives you useful things (with a trade off), not having to write tests isn't one of them.


He never said anything about not writing unit tests. Rather, his claim is that--even with unit tests--refactoring is not safe in Python et al; in Haskell and OCaml, by contrast, many of the same actions are guaranteed safe by the type system.

Static types do not mean you don't have to write unit tests. But they do mean you can write fewer. (And, looking at libraries like QuickCheck, a static type system can make writing tests easier.)

Quite a bit of the safety you get in Haskell especially comes down to controlling effects. Mutable state implicitly couples all the code in a given scope--unless you've read and understood all of it, distinct parts might have effects on each other that you're unaware of. In Haskell, on the other hand, this is impossible, so refactoring like extracting variables and reordering your code can be entirely safe even if you haven't looked at exactly what the code does. The refactoring actions are guaranteed not to change the semantics of the code by the language itself.

The best way to think about it is that refactoring becomes a purely syntactic action. You just remember a few rules akin to algebra, and you can rewrite Haskell code in a bunch of different ways all preserving the original meaning. Regardless of what that meaning really is. This also extends to library code--most good libraries come with algebraic laws, which ensure you can rewrite their code in logical ways. Once you learn these laws, you can start doing things like changing multiple passes over a datastructure into one with the same confidence.

If you make a change that could break things, the type system will often help you find every place that does break. This extends beyond just type mismatches: Haskell also ensures that you consider every possible case in your functions; if you forget an option, it will give you a warning. This means it's safe to add a new alternative to an existing type: you will get a warning everywhere you haven't considered this new option.

I've found this to have a profound effect on how I program. With Haskell, I actually follow the rule of making any code I visit look better than before, simply because refactoring has very little mental overhead. I can move things around liberally, break them into multiple modules, condense them into fewer functions and even change the types, knowing that any mistakes I make will be caught by the type system.


> his claim is that--even with unit tests--refactoring is not safe in Python et al;

Well, that claim is false. I and thousands of other Python developers disprove it daily. Just because they can't refactor dynamic languages doesn't mean we can't. Something not-dynamic is probably the language for them. And Python is the language for us.

In other words there is far more variability between developers than there is between languages. Find the language(s) that work for you and quit believing they are the languages for everyone.


   Static typing gives you useful things (with a trade off), not having to write tests isn't one of them.
I disagree; static typing is, effectively, a set of tests automatically run for you (by the compiler) that you don't have to write/run/maintain yourself. As someone who's moved from statically-typed languages to Python (2.x), I can't count how many times I've made errors that "should/could have been caught for me by the compiler", if there were such a thing in Python. It slows me down in nontrivial ways. It would be good if we could use 3.x-series annotations to achieve much of the same thing, but the world hasn't gone 3.x yet.

Your point about the value of unit testing in general is well-taken, however.
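For reference, 3.x annotations are pure metadata at runtime -- nothing enforces them without an external checker (sketch):

```python
def scale(x: int, factor: int = 2) -> int:
    return x * factor

# Annotations are recorded on the function object...
print(scale.__annotations__["x"] is int)  # → True

# ...but a type-"wrong" call still runs happily.
print(scale("oops"))  # → oopsoops
```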


I'm not saying static typing/compilation doesn't give you anything or that it doesn't catch errors for you. My point is that "static typing/compilation means you don't have to worry about code coverage with your tests" is a false assertion.


If you're not testing the code you are refactoring how do you know it still works or even worked in the first place?

There are other ways, different than testing, to convince yourself and others that a program does what it is meant to do. They complement testing. You use them already while constructing the deterministic parts of your program: much of what the machine does for you is predictable, and good languages and libraries are designed to make it so.

In fact, tests won't tell you that the program works, only that it doesn't fail the test cases. You then use the predictable aspects of your domain to convince yourself that the program works if it doesn't fail those test cases.

In the same way that programmatically renaming a variable does not usually warrant the writing of a test case, many forms of automatic refactoring are theoretically guaranteed to not break your program. If they involve type renames, you might be able to deduce that any resulting errors will be caught by the type checker.

(Please don't take this as an argument against testing, but against always requiring test coverage)


Of course it's possible to write non-working code in a typed language. But with a good type system you don't have to write the code in a way that can be wrong (or at least, that can be wrong in a way that unit tests would help with - if you've misunderstood the requirements then nothing can save you).


Cython might also be worth considering. It lets you add static type declarations to Python and produces compiled code. You could avoid a big-bang rewrite, migrating the slower operations first. I don't know whether it would meet all your cross-platform requirements though.

http://cython.org/


Isn't Cython a solution that frequently requires significant code changes (refactoring to C-like code) beyond adding typing?


It's selectively adding C-like type declarations to Python code to speed up the slow bits. C code is then generated and compiled into a Python extension module. This is similar to adding type information to Common Lisp code to allow the compiler to optimise it.

http://docs.cython.org/src/quickstart/cythonize.html


My impression was that you sometimes have to refactor your Python to make the code more C-like. From what I have heard generators aren't supported, and there are other situations where you need to make your loops (and other elements of your code) more C-like.

That's not the case?

You just add typing to your python and you're good to go?


There's some info on semantic differences here: http://docs.cython.org/src/userguide/limitations.html - it doesn't bring up much, and the design goal is full language compatibility. I've only played with it so far, but it's case of optionally adding type info, and also writing some distutils scaffolding to say how to build the module as a cython extension. This is then callable from Python via the regular import mechanism.


Thanks for the link. I was just poking around the Cython site myself, and it looks like they've come a long way since I last looked into it. It seems that they're getting much closer to full compatibility.

It makes me wonder if we're closing in on the day when Cython gets rolled into Python proper, and you can just "import static-types" to activate optional typing, then add some static typing to your code, and you would get Java performance from your Python code.


"and you would get Java performance from your Python code."

Nah, I doubt it would ever use _that_ much memory.


:) I guess what I meant to say was Java speed (or greater) Python.


I find it suspicious that Java or Scala didn't make the list. Java may not be sexy but it checks many boxes... And I suspect that Scala would have been a serious contender in brevity too.


If I need to install the JVM just to try your software, then it's going to remain untried - however many design patterns you've managed to cram into it.


Why, because the JVM is too big?


Because it's too big, it's very slow to start, it's controlled by a company that is not very lovely, it's not installed by default on our systems, it's free and open source without in fact being very libre, and it has frequent security flaws that don't seem to be addressed seriously... Some of this is probably only partially true, but it gives you an idea why people don't want to use it, whether they're right or wrong.


So it's a good thing you're propagating partial truths then, right?

The JVM isn't easy to beat for software that's more complicated than "Hello World". Sure you can beat it, but you're going to have to work very, very hard.

The security flaws exist in the area few people care about (and shouldn't even really be installed any more) -- the sandboxing and web start code.

Did you include gcc and friends in your "too big" calculations? Hard to program without them.


It's a crazy moving target that never seems to work right. Even if it does work right, pretty soon some update breaks it. And then there's the fact that most (desktop) Java applications look like arse and run like a dog.


The JVM is a crazy moving target? Can you give an example of that please?


Java Update.


Sounds more like trolling than participating in a rational discussion


Java aficionados don't seem to realise just how obnoxious many of us find the JVM. If you live with it, and do your work with it, I guess you come to accept it as a fact of life - always there in the background. If literally nothing that you do uses it, then it's a massive extra dependency to add to a simple desktop app - a dependency that is often quite difficult to manage and keep updated.


Re dependency: you probably have a point on Windows, but it's your choice to use an OS without a package manager to keep things up to date. I don't know about Mac but on Linux it is a non issue.

That said it does not do well for desktop apps due to slow startup, high memory use and lack of native toolkit. It does much better as a server side runtime.


Although one of the best desktop RSS readers (RSSOwl) and two of the best IDEs (Eclipse & IntelliJ) are written in Java.


Of these, I have only tried Eclipse. While certainly more full-featured out of the box, its speed, especially at startup, is really nothing to boast about when compared to Visual Studio.


When I see people arguing about which is the best IDE, I think of this... http://www.youtube.com/watch?v=0d-c4INM-Wk


C# was excluded for being too slow to start. JVM-based languages would not fare any better.


I can't say I've had that problem myself. They are quite startup heavy, but typical C# console apps start up in ~150ms on an SSD-based system and ~700ms on spinning-rust disks.

If you're calling something thousands of times, fast process startup times are good. This isn't the model windows uses though which is the primary target of c#.


In the tests, his OCaml implementation takes 7ms, and his Python one takes 64 or 109 ms. So even on your SSD, that's 50% to an order of magnitude too slow, just for startup.


Do you really notice that though?


I cannot remember exactly, but in the discussions about page load times translating into revenue, 100ms was the number being tossed around, IIRC.

I certainly notice any time I boot something up that requires the JVM, I refuse to use the CLR, so I can't tell you much about my own experience with that.


Once it gets going though it screams. Not sure I want to trade the miniscule difference in startup time for that.

As for page load times, this is silly. There is no startup time on a page at all. Possibly the first hit but there are warm start options for that in CLR at least which make this a complete non issue.

To give you an idea, 98% of our page hits are under 80ms processing time and we have big, heavy pages (we're old school asp.net mostly).


I'm not talking about page load times. A server side app is a great use-case for something that has slow startup. I was just using that as a citation for perceptions on speed and how it can negatively affect experience.

I'm talking about command-line applications, like the one in the article and in your first post. Then you get the startup every single time you run the command.


For stuff like tab completion and for launching small command-line apps (think "grep", "sed", etc), absolutely. A penalty of hundreds of milliseconds for the _launcher_ alone can double (or more) the total execution time.

I use zeroinstall a lot (I'm a contributor), and I turned off tab completion because the lag (of the python implementation) made me wonder if my terminal had locked up, which was far more distracting than trying to remember the available arguments. I have re-enabled it in the ocaml port, because now it's effectively instant.


> C# was excluded for being too slow to start.

This is only the case if the code is JITed.

With ngen and mono-aot it can be compiled to native code directly.

> JVM-based languages would not fare any better.

It is all a matter of which JVM is used. Some of them offer AOT compilation and on disk cache of JITted code from previous runs, thus matching the startup time of native binaries.


Agreed, pretty large omission given that C# was included.

Not such a big fan of Java, but Scala seems to be a strong contender for Java.Next* , and is certainly a viable dynamic-to-static transition language given its terse syntax, deliciously rich collections library, and implicits support (for the MOP fans).

* Twitter being able to withstand spikes of 140K+ tweets per second without lagtime is impressive to say the least.


Yeah, it's a joke that Rust (which I think will be a great language, but all of the syntax hasn't even been decided yet) is being considered, when Java and friends are not.

Really, this article is about "I want to rewrite my program in the newest, coolest language", not about which is the best tool for the job. And that's fine, but the author should present it that way.


> newest, coolest language

I'm sure you meant "newest and/or coolest", or "recently become coolest", but it's worth noting that Haskell is about as old as Python, and OCaml is not much younger.


The syntax of Rust is 99.9% (approx) decided. There was only one (or maybe two?) syntactic change in 0.8, and it was trivial to fix: https://github.com/steveklabnik/rust_for_rubyists/commit/a18...

It's the standard library and such that are moving fast still.


It's already been said that Java is a very heavy dependency for a simple desktop application. Rust compiles to native code, that's the advantage over Java.


A quick comment concerning Haskell. I've been learning it on and off, with some help from my roommate who's a Haskell genius. The way he writes Haskell always amazes me. He starts with a very straightforward verbose version, then constantly refactors it (on the fly, it's not a separate step), abstracting stuff out in typeclasses and monads until it seems that most of the code is just monads and typeclasses definitions, and only a couple lines that seem to actually do something.

I always joke around saying that if your Haskell code doesn't have typeclasses and monads, you're doing something wrong. By which I mean that unlike other languages, there's a huge, huge, huge gap between writing great haskell, and regular haskell, with a very steep learning curve.


Interesting. I wonder about maintainability if the writer's "obvious" version is only there to be refactored out. Can you still read the intent when he's done?


I can't, by a long shot: I don't know Haskell deeply enough. But I think he can read his code without problem.


I like articles like this one, but this doesn't really bode well for 0install.


Any particular reason why? We just released 0install 2.4, which has around 10,000 lines of OCaml. It seems to be working well so far, but there are bound to be a few bugs... we can always use more testers!


> You still have classes, objects, functions, mutable data and low-level access to the OS, all with an easy and concise syntax, but you gain type checking, much better data structures and a huge amount of speed for no additional effort. Why aren’t more people using it?

I think it is a community issue actually, not the language. It is just not converging. Just think of how many distinct OCaml code styles you have seen. Some use hardcore math notation, some not; some use recursion everywhere; some use imperative syntax; some use a pseudo-OOP approach. Some even try to fork the language syntax. And as a result, even with the language itself providing an easy and concise syntax, OCaml code is not very readable. And not very popular.


The Xen hypervisor relies on a curious mix of Python and OCaml.


It seems pretty clear to me that Julia is gunning for Python's sweet spot. Worth checking out.

http://julialang.org/


Plus he will be able to use F# easily since it is derived from OCaml.


Wonder if he looked at Nimrod. On paper that is a language that would both fit his needs and would not be a far leap from his current Python implementation.


When I looked at Nimrod it seemed very raw to me: REPL malfunctions and crashes, the whole language seems to be more an enthusiast effort than a production-quality tool. Rust is very raw too, but it already looks pretty solid.


Nimrod by default compiles to C. Its REPL is just an extra feature which as of right now is not stable because few people use it. It for example doesn't even support the FFI currently. As for whether Nimrod is production quality, I would say yes. The compiler is written in Nimrod which is impressive in itself. I have written many projects in it already and even though the compiler is still not at 1.0 the projects continue to build with no (or very little) errors between new compiler versions. Examples of my nimrod projects: an IDE (https://github.com/nimrod-code/aporia), a build farm (https://github.com/nimrod-code/nimbuild) and a web framework (https://github.com/dom96/jester). The Nimrod forum (http://forum.nimrod-code.org) is also written in Nimrod.


Anybody knows or have any reasonable guesses when Rust will be ready?


I hope some time next year, but see recent discussion of it at https://news.ycombinator.com/item?id=6454455.


Why is it necessary to replace Python for this project? And what are the objectives for the replacement? It's not clear what the author's problems and objectives are.


This is all explained in the first part he so kindly links to in the third word ;)

http://roscidus.com/blog/blog/2013/06/09/choosing-a-python-r...


You could've avoided a lot of the "case of blah" nonsense in the Haskell code by using Data.Maybe and Data.Either, just sayin'.


python is nice



