Hacker News

Every single NoneType error I have gotten in the Python I'm paid to write would go away with a static type system.

Every single function I've changed the signature of and then failed to change one caller would go away with a static type system.

I cannot count the number of times I've had to stop and write down the types of things I wanted to reason about, and then keep all that in my head, to figure out what was going on in a large code base. Guess what? Most of the types were static. If I had Haskell's good type inference, I wouldn't have had to engage in any of this time-wasting endeavor.

The statement isn't controversial. Correct code type-checks. Type errors that a dynamic language lets through become bugs found at runtime.
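To make this concrete, here's a tiny Python sketch (function and data are hypothetical): the annotation already says the value may be None, so a checker like mypy would catch the bad call before runtime, while plain Python only fails when it runs.

```python
from typing import Optional

def find_user(users: dict, name: str) -> Optional[str]:
    # The signature says the result may be None, but nothing
    # stops a caller from ignoring that.
    return users.get(name)

users = {"alice": "admin"}

# A static checker (e.g. mypy) would flag this: find_user may return
# None, and None has no .upper(). Plain Python only fails at runtime.
try:
    role = find_user(users, "bob").upper()
except AttributeError:
    role = "unknown"

print(role)  # → unknown
```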

[edit for formatting]




Dynamically typed languages are used everyday to write robust software (Erlang, Lisp, etc.).

You're making a common mistake, which is to reason about static typing's advantages with everything else being equal. Sure, if static typing had no drawbacks, it would be insane for any language not to be statically typed, for some of the reasons you're giving (and others). But everything is not equal. Static typing is a tradeoff, and many of its advantages are really caused by something else (or by typing in combination with something else). Don't get me wrong, Haskell is an impressive feat (I've just started learning it), but in Haskell, typing is associated with advanced type inference, purity, etc.

In my experience being paid to write Python, a lot of NoneType errors, attribute errors, etc. often have a deeper underlying cause linked to bad architecture and bad programming practices. I am not convinced typing would make improving those projects easier (certainly, a primitive type system à la C/C++/Java does not, in my experience).


Type error: "everyday" is an adjective, adverb expected in this context ;)

The examples you cite would be detected by a type checker. The attribute errors wouldn't even need a complicated one. (I've never had an attribute error in C.) It's fine to complain that these things are due to bad architecture, in whatever way that might be, but if there were a type checker involved then the code wouldn't even build if there were a type problem. It's fine to rail against poor design, or systems that are hard to use for no good reason, but in the meantime the code usually has to actually work.

(And if there were no type problems, it would run fine without one, bad architecture or no.)


Most code is static, but dynamic runtimes make a bunch of things fairly easy that are either hard or impossible with static ones:

- introspection and serialization (you can do serialization in a static language, but you can't just slap in a call to JSON.load on a file with complex structure and then access the result the same way as native types)

- proxy/remote objects

- monkey patching (in various forms - raw monkey patching is bad style anyway, but even things like random global callbacks are hard or bad style in most static languages)

- objects that pretend to be A but are really B (perhaps to avoid changing existing code; perhaps to implement things like "pseudo-string that's an infinite series of As" or "rope" or "list that loads the relevant data from a file when an item is accessed" without having to change all the code to make "string" and "list" typeclasses/interfaces; perhaps for mocking during testing)

- dynamic loading

- a REPL (in running code, that gives you the flexibility to change anything you want), ...
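For instance, the JSON case really is a one-liner in Python (hypothetical document inlined as a string):

```python
import json

# The decoded structure is immediately native: dicts, lists, strings,
# ints -- no schema declared anywhere.
doc = json.loads('{"name": "John", "tags": ["a", "b"], "age": 25}')

print(doc["name"])       # plain dict access
print(len(doc["tags"]))  # the list is a real list
print(doc["age"] + 5)    # the int is a real int
```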

The benefits of that kind of stuff are arguable, but I think the net effect is that static languages, even when they save you from having to write out a lot of types, encourage a fairly different style from dynamic languages, which I prefer.

p.s.: you don't need a static type system to use an option type. :)


Many people conflate the features of a statically typed language with those of a language without a runtime. Many of these features (dynamic loading, a REPL, etc.) are obtainable in a language that is statically typed but provides a runtime, which is worth noting, especially since the words "static" and "dynamic" are heavily overloaded.


> The benefits of that kind of stuff are arguable

I'll argue for and against some of these points. My perspective should be somewhat contentious, as a dyed-in-the-wool Haskell user: we take the side-effect and static type stuff to an extreme. It should be more interesting to read.

> you can do serialization in a static language, but you can't just slap in a call to JSON.load on a file with complex structure and then access the result the same way as native types)

That's accurate. It's kind of the point. If a field of a data structure is going to disappear depending on the contents of some arbitrary input, I'd consider that a flaw. It's convenient to say foo.bar as opposed to foo ! "bar", but the latter (in Haskell) is explicit, and its expected type is inferred. For example, I can do this:

    λ> json <- readFile "/tmp/foo.json"
    λ> putStrLn json
    → {"firstName": "John", "lastName": "Smith", "age": 25, "address": { "city": "New York"}}
    λ> do let decade x = div x 10 * 10
              group n  = show (decade n) ++ "-" ++ show (decade (n+10))
          person    <- decode json
          firstname <- person ! "firstName"
          age       <- person ! "age"
          address   <- person ! "address" >>= readJSON
          city      <- address ! "city"
          return ("The name is " ++ firstname ++ " who is age " ++ group age ++ ", and he lives in " ++ city)
    → Ok "The name is John who is age 20-30, and he lives in New York"
Whether `firstname' is a string, or `age' is an int, is inferred from its use. I could also explicitly add type signatures. Type inference and type-classes give you something that you don't have in Java, C++, C#, Python, Ruby, whatever. The decode function is polymorphic in the type it parses, so if you add a type annotation, you can tell it what to parse:

    λ> decode "1" :: Result Int
    Ok 1
    λ> decode "1" :: Result String
    Error "Unable to read String"
Or you can just use the variable and type inference will figure it out:

    λ> do x <- decode "1"; return (x * 5)
    Ok 5
    λ> do x <- decode "[123]"; return (x * 5)
    Error "Unable to read Integer"
    λ> 
So you have (1) a static proof that the existence and type of things are coherent with the code that actually uses them, and (2) the parsing of the JSON kept separate from its use. And that's what this is: parsing. The code x * 5 is never even run; it stops at the decoding step. Now use your imagination and replace x * 5 with normal code. If you take a value decoded from JSON and use it as an integer when it's "null", that's your failure to parse properly. What do you send back to the user of your API or whatever, a “sorry, an exception was thrown somewhere in my codebase”?

If you want additional validation, you can have that too:

    λ> do person <- decode json
          firstname <- person !? ("firstName", not . null, "we need it")
          return ("Name's " ++ firstname)
    Error "firstName: we need it"
Validated it, didn't have to state the type, it just knew. Maybe I only validate a few fields for invariants, but ALL data should be well-typed. That's just sound engineering. This doesn't throw an exception either, by the way. Everything in all these examples is wrapped in a “Result” value. The monad is equivalent to C#'s LINQ. Consider it like a JSON querying DSL. It just returns Error or Ok.
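For readers who don't speak Haskell, a rough Python analogue of the parse-up-front idea (Ok, Error and decode_int are made-up names, not any real library):

```python
import json

# Toy Result type: parse and type-check up front, so later code never
# sees a malformed value.
class Ok:
    def __init__(self, value): self.value = value

class Error:
    def __init__(self, msg): self.msg = msg

def decode_int(text):
    try:
        value = json.loads(text)
    except ValueError:
        return Error("invalid JSON")
    if not isinstance(value, int):
        return Error("Unable to read Integer")
    return Ok(value)

r = decode_int("1")
print(r.value * 5)              # → 5
print(decode_int("[123]").msg)  # → Unable to read Integer
```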

Types can also be used to derive unit tests. I can talk about that more if interested.

> proxy/remote objects

Again, the above applies.

> monkey patching (in various forms - raw monkey patching is bad style anyway, but even things like random global callbacks are hard or bad style in most static languages),

Well, yeah, as you say, monkey patching isn't even a concept. I don't know what a random global callback is for. It sounds like bad style in any language.

> objects that pretend to be A but are really B (perhaps to avoid changing existing code; perhaps to implement things like "pseudo-string that's an infinite series of As" or "rope" or "list that loads the relevant data from a file when an item is accessed" without having to change all the code to make "string" and "list" typeclasses/interfaces; perhaps for mocking during testing)

That's true. There is no way around that. I was recently working on a code generator and changed the resulting string type from a list of characters to a rope; technically I only needed to change the import from Data.Text to Data.Sequence, but it usually takes a bit of refactoring. (In the end, it turned out the rope was no faster.)

> dynamic loading

Technically, I've run my IRC server from within GHCi (Haskell's REPL) in order to inspect the state of the program while it was running, to see what was going on with a bug. I usually just test individual functions in the REPL, but this was a hard bug. I even made some functions updateable; I rewrote them in the REPL while it was running. I've also done this while working with my Kinect from Haskell and doing OpenGL coding. You can't really go restarting those kinds of processes. But that's because I'm awesome, not because Haskell is particularly good at that or endorses it.

GHC's support for dynamic code reloading is not good. It could be, it could be completely decent. There was an old Haskell implementation that was more like Smalltalk or Lisp in the way you could update code live, but GHC won, and GHC doesn't focus much on this aspect. I don't think static types are the roadblock here; in fact, I think they're very helpful with migration. In Lisp (where live updating of code and data types/classes is bread and butter), you often end up confused by an inconsistent program state (the 'image'), waiting for some function to bail out.

But technically, Ruby, Python, Perl and the other so-called dynamic languages also suck at this style of programming. Smalltalk and Lisp mastered it in the '80s. They set a standard back then. But everyone seems to have forgotten.

> REPL (in running code, that gives you the flexibility to change anything you want), ...

See above.

> The benefits of that kind of stuff are arguable, but I think the net effect is that static languages, even when they save you from having to write out a lot of types, encourage a fairly different style from dynamic languages, which I prefer.

Yeah, some of them are good things that static languages can't do, some are bad things that static languages wouldn't want to do on principle, and some are good things that static languages don't do but could.

> p.s.: you don't need a static type system to use an option type. :)

This is a pretty odd thing to say, because you don't need an option type in most if not all dynamic languages; they all have implicit null anyway. All values are option types. The point of a stronger type system is that null is explicit, i.e. null == 2 is a type error. And a static type system tells you that before you run the code.


> This is a pretty odd thing to say, because you don't need an option type in most if not all dynamic languages; they all have implicit null anyway. All values are option types. The point of a stronger type system is that null is explicit, i.e. null == 2 is a type error. And a static type system tells you that before you run the code.

Well, that's a problem, actually. After encountering option types, it's hard to live without them, because you want to be able to mark that parameters A and B should not be null, but C may be. And unless you have a very good static analyzer, you are constantly at the mercy of a nasty NPE somewhere in your codebase.
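That "A and B not null, C maybe" distinction can be sketched with Python's gradual typing (hypothetical function, checked by something like mypy):

```python
from typing import Optional

def register(first: str, last: str, city: Optional[str] = None) -> str:
    # first and last must not be None; city is explicitly allowed to be.
    # A checker like mypy flags register(None, "Smith") before runtime.
    suffix = f" ({city})" if city is not None else ""
    return first + " " + last + suffix

print(register("John", "Smith"))         # → John Smith
print(register("John", "Smith", "NYC"))  # → John Smith (NYC)
```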


And then you need to write some serialization code or IPC, and the static type system is a pain or just makes it plain impossible (in which case this is a shortcoming of that particular type system, not the idea itself, but still).

No, I don't want to argue, and I'm not religiously in favor of either unityped or typed languages. I have no problem admitting that there are things that are made easier with static typing. I'm having a good time writing both OCaml and Erlang. On the other hand, it worries me that proponents of static typing have difficulty admitting that even the most sophisticated type systems are not suited for certain other tasks.


To me the point on serialization seems completely moot in languages like Haskell (which is what we are talking about). Serialization may be harder to do in a language like Java or C++, where we have a rooted type hierarchy and can't easily add new functionality or do type-based dispatch. Haskell's type classes give us the freedom to define new functionality on top of existing types easily. You can look at libraries like Aeson, which allows serialization and deserialization from JSON with just a few lines of code. The biggest problem is that people's view is stuck in this old dynamic-Lisp-vs.-C paradigm that doesn't exist in modern functional languages.


> Every single NoneType error I have gotten in the Python I'm paid to write would go away with a static type system.

Not true. Java has NullPointerException, and static typing does nothing to prevent it. Java doesn't have Maybe, or Option or whatever you call it, but `Option.IsNone` isn't any different from `if (obj == null)`.

> Every single function I've changed the signature of and then failed to change one caller would go away with a static type system.

Yes, static type systems are great for that. But if you are using Python, use pylint and rope.

> If I had Haskell's good type inference, I wouldn't have had to do any of this time-wasting endeavor.

I don't know about you, but changing signatures is very low on my list of pain points. If it's solved for my environment, superb. If it's not, it will sting once in a while, but that's that.


It is worth noting that in Haskell you don't have something like NullPointerException (the type system won't allow it), and using Maybe actually is a bit different from writing 'if (obj != null)' everywhere, if you are into stuff like monads or functors. Besides, in Java a null pointer can pop up pretty much anywhere, but in Haskell you probably shouldn't store all your data in Maybe.


> It is worth noting that in Haskell you don't have something like NullPointerException, the type system doesn't allow for that - so while your point is relevant to Java, it doesn't hold here. The closest you can get to a null pointer is the 'Maybe' type, but when you use it, you have to explicitly handle everywhere what happens if the variable has no value (or has the value 'Nothing', more precisely).

Which, as I mentioned, isn't any different from checking for nulls everywhere. Or, if you are so inclined, write an Option class with the desired interface and use it everywhere the value is nullable. My point is that using Maybe is the same as manually checking for null.

    case maybeValue of
      Just value -> ...
      Nothing    -> ...
is the same as

    if (val != null) {
        ....
    } else {
        ...
    }
If you see the compiler forcing you to always use Maybe for nullable types as an advantage, good for you. Personally, I don't see it as a big deal.


Maybe is a monad, which means Maybe computations can be chained together with `>>=` (or using `do` notation) without checking for `Nothing`. You can easily produce a large composition of potentially-failing computations while completely ignoring the possibility of failure.

The case analysis you give as an example is only required at the point when you want to extract the final result into a different monadic context, and even then you would typically use the `maybe` or `fromMaybe` functions to make it more concise.

Only a novice Haskell user would write:

    case comp1 of
        Nothing -> handleFailure
        Just r1 ->
            case comp2 r1 of
                Nothing -> handleFailure
                Just r2 ->
                    case comp3 r2 of
                        Nothing -> handleFailure
                        Just r3 -> handleResult r3
which is indeed just as bad as explicit null checking in C or Java, with runaway indentation to boot. But anyone who understands Haskell's rich abstractions would instead write:

    maybe handleFailure handleResult $ comp1 >>= comp2 >>= comp3
The fact that you can't forget the "null check" without the compiler telling you about it is a nice convenience afforded by the strong type system, but it's far from the only benefit.
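For comparison, the short-circuiting chain can be faked in Python with a tiny hypothetical bind helper (comp1/comp2/comp3 are made up):

```python
# bind short-circuits on None, so one failing step poisons the whole
# chain without any explicit checks in between.
def bind(value, fn):
    return None if value is None else fn(value)

def comp1():  return 10
def comp2(x): return x + 1 if x > 5 else None  # fails on small inputs
def comp3(x): return x * 2

print(bind(bind(comp1(), comp2), comp3))  # → 22
print(bind(bind(3, comp2), comp3))        # → None (comp2 failed)
```

Of course, nothing in Python forces the caller to use bind; in Haskell the type of Maybe leaves no other way in.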


> Maybe is a monad, which means Maybe computations can be chained together with `>>=` (or using `do` notation) without checking for `Nothing`. You can easily produce a large composition of potentially-failing computations while completely ignoring the possibility of failure.

Like

     val = foo.bar.blah.boo rescue nil
Or

    try {
        val = foo.bar().blah().boo()
    } catch(NullPointerException ex) {
    }
Or

    try:
        val = foo.bar().blah().boo()
    except AttributeError:
        pass
Yes, I know Maybe is a monad, but that doesn't make a difference to me. A series of computations where each step depends on the result of the previous step, and the previous step can return null, is hardly an issue in any language.

> The case analysis you give as an example is only required at the point when you want to extract the final result into a different monadic context, and even then you would typically use the `maybe` or `fromMaybe` functions to make it more concise.

The case analysis I give is an example where either there is a value or null and I need the value. I don't really care how Haskell defines monadic context.


> isn't any different from checking for nulls everywhere

You would only check for nulls "everywhere" if all functions could potentially return null. Then, indeed, there is no difference. But (hopefully) not all functions will return null, so you'll probably have at most a single-digit percentage of functions that do.

The point is that by using Option, you are explicitly stating "This function can return null", making it impossible for the caller to ignore. If you write it somewhere into the docs, it is indeed easy to overlook.

This may not be as relevant when you are only working with your own code, but I find this (and static typing in general) most helpful when dealing with code from someone else, including the libraries one uses.


> But (hopefully) not all functions will return null, so you'll probably have at most a single-digit percentage of functions that do.

In a real world API, almost everything that returns an object can return null or throw an exception.

> The point is that by using Option, you are explicitly stating "This function can return null", making it impossible for the caller to ignore.

Throwing an exception when something must not be ignored is the idiom.

> If you write it somewhere into the docs, it is indeed easy to overlook.

It might be overlooked. But is it too much to assume that someone making a call will check the parameters and the return type?


There is a fundamental difference between throwing an exception and returning null. From http://en.wikipedia.org/wiki/Exception_handling: "Exception handling is the process of responding to the occurrence, during computation, of exceptions – anomalous or exceptional situations requiring special processing – often changing the normal flow of program execution." One should throw an exception when an anomalous situation appears, e.g. I cannot connect to the database. Whereas returning null / returning an Option means that this case needs to be treated in the normal flow of execution, e.g. asking a Person object for its spouse. It is perfectly reasonable that a random person isn't married (so throwing an exception is wrong), but at the same time it should be impossible for the caller to ignore.
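A minimal Python sketch of that distinction (Person and load_person are made up):

```python
from typing import Optional

class Person:
    def __init__(self, name: str, spouse: Optional[str] = None):
        self.name = name
        self.spouse = spouse

def load_person(db_ok: bool) -> Person:
    if not db_ok:
        # anomalous situation: raise an exception
        raise ConnectionError("cannot connect to the database")
    # normal flow: an unmarried person simply has spouse = None
    return Person("John")

p = load_person(True)
print(p.spouse is None)  # → True
```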

> It might be overlooked. But is it too much to assume that someone making a call will check the parameters and the return type?

http://news.ycombinator.com/item?id=4695587

Apparently it is too much to ask, even if the program is performing something super-important for security like SSL.


> It is perfectly reasonable that a random person isn't married (so throwing an exception is wrong) but at the same time it should be impossible for the caller to ignore.

The whole discussion started from the claim that Haskell makes NullPointerException/NoneType errors nonexistent. And I am just saying it isn't any different from how you enforce it in any other language - you either handle nulls or throw exceptions.

> (so throwing an exception is wrong)

I am sorry, but I don't play the "throwing an exception is wrong" game. I use exceptions for actual exceptions, control flow, and must-handle scenarios. I don't see what how exceptions are defined has to do with how I use them, if using them makes my program logic clear or matches my intent. The only reason I think twice before using exceptions for things which aren't exceptional is the stack trace that gets saved, which most of the time is so minuscule that it doesn't matter. Ruby has the best compromise in that it defines catch..throw; most languages don't, so I resort to using exceptions.


Sorry, but it is not a game, it is a convention that afaik holds true for ALL languages that use exceptions. Using exceptions for control flow is widely accepted as a code smell.


> Sorry, but it is not a game, it is a convention that afaik holds true for ALL languages that use exceptions. Using exceptions for control flow is widely accepted as a code smell.

Unless using exceptions either hinders performance (they are expensive, but I have yet to see a case where it matters) or makes the control flow incomprehensible, it doesn't matter what is widely accepted. I need a reason in the form "don't use exceptions because...", and "exceptions are for exceptional conditions" or "it's widely accepted" doesn't cut it.

I am pretty sure you have very strong opinions about goto as well, which I use a lot when writing C. It's simply the cleanest way to jump directly to the cleanup code instead of convoluting the flow with unneeded flags. Since you place much weight on what others deem acceptable, you can look at the Linux kernel code and Stevens' code in Unix Network Programming.

Also, exceptions are control flow in every sense of the word, though non-local: http://en.wikipedia.org/wiki/Control_flow#Exceptions. I don't know where the notion of exceptions not being control flow came from. Among other examples of exceptions used for control flow, Python and Ruby raise StopIteration in their iteration protocols. And in Python, exceptions aren't that costly.
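The StopIteration case is easy to see by driving an iterator by hand:

```python
# next() raises StopIteration when the iterator is exhausted; the for
# loop catches it silently -- exception-driven control flow built into
# the core language.
it = iter([1, 2])

print(next(it))  # → 1
print(next(it))  # → 2

try:
    next(it)
except StopIteration:
    print("done")  # → done
```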


You don't have to handle it explicitly everywhere. This is what Functor, Applicative, Alternative, Monoid and Monad are for. They will do the plumbing for you. Eventually you will unpack the values, but this is only necessary when you change environment.

Say we have a failing computation:

    failComp = Nothing

and a couple of succeeding computations:

    sucComp  = Just 1
    sucComp2 = Just 2

We can use the typeclasses to avoid explicit unpacking:

    -- Monad: result Just 3
    add = do x <- sucComp2
             y <- sucComp
             return $ x + y

    -- Applicative: result Just 3
    addapp = (+) <$> sucComp <*> sucComp2

    -- Alternative: result Just 1
    val = failComp <|> sucComp

    -- Monoid: result Just 2
    mon = failComp <> sucComp2

    -- Functor: result Just 6
    func = fmap (*3) sucComp2


(It's a little late now, but if you indent a line by 2 or more spaces, HN will format the text as code, so you don't lose the formatting.)


In Java, the compiler won't tell you that a variable may be null. In Haskell (at least when you compile with -Wall; strangely this particular warning isn't on by default), you'll get a warning if you've failed to handle all the variants of the data you've been given.

There's a good presentation by Yaron Minsky about OCaml in the real world where he cites this as a major advantage (in OCaml, failing to match all patterns is flagged by default).


> In Java, the compiler won't tell you that a variable may be null. In Haskell (at least when you compile with -Wall; strangely this particular warning isn't on by default), you'll get a warning if you've failed to handle all the variants of the data you've been given.

    Connection con = DriverManager.getConnection(
                         "jdbc:myDriver:myDatabase",
                         username,
                         password);

    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM Table1");

    while (rs.next()) {
        int x = rs.getInt("a");
        String s = rs.getString("b");
        float f = rs.getFloat("c");
    }
It's not that bad. In the above example, you either get an SQLException or a ResultSet. The ResultSet is your option type here: it won't be null - it might or might not contain values.

There might be a lot of good things about Haskell (I am not familiar enough to make a call), but seriously, Maybe doesn't look that great.


That's right; I've changed my comment to be a more relevant reply, sorry for the confusion.

> If you see the compiler forcing you to always use Maybe as an advantage, good for you. Personally, I don't see it as a big deal.

One of the reasons I like Maybe is that I could otherwise accidentally put null somewhere I know I should not; Haskell's type system prevents me from doing that. I wouldn't use Maybe unless I really needed to - whereas in languages with less strict type systems, almost everything is a 'Maybe'. Besides, it's easier to build a layer of abstraction on it, which you can reuse to save some time (for example, you have a monad instance for it - though I don't think I've ever used it).


> whereas in languages with less strict type systems, almost everything is a 'Maybe'

I don't know Haskell (I do know F#), so help me with this: how is everything not a Maybe in Haskell? In the languages I know, when you return an object from a method, it can throw an exception, return null, or return a valid value. When it throws an exception, there is no Maybe; otherwise it's a Maybe. How is it any different in Haskell?


i'm a huge fan of python, and yet, after just the most superficial introduction to haskell i couldn't agree with you more.

i don't think it is the static typing alone that saves time, though. it is the static typing plus type inference.


I like dynamic typing for short scripts, but after working on Fortune 500 projects, I got to love static typing.

Projects with 50+ developers using dynamic languages become unmanageable after a few months of development time, even with unit tests.

Nowadays I rather advocate statically typed languages with type inference. Most use cases where dynamic behavior is handy can be handled with reflection or meta-programming.



