Hacker News

I will probably get stomped on for this, but to me it's a giant elephant in the room.

When that one purist on the team proposes mandating 100%, 80%, or any other hard number for unit test coverage, most sane people tend to disagree, understanding its wild impracticality and counter-productivity.

However, when one (e.g., a Haskeller) says that the code must universally pass 100% strong static type verification coverage -- arguably SO MUCH worse cost/benefit than unit testing -- and you say "no, that's impractical," it's suddenly a different game.

The elephant in the room is this. Haskell and other strongly typed languages are highly impractical, academic pursuits. They do not respect the dynamism of the real world at all. You can certainly build production systems in any language, including Haskell, but by picking Haskell you are throwing sticks between your legs unnecessarily.




Just because an expressive static type system is available doesn't mean you have to use all of its expressive power. There's nothing stopping you from writing your system using little more than String, Int, lists, IO, etc. Of course, you probably have a higher chance of bugs, but that might be the right tradeoff for your use case.

I agree that Haskellers have a tendency to disregard the cost of using sophisticated type machinery. Which is unfortunate, because Haskell's greatest strength (IMO) is its ability to opt in to strong static verification for the 10% of code where it provides the most value.


> Just because an expressive static type system is available doesn't mean you have to use all of its expressive power. There's nothing stopping you from writing your system using little more than String, Int, lists, IO, etc. Of course, you probably have a higher chance of bugs, but that might be the right tradeoff for your use case.

This is not accurate -- or prove me wrong. Opting out of Haskell's type system may be "do-able," but only in a limited sense. And even so the programming ergonomics will be terrible vs. a language designed with typelessness in mind.


> And even so the programming ergonomics will be terrible vs. a language designed with typelessness in mind.

Not my experience at all. I switched from JavaScript to TypeScript and I am way more productive now. So it is definitely not true in general that "the programming ergonomics will be terrible vs. a language designed with typelessness in mind".


This is opinion. Haskellers have opinions too. Whatever you like and whatever gets the job done: bravo.


Nope. Not everything is "every way is just as good as the other".

> Whatever you like and whatever gets the job done: bravo

"The job" is this: it's an optimization problem of maximizing development throughput. That's the job in the business world, at least.

There can and should be a conversation about doing this job well.

Mandating strong static typing across the board is exactly what Haskell does. My claim is this is death to productivity/throughput when you compare it to alternatives. The death part can be argued -- the best leg Haskellers can stand on, afaict, is that overall throughput is increased due to decreased attention to the kinds of bugs prevented by strong typing. This needs to be proven by them.

Otherwise -- by definition -- strong type systems are asking you to take special care to satisfy a type proof that is not necessary for implementing correctness in other PLs.

Every time I've implemented a solution in a strongly typed system, I run into the type prover complaining about something that might be the case but that I can prove is not. Here is the limit of the type checker -- it is simply not aware enough to understand what we're doing. So it imposes needless requirements left and right. This is hostile to productivity.
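
A small TypeScript sketch of the kind of friction being described (the names here are invented for illustration): the checker demands a guard for a case the programmer can reason away.

```typescript
// The programmer "knows" the key is always present, but the checker cannot.
const ages = new Map<string, number>([["alice", 30], ["bob", 25]]);

// Map.get is typed as returning `number | undefined`, so a guard (or a
// non-null assertion) is required even when we can prove the key exists.
const age = ages.get("alice");
if (age === undefined) {
  throw new Error("unreachable by our reasoning, but the checker can't know that");
}
const nextYear: number = age + 1;
console.log(nextYear); // 31
```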

Now you go.


I believe you're exaggerating the frequency with which the compiler rejects valid programs. It happens, but no more frequently than random runtime type errors occur in dynamic code.

In any case, one can just as easily argue that the value of static types comes not just when the code is first written, but under the legion of modifications that need to occur. Not to mention how self-documenting the types are, which also aids modification.

Some people just don't like static typing, which is fine, but making the statement that it's categorically worse is hard to defend.


This is not an opinion thing. If you're saying that type checkers reject valid code at no worse a rate than dynamic runtimes hit type errors, then you are just plain wrong. In any case, the burden is on you to show this with at least the slightest sketch of a proof of how it is possible. And what is this magic type system that can do this? Because it sounds like some super AI.

The reason I’m confident in my objections is that this is what I get from static typing fundamentalists: never a concession as to its cons and costs. Is it that static typers can’t see the relative costs because they are not using the full power of dynamic languages?


> the full power of dynamic languages?

There is nothing you can do in a dynamic language you can't do in a typed language. There is no "special power" that only dynamic languages have. If you truly believe that dynamic languages have "special powers" that typed languages can't have, then I recommend studying a bit of basic computer science, starting with the Turing Machine or the Lambda Calculus.


> There is nothing you can do in a dynamic language you can't do in a typed language.

Likewise, there is nothing one can do in a higher-level PL that one can't do in assembly.

The argument is not what one can or can't do.

The argument is that one world (dynamism) allows for higher throughput than a static-checked environment. This can be discussed but the "can do" argument is of no interest.

You "can do" anything given enough time. One doesn't even need a programming language. You can manually execute instructions by hand. But this is not the discussion.


> The argument is that one world (dynamism) allows for higher throughput than a static-checked environment.

I have programmed in both dynamic languages and typed languages for 35+ years. And I am most definitely more productive using a typed language. Otherwise I wouldn't. So I wonder why our experiences are so different?


> This is not an opinion thing.

Of course it is an opinion thing.

> never a concession as to its cons and costs.

Pretty much what I get all the time from people who don't know how to use type systems productively, and who then think that because they can't, nobody else can. If types don't work for you, don't use them. They work for me, so I use them.


I can use strong typing productively. I can use dynamic systems _way_ more productively.

When all things are equal, the impedance is not from lack of skill. It is staring you straight in the face: it's a rules-based type prover that -- by definition -- significantly restricts the set of valid programs.

This is the definition of a type prover.

The argument should be that the value of this restriction outweighs the cost. But no one is yet making this claim. Which should make one wonder.


> But no one is yet making this claim. Which should make one wonder.

I for one am most definitely making that claim, based on 35+ years of programming in both dynamically typed and statically typed programming languages. But clearly your experience is different.


> My claim is this is death to productivity/throughput when you compare it to alternatives.

A claim with no evidence to back it up. My experience (for example) is exactly the opposite.


If it happens every time then it should be easy for you to give a small example. Please do! It’s very hard to understand what you mean otherwise.


Pattern matching, for one. I can prove that 3 cases won’t be met but I have to ‘fill them in’ anyway.

ADTs with non-Maybe fields. I can prove that all is fine if this ‘slot’ isn’t filled in.

Trying to pass an opaque reference down a function call chain.

There are three.


Sure, thanks, I know where Haskell's type system will ask you to try again if you haven't provided all the information it wants. I'm asking for examples where it genuinely wasn't sensible to change your code to match what the compiler wants.

> Pattern matching, for one. I can prove that 3 cases won’t be met but I have to ‘fill them in’ anyway.

This is a funny one because, no, GHC will not ask you to fill them in! You have to turn on a warning for it to complain. The fact that the warning is not on by default is one of my pet peeves with GHC!
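
For readers more at home in TypeScript, a rough analogue (hypothetical example, not GHC's behavior itself): a non-exhaustive switch compiles without complaint by default there too, and exhaustiveness checking is opt-in.

```typescript
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number }
  | { kind: "triangle"; base: number; height: number };

// This switch deliberately omits "triangle". By default the compiler
// accepts it, much as GHC only warns on incomplete patterns when asked.
function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius * s.radius;
    case "square":
      return s.side * s.side;
  }
  // The unhandled case falls through to a runtime failure instead.
  throw new Error(`unhandled shape: ${s.kind}`);
}

console.log(area({ kind: "square", side: 3 })); // 9
```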

> ADTs with non-Maybe fields. I can prove that all is fine if this ‘slot’ isn’t filled in.

Another funny one, because you can just use `undefined` or `error` and GHC won't complain! Personally I'd probably prefer that it did.

> Trying to pass an opaque reference down a function call chain.

I don't even know what you mean by that.


> Pattern matching, for one. I can prove that 3 cases won’t be met but I have to ‘fill them in’ anyway.

>> This is a funny one because, no, GHC will not ask you to fill them in! You have to turn on a warning for it to complain. The fact that the warning is not on by default is one of my pet peeves with GHC!

Okay, so let's do the inverse then. I've got a pattern-matched case that the type prover proves will never occur (so it raises a type error), but I want it to occur.

The answer I know is obvious: I have to add it to the type union. But it should be obvious at this point that the moves I am making are not driven by the outcome I want but by the rules of the type checker. At some point you have to say, Why am I playing these games? Do they really yield tangible, business-facing value?

> ADTs with non-Maybe fields. I can prove that all is fine if this ‘slot’ isn’t filled in.

>>Another funny one, because you can just use `undefined` or `error` and GHC won't complain! Personally I'd probably prefer that it did.

But I don't want an error. I can prove that the absence of the field is OK/not-an-error.

If you say then make it a Maybe, I will tell you sorry I don't have the time to update my entire code-base to contend w/ this novelty to the type checker (I say to the type checker, but this is never a novelty to me and my dynamic code).

> Trying to pass an opaque reference down a function call chain.

>> I don't even know what you mean by that.

my value --(flows at runtime through)--> [some framework, say a web framework code to a, say, a Label that has no idea about my value's type domain] --(flows through pixels to)--> user's eyes


It's really interesting because I hear these kinds of claims a lot from Clojure programmers. How did they arise in the community? I find it hard to believe that so many Clojure programmers have actually tried Haskell and come to exactly the same conclusion.

Does Rich Hickey repeat these points somewhere? If so could you point me to where? I'd love to see them. I'm a big fan of Are We There Yet, Simple Made Easy etc. and find Haskell a great implementation of them.

> I have to add it to the type union. But it should be obvious at this point that the moves I am making are not driven by the outcome I want but by the rules of the type checker.

That's not how I see it at all. I wish I could understand why you do see it that way!

> But I don't want an error. I can prove that the absence of the field is OK/not-an-error.

You misunderstand. If you know that a field of a data type is going to be unused you can fill it in with an `error` call that you know will never be hit. You can even omit the field entirely! That's considered bad form but if you like it then knock yourself out.

    data Foo = Foo { bar  :: Int
                   , baz  :: String
                   , quux :: Bool
                   }
    
    doSomething :: Foo -> IO ()
    doSomething foo = do
      print (bar foo)
      print (baz foo)
    
    example1 = Foo { bar  = 42
                   , baz  = "Hello"
                   , quux = error "quux will never be inspected"
                   }
    
    -- > doSomething example1
    -- 42
    -- "Hello"
    
    example2 = Foo { bar  = 42
                   , baz  = "Hello"
                   }
    
    -- > doSomething example2
    -- 42
    -- "Hello"

> my value --(flows at runtime through)--> [some framework, say a web framework code to a, say, a Label that has no idea about my value's type domain] --(flows through pixels to)--> user's eyes

What's wrong with a polymorphic parameter?
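
For concreteness, a hedged TypeScript sketch of what that can look like (the framework function here is invented): a type parameter lets the framework thread a value it knows nothing about.

```typescript
// A toy "framework" function that threads a value it never inspects:
// the type parameter T keeps the caller's value fully opaque.
function renderLabel<T>(value: T, show: (v: T) => string): string {
  // The framework can only apply the caller-supplied `show`.
  return `<label>${show(value)}</label>`;
}

// The caller's domain type, unknown to the framework:
type Temperature = { celsius: number };
const t: Temperature = { celsius: 21 };
const html = renderLabel(t, (v) => `${v.celsius}C`);
console.log(html); // <label>21C</label>
```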


What I find crazy about these conversations is that I will say “I don’t like that I have to do X...”

And the response is always “But that’s easy you just do X...”

The argument is not that you can’t do some task in Haskell.

With enough time you can do anything in any language.

That’s the given.

The argument is I want throughput.

If I’m doing X I want to do it because it is necessary to the problem solution.

This is why you hear these laments from Clojure programmers. Because when we code we are doing the minimal amount of work to produce correct programs.

Strongly typed systems add fanfare that Clojure programmers have found no need for.


Thank you for your response.

I'm afraid I don't understand. Your point seems backwards. On the specific point in question you said (paraphrasing) you don't like filling in a field of an ADT if you can prove it's not used. I pointed out that in Haskell you indeed need not fill it in. You said (paraphrasing) you don't like matching on a constructor if you can prove it's not used. I pointed out that in Haskell you indeed need not match on it. You don't want to do X. I'm telling you that you don't need to do X!

Anyway, I'm more interested in the general point. Where do Clojurians get these ideas about Haskell? Are there specific blog posts or videos you can point me to? I'd rather confront the general case than specific instances buried deep down in forum threads.


Thanks for your reply, as well.

But there’s no indoctrination going on. It’s right there:

> If you know that a field of a data type is going to be unused you can fill it in with an `error`

I don’t want to fill anything in.

I don’t want to have to tell the checker what I know and the world already knows just because it thinks it needs to know it.

There’s no indoctrination. We don’t have to “fill in” anything in Clojure.

I wonder sometimes if this is a thing where you can’t see the air because it’s all around you?

In static-type space you are (by definition) a slave to the type checker. Sometimes static-type enthusiasts will say they lean on the checker, it helps them. But I have a weird sense that this is a kind of Stockholm Syndrome.

Sorry I’m a bit caustic sometimes, but it’s all good natured I promise - the hope is it draws out a point that will prove me wrong.

You clearly know your domain well, and I appreciate the back & forth. But I do stand by my position (until knocked out of it.)


> Thanks for your reply, as well.

I'm glad we're still going deep down in this comment thread after everyone else has gone home.

> > If you know that a field of a data type is going to be unused you can fill it in with an `error`

> I don’t want to fill anything in.

You don't have to! I said as much a couple of comments back "You can even omit the field entirely!". I gave a code example demonstrating that.

Anyway, (what I assume to be) your larger point still stands, i.e. that you don't even want to have to define the data type upfront.

I'm a huge fan of Rich Hickey's talks "Simple Made Easy", "Are We There Yet?", etc. Haskell is such a good implementation of his philosophy, yet he doesn't seem to realise it. Clojure fans are some of the most vociferously opposed to Haskell.


Sorry for mishearing your prior point about omitting the field entirely.

But let's set that aside and focus on the pre-declaring of your ADTs, as you suggest.

This is no trivial matter. In fact it is the crux of the issue, or at least a large part of it.

Pre-declaring what knowledge is "allowed" to flow through your system -- even if you throw all other objections away -- this one is still an enormous blocker for the Clojurist. (Well, I should speak only for myself, at least... but I suspect.)

Clojurists rely almost universally on ad hoc knowledge (i.e., dynamic maps), with zero ceremony required before introducing and capitalizing on this knowledge. Clojure is the perfect 'glove' for handling these ad hoc packets of knowledge; it was explicitly designed, afaict, around this exact way of handling data.

So if Haskell is not oriented around this same construct then I don't think your read that Hickey's philosophy is much-realized by Haskell is accurate.

You can take this further and make the very obvious point that in real life we never ever have to go through any pre-declarations or ceremony like this.

And in my experience, if productivity in coding is a function of anything, it is _how close the programming model is to your natural way of thinking about the world_. When that cognitive impedance is reduced, massive creativity is unleashed.


> Sorry for mishearing your prior point about omitting the field entirely.

That's OK. Forum discussions are not the highest fidelity communication medium.

What I mean about Rich Hickey's philosophy and Haskell are things like the quotations below. I think they're great insights and they apply really well to Haskell, at least to Haskell written in the way that a significant subset of us like to write it. On the other hand I'm not much convinced by arguments about the "natural way of thinking about the world". Firstly, imperative programming proponents are convinced that the world is imperative and it is natural for humans to think imperatively. Secondly, I became a much better programmer using Haskell than using Python, despite Python naturally "fitting my brain" (a popular, and valid, quote amongst Python programmers) and me having to "fit my brain to" Haskell.

Anyway, that's mostly just my opinion. On the other hand, an objective technical innovation that could bring a huge amount of ergonomic benefit to Haskell is a type system feature called "row types". That would allow us to have anonymous records which we could add fields to and delete fields from at will, just like in an untyped language, whilst also providing the type system guarantees that we're used to. I think that would be hugely beneficial but for some reason that doesn't seem to be on the horizon.
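
TypeScript's structural object types give a rough taste of the workflow described (this is only an analogy, not the row types proposed for Haskell): anonymous records that can be widened and narrowed while staying checked.

```typescript
// An anonymous record: no upfront declaration needed.
const user = { name: "Ada", age: 36 };

// "Add a field": spread into a wider anonymous record.
const withEmail = { ...user, email: "ada@example.com" };

// "Delete a field": destructure it away; `rest` no longer has `email`.
const { email, ...rest } = withEmail;

// The checker tracks the exact shape at every step.
const greeting: string = `${withEmail.name} <${withEmail.email}>`;
console.log(greeting); // Ada <ada@example.com>
console.log(Object.keys(rest).sort().join(",")); // age,name
```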

====

Are We There Yet?

* "Someone who's building houses out of bricks does not have to worry about the insides of the bricks"

* "The best units for that are the pure functions. Takes immutable values, does something with them, has no effect on the outside world. Returns another immutable value."

* "Mutable state - it makes no sense. Mutable objects - they make no sense."

The Value of Values

* "Place itself has no role in an information model. It is only an implementation detail."

* "Values can be shared freely and the way you share them is by aliasing them."

* "Reproducible results are another great benefit of values."

* "Easy to fabricate"

* "Values make the best interface"

* "Values aggregate ("compose"); PLOP doesn't compose"

The Language Of The System

* "What's wrong with SQL is that there's no machine interface to SQL. They only designed a human interface to SQL." (foreshadowing embedded domain specific languages)

* "We want to seek out and build libraries and languages that are like instruments in all the ways I just talked about, in particular the simplicity aspect."

====


I take your point that all of these Hickey comments resonate with ideals of Haskell.

The last piece of the puzzle would be to discuss why Hickey -- in addition to making these points -- is also overtly against static typing and the kinds of idioms that show up in static-type-oriented programming (like ADTs and Maybes etc. -- he rails against these as well; see the "Effective Programs" talk for one citation).

So we may be at the point where we're both like "Wow, Hickey is so right about all these points," but where you depart from him on this static typing question. (Which should be somewhat interesting: it is such a big/fundamental thing, and his experience has led him to all these great points, and yet he is making a huge misstep in his opinion about types? This is no point to 'win' the argument on or anything, but it should be of interest.)

Anyway -- so we have these problems with ADTs and Maybes and such, and then perhaps we can get past those and say, well, what about "row types"?

If we squeeze the toothpaste tube down to the point where we can say, yes, no ADTs, just row types... then we're not far off. But it should also be noted that at this point the delta between statically typed and not is possibly small enough that the comparison is of not much interest. It's pretty easy at this point (it seems to me) to say, Cool, this type system can tell me if I try to access an "age" property as a String.

Okay, I guess that's kinda cool, but I think the benefit at this point is not earth-shattering.

And I also think at this point that I for one would get tired of having to do explicit type coercions when indeed I did want a nil or I did want a String.

I would at this point just keep squeezing the tube until I had Clojure.

In other words, I think Rich Hickey didn't just get those other points right and this one wrong; I think he really did factor in the whole picture. I think he made the right choice, and he stands by it, of course, after years of Clojure's existence. And he is certainly one to keep revisiting a problem when he thinks it's not quite right. And I also agree: I think his aversion to types is correct from a practitioner's perspective.

Obviously this stuff has some subjectivity to it (in a sense), but I also participate in these conversations so fiercely because it's not just subjective. There is a real consequence in terms of hours-to-delivery that I really do think matters. And I think you can build stable, correct programs (this is demonstrably true) w/o having to contend with a type checker. And no matter which way you slice it, you do have to pay for the type checker.

The kinds of things that static typers usually say they're getting from the type checker, I have just never found a need for. And when I've been forced to use a type checker, my productivity increases the more I find ways to work around it. All of this AND I'm running production, serving customers with demonstrably stable/correct software. So what is the type checker going to give me?

To the point of the cognitive impedance between programming and thinking -- I don't really care when people claim that XYZ is more conducive to thinking. To say we think imperatively (and I know people say that) is just wrong. I'm not talking about how novice programmers want to program. I'm talking about how we as people in the real world think -- how we talk, how our brains work in general. We think very declaratively; imperative thinking slows down most if not all human brains. Also, we think in terms of vocabularies (words with semantics); we do not think in ADTs and taxonomies of types. Whenever I'm forced to think in ADTs in a code base, my creativity sinks to the floor. Because all an ADT is is someone's conception of what some idea was at a given (static) point in time. So when I think of a creative use for a concept, I often can't pursue it if an entire code base is predicated on a static conceptualization.

The static typer will claim, "Look, you can change it and everything will break and the compiler will tell you where to fix it." This is looked at as a feature. It is not a feature. In my dynamic code base these changes are severely localized so that a creative decision over here does not cause a production capability over there to fall over.

Sorry if this is a bit abstract. Wondering if any of it resonates or if it is coming across too abstract. To me it's very tangible in practice. I want to be able to make ruthless localized changes in my code bases without the entirety of it breaking. This latter thing is usually what happens in programs that are "wholly proved" and where every piece of code is connected to most other pieces via... types.


At this point I wonder if we're at the end of our journey, at least for now. I get so much value out of Haskell's type system I can't ever imagine giving it up. Row types would be the cherry on top (if they're ever implemented). It seems that you get so much value out of the lack of a type system that you can't imagine ever giving it up. I have no basis on which to understand this point of view. It just leaves me dumbfounded. Perhaps it doesn't help that my most familiar point of reference for untyped programming is Python. I guess Clojure is very different.

> In my dynamic code base these changes are severely localized so that a creative decision over here does not cause a production capability over there to fall over.

This is a very specific claim which it might be helpful to dig into further. How can you ensure that removing a field from a record at the start of a pipeline will not cause a function at the end of the pipeline to fall over, since it was expecting to see the field but didn't (and the intermediate parts of the pipeline were completely agnostic to it)?


Firstly, I think there's some sort of time limit on posting on HN stories. If we get cut off then please email me because it would be interesting to finish off the discussion!

http://web.jaguarpaw.co.uk/~tom/contact/


Right. And when another programmer changes the ADT so that this is no longer true, your dynamic language will now crash in production instead of telling you right away, before putting it into production? Or are you telling me that your code always has 100% test coverage checking all possible execution paths for all pattern matches?


Here’s another that kills me every time: I want to compose where I know types are mismatched, and I am okay with a runtime fault downstream.

This is hugely powerful for moving very fast in, e.g., a REPL-driven development environment, where you build and verify pieces of code incrementally.

Not being able to do this is an enormous productivity killer, relative to, say, a dynamic language like Clojure.


> I want to compose where I know types are mismatched and I am okay with a runtime fault downstream.

Sounds like another good candidate for `error` or `undefined`.
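
To make that suggestion concrete, here is a hedged TypeScript analogue (the `todo` helper is invented for illustration): a placeholder that claims any type, so mismatched or unfinished pieces typecheck now and only fault if actually reached at runtime.

```typescript
// `todo` claims to produce any requested type but throws if evaluated,
// playing roughly the role `undefined`/`error` play in Haskell.
function todo<T>(msg: string): T {
  throw new Error(`todo: ${msg}`);
}

type User = { id: number; name: string };

function loadUser(id: number): User {
  // Stubbed out during incremental, REPL-style development:
  return todo<User>("loadUser not implemented yet");
}

function describeUser(u: User): string {
  return `${u.name} (#${u.id})`;
}

// Finished parts still run; only the stub faults when exercised.
console.log(describeUser({ id: 1, name: "Ada" })); // Ada (#1)
try {
  loadUser(1);
} catch (e) {
  console.log((e as Error).message); // todo: loadUser not implemented yet
}
```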


And these are insanely expensive to deal with over time when you compare to the alternative.


> insanely expensive

Wow that sounds really expensive! How did you actually measure that in the real world?


I spent too many years programming in all kinds of dynamically typed and statically typed systems and I kept a clear head and didn't cling to a blind religious persuasion. I've also watched the performance of static type enthusiasts closely.

I thought that when I updated a type, all of my running around fixing type errors had to do with me missing something, but alas that was not the case. This is how the static enthusiasts do their work, all the while claiming that they are being more productive. When I call a static typer out on this, the answer is always the same: "But you'd have to go update those case statements/maybe matches/type signatures anyway!" Meanwhile I'm over in a dynamic system not having to do any of that. Refactors are minimal, not cross-the-entire-codebase affairs.

Static type systems are training wheels. When a static type enthusiast takes his training wheels off, the bike falls over and s/he screams "See! I told you. I'm more productive with types!" And then they put the training wheels back on and think that they have proven that static verification is superior. What they don't know is that there is enormous missed opportunity once you learn how to ride a bike w/o training wheels.

The truth is you can produce much faster without a rules-based static type prover between you and your runtime environment, if you know what you're doing. This should be obvious because in one world you have a filter/prover that you must pass to ship. In the other world you do not. If the filter/prover were worth its weight then you should be able to clearly see the win. But in business systems the kinds of bugs found by filter/provers are of the "null pointer" variety -- the fastest and simplest bugs to fix when they are found; having a heavyweight filter/prover save me from these simplest of bugs is simply not worth it.


If you have a place where you emit sum types, and you increase the set of possible cases you can emit, then you do have to cross the entire codebase to update all consumers of that sum type to handle the new case.

If you have a place where you consume product types, and you increase the set of fields that you inspect on that product type, then you do have to cross the entire codebase to update all producers of that product type to emit the new field.

I don't see how it could be any different. How would you handle this in Clojure?
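
A hedged TypeScript sketch of the sum-type half of this (names invented): with an exhaustiveness helper, adding a case to the union turns every stale consumer into a compile error, which is exactly the cross-codebase update being described.

```typescript
type Msg =
  | { kind: "created"; id: number }
  | { kind: "deleted"; id: number };
// Adding `| { kind: "renamed"; id: number; name: string }` later would
// make `assertNever` below a type error in every consumer, pointing the
// compiler at each switch that needs updating.

function assertNever(x: never): never {
  throw new Error(`unhandled case: ${JSON.stringify(x)}`);
}

function showMsg(m: Msg): string {
  switch (m.kind) {
    case "created":
      return `created #${m.id}`;
    case "deleted":
      return `deleted #${m.id}`;
    default:
      return assertNever(m);
  }
}

console.log(showMsg({ kind: "created", id: 7 })); // created #7
```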


Look I have been programming for 35+ years in both dynamic and typed languages. And my experience is that I am more productive when using a typed language. It sounds as if your experience is the opposite. I wonder why our experiences are so different? It would be interesting to find out.


astuary is not suggesting opting out of Haskell’s type system. He/she is suggesting using constructs whose interaction with the type system is straightforward.


> SO MUCH worse cost/benefit than unit testing

My experience is exactly the opposite. Using the type system is less effort, guides your development more and, in a way, provides more consistent guarantees than spending time on a wide unit test coverage.

Obviously, these things are not totally mutually exclusive though.


If you are truly doing an apples-to-apples comparison here, I suspect you're using a strict subset of your dynamic language. Otherwise I can't see how the addition of a rule-based static-check pass/filter makes you faster.

What are the two static & dynamic languages you’re comparing by the way?


For example, right now I'm building a Servant API in Haskell. It's basically a strongly typed API that you don't need to validate with tests. You only need to test the handlers that handle business logic.

In other languages with less sophisticated type systems I would need to build validation functions manually and, obviously, test them. Hell, an interpreted language might even boot up with functions containing faulty code, ultimately producing runtime errors when invoked. Haskell generally would not (because of the type system). Adding type constraints basically limits the range of possible inputs/outputs, requiring you to write far fewer tests.
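
For contrast, a hedged TypeScript sketch of the manual boundary validation being described (the request shape and function names are invented): untrusted JSON is checked by hand before the typed handler sees it.

```typescript
type CreateUserRequest = { name: string; age: number };

// A hand-rolled validator: turn untrusted input into a typed value or
// fail loudly at the boundary. This is the kind of work a Servant-style
// typed API largely eliminates.
function parseCreateUser(input: unknown): CreateUserRequest {
  if (typeof input !== "object" || input === null) {
    throw new Error("expected an object");
  }
  const o = input as Record<string, unknown>;
  if (typeof o.name !== "string") throw new Error("name must be a string");
  if (typeof o.age !== "number") throw new Error("age must be a number");
  return { name: o.name, age: o.age };
}

// Past this point the handler works with a fully typed value.
const req = parseCreateUser(JSON.parse('{"name":"Ada","age":36}'));
console.log(req.name, req.age); // Ada 36
```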


My examples would be JavaScript vs. TypeScript. I am SO much more productive in TypeScript than in JavaScript, and the number of bugs has dropped substantially.


> by picking Haskell you are throwing sticks between your legs unnecessarily.

You clearly have no idea how types work and how to use them to make you more productive as a developer. Switching from JavaScript to TypeScript made me so much more productive.



