The dueling rhetoric is the same rhetoric that has been around for decades: some people really feel type systems add value; others feel they're a ball and chain. So which is it? The answer is probably "yes" to both, and history keeps proving it. Most of the time you start with no type system for speed. Then you start adding weird checks and hacks (here's lookin' at you, clojure.spec). Then you rewrite with a type system.
I'm a devout Clojure developer. I think it delivers on the promises Rich Hickey outlines in his talk, but I also have no small appreciation for Haskell as an outrageously powerful language. Everyone robs from Haskell for their new shiny language, as they should. Unfortunately, not a night goes by when I don't ask God to make me smart enough to understand how a statement like "a monad is just a monoid in the category of endofunctors" can radically change how I implement marginally scalable applications that serve up JSON over REST. Clojure talks to me as if I were a child.
Rich Hickey is selling the case for Clojure, like any person who wants his or her language used should do. His arguments are mostly rational, but they are also partly a question of taste, which I feel he admits. As for this writer, I'm glad he ends by saying it isn't a flame war. If I had to go to war alongside another group of devs, it would almost certainly be Haskell devs.
> Most of the time you start with no type system for speed. Then you start adding weird checks and hacks (here's lookin' at you clojure.spec). Then you rewrite with a type system.
You seem to be in the camp of gradual typing, which Clojure falls into as well, though only experimentally. Racket, TypeScript, Shen, C#, and Dart are better examples of it.
> make me smart enough to understand how a statement like "a monad is just a monoid in the category of endofunctors" can radically change how I implement marginally scalable applications that serve up JSON over REST.
That's the thing: it doesn't radically change it. Static types are not powerful enough to cross remote boundaries. Also, monads don't need static types, and they fully exist in Clojure. Haskell is more than a language with a powerful static type checker; it's also a pure functional programming language. It will help you if you don't complect static types with functional programming. There are more design benefits from functional programming than from static types. Learning those can help you write better code, including JSON-over-REST-style applications.
Clojure and Haskell are a lot more similar than people think. Clojure is highly functional in nature, more so than most other programming languages. So is Haskell. Haskell just adds a static type checker on top, which forces you to add type annotations in certain places. It's like Clojure's core.typed, but mandatory and better designed.
The fact that Haskell is smarter than me is exactly why I have been keeping at it!
There is no fun left if you know all the overarching principles of a language and realize it still doesn't solve your problem. This happened to me when learning Python, and it's also why I don't really look at Go or Rust. They're good languages, and I might use them at a workplace someday, but you can get to the end of their semantics and still be left with the feeling that it's not enough.
I love how I just keep learning Haskell, and keep improving, no matter how much I've already learned.
That said, Python is also smarter than me. The possibilities with monkey patching and duck typing are endless. But unlike Haskell, Python is not a good teacher, so I tend only to create messes when I go out of my way exploring them.
> That said, Python is also smarter than me. The possibilities with monkey patching and duck typing are endless.
Don't do it, 99.9% of the time. It's that simple.
There is seldom a reason to use more than plain defs, plus a little syntactic sugar (like list comprehensions) to keep things readable.
Even the use of classes is typically a bad idea (if I do say so), simply because there is no advantage to using them, except when everything is super dynamic. And if that's the case, I suggest that's a smell, indicating that one is trying to do too many things at once without properly knowing the data.
Nevertheless, using classes (or any other superfluous feature) makes everything more complicated and less consistent: you need new ways to structure the program, new criteria for whether to go for a class or not and where to bolt methods on, and so on.
Don't use "mechanisms" like monkey patching just because they exist. They are actually not mechanisms, just curiosities arising from an implementation detail. The original goal is simplicity: make everything representable as a dict (in the case of Python).
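To make the "everything is a dict" point concrete, here's a small sketch (the `Config` class is invented for illustration): monkey patching "works" in Python only because attributes live in plain dicts, not because it is a designed mechanism.

```python
class Config:
    def __init__(self):
        self.host = "localhost"

c = Config()
print(c.__dict__)          # {'host': 'localhost'} - instance state is just a dict

c.port = 8080              # "monkey patching" an instance is a plain dict insert
print(c.__dict__["port"])  # 8080

# Patching the class works the same way: classes hold their methods in a dict too.
Config.describe = lambda self: f"{self.host}:{self.port}"
print(c.describe())        # localhost:8080
```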
> The possibilities with monkey patching and duck typing are endless.
I think there are many more "obvious" ways to do things in Haskell than in Python, just because you as a developer need to draw the line between static and dynamic. And if you later notice that you drew the line wrong, you have to rewrite everything.
In Python - or any other simple language - there is typically one obvious way to do things. At least to me.
Classes definitely give you a lot of rope to hang yourself with (metaclasses, inheritance, MULTIPLE inheritance), but they have their place. I'll usually start with a function, but when it gets too big, you need to split it up. Sometimes helper functions are enough, but sometimes you have a lot of state that you need to keep track of. If the options are passing around a kwargs dictionary or storing all that state on a class, I know which I'd pick.
You can memoize methods onto the instance to get lazy evaluation, properties can be explicitly defined up front, and the fact that everything is namespaced is nice. You can also make judicious use of @staticmethod to write functional code whenever possible.
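As a sketch of those patterns (the `Report` class is invented for illustration): `functools.cached_property` gives per-instance memoization and lazy evaluation, and `@staticmethod` keeps the purely functional parts free of instance state.

```python
from functools import cached_property

class Report:
    def __init__(self, values):
        self.values = values

    @cached_property
    def total(self):
        # Computed lazily on first access, then cached on the instance's __dict__.
        return sum(self.values)

    @staticmethod
    def normalize(values):
        # Pure function: touches no instance state, easy to test on its own.
        s = sum(values)
        return [v / s for v in values]

r = Report([1, 2, 3])
print(r.total)                   # 6 - computed once, then cached
print(Report.normalize([1, 1]))  # [0.5, 0.5]
```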
You can always opt for explicit dict passing. You are right that it's more typing work (and one can get it wrong...), but the resulting complexity is constant in the sense that it is obvious upfront, never growing, not dependent on other factors like number of dependencies etc.
When opting for explicit, complexity is not hidden and functions are not needlessly coupled to actual data. Personally I'm much more productive this way. Also because it makes me think through properly so I usually end up not needing a dict at all.
Regarding namespacing, python modules act as namespaces already. Also manual namespacing (namespacename+underscore) is not that bad, and technically avoids an indirection. I'm really a C programmer, and there I have to prefix manually and that's not a problem.
Yup, this open field to do whatever with metaclasses, inheritance, properties, etc. is what killed my interest. Since all this "multiple meta monkey patching" was possible, there was no way (for me) to tell what a good, elegant implementation looked like. Simple was not good enough, but complex had no rules.
> The fact that Haskell is smarter than me is exactly why I have been keeping at it!
I tend to think of Haskell as an eccentric professor.
Sometimes it's brilliant and what it's developed lets you do things that would be much harder in other ways.
Sometimes it just thinks it's clever, like the guy who uses long words and makes convoluted arguments about technicalities that no-one else can understand to look impressive, except that then someone who actually knows what they're talking about walks into the room and explains the same idea so clearly and simply that everyone is left wondering what all the fuss was about.
I tend to ignore 99% of the clever Haskell stuff and get by just fine in Haskell.
I keep learning about stuff like GADTs and whatnot, but they're more like the top of the tool drawer special tools than the ones you break out every day.
I think people learning/using Haskell tend to go for crazy generalized code first, versus what gets me to a minimal working thing that I can expand or change later.
Or I just suck at Haskell. Probably a little from column A and column B; for me, more sucking at Haskell than anything.
You suck at Haskell about as much as Don Stewart :) In this talk he describes how he builds large software systems in Haskell and eschews complicated type system features.
I'm not a Rust user per se, but I'm surprised to see it listed alongside Python and Go as a language without a lot of depth. Rust not only has quite an advanced type system (not Haskell-level, but certainly the most powerful of any comparably mainstream language), but it can also teach the user a lot about memory management and other low-level aspects of programming that Haskell (and many other languages) hide. I mostly write Haskell for my own projects, but one of these days I hope to get better at Rust.
Fwiw, the monad quote is actually pretty digestible if you know what monoids and functors are.
A functor is a container that you can reach into to perform some action on the thing inside (e.g. mapping the sqrt function on a list of ints). The endo bit just tells you that the functor isn't leaving the category (e.g. an object, in this case a Haskell type, when lifted into this functor context is still in the Haskell 'category'). A monoid is something we can smash together that also has an identity (e.g. strings form a monoid under concatenation and the empty string as an identity). So, in other words, monads are functors ('endofunctors') that we can smash together using bind/flatMap, and we have an identity in the form of the Id/Identity functor (a wrapper, essentially - `Id<A> = A`).
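For what it's worth, the same idea can be sketched in Python with plain lists and strings (a rough analogy to the comment above, not a formal treatment; the `bind` helper is invented here to stand in for flatMap):

```python
from math import sqrt

# Functor: reach into the container and apply a function to what's inside.
print(list(map(sqrt, [1, 4, 9])))  # [1.0, 2.0, 3.0]

# Monoid: strings smash together associatively, with "" as the identity.
assert ("foo" + "") + "bar" == "foo" + ("" + "bar") == "foobar"

# Monad: bind/flatMap maps each element to a container, then flattens,
# so nested containers get smashed together into one.
def bind(xs, f):
    return [y for x in xs for y in f(x)]

print(bind([1, 2, 3], lambda x: [x, -x]))  # [1, -1, 2, -2, 3, -3]
```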
>"a monad is just a monoid in the category of endofunctors"
Once I discovered that the Monad thing in Haskell has pretty much nothing to do with the Monad in Category Theory, everything made much more sense. As a bonus, I now (sort of) understand Category Theory. Much the same as Relational Databases don't have very much to do with Relational Algebra.
This is a very important point although I would slightly tweak this
> the Monad thing in Haskell has pretty much nothing to do with the Monad in Category Theory
to "the Monad thing in Haskell is a very simple special case of the Monad in Category Theory". Thinking you have to "learn category theory" before you can use a Monad in Haskell is like thinking you have to learn this
Hopefully we all agree that static types and dynamic types are useful. Those who use hyperbole are attempting some form of splitting. I think the point where we disagree is what the default should be. The truth is this discussion will rage on into oblivion because dynamic types and static types form a duality. One cannot exist without the other and they will forever be entangled in conflict.
Well, I think that static types are much more useful than dynamic ones. Static types allow you to find errors in your program before execution, and that is very important. And if you are going to go through the effort of defining types, it is much better to use static types, because then you get this additional error checking. Furthermore, with static types the compiler can help in other ways, e.g. by organizing your data in memory much more efficiently.
I am not sure what you mean when you talk about the duality of static and dynamic types. One can exist without the other and most statically typed languages either forbid or strongly discourage dynamic typing.
> Static types allow you to find errors with your program before execution and that is very important.
It depends on how valuable it is in your situation to be able to run a program that contains type errors.
Sometimes it's a net win. If I'm prototyping an algorithm, and can ignore the type errors so I can learn faster, that's a win. If I'm running a startup and want to just put something out there so that I can see if the market exists (or see what I should have built), it's a net win.
Sometimes it's a net loss. If I'm building an embedded system, it's likely a net loss. If I'm building something safety-critical, it's almost certainly a net loss. If I'm dealing with big money, it's almost certainly at least a big enough risk of a net loss that I can't do it.
Forget ideology. Choose the right tools for the situation.
This is a rather glib response. Of course one should choose the right tools for the situation. Personally if I'm prototyping an algorithm I'd rather do it with types so I don't write any code that was clearly nonsense before I even tried to run it.
Personally, I work the same way you do. But I've heard enough people who want the faster feedback of a REPL-like environment to accept that their approach at least feels more productive to them. It may even be more productive - for them. If so, tying them down with type specifications would slow them down, at least in the prototyping phase.
That certainly seems like a reasonable hypothesis to explore and I'm curious to try a Haskell "EDN"-like type as defined in the article to see if that helps me prototype faster!
> One can exist without the other and most statically typed languages either forbid or strongly discourage dynamic typing.
It never seemed like that much of a prohibition to me. Dynamic types take one grand universe of "values" and divide it up in ways that (ideally) reflect differences in those values -- the number six is a different kind of thing than the string with the three letters s i x -- but what the types are is sort of arbitrary. Is an int/string pair a different type than a float/float pair? Is positive-integer a type in its own right? Is every int a rational, or just convertible into a rational? What if you have union types? After using enough dynamically typed languages, the only common factor that I'm confident holds across the whole design space is that a dynamic type is a set of values. That means static typing still leaves you free to define dynamic types that refine the classification of values imposed by your static types, and people do program with pre-/postconditions not stated in types. You just don't get the compiler's help ensuring your code is safe with regard to your own distinctions (unless maybe you build your own refinement type system on top of your language of choice).
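As a small illustrative sketch (the function is invented), this is what a dynamic type refining a static classification can look like in gradually typed Python: the annotation says `int`, and a runtime precondition narrows it to positive ints, a distinction the static checker doesn't track for us.

```python
def reciprocal(n: int) -> float:
    # The static annotation admits any int; the runtime check enforces the
    # finer "positive int" type that exists only as a set of values.
    assert n > 0, "precondition: n must be a positive integer"
    return 1.0 / n

print(reciprocal(4))  # 0.25
# reciprocal(-1) satisfies the static type but fails the runtime refinement.
```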
By a similar process, dynamic typing leaves you free to define and follow your own static discipline even if a lot of programmers prefer not to. This is more or less why How to Design Programs is written/taught using a dynamic language. The static type-like discipline is a huge aspect of the curriculum, but the authors don't want to force students to commit to one specific language's typing discipline.
Dynamic types are definitely more useful. That's why in the last 40 years we've almost never had a language without them. That's why Haskell also has dynamic runtime types. Erasing them at runtime, and no longer checking their validity at runtime, would be folly.
Static types are an extension; they say: do not allow types to be defined only at runtime when it can be avoided. It's not always avoidable, which is why statically typed languages also include a runtime dynamic type system.
The debate is whether the benefit of static checks on types outweighs the cost of spending time helping the type checker figure out the types at compile time, and of limiting the use of constructs that are too dynamic for the static checker to understand at compile time. That's the "dichotomy." To which someone proclaims: "Can we have a static type checker which adds no extra burden to the programmer and no limits to the kind of code he wants to write?" To which the OP has missed the point entirely and simply shown that you can spend more time giving the Haskell type checker info about EDN, and gain nothing, since it's now neither more useful nor less effort. Which was a bit dumb, but he did remark that he did it for fun and laughs, not in seriousness.
A more interesting debate for me would be the strengths and weaknesses of gradual/external typing systems like TypeScript/Flow and MyPy vs. something like clojure.spec, especially since there are still dynamic languages like Ruby that haven't really adopted a system like this yet.
Amusingly, history is showing that there are as many rewrites from type systems to no type systems as the reverse. Consider all of the things that are getting redone in the JavaScript world.