The dueling rhetoric is the same rhetoric that has been around for decades: some people really feel type systems add value; others feel they're a ball and chain. So which is it? The answer is probably "both," and history keeps bearing this out. Most of the time you start with no type system for speed. Then you start adding weird checks and hacks (here's lookin' at you, clojure.spec). Then you rewrite with a type system.
I'm a devout Clojure developer. I think it delivers on the promises Rich Hickey outlines in his talk, but I also have no small appreciation for Haskell as an outrageously powerful language. Everyone robs from Haskell for their new shiny language, as they should. Unfortunately, not a night goes by where I don't ask God to make me smart enough to understand how a statement like "a monad is just a monoid in the category of endofunctors" can radically change how I implement marginally scalable applications that serve up JSON over REST. Clojure talks to me as if I were a child.
Rich Hickey is selling the case for Clojure, as any person who wants his or her language used should. His arguments are mostly rational, but they are also partly a question of taste, which I feel he admits. As for this writer, I'm glad he ends by saying it isn't a flame war. If I had to go to war alongside another group of devs, it would almost certainly be Haskell devs.
> Most of the time you start with no type system for speed. Then you start adding weird checks and hacks (here's lookin' at you clojure.spec). Then you rewrite with a type system.
You seem to be in the camp of gradual typing, which Clojure falls into as well, though only experimentally. Racket, TypeScript, Shen, C#, and Dart are better examples of it.
> make me smart enough to understand how a statement like "a monad is just a monoid in the category of endofunctors" can radically change how I implement marginally scalable applications that serve up JSON over REST.
That's the thing, it doesn't radically change it. Static types are not powerful enough to cross remote boundaries. Also, monads don't need static types, and they fully exist in Clojure. Haskell is more than a language with a powerful static type checker; it's also a pure functional programming language. It will help you if you don't complect static types with functional programming. There are more design benefits from functional programming than from static types. Learning those can help you write better code, including JSON-over-REST-style applications.
Clojure and Haskell are a lot more similar than people think. Clojure is highly functional in nature, more so than most other programming languages. So is Haskell. Haskell just adds a static type checker on top, which forces you to add type annotations in certain places. It's like Clojure's core.typed, but mandatory and better designed.
The fact that Haskell is smarter than me is exactly why I have been keeping at it!
There is no fun left if you know all the overarching principles of a language and realize it still doesn't solve your problem. This happened to me when learning Python, and it's also why I don't really look at Go or Rust. They're good languages, and I might use them at a workplace someday, but you can get to the end of their semantics and still be left with the feeling that it's not enough.
I love how I just keep learning Haskell, and keep improving, despite how much time I've already put in.
That said, Python is also smarter than me. The possibilities with monkey patching and duck typing are endless. But unlike Haskell, Python is not a good teacher, so I tend to only create messes when I go out of my way exploring them.
> That said, Python is also smarter than me. The possibilities with monkey patching and duck typing are endless.
Don't do it, 99.9% of the time. It's that simple.
There is seldom a reason to use more than just defs - and a little syntactic sugar (like list comprehensions) just to keep it readable.
Even the use of classes is typically a bad idea (if I do say so), simply because there is no advantage to using them except when everything is super dynamic. And if that's the case, I suggest that's a smell, indicating that one is trying to do too many things at once without properly knowing the data.
Nevertheless, using classes (or any other superfluous feature) makes everything more complicated and less consistent - you need new ways to structure the program, new criteria for whether to go for a class or not, where to bolt methods on, and so on.
Don't use "mechanisms" like monkey patching just because they exist. They are actually not mechanisms - just curiosities arising from an implementation detail. The original goal is simplicity: make everything be represented as a dict (in the case of Python).
> The possibilities with monkey patching and duck typing are endless.
I think there are many more "obvious" ways to do things in Haskell than in Python just because you as a developer need to draw the line between static and dynamic. And if you later notice that you chose the line wrong, you have to rewrite everything.
In Python - or any other simple language - there is typically one obvious way to do things. At least to me.
Classes definitely give you a lot of rope to hang yourself with (metaclasses, inheritance, MULTIPLE inheritance), but they have their place. I'll usually start with a function, but when it gets too big, you need to split it up. Sometimes helper functions are enough, but sometimes you have a lot of state that you need to keep track of. If the options are passing around a kwargs dictionary or storing all that state on a class, I know which I'd pick.
You can memoize methods to the instance to get lazy evaluation, properties can be explicitly defined up-front, and the fact that everything is namespaced is nice. You can also make judicious use of @staticmethod to write functional code whenever possible.
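A minimal sketch of that trade-off (the class and field names here are made up for illustration): shared state lives on the instance, a lazily memoized property gives the lazy evaluation mentioned above, and a `@staticmethod` keeps the pure helper functional.

```python
from functools import cached_property

class ReportBuilder:
    """Holds shared state that would otherwise be a kwargs dict."""

    def __init__(self, rows, currency="USD"):
        self.rows = rows
        self.currency = currency

    @cached_property
    def total(self):
        # Memoized on the instance: computed once, then cached (lazy evaluation).
        return sum(self.rows)

    @staticmethod
    def format_amount(amount, currency):
        # Pure helper: no instance state, easy to test in isolation.
        return f"{amount} {currency}"

    def summary(self):
        return self.format_amount(self.total, self.currency)

builder = ReportBuilder([10, 20, 30])
print(builder.summary())  # 60 USD
```

The alternative would be threading `rows` and `currency` through every helper as arguments or a dict; the class version trades that explicitness for namespacing and memoization.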
You can always opt for explicit dict passing. You are right that it's more typing work (and one can get it wrong...), but the resulting complexity is constant in the sense that it is obvious upfront, never growing, and not dependent on other factors like the number of dependencies.
When opting for explicit passing, complexity is not hidden and functions are not needlessly coupled to actual data. Personally, I'm much more productive this way, also because it makes me think things through properly, so I usually end up not needing a dict at all.
Regarding namespacing, Python modules act as namespaces already. Also, manual namespacing (namespacename+underscore) is not that bad, and technically it avoids an indirection. I'm really a C programmer, and there I have to prefix manually, and that's not a problem.
Yup, this open field to do whatever with metaclasses, inheritance, properties, etc. was what killed my interest. Since all this "multiple meta monkey patching" was possible, there was no way (for me) of telling what an elegant way to implement something would be. Simple was not good enough, but complex had no rules.
> The fact that Haskell is smarter than me is exactly why I have been keeping at it!
I tend to think of Haskell as an eccentric professor.
Sometimes it's brilliant and what it's developed lets you do things that would be much harder in other ways.
Sometimes it just thinks it's clever, like the guy who uses long words and makes convoluted arguments about technicalities that no-one else can understand to look impressive, except that then someone who actually knows what they're talking about walks into the room and explains the same idea so clearly and simply that everyone is left wondering what all the fuss was about.
I tend to ignore 99% of the clever Haskell stuff and get by just fine in Haskell.
I keep learning about stuff like GADTs and whatnot, but they're more like the top of the tool drawer special tools than the ones you break out every day.
I think people learning/using Haskell tend to go for crazy generalized code first, versus what gets me to a minimal working thing that I can expand or change out later.
Or I just suck at Haskell. Probably a little from column A and column B, though for me it's more sucking at Haskell than anything.
You suck at Haskell about as much as Don Stewart :) In this talk he describes how he builds large software systems in Haskell and eschews complicated type system features.
I’m not a Rust user per se, but I’m surprised to see it listed alongside Python and Go as a language without a lot of depth. Rust not only has quite an advanced type system (not Haskell-level, but certainly the most powerful of any comparably mainstream language), but it can also teach the user a lot about memory management and other low-level aspects of programming that Haskell (and many other languages) hide. I mostly write Haskell for my own projects, but one of these days I hope to get better at Rust.
Fwiw, the monad quote is actually pretty digestible if you know what monoids and functors are.
A functor is a container that you can reach into to perform some action on the thing inside (e.g. mapping the sqrt function on a list of ints). The endo bit just tells you that the functor isn't leaving the category (e.g. an object, in this case a Haskell type, when lifted into this functor context is still in the Haskell 'category'). A monoid is something we can smash together that also has an identity (e.g. strings form a monoid under concatenation and the empty string as an identity). So, in other words, monads are functors ('endofunctors') that we can smash together using bind/flatMap, and we have an identity in the form of the Id/Identity functor (a wrapper, essentially - `Id<A> = A`).
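The "smash together with bind/flatMap" part can be sketched in Python, using lists as the functor (the helper names `fmap`, `bind`, and `unit` are mine, not any library's API):

```python
def fmap(f, xs):
    # Lists are a functor: map a function over the contents.
    return [f(x) for x in xs]

def bind(xs, f):
    # Monadic bind/flatMap for lists: apply f (which returns a list)
    # to each element, then flatten one level of nesting.
    return [y for x in xs for y in f(x)]

def unit(x):
    # The identity mentioned above ('return'): wrap a bare value.
    return [x]

# bind "smashes together" the nested structure that f introduces:
print(bind([1, 2, 3], lambda x: [x, -x]))  # [1, -1, 2, -2, 3, -3]

# The monoid-like identity laws, with unit as the identity element:
assert bind(unit(5), lambda x: [x * 2]) == [10]  # left identity
assert bind([1, 2], unit) == [1, 2]              # right identity
```

This is the list monad only; the point of the quote is that the same bind/unit shape, with the same identity and associativity laws, recurs for Maybe, IO, parsers, and so on.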
>"a monad is just a monoid in the category of endofunctors"
Once I discovered the Monad thing in Haskell has pretty much nothing to do with the Monad in Category Theory, everything made much more sense. As a bonus I now (sort of) understand Category Theory. Much the same as Relational Databases have not very much to do with Relational Algebra.
This is a very important point, although I would slightly tweak this
> the Monad thing in Haskell has pretty much nothing to do with the Monad in Category Theory
to "the Monad thing in Haskell is a very simple special case of the Monad in Category Theory". Thinking you have to "learn category theory" before you can use a Monad in Haskell is like thinking you have to learn this
Hopefully we all agree that static types and dynamic types are useful. Those who use hyperbole are attempting some form of splitting. I think the point where we disagree is what the default should be. The truth is this discussion will rage on into oblivion because dynamic types and static types form a duality. One cannot exist without the other and they will forever be entangled in conflict.
Well, I think that static types are much more useful than dynamic ones. Static types allow you to find errors in your program before execution, and that is very important. And if you are going to go through the effort of defining types, it is much better to make them static, because then you get this additional error checking. Furthermore, with static types the compiler can help in other ways, e.g. by organizing your data in memory much more efficiently.
I am not sure what you mean when you talk about the duality of static and dynamic types. One can exist without the other and most statically typed languages either forbid or strongly discourage dynamic typing.
> Static types allow you to find errors with your program before execution and that is very important.
It depends on how valuable it is in your situation to be able to run a program that contains type errors.
Sometimes it's a net win. If I'm prototyping an algorithm, and can ignore the type errors so I can learn faster, that's a win. If I'm running a startup and want to just put something out there so that I can see if the market exists (or see what I should have built), it's a net win.
Sometimes it's a net loss. If I'm building an embedded system, it's likely a net loss. If I'm building something safety-critical, it's almost certainly a net loss. If I'm dealing with big money, it's almost certainly at least a big enough risk of a net loss that I can't do it.
Forget ideology. Choose the right tools for the situation.
This is a rather glib response. Of course one should choose the right tools for the situation. Personally, if I'm prototyping an algorithm, I'd rather do it with types so that I don't write code that is clearly nonsense before I even try to run it.
Personally, I work the same way you do. But I've heard enough people who want the faster feedback of a REPL-like environment to accept that their approach at least feels more productive to them. It may even be more productive - for them. If so, tying them down with type specifications would slow them down, at least in the prototyping phase.
That certainly seems like a reasonable hypothesis to explore and I'm curious to try a Haskell "EDN"-like type as defined in the article to see if that helps me prototype faster!
> One can exist without the other and most statically typed languages either forbid or strongly discourage dynamic typing.
It never seemed like that much of a prohibition to me. Dynamic types take one grand universe of "values" and divide it up in ways that (ideally) reflect differences in those values -- the number six is a different kind of thing than the string with the three letters s i x -- but what the types are is sort of arbitrary. Is an int/string pair a different type than a float/float pair? Is positive-integer a type in its own right? Is every int a rational, or just convertible into a rational? What if you have union types? After using enough dynamically typed languages, the only common factor that I'm confident holds across the whole design space is that a dynamic type is a set of values. That means static typing still leaves you free to define dynamic types that refine the classification of values imposed by your static types, and people do program with pre-/postconditions not stated in types. You just don't get the compiler's help ensuring your code is safe with regard to your own distinctions (unless maybe you build your own refinement type system on top of your language of choice).
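That "a dynamic type is a set of values, refining the static classification" idea can be sketched as a runtime precondition layered on top of a static annotation (the `refined` decorator here is invented for illustration, not a real library):

```python
def refined(predicate, description):
    """Decorator: enforce a runtime refinement the static type can't express."""
    def wrap(f):
        def checked(x):
            if not predicate(x):
                raise ValueError(f"expected {description}, got {x!r}")
            return f(x)
        return checked
    return wrap

@refined(lambda n: isinstance(n, int) and n > 0, "a positive integer")
def countdown(n: int) -> list:
    # Statically this is just 'int'; 'positive-integer' is our own
    # refinement, checked at runtime rather than by the compiler.
    return list(range(n, 0, -1))

print(countdown(3))  # [3, 2, 1]
```

As the comment says, the checker gives no help here: `countdown(0)` type-checks fine and only fails when it runs, which is exactly the gap refinement type systems try to close.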
By a similar process, dynamic typing leaves you free to define and follow your own static discipline even if a lot of programmers prefer not to. This is more or less why How to Design Programs is written/taught using a dynamic language. The static type-like discipline is a huge aspect of the curriculum, but the authors don't want to force students to commit to one specific language's typing discipline.
Dynamic types are definitely more useful. That's why, over the last 40 years, we've almost never had a language without them. That's why Haskell also has dynamic runtime types. Erasing them at runtime, and ceasing to check their validity at runtime, would be folly.
Static types are an extension; they say: do not allow types to be defined only at runtime when it can be avoided. It's not always avoidable, which is why statically typed languages also include a runtime dynamic type system.
The debate is whether the benefit of static checks on types outweighs the cost of spending time helping the type checker figure out the types at compile time, and of limiting the use of constructs that are too dynamic for the static checker to understand at compile time. That's the "dichotomy". To which someone proclaims: "Can we have a static type checker which adds no extra burden on the programmer and no limits on the kind of code he wants to write?" To which OP has missed the point entirely and simply shown that you can spend more time giving the Haskell type checker info about EDN, and gain nothing, since it's now neither more useful nor less effort. Which was a bit dumb, but he did remark that he did it for fun and laughs, not for seriousness.
A more interesting debate for me would be the strengths and weaknesses of gradual/external typing systems like TypeScript/Flow and MyPy vs. something like clojure.spec. Especially since there are still dynamic languages like Ruby that haven't really adopted a system like this yet.
Amusingly, history is showing that there are as many rewrites from type systems to no type systems as the reverse. Consider all of the things that are getting redone in the javascript world.
> Much of the rhetoric that is currently flying around is a false dichotomy.
The author here is missing the rhetoric. The rhetoric is not about the programming language but about how we should be doing information processing. Except that the author isn't missing that point:
> In Haskell we typically “concrete” data with record types, but we don’t have to.
Great. That is the dichotomy. And it's not a "false" one. This is the question: should we be "concreting"? That's the whole dichotomy/point that is being made. By encoding EDN/Clojure in Haskell the author has gone through a cute intellectual puzzle but hasn't contributed to the crux of the discussion. (Indeed, he's tried to dismiss it as "false".)
The ergonomics that he ends up with are fairly lean (at least in the examples he's shown), though the Clojure expressions are a little leaner. But that's probably because Clojure has actually taken a stance/belief/opinion on the very real question/dichotomy at hand.
It's a little bit more than just a cute intellectual puzzle. One could build an efficient EDN library based on that or a very similar type. See Haskell's JSON library:
> edn supports extensibility through a simple mechanism. # followed immediately by a symbol starting with an alphabetic character indicates that that symbol is a tag
OK, so it's trivial to add that as a constructor to the Haskell EDN type in the post, and you can even support it in JSON with a dictionary like
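One hedged sketch of such a tagged JSON encoding, in Python (the `{"tag": ..., "value": ...}` key names and the particular readers are invented for illustration; EDN itself uses `#tag value` syntax, not JSON):

```python
import json
from urllib.parse import urlparse

# Hypothetical tag readers: reify a tagged value into a platform type,
# the way an EDN reader turns #uri "..." into a native URI object.
READERS = {
    "uri": urlparse,
    "set": frozenset,
}

def from_tagged(obj):
    """Decode {"tag": t, "value": v} dicts; leave other JSON alone."""
    if isinstance(obj, dict) and set(obj) == {"tag", "value"}:
        reader = READERS.get(obj["tag"])
        if reader is not None:
            return reader(obj["value"])
    return obj  # unknown tags fall through as plain data

doc = json.loads(
    '{"home": {"tag": "uri", "value": "http://google.com"}}',
    object_hook=from_tagged,
)
print(doc["home"].netloc)  # google.com
```

Because the reader table lives in userland, adding a new tag is adding a dict entry, not editing a hardcoded pattern match; that mirrors point 2 in the reply below.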
1) When I read #uri "http://google.com" my app code sees (java.net.URI. "http://google.com"), not Tag "URI" "http://google.com" or whatever. clojure.core/map does not see tagged values, it does not know that the values were ever read from edn.
2) Extension happens in userland library code, you don't need to go modify the hardcoded pattern match in core. (Talking about reifying actual instances that we can code to, not reader tags)
3) Data is just information. Information isn't coupled to code, it's abstract values, totally separate from the concrete implementation. As RH says: "code is data. But data is not code until you define a language around it." Typeclasses are about code.
4) EDN values are cross platform and a platform's EDN reader can reify the value into an idiomatic type for that platform. E.g. a haskell edn reader could reify #error "foo" into Left String; a Java reader a Throwable to be re-thrown later.
5) The whole prism diversion is sophomoric. Once you've read the EDN into concrete values of whatever platform type, you can use whatever platform abstractions you like to manipulate them. Clojure has lenses too: http://funcool.github.io/lentes/latest/#composition
You can watch the EDN talk or read the transcript if you'd like to learn more. This topic is very deep but this thread is not doing it justice.
I think it would be a great service if someone would write up a technical introduction to this for non-Clojurists. You seem to be communicating a subtle point that not many of us are getting ...
Watching Hickey's talk, many of the complaints seemed valuable, but they didn't seem to be about static types. Rather, they were about some problems with existing data types locking one into a rigid data model when the domain is constantly expanding.
The default model of algebraic data types is too inflexible.
There are different extensions which address this issue: a good record system, extensible cases, free monads, etc. We can have concise syntax for automatically deriving an interface for an algebraic data type based on some field values (i.e., customizable deriving statements), and namespace-qualified keywords give us RDF-like attribute-based semantics.
The post doesn't respond to this issue, but it does suggest that if you want to do the same thing in Clojure with error handling etc., you will need to think about this stuff.
Also, Hickey's mention of monads was again not about static types. Monad laws are not typechecked; their motivation is purity. The only slight inconvenience in a dynamic context is that you don't have return-type polymorphism, so you have to type an IO-specific return instead of a generic return.
This is not true, and it's important to clear up this misconception lest anyone think "the only good reason to use monads in Haskell is because it's a pure language".
* The use of monads in functional programming arose purely technically, as an innovation in denotational semantics.
* Then someone noticed you could use them to wrap up IO purely in Haskell.
* Then it was noticed you could use them for all sorts of other stuff besides dealing with IO in a pure language.
Yes, sure, monads, applicatives etc. have plenty of applications beyond IO, which is part of why keeping that abstraction separate from any particular use is of value.
My point is that this doesn't have much to do with static vs. dynamic typing per se, as we don't check them statically. They are just important examples of an interface with implementations for many data types, which can be useful even in a dynamic language like Clojure. People who write a parser in a dynamic language might benefit from learning about the distinction between applicatives and monads.
I wonder if there's an "at least" type system, so one could say a function needs a Person (n, a, c), and if it gets a Person (n, a, c, h), well, that's considered a superset and thus accepted.
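That is roughly structural (width) subtyping, which some languages do offer; one sketch in Python uses `typing.Protocol`, where a type checker accepts any object with at least the declared fields (the `Person`/`Employee` names and fields below follow the comment above and are otherwise hypothetical):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Person(Protocol):
    # "At least" these attributes; extra fields are fine.
    name: str
    age: int
    city: str

class Employee:
    def __init__(self, name, age, city, hire_date):
        self.name, self.age, self.city = name, age, city
        self.hire_date = hire_date  # the extra field: still a Person

def greet(p: Person) -> str:
    return f"hello {p.name} from {p.city}"

e = Employee("Ada", 36, "London", "2024-01-02")
print(isinstance(e, Person))  # True: a superset of fields is accepted
print(greet(e))               # hello Ada from London
```

Row polymorphism (as in PureScript, mentioned elsewhere in this thread) and OCaml's polymorphic variants are the more principled versions of the same idea.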
I wanted to like Haskell, but just never could get to the point where I enjoyed using it. It always felt messy and complicated to me. I think the language extensions were a contributor to these feelings. I also felt as if I spent more time wrangling with the type system than actually solving my business problems.
Yet I really do like Clojure, F#, and PureScript. There's an experimental C++ back-end to PureScript now [0]. I wonder if that will ever be a viable production target?
Anyway, one of the things I like about PureScript is the row-types. Does anyone know if there's a plan to get row-types into Haskell?
> There's an experimental C++ back-end to PureScript now [0]. I wonder if that will ever be a viable production target?
Obligatory reminder to anyone enjoying PureScript so much they want to compile it to executable binaries for their backend work (instead of Node or such) --- I'm still hacking along on my PureScript-to-Golang trans/compiler (GH to follow in profile if interested). Unlike most alternative backends (to date) it's not a parallel fork of the purs compiler but works off the official purs compiler's `--dump`ed intermediate-representation files. Seemed more tractable to me to do it that way.
I also really dislike this extension system where you can unlock some magical features only if you happen to know what magical keyword to put at the top of your file.
It's not surprising though, since usefully typing control effects with delimited continuations would require at least answer type polymorphism, but even more usefully something like session types.
I recently added Flow types to a JavaScript project and it changed my opinion on this debate. I realized that I really don't care about static vs. dynamic typing. What I care about is a hierarchy of the ways that my code can be made more likely correct. Given the same level of verification, integration tests are worse than unit tests, which are worse than runtime checks, which are worse than compile-time checks. There may be cases where static typing makes code more performant, but I usually care a lot more about development speed and correctness. In this world, I just want a way of verifying my code as quickly as possible. Gradual typing lets me specify some validations of my code that will run at compile time. This is a huge win for me in both correctness and development speed.
I don't know if we will ever invent the perfect static type system, but I do know that having the ability to specify some types in a pretty good type system, is better than not being able to specify any types.
I'm convinced that a language with a gradual type system is strictly better than one without. Therefore, any debate that compares static vs. dynamic, instead of static vs. gradual, is not interesting to me.
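A gradual-typing sketch in Python (mypy/pyright-style; the function names are made up): annotate the parts you want checked before the program runs, and leave the rest dynamic with `Any`.

```python
from typing import Any

def parse_port(raw: str) -> int:
    # Annotated region: a checker like mypy verifies at check time that
    # callers pass a str and use the result as an int.
    return int(raw)

def handle(request: Any) -> None:
    # Dynamic region: Any opts this code out of checking entirely,
    # so it behaves like ordinary untyped Python.
    ...

# A checker would flag parse_port(8080) before execution; at runtime,
# Python itself does not enforce these annotations.
print(parse_port("8080") + 1)  # 8081
```

This is the "specify some types in a pretty good type system" trade: the annotated core gets compile-time verification, while exploratory code keeps the dynamic feedback loop.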
Fwiw, you can map over Maps and Strings in clojure:
(map identity "foo") ;; a seq for String is its chars
;=> (\f \o \o)
(map identity {:foo :bar}) ;; a seq of a Map is the
;; pairs of key/values
;=> ([:foo :bar])
Would you care to expand on that? It's not clear what you mean, neither from your comment nor the linked Reddit post.
> Transcript of Rich Hickey talk OP linked, C-f "edn":
What do you mean? There are these three occurrences of "edn", none of which is enlightening.
* That's great, I'll start shipping some edn across a socket and we're done.
* How many people ever sent edn over wire? Yeah
* So the edn data model is not like a small part of Clojure, it's sort of the heart of Clojure, right? It's the answer to many of these problems. It's tangible, it works over wires
It sounds like he mostly cares about edn because of wires.
You have the primary source right in front of you!!!!!!!! What do you need me to explain it worse for? Print out the damn paper, sit down with a highlighter and read. FFS.
He looked at the primary source, and doesn't see it saying what you claim it says. So he's asking you for where, from the primary source, you found the source for making your claim. Given that he already looked at the source you cited, that doesn't seem like an extraordinary request...
What's really sorely lacking from these discussions is concrete examples of functionality that's easy to write in Clojure and hard in Haskell. I don't mean functions like `assoc-in`. I mean real functional parts of programs.
It's about the way you think, not about what you can and can't do. Haskell lets you safely think the really complicated types needed to do programming with zero effects; you couldn't think those thoughts without Haskell because the types we use today are too complex. Clojure encourages you to think in terms of data, to push as much logic as possible out of the code and into the data, and then write simple programs to transform that data.
http://hyperfiddle.net/ (my startup) is an example of a data driven system. Hyperfiddle itself is implemented as a large amount of data + 3000 loc to interpret it. If the system is only 3000 loc, you're really not at the complexity scale where all that category theory gymnastics really pays off.
That's not especially convincing to me. Haskell also encourages me to "think in terms of data, to push as much logic as possible out of the code and into the data, and then write simple programs to transform that data.".
A dynamically typed language is a statically typed language with precisely one type.
It is extremely easy to use Haskell in "dynamic mode". Just use `ByteString` (or `Data.Dynamic` for safety/convenience) for all your data. Types just present a way to encode some statically known guarantees about the structure of your data/code. You are free to not encode any properties if you want to.
But it is very rare that the data you are working with requires the full generality of `ByteString`. You usually have some sort of structure rather than just working with strings of zeros and ones.
Being able to work with "Any" implies either working with Strings (since you can encode anything in strings) or it implies a memory unsafe language (e.g. working with void* in C) or it implies subtyping and hence an OOP language.
But OOP subtyping is already about resolving polymorphic call sites at runtime. And because you carry a vtable around for every instance (objects being tagged with their classes), you can always do upcasts and downcasts. So OOP languages are already very dynamic on that scale, and fairly unsafe.
Clojure has type hints; I can add ^int to a symbol and get some “static typing”. That is: shitty static typing. However, this is about the same as the average Any type in a static language. In order to implement a good Dynamic type, you’d need to implement reflection, cached dynamic dispatch, etc. Haskell’s Typeable is an OK implementation, but not nearly as good as, say, the JVM’s or JavaScript’s, despite their many flaws.
The point is, types give you the option to encode invariants at compile time. You can choose to use this to your advantage, or not use it at all (use ByteString for everything).
With dynamic types (or just one type), you don't even have the option to do this.
Except I really don't have that choice because the language and library design matters. If I chose to use ByteString for everything, I'd first have to implement Tcl in order to get anything done.
But, yes, you're right, most dynamic languages lack good tools for stating invariants and checking them early. I would like to see that change. However, I'd rather the solution account for runtime dynamism, extensibility, and partiality. We're _slowly_ getting there with more and more advanced type system features. It's time to take that knowledge and repackage it at the foundational level of typed languages.
There's just one type at compile time, but many more at runtime. This is still a strongly typed language. The only problem is that our static analysers are too dumb to prove things without being given ample explicit hints, or without us changing the way we code to restrict certain ambiguities that they cannot resolve at compile time. Haskell has chosen to try to push the boundaries of such a static analyser, but there are still limits: it can't infer everything, and it still restricts certain designs. I admire it for its efforts.
Clojure has a different strategy: it creates a new time, REPL time, so you can test your types at REPL time. Not when the program compiles, but a little before it runs. It won't prove what you don't try, though. So in practice it's using a statistical model where the programmer is the heuristic: you best-guess the edge cases and try them at REPL time. This will not catch all static errors, but it will also catch some runtime errors, so it creates a disjoint set of errors that it catches. This is a trade-off. Static types and REPL time will catch some of the same things, but also different errors.
Now, both adding static type info and doing REPL-time testing come at a cost to the programmer; each is one more thing we have to do. Some, like me, find more value at the REPL: it helps me explore and innovate in my code, and it's just more fun to me. I also prefer the kind of bugs it catches. Others think the opposite.
What most people seem to agree on, though, is that doing both is way too much effort. That's why REPL time isn't a popular activity in Haskell, and core.typed isn't popular in Clojure.
> That's why REPL time isn't a popular activity in Haskell
Is it not a popular activity to use the Haskell REPL (ghci)? I thought it was pretty common to use it when developing code, though I admit I don't have any hard data.
The Haskell REPL is pretty good for a static language, but the experience is dramatically different from how a Clojure programmer would want to use it. To be fair, Node and Python also have REPLs that are not really usable for this style.
The thing with dynamic languages is that the development style is basically println-driven.
It goes like this: because you can't keep anything longer than a one-page script in your head, and because you can't remember other people's APIs, you can't trust anything you write. So, in order to keep some sanity, you have to execute every freaking line of code that you write in order to verify that what you wrote actually works — and the sooner you execute, the better, because if your program crashes, the triggered error can happen far away from where the mistake was actually made.
This happens for every dynamic language, not just Clojure. This is why the read–eval–print loop is so important.
However, the development experience changes dramatically in a good static language (no, not talking about Java or Go), because you can write more than one line of code before feeling the need to verify it. When the compiler type-checks a piece of code, at the very least you can be sure that the APIs you used, or the shape of the data you're interacting with, are correct.
Refactoring is also painless. Ever done refactoring of projects built on dynamic languages? It's a freaking nightmare and no, the tests don't help that much, the tests actually become part of the problem.
This is also why dynamic-language folks complaining about long compile times are missing the point: those long compile times buy you guarantees that a dynamic language doesn't give you at all, which changes the experience, because in turn you don't have to run your code as often.
Since you're specific about your statically typed language, which I'm assuming means Haskell, are you also specific about your dynamically typed language? Are you talking specifically about Clojure?
It's unfair to lump Clojure together with imperative, object-oriented dynamic languages, the same way it's unfair to lump Java together with Haskell.
You're right about the println style. You do run your code every time you touch a single line; that's what I like about it. But it's a personal preference, like some people prefer to compose music on sheet paper while others would rather have their instrument in hand.
And you're forgetting the trade-offs. With Haskell, you wrestle the compiler, and every line you write has a compile error at first until you get it right. At least for me, this takes as much time, if not more, than running each line of my code in my REPL.
I guess I fall in that category where I kind of enjoy the beauty of both, though at the end of the day, I find myself having more fun coding when writing Clojure.
I've never suffered from a Clojure refactoring. You have to be a little more careful, but it's never been that painful to me. Then again, how I perceive "coding pain" could be different from others.
I prefer being forced to keep my program simple by making complexity intolerable over encapsulating it. Your preference may differ.
I find I have to refactor my dynamically typed programs less frequently than my statically typed ones. Your mileage may vary.
No amount of type safety will prove my game is fun, or that my user can understand the UI. I want fast iteration times, since I can’t wait on the compiler to test a new enemy behavior or GUI layout.
> No amount of type safety will prove my game is fun, or that my user can understand the UI
No, but what it can ensure to some extent is that your game runs, and doesn't crash randomly. If the game crashes constantly, no one is going to play it no matter how fun it is.
"Constantly" is the key word here. Maybe I'm missing critical data, but I've never perceived a reduction in defects going from Clojure to Haskell. I've looked for studies on it, and they all point to either no difference or an incredibly small one. I've never been shown a case where the reduction in defects would have an impact on the business I work for. Enterprise software is a domain that isn't that sensitive to defects; anything less than a 5% difference would go unnoticed and would not affect sales in any way.
My conclusion: it comes down to your own enjoyment. Whichever one you have more fun using and are most productive in, that's the one you should be using.
I allow myself to change my mind if Haskell really proves to have 10% to 30% or more fewer defects, maybe in a later version, with some GHC extension, maybe Liquid Haskell. I'm not closing my mind to it; if it happens, I'll be there.
If no other language has any functionality similar to how Clojure does things then I think we'll need references to explanations or videos before we can even begin to understand your claims!
Also interesting is the other end of the spectrum: Forth. Instead of mutation, it offers snapshots and restores of the "dictionary". See this video: https://youtu.be/mvrE2ZGe-rs
I don't fully understand what you're getting at. It would have been nice to see some examples with code entered at a REPL and the results, in both JS and Clojure, say.
Are you saying that you want to be able to make bindings that are refreshed on REPL reload? For example if I have a file that contains
Well, the three key aspects, in my preference order, are:
1. A server REPL with editor-integrated clients, so the text buffers in your favorite editor are the REPL. Look at the gifs here https://atom.io/packages/proto-repl to get an idea of it.
2. Reified language constructs. You can read about it here http://www.lispcast.com/reification . An easy example: if you have fn A depend on fn B, and you change B and then call A, A will use the new B. That's because the information is still available at runtime for A to look up the latest version of B when calling it.
3. Functional programming / an emphasis on small independent blocks of code that compose. This is where you hear things like immutability, functions that take functions, purity, freedom from side effects, managed references, etc. Basically, state in Clojure is hard to corrupt. That means if you alter state in your REPL, it rarely messes up the full application state, allowing you to keep working through long sessions with your app state still valid and usable.
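To make #2 concrete, here is a minimal sketch (the function names are made up): because calls resolve through vars, which still exist at runtime, redefining one function is immediately visible to its callers.

```clojure
;; a calls b through b's var, so a picks up redefinitions of b.
(defn b [] 1)
(defn a [] (+ 10 (b)))

(a)            ; => 11

(defn b [] 2)  ; redefine b at the REPL...
(a)            ; => 12 -- a now calls the new b, with no reload of a needed
```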
I don't have a link for #3, so I'll give an example. Say you have a map you want to add data to. This map is read by something else, but you want to try adding something deeply nested to it. In Clojure, you can try as much as you want and experiment until you succeed in molding the map the way you wanted. The other thing reading the map never saw any of your changes, because it sees an immutable view of it. So after you're done, if you use that other thing, it'll still work, because you didn't mess up the state it was depending on.
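A minimal sketch of that map example, with made-up data:

```clojure
;; shared is the map that "something else" is reading.
(def shared {:user {:name "Ada"}})

;; Experimenting returns new maps; shared itself is never altered.
(def attempt (assoc-in shared [:user :prefs :theme] :dark))

attempt  ; => {:user {:name "Ada", :prefs {:theme :dark}}}
shared   ; => {:user {:name "Ada"}} -- untouched, so anything that
         ;    depends on it never saw the experiments
```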
Is this like a Jupyter notebook or some different sort of functionality?
It's similar in some ways, but not quite the same thing. The REPL is a server and doesn't have an interface; it reads over a socket port and prints a response back over the socket using a common protocol. So you can build any client you want for it. What's most common is to take an existing editor, like Emacs, Vim, Eclipse, or Atom, and write a plugin for it which interacts with the server REPL. So say you're in Eclipse with a Clojure project open: you can have Eclipse send your project code to the REPL for you. In practice that means you just work on your code files directly and sync them to the REPL as you go. Some clients try to be even fancier, creating visual representations of code output like graphs, or GUI controls like drilling into a nested map.
Does it work for integer values, say, as well as functions? Suppose my source code says:
(let [x 1
      y (+ 10 x)]
  y)
Y would be 12.
If you load this it'll return 11. If you change x to 2 and reload this, it will return 12.
Globally you'd do:
(def x 1)
(def y (+ 10 x))
Now y is equal to 11. If you change x to 2, and only reload x, y would still be equal to 11. You'd have to reload y also if you want it to be 12 now.
That's because y is bound to the value of the expression, not to the expression itself. And the value is calculated at load time.
Now you could bind it to the expression by using a function.
(def x 1)
(def y #(+ 10 x))
#() is Clojure's shorthand for lambda.
So now the caller is in charge of deciding when to evaluate y.
Calling: (y)
Would return 11 and if you change x to 2, calling it again would return 12.
You can also use reactive constructs instead. So when setting x to 2, an event is published, so you can listen to it and have it reset y to the new value of evaluating (+ 10 x).
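One way to sketch that reactive version with plain core Clojure is to keep x in an atom and attach a watch that recomputes y (the names here are made up):

```clojure
;; Keep x in an atom and recompute y whenever x changes.
(def x (atom 1))
(def y (atom (+ 10 @x)))

(add-watch x :recompute-y
           (fn [_key _ref _old-x new-x]
             (reset! y (+ 10 new-x))))

(reset! x 2)  ; publishing the "x changed" event...
@y            ; => 12 -- the watch has already reset y
```

Watches on atoms fire synchronously, so by the time `reset!` returns, y already holds the new value.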
(require '[foo :refer [f]])
; edit f in foo.clj
(require '[foo :refer [f]] :reload)
(f 1) ; should call NEW f.
Node doesn’t have a reload construct. If you hack it in by mucking with the module cache, you still won’t get the new f in your module’s local copy of it.
If f were deleted from foo.clj, it would still be in memory unless you explicitly call ns-unmap to clean it up. In practice this is rarely an issue, but I do wish the experience were a little cleaner there.
The same things do apply to integers, but if you use them at the top level (outside a function), they will be dereferenced immediately (there is no delayed function body to wait for), so you will get the initial value only once. If you want to enforce a delay, you can use (var x), or the shorthand #'x, and later dereference it with @.
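A minimal sketch of that var trick:

```clojure
(def x 1)
(def y #'x)  ; y holds the var itself, not the value 1

@y           ; => 1
(def x 2)    ; rebind x...
@y           ; => 2 -- dereferencing the var always sees the latest x
```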
The difference, basically, is that the mindset is to work within a running environment, swapping things out as it's running. It's closer to a Jupyter notebook, or an Excel sheet in some ways, if that helps you visualize it.
"If EDN is an improvement over JSON, then it is marginal at best."
Why is it only a marginal improvement? It adds considerably more semantic information.
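For instance (a sketch, not from the article), EDN can round-trip keywords, sets, and tagged literals that JSON would flatten into strings and arrays:

```clojure
(require '[clojure.edn :as edn])

;; Keywords, sets, and #inst tagged literals survive a round trip;
;; JSON would collapse all of these to strings and arrays.
(def parsed
  (edn/read-string
   "{:id 1, :tags #{:admin :beta}, :created #inst \"2020-01-01T00:00:00Z\"}"))

(:tags parsed)            ; => #{:admin :beta}, a real set of keywords
(type (:created parsed))  ; => java.util.Date, via the built-in inst reader
```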
"Utilizing EDN also promotes a lot of invisible coupling. Some may tell you that dynamic types don’t couple, but that is incorrect and shows a lack of understanding of coupling itself. Many functions over Map exhibit external and stamp coupling."
Coupling implies a bidirectional connection. Functions rely on data types, but not vice versa.
No need for typeclasses/existentials. That is trying to approximate some typed/untyped middle ground. You would just use Dynamic if you want truly dynamic behavior.
The Edn type given here is closed. That's a correct definition of Edn, which is a closed sum. Edn accomplishes extensibility via the Tag type. However, not all Clojure data is Edn. In order to implement the clmap and clget functions with their full generality, they need to support an open set of types. For example, both `#inst "..."` and `(eval '(Date. ...))` are separate types: TaggedLiteral and java.util.Date respectively.
You need either Dynamic or existentials because Clojure enables you to pass data structures between two functions expecting collection elements of differing capabilities without either A) whole program / inter-module analysis or B) an O(N) type translation.