
> Still, I wish I could see more info on this. At what point does the additional cognitive burden of advanced type system features become a worthwhile tradeoff for program correctness?

As a professional Haskell programmer, I find the cognitive burden to be lower in Haskell than in, say, Java: in Java I have to do a lot more bookkeeping about design patterns and how they're glued together, whereas in standardish Haskell the compositional forms fall into a compact set of powerful concepts amenable to reasoning.

I say standardish Haskell because the sweet spot in my experience is a few lightweight GHC extensions while mostly shunning the seriously experimental stuff in the language. For example, I agree with your doubt when it comes to dependent types and in particular singletons, a halfway-house implementation that can be used in Haskell today.

Some of my coworkers had unsuccessfully attempted to write mobile games in Haskell. That was 10 or so years ago and many of the technical hurdles that impeded them then are no longer there.

The only major one I know of at the moment that prevents Haskell from being used in an AAA game engine is the lack of a guaranteed low-latency garbage collector. I expect someone to implement such a thing for Haskell in the next 10 years.

The space is moving fast; our understanding of how to write big Haskell apps has advanced drastically since I first started using the language! I expect big things in the next couple of years.

This is not to say that the space is getting rewritten all the time; it's just that more useful concepts are being discovered and matured. For example, Applicative arrived only 11 years ago, and scalable FRP only 4 years ago or so.

I should say which type-system concepts make up my compact tool set:

* higher-order functions, with syntax optimized for using them

* algebraic data types (and, sparingly, generalized algebraic data types)

* type classes + higher kinds (The synergy is far greater than the individual features)

* monad transformers (the promise of aspect oriented programming actually realized)
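
To give a flavor of how the first three interact, here's a tiny sketch (Tree is a made-up example type, not from any library):

    -- An algebraic data type...
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- ...made an instance of a type class at a higher kind
    -- (Tree has kind * -> *):
    instance Functor Tree where
      fmap _ Leaf         = Leaf
      fmap f (Node l x r) = Node (fmap f l) (f x) (fmap f r)

    -- Higher-order functions with lightweight syntax:
    doubleAll :: Tree Int -> Tree Int
    doubleAll = fmap (* 2)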




> I find the cognitive burden to be lower in Haskell than in, say, Java

I don't think Java is a meaningful competitor. It's a language completely based on the 90's hope that OOP was a good idea. Turns out it's mostly good for programming simple things as complex "ravioli code". Better to compare against languages that games, kernels, and compilers are typically implemented in.

> monad transformers (the promise of aspect oriented programming actually realized)

How do monad transformers realize aspect oriented programming? In my experience they lead to pretty verbose code and lots of boilerplate. I think the way to achieve "aspect-orientedness" (I think this name comes from the same insight that led to names like "separation of concerns", "cohesion", and "cross-cutting concerns") is simply to draw module boundaries by the shape of data, not in an OOP style where most objects do a million different things (e.g. cat must eat, walk, sleep, meow...)


> It's a language completely based on 90's hopes that OOP was a good idea.

I find the dismissive tone rather amusing.

A large majority of the code running on our planet today is OOP.

You could argue that there might be better ideas out there, but OOP is certainly an idea that has not only proven itself to be tremendously useful, but that has also been able to adjust and adapt through decades of changing requirements. It's pretty much the only software paradigm that's survived for that long.


Every objects-first codebase I've seen was terrible. OOP survived mostly because people push hard for it, because they think there must be value in overly taxonomic code, but in the end they never seem to get value out of it, only more and more incompatible objects (when I hear "mock object" it's time to run).

In OOP >50% of the LOC is just stupid bureaucrazy: setting up object graphs in the name of "isolation" (the irony), half-initializing fields, conforming to the right interfaces, etc. This is completely meaningless, do-nothing code. Worse, it gives the illusion of removing some contextual dependencies, but the code never seems to work outside of the context it was created in. It's only much harder to read because the context is files away.

OOP is the wrong-minded idea that a program should be a bundle of many "self-contained" objects. But that's wrong, we're writing ONE program here, not thousands. It tries to repair this wrong idea with inheritance (which is at least as bad an idea).

And it makes it really hard to cope with "cross-cutting concerns", which are actually 90% of what we care about, not just a side concern. The complexity is in the edges (i.e. how information is moved/transformed), not in the objects!

OOP mostly survived where performance / architectural scalability is not super important (e.g. Python or similar scripting languages, where it enables dynamic typing). And it survived where the big money is, but not necessarily technical competence (where it enables Object-verb type code completion).

That relates to OOP as in languages like Java - not Alan Kay's idea of OOP, which he emphasizes was very different, but I still don't get what the idea is :p

> A large majority of the code running on our planet today is OOP.

Good example code base?

> It's pretty much the only software paradigm that's survived for that long.

Maybe check your history? Many people are totally happy with procedural programming.


> when I hear "mock object" it's time to run

Do Haskell programmers not create mocks to test external components?

> OOP is the wrong-minded idea that a program should be a bundle of many "self-contained" objects. But that's wrong, we're writing ONE program here, not thousands.

The number of programs isn't the relevant metric. Complexity is. Any complex system is going to trend toward modularity. Modularity requires standard interfaces, which inevitably lead to bureaucracy.

A 1MM line Haskell program is going to be similarly bureaucratic. There are going to be standards you have to adhere to in order to play nice with the rest of the system. That's what typeclasses are, after all.

OOP is traditionally defined by three things: polymorphism, encapsulation, and inheritance.

Polymorphism: Modern Non-OOP languages can also be polymorphic, so that's no longer a differentiator.

Encapsulation: You definitely want encapsulation if your data is mutable.

Inheritance: This is the only truly problematic feature, and it's certainly abused, but it has its place. I don't always want to compose and delegate 20 methods when I just want to change the behavior of one.


> Polymorphism: Modern Non-OOP languages can also be polymorphic, so that's no longer a differentiator.

Haskell had ad-hoc polymorphism way before Java was a twinkle in its creator's eye. Before Haskell, Miranda (the language Haskell was based off of) could have kicked Java's polymorphism to the curb. Neither Java nor OOP invented polymorphism. If anything, they butchered it by introducing subtyping.

> Do Haskell programmers not create mocks to test external components?

The equivalent in Haskell would be having some kind of 'effects' system. An effect system differs from a mock object in that it limits in its totality what kind of interactions can take place. Typically, each layered effect also has a set of laws. Pure interpreters can be written for these effects, but the impure (i.e., real-world) interpreters are not privileged in their consumption of this effect. The pure interpreter also provides a proper implementation, such that you should be able to replace your real program with all pure interpreters, supply all your input at once, and still have a correct program. In other words, a Haskell program is typically polymorphic over which effects it uses in a way that other languages simply aren't.
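
To sketch one way that can look (hedged: the names here are hypothetical, and this uses mtl-style classes rather than a full-blown effect system):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    import Control.Monad.Reader
    import qualified Data.Map as M

    type UserId = Int
    type User = String

    -- The effect: the totality of allowed interactions.
    class Monad m => MonadUserStore m where
      getUser :: UserId -> m (Maybe User)

    -- Business logic is polymorphic over which interpreter runs it.
    greet :: MonadUserStore m => UserId -> m String
    greet uid = maybe "who?" ("hello " ++) <$> getUser uid

    -- A pure interpreter: all input supplied up front, no mocks.
    newtype PureStore a = PureStore (Reader (M.Map UserId User) a)
      deriving (Functor, Applicative, Monad)

    instance MonadUserStore PureStore where
      getUser uid = PureStore (asks (M.lookup uid))

    runPure :: M.Map UserId User -> PureStore a -> a
    runPure env (PureStore r) = runReader r env

An IO-backed instance talking to a real database would be written the same way, and the caller chooses which interpreter to run.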

> Encapsulation: You definitely want encapsulation if your data is mutable.

Again, Haskell, Miranda, and Lisp had encapsulation long before OOP came about, and Lisp has mutable data.


I think we're in violent agreement here. AFAICT, the feature sets of OOP and non-OOP languages have converged so much that inheritance is really the last differentiator. Maybe you could throw dynamic dispatch in there, but there's no reason in principle a non-OOP language couldn't add dynamic dispatch.


Mutability is a huge differentiator for some languages.

And dynamic dispatch is already part of Smalltalk and Objective-C, I believe.


Sure, but mutability is orthogonal to OOP.


As fond as I am of Erlang, I still find it hard to picture an immutable OO language.


> I still find it hard to picture an immutable OO language.

Picture an object-oriented procedural language like Java, and then make everything immutable: for every 'void' method, return an updated copy of 'this'; for every non-void method, return a tuple of the updated copy and the value you were going to return. No change to const methods, obviously. And you're done: you can still take advantage of encapsulation, polymorphism, and inheritance without any hoops. You can even do it in Java itself as a style thing without too much effort and only a modest amount of boilerplate. Alternatively, it's not that much work to build this same thing out of lambdas and dictionaries, if your language has those but not objects (adding mutation into that object system would then be trivial; and of course, you can build dictionaries out of lambdas and lambdas out of objects, if need be).
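
For the lambdas-and-dictionaries version, here's a rough sketch in Haskell (Counter is a made-up example): an "object" is a record of functions closing over immutable state, and "mutating" methods return a fresh object:

    data Counter = Counter
      { increment :: Counter  -- a "void method": returns the updated copy
      , current   :: Int      -- a "const method": just a value
      }

    mkCounter :: Int -> Counter
    mkCounter n = Counter
      { increment = mkCounter (n + 1)
      , current   = n
      }

    -- current (increment (increment (mkCounter 0)))  ==  2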


Write in scala using only "vals" and there you have it!


Not quite because you can still have a val that points to a mutable object.


> Do Haskell programmers not create mocks to test external components?

Do we create test substitutes, alternate implementations of the same interfaces? Yes. But dedicated mocking frameworks are crazy. In Haskell-like languages if you want an implementation of interface foo that returns bar when called with baz, you just... write an implementation of interface foo that returns bar when called with baz. If the easiest way to do that in your language is some kind of magical reflection-based framework, something is very wrong with your language.
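
Concretely, that can be as little as this (a sketch with made-up types, modeling the "interface" as a record of functions):

    newtype Bar = Bar String
    data Baz = Baz

    -- The interface: a record of functions.
    newtype Foo = Foo { method :: Baz -> Bar }

    -- The test substitute: returns bar when called with baz.
    -- No framework required.
    testFoo :: Foo
    testFoo = Foo { method = \_baz -> Bar "bar" }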


> If the easiest way to do that in your language is some kind of magical reflection-based framework, something is very wrong with your language.

Languages have strengths and weaknesses. Certain tasks are easy in some languages, and certain other tasks are not. Throwing one's hands up and saying "something is very wrong with your language" because one is not familiar with a technique or tooling popular in another language is immature, IMO.

For example, explain how Debug.Trace works in Haskell to programmers familiar with Java, and they'd call it crazy.


> Languages have strengths and weaknesses. Certain tasks are easy in some languages, and certain other tasks are not.

Agreed, but we mustn't fall into the fallacy of assuming that means no language can ever be better or worse than another. There are good and bad language design choices, and "an implementation of interface foo that returns bar when called with baz" is not some obscure specialized feature, it's the basics of general-purpose programming.

> Throwing ones hands up and saying "something is very wrong with your language" because one is not familiar with a technique or tooling popular in another language is immature, IMO.

I'm very familiar with the techniques and tooling of mocking frameworks. I do not make these claims lightly.


> Agreed, but we mustn't fall into the fallacy of assuming that means no language can ever be better or worse than another. There are good and bad language design choices.

Agreed.

> "an implementation of interface foo that returns bar when called with baz" is not some obscure specialized feature

It isn't some obscure specialized feature in Java either.

    Foo foo = new Foo() {
      public Bar method(Baz baz) {
        return new Bar("bar");
      }
    };
What mocking frameworks do is provide a DSL to describe the behavior of such implementations, use dynamic bytecode generation (not reflection, BTW) to create implementations of the interfaces dynamically, and bind them to simulate various test conditions. What makes the language "worse" for requiring or allowing this?

My Haskell is rusty, but given Haskell pseudocode like:

    main :: IO ()
    main = do
      f <- foo
      if f == 1
        then putStrLn "Got 1"
        else putStrLn "Didn't get 1"

    foo :: IO Int
    -- ...
how would you test that the two branches of main behave appropriately?

This is not a snark; I am truly interested to know how Haskell gets rid of the need to bind alternate implementations of an interface for testing purposes.


> It isn't some obscure specialized feature in Java either.

To the extent that that's true, fine. I'm sure I see a lot of developers writing something along the lines of:

    Foo foo = mock(Foo.class);
    when(foo.method(any())).thenReturn(new Bar("bar"));
instead of that, not because they need any of the mocking features as such but because it takes up fewer lines on the screen, particularly when there are more methods in the interface. (Partly a cultural problem of having overly large interfaces rather than a language problem per se, perhaps).

> use dynamic bytecode generation (not reflection, BTW)

How is it not reflection ("the ability of a computer program to examine, introspect, and modify its own structure and behavior at runtime")?

> What the makes the language "worse" to require or allow doing this?

Reflection or code generation means stepping outside the language and its usual guarantees - any time the programmer is forced to do it, it's because the language didn't provide a good way to solve the problem within the language itself. It means you can no longer e.g. extract common expressions, because they don't necessarily mean the same thing: if you have some common mock setup code, you can't just blindly extract a method; you have to think carefully about when the mocks get instantiated and when the expectations are set.

> how would you test that the two branches of main behave appropriately?

> This is not a snark; I am truly interested to know how Haskell gets rid of the need to bind alternate implementations of an interface for testing purposes.

It doesn't - as I said, you still write test implementations of your interfaces. What it does remove the need for is mocking frameworks, which people use in e.g. Java either because implementing the interface the normal way in the language is more effort (not a problem in Haskell), or because they want to test the specific interactions with the object (e.g. "verify that method foo was called twice") because those methods are used for side effects.

Haskell avoids that one by making it easier to represent actions as values; you can use e.g. a free monad to represent actions that will be performed later, so rather than testing that your complex business logic method called deleteUser(userId) on your mock, you instead test that it returns a DeleteUser(userId) value. To a certain extent you can do this in Java too ("the command pattern"), but without higher-kinded types you can't have a standard implementation of e.g. composed commands or standard methods for working with them, so it gets too cumbersome to really do in practice.
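
A minimal sketch of that style (using the 'free' package; the DeleteUser command and helper names are hypothetical):

    {-# LANGUAGE DeriveFunctor #-}
    import Control.Monad.Free (Free (..), liftF)

    newtype UserId = UserId Int deriving (Eq, Show)

    -- Actions represented as plain values:
    data Command next = DeleteUser UserId next deriving Functor

    deleteUser :: UserId -> Free Command ()
    deleteUser uid = liftF (DeleteUser uid ())

    -- A test inspects the returned value instead of verifying a mock:
    wasDeleted :: UserId -> Free Command a -> Bool
    wasDeleted uid (Free (DeleteUser uid' rest)) = uid == uid' || wasDeleted uid rest
    wasDeleted _ (Pure _) = False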

Even in Java you wouldn't want to use mocks for testing methods that operate on simple datatypes: to test e.g. a regex find method, you wouldn't pass in mock strings, you'd just pass in real strings and confirm that the results were true or false as expected. A language like Haskell just expands the space of what you can test in the same these-inputs-these-outputs way, by making it easier to represent more things as values.


> not because they need any of the mocking features as such but because it takes up fewer lines on the screen

Why is this a bad thing? How is this different from, say, using Template Haskell?

> It means you can no longer e.g. extract common expressions, because they don't necessarily mean the same thing; if you have some common mock setup code you can't just blindly automatically extract method, you have to think carefully about when the mocks get instantiated and when the expectations are set.

I have never found the existence of tests using mocks being a hindrance to refactoring in Java. Can you provide a more specific example?

PS:

> How is it not reflection ("the ability of a computer program to examine, introspect, and modify its own structure and behavior at runtime")?

Agreed that it is reflection, by that definition.


> Why is this a bad thing?

Because it shows the language could be a lot better. A common, basic task shouldn't be so much easier outside the language (via reflection) than inside it.

> How is this different from, say, using Template Haskell?

It's the same thing. To the extent that people feel the need to use Template Haskell to do basic and common things, something is very wrong with Haskell.

> I have never found the existence of tests using mocks being a hindrance to refactoring in Java. Can you provide a more specific example?

I mean you can't refactor the test itself. Just basic things: if you do expect(methodOne(matches("someComplexRegex"))); expect(methodTwo(matches("someComplexRegex"))) and you try to pull matches("someComplexRegex") out as a variable, you'll break your test (you have to make it a function instead). You can't move an expect() above or below another method call without checking whether that was the method it was testing. Individually these things are trivial, but they add up to a chilling effect where people don't dare to improve mock-based tests a little as they work on them, so the tests end up as repetitive code with subtle variations, just like main code would if you never refactored it.


> Individually these things are trivial, but they add up to a chilling effect where people don't dare to improve mock-based tests a little as they work on them

In my experience, I have not come across such effects. People understand the purpose, strengths, weaknesses and limitations of the libraries they use and try not to "cut against the grain".

> people don't dare to improve mock-based tests a little as they work on them, so they end up as repetitive code with subtle variations, just like main code would if you never refactored it.

I understand this is a subjective preference, but I try not to refactor test code too much. I strive to make my test code not have branches ("if-less code" as some people call it). Sometimes this leads to slightly more verbose code, but in the long run I have found it useful for my test code to be rather boring.

----

I now understand the point you are making, and agree with it technically. I don't agree that those technical points lead to the social effect you call out, because I have not come across it.

Overall, Java makes two bad design choices: nullability by default and mutability by default. But in the codebases I have worked with in the last few years, my colleagues and I tend not to opt in to these defaults. This leads to pleasant, testable codebases to work with. We also enjoy acceptable performance, good tooling, easy-to-reason-about memory usage, a great library ecosystem, etc.


> Overall, Java makes two bad design choices - nullability by default, and mutability by default.

There are a few more, even today: using a weird secondary type system to track what kind of errors can occur (checked exceptions), classes being non-final by default, universal methods (every method in java.lang.Object except possibly getClass() ought to be moved to interfaces that user-defined types have the choice of not implementing), a bunch of syntactic ceremony around blocks (braces required everywhere, "return" being mandatory) which gets even worse once you want to move away from mutability by default, variance at use site only, no sum types, no HKT...

> This leads to pleasant, testable codebases to work with. We also enjoy acceptable performance, good tooling, easy-to-reason memory usage, great library ecosystem etc.

Sure. There are a lot of good things about the Java ecosystem, and if you see the language as a modest, incremental step over C++ then it is an improvement on that front at least. At the same time I do think ML-family languages - even ML itself - offer a lot of advantages especially if we're talking about them just as languages. In practice I work in Scala and gain most of the advantages of the Java ecosystem but with a language that has most of the advantages of Haskell as well.


> That relates to OOP as in languages like Java - not Alan Kay's idea of OOP, which he emphasizes was very different, but I still don't get what's the idea

It's bad of me to react to a single poster, but I've seen many similar well-reasoned arguments about why OOP is bad that refer to something I would not call OOP -- I would just call it bad programming.

To turn it on its head, there is a lot of terrible code out there. I wouldn't say that code whose authors believe it is OO is measurably worse than any other code I see. What I agree with is that it also isn't any better (which is often a surprise to those authors). I've seen some good OO code. I've seen some good procedural code. I've seen some good functional code. I've seen some good declarative code. But I've mostly seen terrible examples of all of them :-)

One of the very unfortunate things that happened in the late 80's and 90's was the idea we should write self contained components that we would somehow plug and play all over the place. Despite the many, many horrible systems we wrote like that, the idea refuses to die. Some people believe that this is OO. They are wrong :-)

We have exactly the same problem with "unit testing". Some people mistakenly believe that a "unit" is a piece of code taken in isolation. Then they think, "Hey... it's isolated... what else is isolated? Oh yeah! Objects!" So an object becomes a "unit" and it's tested in isolation... How do I isolate it? Oh yeah! I'll make fake objects for it to talk to.

Yeah, it's a tyre fire. But even though it is popular and even though it is popularly called OO, I think it's a bit unfair. It would be like saying, functions are procedures that return values so the only difference between functional programming and procedural programming is that you always have to return a value. Yeah, it's super wrong, but it's so simple that you could convince a whole bunch of people it's true and then complain about how useless FP is.

I realise that you realise that Alan Kay's idea of OOP is different, but the key part is "I still don't get what's the idea". When you find out, it would be nice to find out your reaction. Otherwise it's really just a strawman rant about how terrible mainstream programmers are (and we are.. terrible, that is ;-) ).


No, not a strawman at all. It's a rant about the bad qualities that come from commonly accepted methodologies that fall under "OO". If you think that's not the common understanding of OO, you should state what you think it stands for instead and why you think it's still a good idea. You didn't do that.

And pretty much nobody really understands what exactly is Alan Kay's vision, simply because it's rather vague and somewhat removed from practical reality. He seems to want "extreme late binding" (which need not be bad) and asynchronicity (Actor Model? can remove control and thus be problematic). He also has some vague idea of extreme parallelism that seems just very far removed from computational reality today. I think it's more a philosophical idea of how the physical and biological world could be seen as concurrent processes. I don't want to say his are bad ideas, and he is obviously an extremely intelligent and educated guy, but at the same time he also does not seem like an experienced programmer from a practical standpoint - but more of a visionary. So that's my idea, and if you have a different one, feel free to head over to the C2 wiki to convince yourself that most people don't really know what he means.


Can I please upvote this 100x? The "is-a" relationship is encapsulation gone badly wrong. Encapsulating functionality in an object cannot represent the multitude of things you want to do with it. Yet an amazing number of even CS grads religiously repeat OOP patterns.


>> A large majority of the code running on our planet today is OOP.

>Good example code base?

The majority of code running Amazon the retail site and AWS. Large swathes of Microsoft and Google online services. Most code running bank back office and many trading systems. Likely 90%+ of the line of business applications run by the Fortune 500. Most desktop GUI programs. Should I keep going?


But was that because of OOP? Or in spite of it? There are many reasons OOP is used in all of these systems which aren't related to its technical "superiority".

If popularity is a valuable metric, then javascript is probably the greatest language ever invented (sigh).


>> A large majority of the code running on our planet today is OOP.

> Good example code base?

It would be easier to give examples of non-OOP crucial programs, for they are far fewer.

OOP codebases:

-- Almost all major AAA games.

-- Almost all GUI systems.

-- All major browsers.

-- Almost all video editors (NLE),

-- Almost all audio DAWs

-- All of Adobe's Suite

-- Almost all 3D and CAD tools

-- All office suites (MS Office, OpenOffice, iWork),

-- All major IDEs (Visual Studio, IntelliJ, XCode, Eclipse)

-- Most of Windows and OSX standard libraries

-- Clang (and GCC now that they went to C++? Not sure if they use OOP)

-- the JVM

need we go on?


> OOP codebases: -- Almost all major AAA games.

Nah. As far as I hear, most of them moved away from OOP a long time ago. "Component systems" have been hot for more than five years.

> Almost all GUI systems

Come on, what a GUI does vs a non-GUI "realtime" app is pretty trivial. A bit of layout, routing a few input events. I'm not saying that e.g. building a scene graph or box model would be the wrong thing (there are other ways as well), but yeah... I certainly don't think that inheritance makes GUIs easier. Interfaces/function pointers/dynamic dispatch? Yes, you might want that for decoupling a few containers of heterogeneous data/behaviour. But that's hardly a monopoly of "OOP", and you don't need it in most places.

The other projects I don't know. Again, I'm not against abstractions per se (the Stream abstraction is one I regularly use, although it does require unwrapping for proper error handling; and I have a constant number of "objects" in each of my programs, namely modules with global data). But OOP culture, especially the objects-first mentality and fine-grained data encapsulation, gluing implementation to pristine data - I believe it does only harm and leads to plain wrong program structure and overcomplicated code to jump between all these boxes. I prefer gliding through arrays :-)


>Nah. As far as I hear, most of them moved away from OOP a long time ago. "Component systems" have been hot for more than five years.

Nope, that's just hype. Good ole C++ still rules the day, except for specialized (and smaller in scope and speed needs) games.

>Come on, what a GUI does vs a non-GUI "realtime" app is pretty trivial.

You'd be surprised. A GUI is not just "call x program with some flags", as you might believe.

Something like an NLE, for example, or even an IDE, can have GUI needs that go far beyond hundreds of thousands of LOC...

(Not to mention that I was referring to GUI libraries themselves, complex beasts on their own, not GUI code as used by applications to build their GUIs).


> Nope, that's just hype. Good ole C++ still rules the day, except for specialized (and smaller in scope and speed needs) games.

Sure, it's C++, and in my perception much C-style C++. And C++ != OOP.

> (Not to mention that I was referring to GUI libraries themselves, complex beasts on their own, not GUI code as used by applications to build their GUIs).

I will admit I haven't written an NLE program, but integrating complex logic with a complex and hard-to-understand GUI framework like Qt, where there are a lot of states to synchronize, involves a lot of inessential complexity. I'm pretty sure it's much easier when you only use GUI primitives and do most of the coding on your own. E.g. a standard box-layout and event-bubbling algorithm can't be that much work, and it's MUCH MUCH easier to do it on your own and choose the appropriate structure, instead of writing many helper classes trying to bend the rigid framework.

Taking the example of a NLE program - it has lots of domain-specific state which you must absolutely understand if you want to write such a program. And you absolutely must have a vision how this state should be reflected on the screen. Choosing how to do it is a lot of work. Actually drawing it should be the smaller amount of work, by far. Just separate the state from the GUI library. If you scatter it over thousands of objects, inheriting implementations that you don't own and don't understand - well, of course! that's really hard.


>Taking the example of a NLE program - it has lots of domain-specific state which you must absolutely understand if you want to write such a program. And you absolutely must have a vision how this state should be reflected on the screen. Choosing how to do it is a lot of work. Actually drawing it should be the smaller amount of work, by far. Just separate the state from the GUI library. If you scatter it over thousands of objects, inheriting implementations that you don't own and don't understand - well, of course! that's really hard.

My point was rather that just the essential GUI functionality and interactions for such a program can be very complex.

In other words, when to show this or that, how to structure code to show it fast, how to present it best, etc is also essential logic (not some afterthought on top of the domain logic), and it can also be super-hard to code (depending on the kind of program).


> how to structure code to show it fast

It's pretty trivial, at least not any sort of GUI-specific hard problem. There is only a low number of objects on any given screen. Don't raster pixel-by-pixel on the CPU, of course.


It's only pretty trivial if you're doing an app that has some forms and does some CRUD and so on.

For anything above that, it can get hairy real fast.


>> Nope, that's just hype. Good ole C++ still rules the day, except for specialized (and smaller in scope and speed needs) games.

> Sure, it's C++, and in my perception much C-style C++. And C++ != OOP.

Sorry, meant to write "good old OOP still rules the day".


> Every objects-first codebase I've seen was terrible.

Does this include Smalltalk and CLOS codebases?


No. I don't know these languages but color me sceptical.


you should take a good look at CLOS, you'll be colored rainbow-happy instead.


> Alan Kay's idea of OOP, which he emphasizes was very different, but I still don't get what's the idea :p

I think his idea of OOP is something very close to what we call actors these days.

It's a fractal design where it's 'computers' / objects communicating with messages all the way down.


>It's pretty much the only software paradigm that's survived for that long.

Functional programming predates non-functional programming - Turing's papers and thesis were published (at least) a year after Church's papers on lambda calculus.

Type systems in FP run decades ahead of type systems in regular programming languages. For example, a simple type system for FP was published in 1948, and it was (more or less) equivalent to Fortran's type system (1958). Type inference was published in 1968 by Hindley, and Milner adapted the algorithm to more "efficient" mutable state in 1978. Type inference came to mainstream languages only in, what, 2004?

Algebraic data types and pattern matching were born in 1971, the year I was born too. These facilities started to appear in mainstream languages no earlier than 2008, if you consider Rust at that time a mainstream language. And C# acquired them, I think, in 2016 and not earlier.

I boldly and offensively assume that OOP is the only paradigm you decided to care about, and thus you consider it "the only software paradigm". I think that is a very useful position in life, not to care about things you decided not to care about. I do that too.


> It's pretty much the only software paradigm that's survived for that long.

Perhaps the only one that’s been the most popular for that long.


> I don't think Java is a meaningful competitor.

Just curious... what would you consider a meaningful competitor to Haskell?

Just to lay my own cards on the table: I'd prefer Haskell to almost all other languages if we were only talking about the language (well, the GHC dialect). I do use Haskell quite a bit, but unfortunately I also have to do quite a lot of work in the "enterprise" space where ridiculous things like being able to read e.g. Excel spreadsheets is a big deal. (That is, one cannot rely on people sending those spreadsheets over email to convert to CSV in any meaningful way, so...)


Actually, I think there is at least one area where Haskell and java compete where other languages don't: design by committee.

It's often joked that it IS possible to design by committee. The result is Haskell. Otherwise, you get Java.


> Better compare against languages that games, kernels, and compilers are typically implemented in.

Games... So, C# then? :)


If you're talking about Unity, it's fair to point out that the game engine itself is built in C++, with C# being a userland VM. But a lot of mobile and a few PC games are written in Java.


I was more thinking that C# and Java are close to identical.

I know they have some different features and can feel a little different, but they’re much closer to each other than ML and Haskell, say.


Thank you - very useful summary.

>I say standardish Haskell because the sweet spot in my experience is a few lightweight GHC extensions but mostly shunning some of the seriously experimental stuff in the language.

Are you able to say which GHC extensions you use and why?


I grepped a typical project and saw these. I think they can be divided into a few different kinds of extensions.

This extension fixes a defect in the spec as far as I'm concerned:

* ScopedTypeVariables

Syntactic extensions that make certain code forms lighter:

* LambdaCase

* MultiWayIf

* OverloadedStrings

* RecursiveDo

* TypeApplications

* PatternGuards

* KindSignatures

Code Generation:

* DeriveFunctor

* DeriveTraversable

* DeriveFoldable

* DeriveGeneric

* StandaloneDeriving

* GeneralizedNewtypeDeriving

Extensions to the type system. These can be misused, so they should be treated with respect:

* MultiParamTypeClasses (Can weaken type inference)

* FunctionalDependencies (Helps type inference with MultiParamTypeClasses, these two together compete with TypeFamilies for functionality)

* FlexibleInstances (Can be a bit sketch, actually)

* FlexibleContexts

* UndecidableInstances (Surprisingly mostly benign)

* TypeFamilies (Type level programming should be kept to a minimum. Can kill type inference)

* GADTs (Very powerful, sparingly appropriate)

* RankNTypes (Higher rank types are very expressive but have much worse type inference)

Unpleasant extensions that cope with the realities of serious engineering:

* TemplateHaskell

* CPP

Very Spicy extensions that I avoid unless they're really really appropriate:

* PolyKinds

* TypeInType

I am mostly neutral on, or unfamiliar with, the others. The only extension I vehemently oppose is RecordWildCards, because it is a binding form that doesn't mention the names it binds. It can get really confusing!
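
To illustrate the complaint with a made-up Config type:

    {-# LANGUAGE RecordWildCards #-}

    data Config = Config { host :: String, port :: Int }

    describe :: Config -> String
    -- Config{..} silently brings 'host' and 'port' into scope,
    -- but the binding form never mentions those names:
    describe Config{..} = host ++ ":" ++ show port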


So I've developed a medium-large Haskell project that used nearly - but not quite - all of those same extensions. I would never think to describe this as "a few lightweight GHC extensions". To me - and I think most others - it's just normal Haskell, but why even bring up "standardish" if this is what you do?


I got this list from a project involving a lot of people. If I'm in charge of the project the list gets much thinner :-)

Most of these extensions have been around for like 10 years now and are well understood. They integrate well with the language without impacting Haskell2010 code. For example, RankNTypes or any of the code gen ones.

I shouldn't have said "a few" though! There's lots of light extensions. The only ones I consider heavy are some of the type system ones, CPP and TemplateHaskell. TypeInType is the heaviest by far.


Hey thank you!

I really appreciate it that you took the time to write that out.


Show me the programmer who doesn't jump at the opportunity to give their opinion on programming :p You're welcome!


> monad transformers (the promise of aspect oriented programming actually realized)

I hardly see any overlap between these two concepts.

AOP is about cross cutting concerns and being able easily insert similar functionalities in unrelated areas of the code.

Monad transformers are the hack you need to use to make monads compose.


Monad transformers are distinct from monad composition; it just seems like they're the same because a monad transformer takes the Identity monad to some other monad. The correct conceptualization of monad composition is a distributive law, so called because it generalizes distributive laws from algebra. Of course the two are related.

Monad transformers allow one to slice up the functionality and concerns of a program in a dimension different from slicing of a program into components that are then composed together; e.g. a bunch of functions that call each other or a bunch of objects that pass messages between themselves.

Each monad transformer in a transformer stack adds functionality and concerns that the other parts of the stack don't need to care about. Components can then be written polymorphically in the concerns they care about allowing them to be instantiated wherever the capabilities they need are present. Ultimately a program is instantiated to a particular transformer stack and then it is supplied effect handlers that reduce it to the base monad (often IO). The ability to modify functionality in a cross-cutting way is concentrated in the effect handler which is at the discretion of the call site, not the implementation site.
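
A small sketch of that style with mtl-ish classes (the names and toy logic are mine, not from any particular app):

    import Control.Monad.State
    import Control.Monad.Except

    -- This component declares only the concerns it cares about; it
    -- doesn't know what else lives in the final transformer stack.
    step :: (MonadState Int m, MonadError String m) => m ()
    step = do
      n <- get
      when (n > 9000) (throwError "overflow")
      put (n + 1)

    -- The call site picks the concrete stack and runs the handlers:
    runStep :: Int -> Either String ((), Int)
    runStep = runExcept . runStateT step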

This is not quite AOP but it's in the same space. It's hard to see in small apps but quickly emerges as the scale of a Haskell application grows.


How are monad transformers a hack? They have a solid theory behind them.

I'm not sure where you got the idea that they 'make monads compose'. Monads don't compose. It's nonsensical to talk about monad composition in general, because there is nothing about the structure of a monad that allows it to compose (quite the opposite, really). Some things which form monads do compose with certain other things that form monads, but this is like saying 'Abelian groups' are the 'hack' you need to make groups commutative -- it's a meaningless statement.
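
For contrast, Applicatives do compose generically (Data.Functor.Compose in base); the reason the same trick fails for Monad is exactly the missing distributive law:

    import Data.Functor.Compose (Compose (..))

    -- Any two Applicatives compose into an Applicative:
    pairs :: Compose Maybe [] (Int, Char)
    pairs = (,) <$> Compose (Just [1, 2]) <*> Compose (Just "ab")
    -- == Compose (Just [(1,'a'),(1,'b'),(2,'a'),(2,'b')])

    -- There is no 'instance (Monad f, Monad g) => Monad (Compose f g)':
    -- join would need a distributive law g (f a) -> f (g a),
    -- which doesn't exist for arbitrary monads.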


Agree that monads don't compose.

I am not the OP, and not a Haskell programmer, but my understanding is that Monad Transformers exist because Monads don't compose, and when you want monadic effects to compose, you use Monad Transformers (the other option is to go crazy writing every combination and ordering of effects you want by hand).

I got this impression from the papers I have read, particularly this one [1]

    Monad transformers offer an additional benefit to monadic 
    programming: by providing a library of different monads and 
    types and functions for combining these monads, it is possible 
    to create custom monads simply by composing the necessary monad 
    transformers. For example, if you need a monad with state and 
    error handling, just take the StateT and ErrorT monad transformers 
    and combine them.
Notice the last line of the snippet I posted above. What am I missing?

[1]: Monad Transformers Step by Step https://page.mi.fu-berlin.de/scravy/realworldhaskell/materia...


Monad transformers exist independent of the fact that monads don't compose. Can you explain what you mean by 'monad transformers exist because monads don't compose'? Are you saying the concept of a monad transformer wouldn't exist if monads did compose? This makes no sense, as there are applicative transformers despite the fact that you can compose applicatives freely.


> Monad transformers exist independent of the fact that monads don't compose. Can you explain what you mean by 'monad transformers exist because monads don't compose'?

I see what you mean.

You are right - the concept of Monad Transformers exists independent of the fact that monads don't compose.

What I meant was that MTs exist in Haskell programs because Monads don't compose. Of course, there probably exists a Haskell program where this is not the case, but I am certain MTs are largely used in Haskell because Monads don't compose.

BTW, the grandparent is not the first to coin the "MTs are ... used to make monads compose" usage. The late Paul Hudak et al. write in [1] that:

     A ground-breaking attempt to better solve the overall
     problem began with Moggi’s proposal to use monads to
     structure denotational semantics. Wadler popularized
     Moggi’s ideas in the functional programming community
     by showing that many type constructors (such as List) were
     monads and how monads could be used in a variety of
     settings, many with an “imperative” feel (such as in Peyton
     Jones & Wadler). Wadler’s interpreter design, however,
     treats the interpreter monad as a monolithic structure which
     has to be reconstructed every time a new feature is added.
     More recently, Steele proposed pseudomonads as a way
     to compose monads and thus build up an interpreter from
     smaller parts, but he failed to properly incorporate important
     features such as an environment and store, and struggled
     with restrictions in the Haskell type system when trying
     to implement his ideas. In fact, pseudomonads are really
     just a special kind of monad transformer, first suggested by
     Moggi as a potential way to leave a “hole” in a monad
     for further extension.
Notice the usage "Steele proposed pseudomonads as a way to compose monads". So this usage has been established in the Haskell community since at least 1995. Why are you, presumably a Haskell programmer, surprised when someone repeated that usage in 2018 on an internet forum?

[1]: Monad Transformers and Modular Interpreters (http://haskell.cs.yale.edu/wp-content/uploads/2011/02/POPL96...)


> Why are you, presumably a Haskell programmer, surprised when someone repeated that usage in 2018 on an internet forum?

I hate the way monads in general are talked about in the Haskell community (I also dislike the language used for other algebraic terms). In particular, it is common to say 'data type X is a monad'. This statement is completely nonsensical. A monad is a thing along with some operations. A Haskell data type is a concrete description of how to store a thing. By itself, a piece of data has no operations associated with it. There is no way a particular data type could be a monad, since a data type is just a thing.

The proper terminology is 'X forms a monad along with these functions' or 'X has an instance for the Monad type class' (this is different, because the monad type class allows for a subset of monads, not general monads). I am on a personal mission to rid the Haskell world of sloppy language because it confuses beginners, in my opinion. For a long time, I thought monads were a thing. Then I realized they're just an algebraic structure, like groups or rings.

When we say things like 'monads don't compose' it makes it seem like there is some deficiency in the idea of a monad that makes them not compose, rather than a realization that monad composition results in zero or more ways to combine two arbitrary monads. Thinking just the former makes you wonder if monads ought to be fixed. Realizing the latter means you realize that the lack of a fix indicates that putting two monads together can accomplish a variety of tasks, only one of which is likely suited to your use case.


> …it is common to say 'data type X is a monad'. This statement is completely nonsensical…

I think the reason for this is that in Haskell we model algebraic structures using typeclasses, which are dispatched by type. So we say “X is a monoid, with mempty and mappend defined as follows”:

    instance Monoid X where
      mempty = …
      mappend = …
Instead of “X, emptyX, and appendX form a monoid”:

    instance Monoid X emptyX appendX
    emptyX = …
    appendX = …
This leads to what I call the “newtype hack” for distinguishing different monoids (resp. monads, &c.) on the same underlying type.
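
The standard library does this for numbers, which form a monoid in (at least) two ways:

    import Data.Monoid (Sum (..), Product (..))

    -- The newtype selects which Monoid instance applies:
    seven, twelve :: Int
    seven  = getSum     (Sum 3     `mappend` Sum 4)
    twelve = getProduct (Product 3 `mappend` Product 4)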


Thanks for writing this. I understand now where you come from and am sympathetic to your reasoning. Thanks again!


> Notice the usage "Steele proposed pseudomonads as a way to compose monads". So this usage has been established in the Haskell community since at least 1995. Why are you, presumably a Haskell programmer, surprised when someone repeated that usage in 2018 on an internet forum?

Shockingly, our understanding of monads and monad transformers has advanced in the last 23 years.


> Shockingly, our understanding of monads and monad transformers has advanced in the last 23 years.

@danharaj, of course!

At least enough that when someone says "MTs exist to compose monads", to understand it as "MTs are used in Haskell programs to compose monadic effects".


Would it be correct to say that certain implementations of Monad make aspect-oriented programming possible? One example is the Writer monad, which allows recording a log alongside a sequence of monadic operations, essentially separating the concerns of logging and the actual computation. Perhaps OP means that combining monads (with transformers) enables multiple of these cross-cutting concerns to operate together without knowledge of each other.
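
For instance, a minimal sketch with mtl's Writer:

    import Control.Monad.Writer

    double :: Int -> Writer [String] Int
    double n = do
      tell ["doubling " ++ show n]  -- the logging concern
      return (n * 2)                -- the actual computation

    -- runWriter (double 3 >>= double)
    --   == (12, ["doubling 3", "doubling 6"])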


> AOP is about cross cutting concerns and being able easily insert similar functionalities in unrelated areas of the code.

Yes, monad transformers exist to let you do that. They're the thing that makes it possible to insert an effect in a code area that's using other, unrelated effects.


Reread carefully my characterization of AOP.

Cross cutting concerns allow you to insert a certain line of code in "file 1 at line 223 and file 2 at line 400" because these lines match a certain type safe regexp.

Monad transformers accomplish nothing remotely close to that.


Don't be so rude.

> Cross cutting concerns allow you to insert a certain line of code in "file 1 at line 223 and file 2 at line 400" because these lines match a certain type safe regexp.

What you're describing is the implementation details of how AOP works. Cross-cutting concerns refers to the problem statement: secondary concerns that need to be addressed in the same way in different parts of the codebase with minimal disruption to the primary logical code in those different parts of the codebase.


What type of problems/specific problems do you work on?


I'm part of a general software consultancy. I've worked primarily on web applications, some of the bigger ones are around the same complexity as something like Slack or some chunk of google docs. Some other projects I've been involved in are blockchain tech, and software as a service tools.


Mind if I ask the company you work for or is there any open source stuff of yours to have a look at? Thanks.


obsidian.systems (that's a url :)


Great, thanks, will check it out.



