Hacker News new | past | comments | ask | show | jobs | submit login
Ask HN: When is pure functional programming beneficial?
56 points by behnamoh on July 11, 2023 | hide | past | favorite | 83 comments
The way I think about Haskell programs is locked-in statements that tightly sit next to each other (i.e., function compositions) and nothing "leaks", as in no mutations are allowed (except in monads, like IO).

But I wonder if this is mostly a matter of taste. In small programs, the end result of a Haskell program is the same as a Python program. Is there a threshold after which Haskell's purely functional paradigm shines the most?




I wouldn't say there is any threshold where purely functional programming shines less. Fewer regressions and a system that is more likely to "just work" make it more fun to develop. So for interactive programs, servers, CLI tools, parsers, etc., purely functional programming is amazing. An Elm developer reported that the prototype they wrote in Elm ended up with fewer bugs than the actual production system. I personally experienced building a system where, after I had written a few thousand lines and fixed all the compiler errors, it compiled and was just... done. No bugs were found in production.

In F#, an experience report came out where 350k lines of C# were rewritten in 30k lines of F# code. They also went from 3k null checks to 15 lines of null checks (plus much more). Zero bugs were reported in the newly deployed system.[0]

Now with that said, there are exceptions where purely functional programming languages shine less:

- Places where the ecosystem is not quite as mature. If you're building a server and have to interact with Cloud services in Haskell, you'll have a bad time.

- Any kind of system where you need to do manual memory management, so systems programming, is badly suited for purely functional programming.

[0] https://www.youtube.com/watch?v=MGLxyyTF3OM&t=863s


> I wouldn't say there is any threshold where purely functional programming shines less.

But what is the set of problems you are usually solving? Any problem set that involves a lot of mutable state is not well-suited for functional programming, in my opinion. And maybe problems where efficiency is important, although I don't know enough to have an opinion on that. Some areas that come to mind are:

- systems programming (which you said)

- GUI programming

- games

- large simulations? Maybe if you can reuse the before and after buffers FP would work okay there.

GUIs have a surprising amount of state, which is why immediate-mode GUIs haven't really taken off. (Yes, there's Dear ImGui, but it actually uses the name of the control to store retained-mode state under the hood.) It sounds really good when you think about buttons and sliders, but then you come to text editing or list-box selection...


GUI programming is fine in purely functional programming, see for example elm[0], where you have a Model - Update - View paradigm to build frontends. Another example of GUI programming in Pure FP is functional reactive programming[1].

[0] https://elm-lang.org/ [1] https://en.wikipedia.org/wiki/Functional_reactive_programmin...
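To make the Model-Update-View idea concrete, here is a minimal sketch in Python of a hypothetical counter app (the names `Model`, `update`, and `view` mirror the Elm Architecture's vocabulary; this is an illustration, not Elm's actual API):

```python
from dataclasses import dataclass

# Model: the entire application state, as plain immutable data.
@dataclass(frozen=True)
class Model:
    count: int

# Update: a pure function from (message, old model) to a new model.
def update(msg: str, model: Model) -> Model:
    if msg == "increment":
        return Model(model.count + 1)
    if msg == "decrement":
        return Model(model.count - 1)
    return model

# View: a pure function from model to a description of the UI.
def view(model: Model) -> str:
    return f"[ - ] {model.count} [ + ]"

# The runtime (Elm itself, in the real thing) folds incoming
# messages over update; user code never mutates anything.
m = Model(0)
for msg in ["increment", "increment", "decrement"]:
    m = update(msg, m)
print(view(m))  # [ - ] 1 [ + ]
```

All the state handling reduces to a fold over a stream of messages, which is why the whole thing stays pure.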


Functional reactive programming is very popular for graphical user interfaces.

Functions handle state perfectly with scopes and closures.
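For instance, state held in a closure's enclosing scope (plain Python, just to illustrate the point; the counter is a made-up example):

```python
def make_counter():
    count = 0  # state lives in the enclosing scope, invisible outside
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

tick = make_counter()
tick()
tick()
print(tick())  # 3 -- each call sees the state captured by the closure
```

Each call to `make_counter` produces independent state, so nothing is global.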


> In F# ... 350k lines of C# were rewritten in 30K of F# code.

Not discounting the rest of your statements (I love FP for all the reasons), but I really despise - and discount - statements like the one above. I feel they are disingenuous:

"This large-ish program that grew over time with evolving needs and uses was rewritten at a later date once the requirements and use-cases were fully understood and - amazingly - we were able to greatly simplify the code!"

It probably could have been rewritten in BASIC and still resulted in fewer LOC. That's not to discount the null checks, "zero bugs", or other factors. My favorite part of programming in Haskell is that 99% of the time if it compiles it just works, and I can refactor code + add features without concern.


Of course that statement on its own can be disingenuous, but in this case it's not only about better-understood requirements. The lower cyclomatic complexity that functional programming lends itself to also contributes to the lower LOC. The new code base implemented even more features than the previous one had. I think it's relevant.


F# is one of the best professional experiences of my career writing software. Unfortunately I'm now back in a dynamically typed world, dealing with null checks and exceptions.


> Places where the ecosystem is not quite as mature. If you're building a server and have to interact with Cloud services in Haskell, you'll have a bad time.

Why does a server that interacts with cloud services not fit well with functional programming?


Read what you’ve quoted. They say the ecosystem is lacking, not that the issue is inherent in FP.


They wrote about "places where the ecosystem is not quite as mature", and then referenced interacting with cloud services. I read that as alluding to the rapid development and instability of some cloud services.

I've written plenty of non-FP code that interacts with less mature systems, and the problems I've always run into have been related to changing APIs and behaviors of the remote systems, not anything inherent to the language or paradigm I'm using.

I was wondering what it is about FP that makes it less suitable for unstable environments.


What I had in mind was that the library ecosystem just isn't mature yet, so the libraries are incomplete or undocumented.


One way around this may be to just write a Terraform-using library in Haskell and then leverage all of its compatibility across clouds.


Systems that have clear inputs and outputs. Parsers, compilers, stuff like that. SAT Solvers. Anything you could unit test without mock objects.

Mixed-initiative systems (say, UI code where you call into the framework and the framework calls into you) can go either way. Sometimes you can formulate part of your code in a pure functional way, which tames the chaos; if you can't, or that formulation is unnatural, the chaos tames you.

Systems with immutable data structures lose a factor of two or so in performance in some cases. FORTRAN programmers won't accept them, and distributed "big data" systems spend a lot of time marshaling and unmarshaling data and won't accept any slowdown in their parsing code. The same could be true for something like that SAT solver.


>Anything you could unit test without mock objects

You can unit test without mock objects because you're following functional patterns.


... or working in an environment or on a problem for which functional patterns apply.

Suppose you are writing a "CRUD" app that writes to a relational database; how do you apply functional programming to that? The whole point of an application like that is that it makes side effects.

In some cases you can break those problems down into functional pieces. Consider Python drivers for a product like

https://www.arangodb.com/

One major problem is that you want drivers that work synchronously and asynchronously; the structure of the average API call is something like

    def query(parameters):
        encoded_parameters = encode_parameters(parameters)
        url = generate_rest_url(configuration, parameters)
        response = do_post(url, encoded_parameters)
        return decode_response(response)

To make that async, all you have to do is stick "async" before the def and put an "await" before the do_post. 95% of the API calls have a structure like the above. Most of the complexity lives in encode_parameters(), generate_rest_url(), and decode_response(), which are all perfectly functional functions that are easy to test. Code-gen the API stubs and you get sync and async switching almost for free. Contrast that with the option of mocking do_post.
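A runnable sketch of that shape, with stand-in helpers (the helper names echo the comment above, but their bodies here are invented placeholders, as is the fake `do_post`):

```python
import asyncio

# Pure helpers: trivially unit-testable, no mocks needed.
def encode_parameters(parameters):
    return "&".join(f"{k}={v}" for k, v in sorted(parameters.items()))

def generate_rest_url(configuration, parameters):
    return f"{configuration['base_url']}/query"

def decode_response(response):
    return response.strip().split(",")

# The only effectful piece; stands in for a real HTTP round trip.
async def do_post(url, body):
    await asyncio.sleep(0)
    return "a,b,c"

# The async variant of the stub: same pure helpers, one await.
async def query(configuration, parameters):
    encoded = encode_parameters(parameters)
    url = generate_rest_url(configuration, parameters)
    response = await do_post(url, encoded)
    return decode_response(response)

result = asyncio.run(query({"base_url": "http://example"}, {"q": "1"}))
print(result)  # ['a', 'b', 'c']
```

The pure helpers can be tested directly with plain asserts; only `do_post` ever needs a substitute.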


I never really understood what a "monad" is but for a program like that I'd have all the "side effect" stuff separated from all the "data manipulation" stuff. It could all be organized as functions, which I generally do because I find it much easier to reason about mentally than objects.

"Pure" functional programming is an extreme which probably isn't suitable for many real-world programming tasks. Functional style is typically what I do, I use lists and maps and folds quite a bit but still end up updating a database.


Code like the above can be restructured with monads in various ways.

In general, there is the pattern of building systems that do execution control, such as the methods used to create control structures in Lisp, the Executor in Java, etc. One way to use monads is to accumulate a list of operations that need to be done and actually do the operations when you "unpack" the monad at the end. Such a monad could do many of the things a compiler does, since it has the operation list at the outset and doesn't just blast from one statement to the next. This blends into the techniques used in fluent DSLs.
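A rough Python sketch of that accumulate-then-unpack idea (not a real monad, just the shape: a "program" is a plain list of operations that can be inspected and rewritten before anything runs; all names here are made up):

```python
# Each "program" is just data: a list of (operation, argument) pairs.
def put(key, value):
    return [("put", (key, value))]

def delete(key):
    return [("delete", key)]

def then(*programs):
    # Sequencing is list concatenation -- nothing runs yet.
    return [step for p in programs for step in p]

def optimize(program):
    # Because we hold the whole operation list, we can rewrite it
    # like a compiler would, e.g. drop puts to keys that get deleted.
    deleted = {arg for op, arg in program if op == "delete"}
    return [(op, arg) for op, arg in program
            if not (op == "put" and arg[0] in deleted)]

def run(program, store):
    # "Unpacking": only here do effects actually happen.
    for op, arg in program:
        if op == "put":
            store[arg[0]] = arg[1]
        elif op == "delete":
            store.pop(arg[0], None)
    return store

prog = then(put("a", 1), put("b", 2), delete("b"))
print(run(optimize(prog), {}))  # {'a': 1}
```

The point is that `prog` is inert data until `run` interprets it, so whole-program rewrites are available before execution.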


Usually your database connection is a function that is passed as a parameter to the functions using it.

That way you don't need to Mock, but you can input a "connection" during testing that does what you expect.

If you say "well, now I have to type extra parameters everywhere", this is usually solved with partial application. I.e. you create new functions from your existing ones that have the DB connection already baked inside.

https://fsharpforfunandprofit.com/posts/low-risk-ways-to-use...

https://fsharpforfunandprofit.com/posts/partial-application/
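In Python the same trick is `functools.partial`; here's a tiny sketch (the `find_user` function and the dict-as-connection are invented for illustration):

```python
from functools import partial

# The real function takes the connection explicitly...
def find_user(conn, user_id):
    return conn["users"].get(user_id)

# ...so in tests, the "connection" is just a plain value -- no mock.
fake_conn = {"users": {1: "alice"}}
assert find_user(fake_conn, 1) == "alice"

# In production code you bake the connection in once, then pass
# around a function that no longer mentions it.
find = partial(find_user, fake_conn)
print(find(1))  # alice
```

Call sites never see the connection parameter, yet nothing is hidden in globals.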


Haskell has the IO monad, which is like a magic type that is allowed to have any side effect whatsoever. So you are free to write imperative code in Haskell.

However, what you might want to do (and this is based on IIRC!) is have a typeclass for your database interaction. Then the real DB type and the mock one can both implement that typeclass, and you basically have dependency injection.

There are other ways like free monads but I never got my head around that.

You could also create a monad transformer, which is a bit more straightforward. They let you compose monads, so you can chain little bits of side effect. Like "here is something that can talk to a database and write logs, and that is all it will do" ... but actually it is pure! It thinks it is doing those things, but they can be mocked.


Not necessarily. For an easy counter example, build a Sierpiński triangle. Easy to unit test. Easier to build using imperative code than functional. (This goes for a ton of fractals, honestly.)


It burns me up that Fibonacci numbers are used so frequently as an example of functional programming, because it's a clear case of malpractice: the naive version performs terribly without memoization. Even in the 1980s, CS profs were trying to tell us how BASIC sucked, yet efficient Fibonacci is so easy to code up in BASIC. (I'd really be impressed with a system that could figure out the closed form based on the definition.)

How easy it is to draw fractals depends on your tools. If you have turtle graphics with affine transformations (rotation and scaling) it is so easy to write simple recursive programs to draw space filling curves and such.


> It burns me up that Fibonacci numbers are used so frequently as an example of functional programming because it is a clear case of malpractice, particularly because it performs terribly without memoization

It's used as an example of basic recursive programming largely because it's an example that can be returned to with memoization. That there is a better way to do it, which can be shown later, is a reason for selecting it as an early example in a curriculum.


I don't see why you can't write the closed form in a functional language as well. This only works for 1 ≤ N ≤ 70; for N ≥ 71 the approximation of Sqrt5 isn't good enough, and you'd need to do more symbolic math to eliminate that term.

  -module('fib').
  -export([go/1]).


  go(N) when N < 1 -> throw(badarg);
  go(N) ->
        Sqrt5 = math:sqrt(5),
        round(1.0 / Sqrt5 * math:pow((1 + Sqrt5) / 2.0, N) - 1.0 / Sqrt5 * math:pow((1 - Sqrt5) / 2.0, N)).

  rp(lists:map(fun fib:go/1, lists:seq(1, 71))).
  [1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,1597,2584,
   4181,6765,10946,17711,28657,46368,75025,121393,196418,
   317811,514229,832040,1346269,2178309,3524578,5702887,
   9227465,14930352,24157817,39088169,63245986,102334155,
   165580141,267914296,433494437,701408733,1134903170,
   1836311903,2971215073,4807526976,7778742049,12586269025,
   20365011074,32951280099,53316291173,86267571272,
   139583862445,225851433717,365435296162,591286729879,
   956722026041,1548008755920,2504730781961,4052739537881,
   6557470319842,10610209857723,17167680177565,27777890035288,
   44945570212853,72723460248141,117669030460994,
   190392490709135,308061521170130]
(Note that 190392490709135 + 117669030460994 /= 308061521170130; the N = 71 entry is off by one, showing where the float approximation breaks down.)


The question is whether you can have a system that can derive the closed form from the definition, not one you can feed it to. Right?

Similarly, it may be faster to use the repeated-squares approach to get a large fib value, and I don't know of any system that can get that from the recursive definition, either.
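For reference, the repeated-squares idea is usually written as "fast doubling," using the identities fib(2k) = fib(k)·(2·fib(k+1) − fib(k)) and fib(2k+1) = fib(k)² + fib(k+1)²; a sketch in Python (exact integer arithmetic, O(log n) multiplications):

```python
def fib(n):
    # Returns the pair (fib(n), fib(n+1)) via fast doubling.
    if n == 0:
        return (0, 1)
    a, b = fib(n // 2)            # a = fib(k), b = fib(k+1)
    c = a * (2 * b - a)           # fib(2k)
    d = a * a + b * b             # fib(2k+1)
    return (c, d) if n % 2 == 0 else (d, c + d)

print(fib(70)[0])  # 190392490709135 -- exact, unlike the float closed form
```

Unlike the sqrt(5) closed form above, this stays correct past N = 70.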


Nonfunctional programming doesn't give you that either, so I don't see the problem?

If you use the iterative / recursive form, runtime performance isn't great. If you use a lookup table, performance is great, but space might suffer (but then, how many values do you really need?). If you want to use something better, you've got to write it yourself. Maybe one day an optimizing compiler will figure it out, but it's more likely to figure out you never call it with N > 10, and precompute, and that's fine.


Right, but the post said "I'd really be impressed with a system that could figure out the closed form based on the definition." The point seeming to be that coding up a recursive process straight from the definition isn't the best reason to teach it that way.

This goes back to a ton of the talk on early compilers and systems. They would push that you should lean heavily on the system to move from definitions to optimal solutions. (This later got changed heavily to leaning only on the compiler, but the same idea applies.)


That was parenthetical, I was responding to "It burns me up that Fibonacci numbers are used so frequently as an example of functional programming because it is a clear case of malpractice"


Right, I took that to be that the complaint is the recursive definition is used to teach it. Stated differently, ignoring the parenthetical takes the weakest interpretation of that post.


Change for a dollar is another fun one there. The closed form is mind-bending, and I don't think we have built a system that can derive it.


Or you can write the tail-recursive version, which starts from 1 and builds up to n; no need to memoize.
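Sketched in Python for illustration (accumulator-passing style; note CPython doesn't actually do tail-call elimination, so in practice this is written as a loop there):

```python
def fib(n, a=0, b=1):
    # Each call carries the running pair (a, b) forward, so there is
    # no re-computation and nothing to memoize: O(n) calls total.
    return a if n == 0 else fib(n - 1, b, a + b)

print(fib(30))  # 832040
```

In a language with guaranteed tail calls (Erlang, Haskell, Scheme) this runs in constant stack space.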


> Easier to build using imperative code than functional.

I am a bit confused about this. The following code is pure and I can't think of an easier way to do it imperatively:

    function triangle(n) {
        if (n <= 1) {
            return [1]
        }

        const p = triangle(n - 1)
        return Array.from({ length: n })
            .map((_, i) => (p[i - 1] ?? 0) + (p[i] ?? 0))
    }


I was actually mildly curious if anyone would call me on having picked an easy fractal to do with the recursion. :) When I wrote the post, I was thinking of the Koch curve. Not sure why I mixed it up in name with the other.

I was definitely leaning on some of the abstractions that you get in the likes of turtle geometry to show how this works. Many of the programs you can write using those primitives are very interesting, and far more unwieldy in any other paradigm.

At large, I view it as the difficulty of forcing Cartesian coordinates. For a large number of problems, things get far easier to state in polar coordinates. Such that your choice of abstraction and representation matters. A lot.


Another "pure functional" approach to drawing these things is to write a shader program which takes the (x,y) coordinates and returns the pixel values. Here is a parameterized Koch curve done as a shader

https://www.shadertoy.com/view/XdcGzH


Sadly, that page isn't loading for me. :( Mistake on my end?

I'm going to guess that the shader version is a bit larger than the naive turtle version. Not that it isn't doable, but it's tough evidence for the expressive power of functional code. :D


It is 74 lines of basically C code on a page that runs the shader with WebGL. On my iPad it renders at 60 fps with a parameter that changes. It’s really neat.

It doesn’t look like “functional code” but it really is, since the shader returns a pixel color for x and y. I bet there is some squarish kind of fractal that could be code-golfed with bit manipulation.


Odd that the page doesn't show me the code. For some reason, webgl is busted on my firefox at the moment. Not sure if that is my fault or something else. I'd expect it to still show the code, though. :(

At any rate, thanks for the link! Will try another computer later to see if it can load.


Could you elaborate a little bit more? These days I write a lot of Go code, and I end up using mock objects for different interfaces to write unit tests. Is there another way to do this?


Purely functional programming in package management and system configuration is next-level.[0] When your package installation scripts are no longer represented as actions from one state to another but as a state that is either realized completely or not, things become a lot more deterministic, manageable, and compositional. In Nix, a package is a pure function that takes in inputs and produces outputs. Given the same inputs, we have the same outputs, so caching becomes easy, for instance, along with extensive sharing.

As an active user and package maintainer, I can't count the number of times I've decided to rebuild an old project that I haven't touched in years and had it still work, while in the same session working on another project using up-to-date libraries.

There's also straight.el for Emacs[1] which has finally made Emacs config maintenance far better for me.

[0] https://nixos.org/

[1] https://github.com/radian-software/straight.el


It's all about minimizing cognitive overhead. This happens in a few ways when you use strict functional patterns and a tight domain model:

- You can safely make more assumptions about how your program works

- You can rely on your type checker like a to-do list when extending your program

These two benefits require more discipline, but they make extension and support brain-dead easy.

For the record, you don't need Haskell to enjoy these benefits, you can get them in Python or TypeScript too, as long as you're disciplined in how you design your system.


Functional programming enables local reasoning about your program: since effectful code is clearly signaled with various monads, you can be sure that some pure code is such just by looking at the type. This lets me think less about the surrounding context to understand a given block of code.

So, I would say the advantage of a pure functional programming language is that it enforces and encourages writing pure code. You can still write in this style to some degree in Python, but you don't get all of the idioms and language support that make functional programming really nice.

I notice a difference in my code when I write in a pure language.

There are some cases where you really want in-place mutation (graph algorithms are a good example), but for most other things I feel like functional programming wins because of how you can think about your code, and less because of what the code actually can or cannot do.


Structure and Interpretation of Computer Programs section 3.1, Assignment and Local State, gives an overview of the costs and benefits of adding the concept of mutation to an otherwise referentially transparent language. It's from the opposite perspective: we know functional programming, now what's so great and so dangerous about imperative?

http://sarabander.github.io/sicp/html/3_002e1.xhtml


A lot of people in the imperative world have learned that it’s much easier to test pure functions, and many more people know how badly global shared state affects scalability on the human-factors side (everything, everywhere, has access to change this variable, so good luck figuring out why it gets a bad value).

Functional core, imperative shell is the compromise where you collect state in shallow parts of the code, where they are easier to spot and thus easier to reason (correctly) about.
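A minimal sketch of that shape in Python (the pricing example is made up; the point is where the state and effects live):

```python
# Functional core: all the decision-making, pure and trivially testable.
def apply_discount(total, is_member):
    return total * 0.9 if is_member and total > 100 else total

def format_receipt(items, total):
    lines = [f"{name}: {price}" for name, price in items]
    return "\n".join(lines + [f"TOTAL: {total}"])

# Imperative shell: a thin layer that gathers inputs and performs
# the effects -- the only place state and I/O are allowed to live.
def checkout(items, is_member, write=print):
    total = apply_discount(sum(price for _, price in items), is_member)
    write(format_receipt(items, total))   # the single side effect

checkout([("book", 80), ("pen", 40)], is_member=True)
```

The core is exercised with plain asserts; the shell is so thin there's barely anything left to mock.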


"FP works great for distributed systems, and most software written today is distributed."

According to Grokking Simplicity[1], good functional programming isn't about avoiding impure functions; instead extra care is given to them. These functions depend on the time they are called, so they are the most difficult to get right.

Compare to pure functions: given the same input, the results are always the same regardless of when the function was called. So they are easier to reason about.

There is actually a level that is even easier to reason about than pure functions: plain data.

So functional programmers prefer data over pure functions over impure functions.

The reason to avoid "leaks" is that impure functions cause everything that calls them to become impure as well. However, it is OK to use local mutation contained to the function itself. Sometimes mutation is simpler, and it doesn't affect the purity of the function that contains it.
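For example, this function mutates local variables freely, yet from the caller's point of view it is still pure (same input, same output, no observable effects):

```python
def running_totals(xs):
    # `out` and `acc` are mutated inside, but nothing outside the
    # function can observe the mutation -- the function stays pure.
    out, acc = [], 0
    for x in xs:
        acc += x
        out.append(acc)
    return out

print(running_totals([1, 2, 3]))  # [1, 3, 6]
```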

Another good book on practical functional programming is Domain Modeling Made Functional[2]. Actually all of Scott Wlaschin's material[3] is very good!

[1]: https://www.manning.com/books/grokking-simplicity

[2]: https://pragprog.com/titles/swdddf/domain-modeling-made-func...

[3]: https://fsharpforfunandprofit.com/


Mutable State is to Software as Moving Parts are to Hardware :-)

https://kristiandupont.medium.com/mutable-state-is-to-softwa...

I find personally that there are some areas where FP shines -- anything that is or could be a CLI that takes some input and returns some other output is well suited. A GUI program is the manipulation of state, and it is poorly suited, though many sub parts might not be.


> A GUI program is the manipulation of state, and it is poorly suited, though many sub parts might not be.

If you're manipulating state that doesn't drive any outputs, then your program doesn't do anything useful. I think they're not all that different.


I consider taste a benefit, regardless of what your taste may be. There are other things to weigh, too, of course! But if the end result is that you enjoyed your time more than you would have if you were slinging Java, that’s not without merit.


I came here to say something similar. A lot of us code as a hobby as well as professionally, and certain languages and paradigms tickle us. Sometimes I find almost tantric beauty in c++ for goodness' sake.


that’s a good perspective!


Imperative shell, (ever-expanding-through-relentless-refactoring) functional core seems to work pretty well for practical applications. Modern Java gets the job done in that regard.

What I don't like is the concentration level required for functional productions over equivalent imperative routines. I have found it difficult to impress upon junior developers the testing advantages of functional programming and in refactoring pairing sessions with them, they quickly lose the thread and go all doe eyed.

I feel like the current functional programming paradigms favor terseness that completely obliterates the thought process that led to the production. So, even if junior has the aha moment when we reach the end of the production, and understands what we just achieved there in 6 lines of dense code, they'd be at a loss to retrace it themselves in the future because the thought processes of each operator in the production require too much simulation space in the head.

That, and libraries like RxJS that layer in concepts of time and observables and higher-order observables with sneaky completion states that only stretch the mind further because the true semantics of the program are coupled with under-documented quirky edge cases of the operators. Running into one of those while pairing with junior is not exactly confidence building.

Long-form programming with named pure functions might help, but then I suppose you can lose the terseness and can get lost in a sea of 1-line named functions.


You can write a confusing, unreadable Haskell script, just like you can write a confusing, unreadable Python script. There’s nothing magic about the functional paradigm.

It does, IMO, give you fewer footguns related to state, though. While I still actually like other paradigms, functional programming really limits how clever you can be with state. You can only pass things around immutably via parameters (at least with Haskell specifically), instead of making a confusing taxonomy of objects with unique APIs, for example.


> You can write a confusing unreadable Haskell script, just like you can write a confusing unreadable Python script.

I totally agree (and have seen both scenarios throughout my career).

That said, all else being equal[0], refactoring the purely functional code is, usually, easier, more reliable and less scary.

[0] - among other things, this assumes the person doing the refactoring knows, in this example, both Python and Haskell equally well.


I mean, there are different ways to describe this, but it comes down to how you want to think about state. The more pure you get, the more you can focus on operations or transformations, as functional programming is really about pipelines of transformation of code and data. By constraining state this way, one can potentially find it easier to reason about whatever domain they are solving for. However, to achieve that there is a certain set of hoops one has to mentally jump through and maintain in order to think clearly about said problems. For some that is worthwhile; for others, not. I generally trend towards code that is functional regardless of the language.

Related concepts at a high level include the comparison between declarative code and imperative. Code that is functional tends to be declarative and, in a way, "is what it seems to be." There is also a transitive property that allows the makeup of a thing to also be its own runnable representation, which makes portability and idempotence easier to achieve.

I dare say that original "OO" in terms of "message passing" is functional in nature, but object-oriented somehow became something it is not.

Erlang is probably the best representation of a functional language ideal in my own personal opinion when weighing in the ecosystem and capabilities of the language and tooling.


> a Haskell program is the same as a Python program

  Beware of the Turing tar-pit 
  in which everything is possible 
  but nothing of interest is easy.
      -- Perlis:54
https://web.archive.org/web/20120806032011/http://pu.inf.uni...

Haskell is syntactic sugar on top of the Von Neumann architecture (likewise Python, Rust, COBOL, Smalltalk, etc.)

The Von Neumann architecture is inherently stateful. Anything with volatile memory is.

Functional programming is an idiom, not a technology. To the degree functional programming is a technology, it is one for analog computers.

Good luck. My time has been wasted trying to be clever.


There is probably as much value in just disciplining yourself to try to write mostly pure functions, regardless of language (I'd recommend JavaScript or TypeScript, as they are a joy to use and highly expressive).

This has benefits when working in large, long-lived systems where engineers can't keep everything in their head, and people come and go over the years. It is easier to reason about, to refactor, and to test, when you know that the "blast radius" of your function is just the return value.


Consider your program a model of the problem that you are automating. Depending on what you are modeling, different paradigms and styles of program will help in different ways.

What we call functional programming today is often reducing the problem down to the easy parts. Which is great and should not be discounted. But don't think of it as any more valid of a solution than anything else. And pure numerical models of the problem can often be just as fruitful in solution speed, while being utterly foreign to most programmers.

OO gets lambasted as you try and model each individual thing in a problem. In my mind rightfully so. That said, it is very common for us to model things in a way that is very "object" based. Consider the classic model of an elevator system and how that is coded by most people. You will have a set of elevator objects with states that they currently represent.

But, you could also do this fully numerically with a polynomial representing things. Expand your toolset to use generating functions, and you can even start building equations that can count the number of solutions at any given state. Still just a symbolic model, but very very different from the OO or even FP style of model that most programmers will write.

Better for the problem you are solving? Probably not, during the exploration side of things. But get the problem into a formulation that you can translate into a SAT style, and you can feed it to SAT solvers that are far more capable than programs most any individual can write. Translate the solution back to your representation for display to the users. (Or just general use in your program, as you move to the next thing.)


Systems where you need to prove correctness at least informally. It’s much easier to prove correctness of a functional program than imperative one.


Depends on your definition of "correctness". In MISRA, JSF, and similar areas, memory usage (including stack!) is part of the correctness. If you overflow the stack because of recursion, it doesn't matter how "correct" the rest of the program was, it's still broken.

And the same for time. Some programs require proving that you meet the timing constraints. Recursion does not make that easier. (Neither does a language that can allocate behind your back, nor one that can run a garbage collector.)

So the rules for such environments usually say: No recursion, and no allocations after initialization. They say this because those rules make it easier to prove correctness.


Unstructured recursion is incredibly tricky to reason about in terms of stack/heap use and termination, IMO quite a bit more so than imperative structures. To me, it reproduces most of the same problems as gotos.

There are techniques of factoring out recursion (e.g. "recursion schemes") that can help eliminate these problems, but some things are just easier to reason about imperatively.


A different thing to look at is what enforcing referential transparency in your program means.

Referential transparency means that when we bind an expression to a name (e.g. `y = f x`), the two are interchangeable, and whenever we use `y` we could just as well use `f x` and vice versa, and the meaning of the code will stay the same.

Enforcing referential transparency in a programming language means that:

- We need to have more control over effects (purity)

- We can use substitution and equational reasoning

The value of referential transparency for me is that I can trust code to not surprise me with unexpected effects, I can use substitution to understand what a piece of code is doing, and I can always refactor by converting a piece of code to a new value or function because referential transparency guarantees that they are equivalent.

Because of that I feel like I can always understand how something works because I have a simple process to figure things out that doesn't require me keeping a lot of context in my head. I never feel like something is "magic" that I cannot understand. And I can easily change code to understand it better without changing its meaning.

Referential transparency is freeing, and makes me worry less when working with code!
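A minimal sketch of the substitution property, using a hypothetical `area` function: because it is pure, a bound name and the expression it names are interchangeable.

```haskell
-- `area` is pure, so binding its result to a name and inlining the
-- call are interchangeable: both programs mean the same thing.
area :: Double -> Double
area r = pi * r * r

total :: Double
total =
  let a = area 2.0        -- bind once and reuse...
  in  a + a               -- ...or inline: `area 2.0 + area 2.0`.
                          -- Referential transparency guarantees the
                          -- two forms are equivalent.
```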

---

The other part that is very notable about Haskell is one of its approaches to concurrency - Software Transactional Memory. Which is enabled by limiting the kind of effects we can perform in a transaction block:

https://www.oreilly.com/library/view/parallel-and-concurrent...
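A minimal STM sketch (hypothetical `transfer` function, using the `stm` package): only STM effects are allowed inside `atomically`, which is what lets the runtime retry a transaction safely.

```haskell
import Control.Concurrent.STM

-- Move funds between two accounts atomically.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)          -- block/retry until funds exist
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances                     -- (60,40)
```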


It's nice to have things pure-by-default; and sprinkle in whatever effects, etc. as desired. (Like Haskell, Idris, Agda, etc.)

For example, I've worked on a lot of Scala code which represented optional values using `Option[T]` (~= Haskell's `Maybe t`); represented possible failures using `Try[T]` (~= `Either Exception t`); and I even used the Cats library to make code polymorphic (i.e. for all `M: Monad`, or `M: MonadThrow`, or indeed `Applicative`, etc.). The older ScalaZ library is a popular alternative to Cats which does similar things. Concurrency is currently quite diverse in Scala: we mostly used `Future[T]`, but there are a bunch of alternatives out there like `Task`, etc. Likewise there are other Scala libraries/frameworks which model side-effects differently, e.g. Zio uses algebraic effects.

However, since Scala is not pure by default, we can't count on any of these fancy types to actually prevent the problems they're meant to address. For example, we can only use `Option` alongside `null`: the latter can't be avoided, since Scala isn't pure; in fact `Option` technically introduces even more null checks (since `x: Int` might be `null`, but `y: Option[Int]` might be `null` or `Some(null)`!). Likewise, `Try` can only be introduced alongside the possibility of any exception being thrown at any time. And so on.

In contrast, we might have a project e.g. in Haskell, where `IO` appears in all of our types. Such code might use all sorts of confusing effects (e.g. early returns, mutable variables, concurrency, etc.), and may blow up in all sorts of ways, just like Scala, Python, etc. Yet at least they're labelled as such, and we're able to write other values and functions which actually do what they claim (modulo totality, unless we're using Idris, Agda, etc.).
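A small sketch of that contrast (hypothetical `safeHead` and `greet` functions): in a pure-by-default language, `Maybe` genuinely rules out nulls, and an effect shows up in the type that performs it.

```haskell
-- No hidden null or exception is possible here: the type says it all.
safeHead :: [a] -> Maybe a
safeHead (x:_) = Just x
safeHead _     = Nothing

-- The `IO` in the signature labels the side effect.
greet :: String -> IO ()
greet name = putStrLn ("hello, " ++ name)
```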


I think the issue isn't for the most part the kind of problem the code addresses, but the social context in which the code is used. Functional programs can allow for easier reasoning about programs or paths later, which can be useful when updating, refactoring, debugging etc.

Are you _literally_ writing a simple script to do a single task which you won't save into version control? Pick whatever style will be fastest to write + run. But if you'll need to re-read it in 6 months to figure out how to re-use a portion in a different context, or the next engineer will need to migrate something out from under it in a year, or whatever else, then picking a style based on readability, testability, and analyzability can be worth it -- provided the rest of your team is on the same page.


IMO pure FP is nice because _compositionality_ is nice. If my problem has lots of simple data structures that represent mathematical objects, then usually I find that most of things I want to do with them can be succinctly modeled with pure function compositions. When my problem gets more complex, including but not limited to state, monads can restore that compositionality in a way that works with the rest of my pure code.

This property is true to varying degrees outside of pure FP too. For example in Rust I find that expression-orientation makes programs fairly easy to compose, modulo thinking about ownership.


Pure functions can be visualized as static mappings from inputs -> outputs. It's like a simple table mapping inputs to outputs. Given a set of inputs, you are guaranteed the same outputs on every invocation.

In a complex world with many hidden aspects, unleashing unpleasant surprises all the time, working on purely functional components makes it easier to:

1. understand

2. test

3. debug

4. extend

These are significant benefits of the functional approach (at the function/component level), in my view.
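The "static mapping" view can be made literal: a pure function really can be replaced by a lookup table. A sketch with a hypothetical `discount` rule:

```haskell
-- Same input, same output, on every invocation.
discount :: Int -> Double
discount qty
  | qty >= 100 = 0.15
  | qty >= 10  = 0.05
  | otherwise  = 0.0

-- The equivalent "table" view of the same mapping over a finite domain:
discountTable :: [(Int, Double)]
discountTable = [ (q, discount q) | q <- [0 .. 200] ]
```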


I've been diving into Haskell recently and FP in general and although I lack the deep CS knowledge I find what you said to be nice: everything is atomic.

I personally do a lot of complex math, so I'm using it for programs that do a lot of math. I still haven't got my chops up enough to do a lot of procedural and external stuff with it yet; I find a higher-level language is much easier for that. But when doing math, I find it's much harder for mistakes to make it into your production code with Haskell.


When you run a company and need to decide what your tech stack is, it becomes as much a question about the labor pool as it is about technology, maybe even more so.

If you choose to write your core product in a functional programming language, you now have to hire an entire team of people who can program FP. If you decide to write your core product in Javascript, Node, or Typescript, you can hire bootcamp grads and a handful of senior engineers to keep an eye on them.


John Backus, one of the pioneers of functional programming, said “Functional programming makes it easy to do hard things, but functional programming makes it very difficult to do easy things.”


I like pure FP languages for writing UIs. In particular I like languages where you can't have any undefined behavior. I like modeling the state of a page/form/component/whatever as a bunch of constructors on a custom type, each containing exactly the data you need to render the page/form/component/whatever in that given state. I guess this isn't so specific to UIs but it's where I have the most experience. Modeling something as a state machine is cool and good.
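A sketch of that modeling style, with a hypothetical form component: each constructor carries exactly the data needed to render that state, so undefined combinations are unrepresentable.

```haskell
-- A form's states as a sum type: one constructor per state, each with
-- only the data that state needs.
data FormState
  = Editing String              -- current input
  | Submitting String           -- input being sent
  | Succeeded
  | Failed String               -- error message

render :: FormState -> String
render (Editing s)    = "input: " ++ s
render (Submitting s) = "sending " ++ s ++ "..."
render Succeeded      = "done!"
render (Failed err)   = "error: " ++ err
```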


Pure FP is fantastic for business applications with complex rules and logic. Shines in projects that leverage Domain Driven Design: https://pragprog.com/titles/swdddf/domain-modeling-made-func...


Recommended (free) reading: Professor Frisby's mostly adequate guide to functional programming

https://mostly-adequate.gitbook.io/mostly-adequate-guide/

See discussion: https://news.ycombinator.com/item?id=22654135


Pure FP is fantastic when you're doing Domain-Driven Design. Optics is brilliant for working with aggregates, functional core imperative shell to structure the dependencies within the applications and put the domain in the center, monads and applicatives to achieve different component composition styles.


IMHO the benefit of FP is that it's easier to prove things. The downside is that it's more work: being able to show you've done something correctly is extra work on top of actually doing it correctly.

There's a point of view that if you haven't done it correctly, you haven't done it at all. There's another point of view that you just fix bugs as they are found.


When you are expected to write tests for your code, you will be happier if that code uses functional patterns. Functional code can usually be tested without the complex mocking many developers are used to relying on.

For me, as a UI dev, there are a lot of repetitive tasks. FP allows me to know that the Lego pieces all work as they should.


I think it's beneficial when you want code that is easier to reason about, test, and maintain. It excels in scenarios where immutability and side-effect-free functions are crucial, such as concurrent programming and building highly scalable systems.


Can be useful for running untrusted code. You can bound your pure functional programs in time and space.


I work full-time as a Haskell programmer, I stream myself building various Haskell projects once a week at https://twitch.tv/agentultra, I maintain a couple of Haskell libraries, and I occasionally do other things too.

I've been a professional programmer for a little more than twenty years and have been using Haskell to this degree for about the last 4 or so. A good chunk of my career was dedicated to Python and C. I've done work in C++ and Javascript. Dabbled in Common Lisp in my spare time. Which is to say, I've run the gamut of functional, imperative, object-oriented, etc.

I got started in OCaml first by taking the Inria course. I later tried Haskell by taking the University of Glasgow course. I often thought of myself as preferring FP but I always worked in languages with escape hatches where I could retreat to familiar territory. It was easy to rationalize this as, "using the right tool for the job." However thanks to some smart people in my life I decided to dip my toes in languages that were FP-first (OCaml) and later FP-only (Haskell).

> Is there a threshold after which Haskell's purely functional paradigm shines the most?

I think you just have to jump in the pool and go for it. There's no threshold. You can write scripts, programs, whole systems in Haskell. There's no point where, "it becomes worth it."

I'm certain at this point that you can write a relational database engine in Haskell that is comparable in performance and features to any other on the market. You can write video games in it. You can write short helper scripts and tools. Word processors, websites, compilers, etc.

The threshold is how willing you are to leave behind what you're already familiar with and learn a new way of programming that will make you feel like a complete beginner again for a while. Are you willing to re-learn programming over again? I don't know why this is surprising to people but if you started with a C-like language and picked up other C-like languages along the way, picking up Haskell isn't going to be as easy: there's nothing familiar you can use in your C-like knowledge that will be useful for a long while.

I've been doing it for a while now and there are benefits to it which is why I still program in Haskell and got a job writing code in Haskell. You likely already have a good deal of the tools needed to reason about how to design programs in a pure FP language: sets and high-school level algebra. You have notions built into the type system in Haskell like equality, relations, and constraints. Once you start jogging your brain on how to think this way you can design most of your program in types. And when you do it this way, there tends to be only one or a couple of ways to implement a program that satisfies those types... it's almost a mechanical process of filling in the holes.

From a practical perspective of a former Python programmer, I don't have to write a good deal of the unit tests I would usually start with when building a Python program. Armed with a well-thought-out handful of types, the compiler will reject any program that isn't correct by construction, which allows me to focus more of my efforts on things that are more important to me: is the logic correct, etc. I'm not testing, "are these things equal," "does this throw an expected error," "is this actually a valid function," etc.

Another weird thing that happens when you learn to program in Haskell is that control flow is kind of... represented in data types. Since Haskell is expression-oriented and execution is carried out by evaluation, we get neat features like whole-program analysis of all code branches. If I add a new constructor to a sum type, I get instant feedback on all the places where I now need to handle that case. This is super useful.

However... it can be a challenge getting to the place where you're comfortable with it. I had a hard time when I started a small Haskell project early on to make a simple web app. I was so used to logging being a single import away that having to learn all these new concepts in Haskell like "monad transformers" just to add logging to my app seemed like a bunch of busy work for something that was "solved" in my mind. The key to getting through moments like that is to forget your past experiences and just open your mind to being a beginner again.

It becomes useful later on when you realize that composition is so incredibly useful. You start to miss it when you go back to procedural languages and there are no guarantees and nothing composes.


> control flow is kind of... represented in data types

This sounds really interesting. Is there a toy example that comes to mind that you could share?


Sure, a trivial one is:

`Either Int Char`

Example values of this type are: `Left 3`, `Right 'a'`, `Left (-99)`, `Right 'X'`.

When we pattern match on a value of this `Either Int Char` type we have two branches to handle:

```
case myValue of
  Left someInt -> "It's an integer: " ++ show someInt
  Right someChar -> "It's a char: " ++ [someChar]
```

You can nest them if you need more branches: `Either Int (Either Char String)` and you can match out the different branches. Although people don't usually write code this way in Haskell: we usually think of better types that model what we're trying to do.

This is such a common way of doing things. There's a great function called `compare` that returns a value of type `Ordering`, which is one of `LT`, `EQ`, or `GT`, covering each of the cases when comparing things with the usual comparison operators.
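A small sketch of matching on `Ordering` (hypothetical `describe` function): it is just another sum type, handled like any other.

```haskell
-- `compare` returns LT, EQ, or GT; we pattern match on the result.
describe :: Int -> Int -> String
describe x y = case compare x y of
  LT -> "less"
  EQ -> "equal"
  GT -> "greater"
```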

A lot of Haskell is variations on pattern matching: guards, if-expressions, etc.

Where this becomes even more useful is when you start to use types to enforce pre-conditions. I don't know how many times I've seen code in C++ or Python or Javascript where you have to validate the inputs to a function: make sure the list isn't empty otherwise throw an error! In Haskell we can prevent that possibility and eliminate the need for a useless check by writing a type for a non-empty list and use that in our function's signature (note: you can do this in C++ and Javascript too, it's a good idea).
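A sketch of that precondition-in-the-type idea using `Data.List.NonEmpty` from the `base` library (hypothetical `firstItem` function):

```haskell
import Data.List.NonEmpty (NonEmpty (..))
import qualified Data.List.NonEmpty as NE

-- `NonEmpty` can never be the empty list, so taking the head needs
-- no runtime check and cannot fail.
firstItem :: NonEmpty a -> a
firstItem = NE.head

-- Callers must prove non-emptiness up front, e.g. `1 :| [2, 3]`.
```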

Or you can write your type so that it fails in some way if the inputs are not right: use `Either SomeError Int`. Then your callers have to handle the error case and the happy case.
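A sketch with a hypothetical `SomeError` type: because the failure mode is in the signature, callers are forced to handle both cases.

```haskell
data SomeError = NotAPositiveNumber deriving (Eq, Show)

-- The Either in the type makes failure explicit: no exception needed.
parsePositive :: String -> Either SomeError Int
parsePositive s = case reads s of
  [(n, "")] | n > 0 -> Right n
  _                 -> Left NotAPositiveNumber
```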

There are such things as "exceptions" in Haskell but they're limited (mostly) to things that happen in `IO`-returning functions. There are ways to pick apart the seams of the GHC runtime and do nefarious things, throwing exceptions that break the usual rules (eg: https://hackage.haskell.org/package/bytestring-0.10.8.1/docs...). However, it's incredibly rare and sticks out like a sore thumb, forcing you to give such code extra scrutiny.

Exceptions aren't used for control flow in the same way that they are in Python, say. Handling them though is possible and a whole other topic. Let's just say it's a bit tricky to get used to... though Haskell has some good facilities for properly handling async/threads and exceptions so that resources are cleaned up and handled properly.


IMO pure functional might not be a good idea if you really care a lot about performance. Immutable data structures can corner you into suboptimal decisions sometimes.


Functional is the way to go up until you have a problem that you need e.g. TLA+ to solve.

In other words, is your program a calculator or a robot?


When you parse something instead of validating it.
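The "parse, don't validate" idea in miniature, with a hypothetical `Email` wrapper: instead of checking a raw `String` and passing it along, parse it once into a type that can only hold accepted values.

```haskell
newtype Email = Email String deriving (Eq, Show)

-- Downstream code that takes an `Email` never needs to re-validate.
parseEmail :: String -> Maybe Email
parseEmail s
  | '@' `elem` s = Just (Email s)   -- deliberately simplistic rule
  | otherwise    = Nothing
```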


Theorem proving, like Lean 4. I.e., programs that aren't about state manipulation.


It becomes useful when you work with pure functional hardware.


they are not

just like cargo bikes are not



