
Except for all the research around functional programming?

This is like when I hear people claim that physics has not advanced in the last 50-70 years.



Futhark (https://futhark-lang.org) is an example of a functional language specifically designed for writing "normal" high-level functional code that compiles to parallel execution.


Genuine question-- is there something about Futhark that makes it particularly well-suited for parallel execution compared to any other functional programming language (especially of the purely functional kind)? FP in general is inherently well suited for this application.

As I understand it, Futhark aims to leverage GPUs in particular, and that approach seems to be what makes it unique within the category of FP languages?


Functional languages have a core that is well suited for parallelism, but all mainstream functional languages have various features that are unsuitable for parallel execution - particularly if you want to target limited machines like GPUs.

The Futhark programming model is standard functional combinators: map, reduce, scan, etc. You can do that in any functional language, and it is mostly trivial to rewrite a Futhark program in Haskell or SML or OCaml. Futhark removes a lot of things (like recursive data structures) to make efficient parallel execution easier (and even possible), and adds various minor conveniences. But you don't really need to add anything for functional programming to be suitable for parallelism; you just have to avoid certain common things (like linked lists or laziness).
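As a rough rendering in Haskell (ordinary sequential list code, names mine; in Futhark the same shapes compile to parallel GPU code):

    import Data.List (foldl', scanl')

    -- A dot product in the combinator style described above: the map
    -- stage (zipWith) has no cross-element dependencies, and the
    -- reduce stage only needs an associative operator, so both
    -- parallelise naturally on a GPU.
    dotProduct :: [Double] -> [Double] -> Double
    dotProduct xs ys = foldl' (+) 0 (zipWith (*) xs ys)

    -- A prefix sum, i.e. the scan combinator mentioned above.
    prefixSums :: [Double] -> [Double]
    prefixSums = scanl' (+) 0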


Functional programming is not based on how hardware is implemented. Serial execution of instructions and in-place mutation of chunks of memory are core parts of how the hardware works, and neither is functional. Doing graph reduction and making tons of copies will be slow.


Are we still doing this stupid ass reasoning around FP? Is the CPU really that serial, when it literally reschedules your instructions based on the graph of their interconnections?

Also, just think about all the optimizations your "serial" programming language does -- do you yourself really write all those mutations? Or is that the compiler, which in many cases can do a much better job? Now what if the language's semantics allowed even more freedom for the compiler in exchange for more restrictions on the language? Sure, we still don't have "sufficiently advanced compilers" that would replace programmers, but FP languages absolutely trade blows with most imperative languages on many different kinds of problems. This is very visible when parallelism comes into the picture: as it turns out, a smart parallel-aware data structure will easily beat out its serial counterpart here.
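For a small sketch of what that looks like in practice, using Haskell's widely used `parallel` package (compile with GHC's -threaded flag; the function name is mine):

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Because the mapped function is pure, the runtime is free to
    -- evaluate list elements on as many cores as it likes: no locks,
    -- no ordering constraints, and the result stays deterministic.
    squaresInParallel :: [Int] -> [Int]
    squaresInParallel = parMap rdeepseq (\x -> x * x)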


> "Is the CPU really that serial, when it literally reschedules your instructions based on the graph of their interconnections?"

Yes. Yes, it is, because all of that rescheduling and reordering is completely hidden, at great effort and expense, to make it seem like the instruction stream is executing exactly in the order specified. If it weren't, lines of code would essentially execute in an indeterminate order and no programs would function.


Hardware was for a very long time a limiting factor in the practicality of FP. For most general applications, today this is a relative non-issue.

FP is also particularly well suited for cloud computing and parallel computation.


"FP" camps tend to come in two flavors: "I'm a mathematician writing a computer program, and all problems will be made to look like math problems even if it means the program becomes an inscrutable mess of types and dense syntax" and "functional-ish idioms are included". The latter is useful, sometimes, for cloud computing and parallel computation; the former tends to have too many problems (slow build, slow execution, poor jargon laden syntax, etc.) to be very useful outside of academia.

I am (perhaps obviously) biased, here, but I tend to just roll my eyes whenever any of my colleagues suggests we should use functional programming to solve a problem. There are actually very few real-world use cases where it's objectively better.


> all problems will be made to look like math problems

All problems that can be solved with code are math problems. Proofs and programs are isomorphic (see the Curry-Howard correspondence).
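A tiny concrete instance, sketched in Haskell (the name is mine, purely illustrative): a term of a polymorphic type doubles as a proof of the proposition that type encodes.

    -- Under Curry-Howard, this type reads as the proposition
    -- "A implies (B implies A)", and the function body is its proof.
    constProof :: a -> b -> a
    constProof x _ = x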

---

Edit: this is a factually accurate comment, delivered dispassionately. It's not controversial or new-- it's something we've known for longer than the C language has existed. Why the downvote? Like I said, see this: https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...


Factually accurate (to some degree), but almost completely irrelevant.

First, not all problems that can be solved with code are math problems. Take driving a serial port, for instance. There might be some aspects of it that are mathematical, but mostly it's a matter of interfacing with the chip spec.

Second, even for problems that are isomorphic to a math problem... the thing about isomorphisms is that they aren't identical. One form or the other is often easier to deal with. (That's one of the reasons we care about isomorphisms - we can turn hard problems into easier ones.)

Well, which is easier, writing programs or doing proofs? Almost always, writing programs is easier. This is why it's (almost) completely irrelevant.

Nobody wants to write code that way. So nobody cares. They're not going to care, either, no matter how forcefully you point out Curry-Howard.

Now, as you say elsewhere, you can gain insights from math that can change your code. That's true. But I suspect that most of the time, you don't actually write the code as a proof.


This is like saying all physical engineering problems are quantum mechanics (or insert other physics theory) problems. It’s technically correct (the best kind of correct), but misleading and useless as a matter of practice.


Your analogy is interesting to me.

I definitely see the parallel, but I'm not actually sure this is true.

A lot of the deep functional stuff I'm learning right now is more about finding connections and shortcuts between things that we used to think were different.

For me, comparing functional programming to older languages is more like comparing "tally marks" or "Roman numerals" to a modern "place value system".

Now back to the physics analogy. The gap between quantum physics and chemistry is both a theoretical and a computational limit.

There also seem to be very distinct layers, where the lower levels don't seem to correlate with the higher ones.

But I can also see this might apply to the Curry-Howard correspondence.

Hmmm. I have to think about it more...


I see what you’re trying to do with the comparison, but it’s not really the same.

In your example, the two things are separated by at least one layer of emergence: your example is more like saying biology is just chemistry. In maths and programming, they are both at the same level; there is no emergence.

I also haven’t found what you say to be true at all. As I’ve been learning more maths and more programming, and learning more about the link between the two, I have found that the ability to see problems from more than one angle has had a dramatic impact on how clearly I think and how efficiently I solve problems. Not useless whatsoever.


I was an academic physicist for the first several years of my post-graduate school career--I, too, see much value in having multiple perspectives to a problem.

But that's quite different from your other claim. Maths and programming are not at the same level. When one writes a "hello world" program, math does not figure into the final text of the code at all. Similarly, when one writes code to implement a system interacting with multiple dependencies, one is not doing mathematics, except in the trivial sense of your original comment. That is to say, at such a remote distance that it's meaningless to describe the activity as a mathematical one.


You’re doing theorem proving in all cases where you handle errors or exceptions in a non-trivial fashion. Same when you’re implementing any kind of authz. When dealing with async code or threads, you’d better be good at your invariants. This is all discrete maths. Yes, I don’t differentiate continuous functions at work, but let me tell you, juggling multiple condition scenarios when integrating multiple inputs with multiple systems is damn close to working with logic proofs.


I completely disagree, but I don’t have the energy to argue or explain.


Interesting. Functional programming is the only strategy I've seen be successful at building business software. (Hint: you can do FP in Java and it's not even awkward, and SQL is inherently functional).


I work at a FAANG-like company. The code base has almost no functional programming paradigms deployed. It’s a multi-billion dollar company from which I, an IC cog-in-the-machine employee, became a multimillionaire through the IPO. It’s wildly successful.


Is it all design-patterny OOP?


The vast majority of it is, unfortunately. That's starting to change, though, as we are hitting some pretty severe maintenance and performance issues, as well as bottlenecks to making changes, and all that abstraction is obviously the root cause. None of the services the team I used to lead utilized OOP (in the sense you're referring to), and the new team I lead will be making sweeping changes to move away from it under my guidance.

It served its purpose, but we've outgrown it.


Is “wanting evidence for constant silver bullet claims that never actually pan out” really “obviously biased”?

It’s not just you. Functional programming really does not adequately solve any of the problems that its advocates claim it does, and those advocates refuse to provide any evidence.

And it’s not “biased” to write these claims off.


I don't know what "silver bullet" claims you've heard, but I'm not sure how that is relevant to this thread. I don't think I've made any outlandish claims, or any claims that aren't substantiated by a preponderance of academic literature.


I find this whole thread fascinating because nobody seems to have identified what functional programming is defined as (programmers can just use functions to code and end up with functional programming; it isn't an obscure style), who the advocates are, or what they claim.

And yet, without any substance, the debate rages.


> nobody seems to have identified what functional programming is defined as

Nobody asked (until now, so thank you for asking!), and this is a fairly well discussed topic for anyone who cares to search!

You will probably get a lot of slightly different answers depending on who you ask or where you look, but I think a very strong common thread is "referential transparency". Functional programming gives you that, and that is the property that makes FP particularly well suited for parallel computation. Referential transparency is related to the concept of "function purity" (in the sense that either one usually guarantees the other), which you will often hear people talk about as well. The two concepts are so intimately tied that sometimes I wonder if they're two different perspectives on the same thing.
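To make that concrete, a minimal Haskell sketch (names mine, purely illustrative):

    -- `area` is pure: same input, same output, no side effects.
    area :: Double -> Double
    area r = pi * r * r

    -- Referential transparency: any call can be replaced by its value
    -- (or shared), so these two definitions are interchangeable. That
    -- substitution freedom is what makes reordering and parallel
    -- evaluation safe.
    twoCircles, twoCircles' :: Double
    twoCircles  = area 1.0 + area 1.0
    twoCircles' = let a = area 1.0 in a + a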

This, along with the fact that FP has been an active area of research (an important part of innovation) for a long time, is why I brought it up.

https://en.wikipedia.org/wiki/Functional_programming

https://en.wikipedia.org/wiki/Pure_function

https://en.wikipedia.org/wiki/Referential_transparency

https://softwareengineering.stackexchange.com/questions/2938...

---

> programmers can just use functions to code and end up with functional programming, it isn't an obscure style

That's not how it works. Note that functional programming has nothing to do with merely "writing functions".

---

> without any substance, the debate rages

The substance is there and there is plenty of it, but learning requires work.


> Referential transparency is related to the concept of "function purity" (in the sense that either one usually guarantees the other), which you will often hear people talk about as well.

Isn't referential transparency (the property of a function that allows it to be replaced by its equivalent output) a consequence of function purity? In other words: could a pure function not be referentially transparent?

Also, I remember Robert C. Martin describing the functional programming paradigm as a "restriction upon assignment". I kind of like this definition as the rest seems to flow from it: if you can't assign, you can't mutate. If you can't mutate, you have pure functions, which are referentially transparent.


"Could a pure function not be referentially transparent?"

Yes, there are pure functions which are not referentially transparent. A pure function with side effects, such as printing a result to standard output, is not referentially transparent. You can't replace the function with its return value, since that doesn't replicate the printing side effect.


A pure function is by definition side-effect free. What you write makes 0 sense. Maybe you're thinking of total functions?

But not all FP languages have to be so puritan as to only allow pure functions; most have escape hatches, and it is just good form to prefer purity (even in more mainstream languages!). The most common way to circumvent this problem is through a Monad, which, very naively put, is just a description of the order of side-effecting functions and their inter-dependencies. This description will later get executed at a specific place, e.g. the main function -- the point is that a large part of the code will be pure, and much easier to reason about.
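A minimal illustration of that split, in plain Haskell (nothing beyond the Prelude): the logic stays pure, and main is the description of effects that actually gets run.

    -- Pure core: referentially transparent and trivially testable.
    greeting :: String -> String
    greeting name = "Hello, " ++ name ++ "!"

    -- Effectful shell: an IO value describing "read a line, then
    -- print it back", executed by the runtime when the program runs.
    main :: IO ()
    main = do
      name <- getLine
      putStrLn (greeting name)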


I thought that a pure function could not have side effects by definition. Would you mind sharing your definition of a pure function?


Functional programmers regularly claim:

-Haskell is faster than C (lol)

-FP gives you free concurrency

-FP makes code more testable

-FP is easier to read

-FP is easier to consume for people

-FP results in no bugs

-FP is easier to change

-FP will literally suck your peepee

-Actually FP is the second coming for Christ

It’s really funny how you also pretend you’ve never heard of all the silver bullet claims that are incessantly plaguing every programming forum.

What’s also really funny is your multiple alt accounts manipulating your votes. You must be real secure in those claims.


> -FP will literally suck your peepee

> -Actually FP is the second coming for Christ

On a style point, you've rather undermined your own claim that these are common assertions, because there are entries on this list that are clearly fabricated, as well as others that look like wilful misinterpretations of what someone else said. That casts doubt on the more reasonable entries. There are annoying FP evangelists out there, but the overall tone pattern-matches to straw-manning.

You'd have made it easier for everyone to take the whole list seriously. Transparently mixing fact and fiction just makes it harder for people who aren't already part of the conversation.


I agree with some of these points, but none of them are claims I made in the context of this discussion, so I’m not sure why you’re even arguing about those. It’s irrelevant.


They’re false claims. And I didn’t bring up the false claims FP programmers make until you asked for it. I simply said that rejecting the ridiculous silver bullet claims FP programmers make is completely fine, because they don’t provide any evidence, and that which is asserted without evidence can be dismissed without it.

Not that that matters, because all of those claims are demonstrably false anyway!


> Functional programmers regularly claim:

> -Haskell is faster than C (lol)

If they regularly claim that you should easily be able to point to several recent examples. Can you?


Then why has it been slowly incorporated into literally every mainstream PL? You do know you don't have to be an extremist, and your code can contain FP, OOP, and imperative parts, whichever makes the most sense? FP and OOP are not even opposites.

They absolutely solve real issues, and it's just sticking your head in the sand to say otherwise.


The parent comment was talking about aligning programming languages with the hardware. I am not commenting on the viability of those languages, but rather saying that if your goal is to write the most performant code by understanding the strengths and weaknesses of the hardware, then using FP concepts is not the way to do it.


I feel like in both of your comments you've changed the topic slightly. I responded to the following comment, which I interpreted literally:

> Very little innovation in programming languages has happened regarding new realities at the hardware level especially transition from serial to parallel execution


Well... if code were pure (in the FP sense), then a "sufficiently smart compiler" could move it around to extract the maximum performance.

But, as always, the sufficiently smart compiler never shows up. So we're left with the humans doing the tuning, and as you say, FP is kind of antithetical to that approach.
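That said, the textbook example of what purity licenses is map fusion; GHC's base library ships a "map/map" rewrite rule of exactly this shape (the function name here is mine):

    -- Map fusion: a transformation a compiler may apply blindly only
    -- because f and g are pure. With side effects, fusing would change
    -- how often, and in what order, f and g run.
    fuseMaps :: (b -> c) -> (a -> b) -> [a] -> [c]
    fuseMaps f g = map (f . g)   -- behaves like: map f . map g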


But how will language performance evolve as the nature of the hardware our programs run on evolves? IMHO this is not an easy question to answer right now.

C compilers don’t produce 100% optimal assembly language in all cases, but typically the assumptions they make are light. The executable code they output is somewhat predictable and often close enough to hand-optimised assembly in efficiency that we ignore the difference. But this whole approach to programming was originally designed for single-threaded execution on a CPU with a chunk of RAM for storage.

What happens if we never find a way to get a single core to run much faster but processors come with ever more cores and introduce other new features for parallel execution? What happens if we evolve towards ever more distributed systems, but farming out big jobs to a set of specialised components in the cloud at a much lower level than we do today? What happens if systems start coming with other kinds of memory that have different characteristics to RAM as standard, from content-addressable memory we already have today to who-knows-what as quantum technology evolves?

If we change the rules then maybe a different style of programming will end up being more efficient. It’s true that today’s functional programming languages that control mutation and other side effects usually don’t compile down to machine code as efficiently as a well-written C program can. The heavier runtimes to manage responsibilities like garbage collection and the reliance on purely functional data structures that we don’t yet know how to convert to efficient imperative code under the hood are bottlenecks. But on the other hand, those languages can make much stronger assumptions than a lower-level language like C in other ways, and maybe those assumptions will allow compilers to safely allocate different behaviour to new hardware in ways that weren’t possible before, and maybe dividing a big job into 500 quantum foobar jobs that each run half as fast as a single-threaded foobaz job still ends up doing the job 200x faster overall.


> C compilers don’t produce 100% optimal assembly language in all cases, but typically the assumptions they make are light. The executable code they output is somewhat predictable

Lol, since when? C compilers will literally run some of your code at build time and write only the results into the binary, and they do all sorts of crazy "mental gymnastics" to make people believe it is still a dumb single-pass compiler.


You're mixing up functional programming with an execution model for functional languages. These are not the same.


Research around functional languages involves their execution models. Even ignoring execution models, immutability is a staple of functional programming, and it is not good for performance.



