Functional programming appeals to many of us for reasons other than these practical considerations: functional programming feels like reasoning in algebra, as in the Modern Algebra taken by math majors (groups, rings, fields) and beyond.
There's a saying in mathematics that when any field matures it turns into algebra. Life crossed a threshold from chemistry to biology on Earth, and developed exponentially from there. Order in mathematics crosses a similar threshold, as it becomes sufficiently structured to support algebraic reasoning. The subjective experience is like ice melting into a churning liquid, or a land-locked creature learning to fly. Once one has this experience, one cannot imagine thinking any other way. In the case of functional programming, programs written in other languages feel like ad hoc pre-civilization constructions, doing arithmetic by counting pebbles.
Advocates of Haskell don't tend to express this, because from the outside it can come off like trolling, but this algebraic sense of wonder is at the core of many Haskellers' experiences. We all have the example of Lisp in our minds: its "we found God" advocacy did much to hinder its adoption. Nevertheless, understanding this explains much about Haskell. The real point of lazy evaluation is that it best supports this algebraic reasoning, as carbon best supports life. The 47,000 compiler options reflect a primary goal of being a research bed for language constructs derived from mathematical category theory, despite Haskell's success as a practical language for those free to choose it.
The killer app for Haskell is parallelism. To this day it has the best implementation of parallelism; one can achieve a 7x speedup on 8 cores by adding a handful of lines to a program. This ease is a consequence of the functional model, and of considerable effort by Haskell's developers.
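For a sense of what "a handful of lines" means here, a minimal sketch using the parallel package's Control.Parallel.Strategies (the fib workload is just a stand-in for real work):

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Deliberately expensive pure placeholder for real work.
    fib :: Int -> Integer
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main =
      -- The sequential version is: print (sum (map fib [30 .. 37]))
      -- Swapping map for parMap rdeepseq is the entire change.
      print (sum (parMap rdeepseq fib [30 .. 37]))

Compiled with ghc -O2 -threaded and run with +RTS -N8, GHC's runtime spreads the sparks across the cores; whether you actually see something like 7x obviously depends on the workload.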
Idris 2 is itself a joy to learn, if one wants a smaller, cleaner Haskell without the 47,000 compiler options. One gets to learn dependent types. Alas, it doesn't offer commercial-grade parallelism.
> Life crossed a threshold from chemistry to biology on Earth, and developed exponentially from there. Order in mathematics crosses a similar threshold, as it becomes sufficiently structured to support algebraic reasoning.
It's funny, because I think this metaphor cuts in exactly the opposite direction: once code reaches sufficient complexity, the pure math approach doesn't survive an encounter with the real world.
I love static analysis and think that there are core business domains that are best expressed with a rich type system, but there is a whole world of absolutely critical programming that cannot simply eliminate messiness, due to interfacing with humans and our flawed practices.
With sufficient time and an expressive enough type system, even the messiest business process can be represented, but I'm not convinced that's an example of beauty so much as stubbornness.
> ... but there is a whole world of absolutely critical programming that cannot simply eliminate messiness, due to interfacing with humans and our flawed practices.
I understand what you're saying, but I don't think you'll find many programmers that will argue that a programmer shouldn't use the best tool for the job.
Backus emphasized the algebra of programs in his Turing Award lecture: "Can Programming Be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs"
Backus: Well, because the fundamental paradigm did not include a way of dealing with real time. It was a way of saying how to transform this thing into that thing, but there was no element of time involved, and that was where it got hung up.
Booch: That’s a problem you wrestled with for literally years.
Functional Reactive Programming seems promising. (Elm ditched it in favor of some simpler metaphor on didactic rather than technical grounds, I think.)
> In the case of functional programming, programs written in other languages feel like ad hoc pre-civilization constructions, doing arithmetic by counting pebbles.
> Idris 2 is itself a joy to learn, if one wants a smaller, cleaner Haskell without the 47,000 compiler options.
From a conceptual point of view, Idris makes Haskell look similarly crude.
> To this day it has the best implementation of parallelism; one can achieve a 7x speedup on 8 cores by adding a handful of lines to a program.
Eh, you can achieve the same in Rust using rayon, this isn't exclusive to Haskell or FP. Do you have an example in Haskell, where you get an effortless speedup which wouldn't be easy in another language?
So, half a line added to parallelize the mapping of a pure computation, including the use of a sensible threadpool implementation. How impressive you find this depends on how you define "easy in another language". Rayon comes pretty close, but you can encode some properties about your computation in the Haskell type system that Rust does not yet support, and so the safety of some computations can be statically proven in Haskell but not in Rust.
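For anyone reading along without the parent example, the kind of change being described looks roughly like this (computeColour and coordinates are made-up stand-ins, not the original code):

    import Control.Parallel.Strategies (using, parList, rdeepseq)

    computeColour :: (Double, Double) -> Double   -- stand-in for per-pixel work
    computeColour (x, y) = sin x * cos y

    colours :: [Double]
    colours = map computeColour coordinates `using` parList rdeepseq
      where
        -- The sequential version is the same definition without `using ...`.
        coordinates = [(x, y) | x <- [0, 0.1 .. 10], y <- [0, 0.1 .. 10]]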
var colours = coordinates.stream().map(this::computeColour).collect(toList());
var colours = coordinates.stream().parallel().map(this::computeColour).collect(toList());
Java doesn't do as much (well, anything) to help you make sure the implementation of computeColour is suitable for parallelisation, but that comes with the territory. On the other hand, it will automatically split the stream up appropriately for the number of cores you have.
You have just stated that Java isn't as good, but that since it isn't, it shouldn't be judged so harshly, and thus concluded that Java is as good as any (by applying softer requirements just to Java).
I also think that an explicit threadpool isn't the same thing as an implicit one. But that's a very minor issue here, easily solvable by a different API for the parallel method.
Ray-tracing is absolutely trivial to parallelize, because every pixel is computed independently of all others. That would even be trivial to parallelize in C, and that's saying a lot!
I think this reasoning goes the other way (as I understand it): Rust can express thread-safe mutability, since you can only have one mutable reference to any object, so if I can mutate, I know I have the only reference.
Yeah, but Rust isn't exactly a counter example, I think. It's heavily inspired by Haskell, its type system, as well as FP in general. So, it sort of inherits (pun not intended) its parallelization capabilities from that.
What? No. Rust has some influence from Haskell in terms of traits and such, but Rust's parallelization capabilities comes from its tracking of mutability. (No, Haskell did not invent caring about mutability.)
Also, succinct expression of parallel algorithms isn't a particularly unique feature. Like half of the named algorithms in the C++ standard library can be parallelized by adding a single parameter.
That’s because the parallelization in the STL is encapsulated. In Haskell and Rust it’s explicit, it’s not just about adding a parameter to an existing function, it’s a general mechanism.
Heavily? It's actually heavily inspired by ML (SML/OCaml), not Haskell. Most of its functional programming attributes originated in SML, and it has avoided most of the things that make Haskell unique. Really it is just type classes that came from Haskell.
I was around in the very early days of rust, and it was explicitly stated in many places in the docs that the trait system, ADTs, etc. were all inspired by Haskell experience. I don't think we ever mentioned OCaml or SML. The trait system in Rust has nothing whatsoever to do with SML or OCaml and everything to do with Haskell's typeclass system. So between that and ADTs, on what grounds do you think it was "heavily inspired by ML"?
I too was around in the very early days ...at least early enough that there wasn't a rust "team", but rather just Graydon.
And, to wit, the only mentions I read of Haskell as an influence were a) the trait system, and b) to criticize and explicitly refute the Haskell approach to a given problem. There was a reason Graydon wrote the compiler in OCaml and not Haskell. He was very critical of a lot of the ideas that Haskell brought. In particular, he hated how Haskell took mutability from "not the default" to "practically impossible". He hated the effect system and how overly intrusive it was (although he was open to the idea of a fine-grained effect system that was not tied to some all-powerful Monad typeclass). He hated lazy evaluation. He hated the syntax... even the convention toward snake-case vs camel-case is an homage to OCaml. He didn't even care much for typeclasses, preferring the stricter encapsulation of modules, and only brought them in after a lot of contributors pushed for them.
Basically, apart from typeclasses (which weren't really unique to Haskell, as they had been known as interfaces within OOP for a few years), Graydon viewed Haskell quite a bit like an old atheistic refrain about religion: that which is good about it is not unique, and that which is unique about it is not good.
BTW, Rust's ADTs as well as Haskell's ADTs came from ML. They've been around since 1973, 13 years before Miranda, and 17 years before Haskell's first release. You don't get to claim that Rust got its ADTs from Haskell any more than you get to say that Java got its Optionals from Scala.
EDIT: see this comment from Graydon himself, made about a year ago on the rust subreddit:
> That said: back then Rust was much more OCaml-y. We did not start with traits / typeclasses; we started with modules (for a while: first-class, though quite badly broken due to my lack of knowhow). I was and am unapologetically more of an OCaml fan than a Haskell fan. This opinion is not made from a lack of information about either, and I am not especially interested in having a Haskell-vs-OCaml argument here. I don't even think of them as being especially different languages from a family-lineage perspective. But insofar as I think eager is a better default evaluation strategy than lazy, and modules are a better abstraction mechanism than typeclasses, I am more in the OCaml camp. Other folks later in Rust's development argued for (and eventually won) the typeclasses / traits thing, against my earlier preferences.
Thanks a lot for the in-depth explanation! May I pick your brain a bit more on this topic?
1. Do you know what were the main arguments for and against using typeclasses vs modules for abstraction? It's interesting to read that traits wound up in the language against Graydon's preferences.
2. There's another commenter on this thread suggesting that 'Typeclasses are just OOP interfaces' who has gotten a bit downvoted. To me they seem kind of the same thing, yet at the same time there's this feeling that I'm overlooking something and typeclasses are likely 'so much more powerful'. I just can't figure out what that may be. So, are there any major differences between what interfaces are in OOP and what typeclasses/traits are?
The power of Haskell's type classes comes from two things:
* Implicit composition of instances: you can write `show [True, False]`, which will automatically/implicitly compose the `Show` instances of lists and booleans. With modules or interfaces, you'd have to build and use a "BoolListShow" manually.
* Higher-kinded types, to abstract over type constructors. This enables the use of abstractions like monads.
Rust only has the first of these two.
Scala uses OOP interfaces instead, but by augmenting them with both capabilities (implicit composition and higher-kinded types), it achieves the same expressiveness as Haskell.
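To make the two typeclass points concrete, a small Haskell sketch (sequencePair is a made-up name; Show and Monad are the standard Prelude classes):

    -- 1. Implicit composition of instances: the Show instance for [a] is
    --    assembled from the Show instance for a at the call site, so no
    --    hand-written "BoolListShow" is needed.
    ex :: String
    ex = show [True, False]          -- "[True,False]"

    -- 2. Higher-kinded types: m is a type *constructor*, so this works
    --    uniformly for Maybe, [], IO, Either e, ...
    sequencePair :: Monad m => (m a, m b) -> m (a, b)
    sequencePair (ma, mb) = do
      a <- ma
      b <- mb
      return (a, b)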
How could you have not mentioned OCaml? The original Rust compiler was implemented in it. By this, Rust has a very clear (OCa)ML heritage.
It's meaningless to argue whether Rust got ADTs from Haskell or OCaml, because the author (Graydon Hoare) was clearly familiar with both, and both got ADTs from ML, which is much older than either of them.
On the other hand, traits are a different story: those are just type classes with a different name, and that is a Haskell thing.
After reading some of these comments I went around searching for info and found that in the Rust book it does feature SML and OCaml prominently as influences[0].
However, the things mentioned: algebraic data types, type inference, pattern matching could also easily be seen as influences from Haskell or FP languages in general.
At the end of the day though, it's not like this matters. I'm just happy that such flow of ideas happens, irrespective of where exactly each particular concept comes from.
> There's a saying in mathematics that when any field matures it turns into algebra. Life crossed a threshold from chemistry to biology on Earth, and developed exponentially from there. Order in mathematics crosses a similar threshold, as it becomes sufficiently structured to support algebraic reasoning. The subjective experience is like ice melting into a churning liquid, or a land-locked creature learning to fly. Once one has this experience, one cannot imagine thinking any other way.
Hi, this seems wrong to me.
First, as far as I can tell, such a saying does not exist. I've never heard it, despite having a fair amount of experience with mathematics (though not as much as you), and I can't find it through Googling.
Second, and more substantively, it is not the case that mathematical fields inevitably turn into algebra. One only has to look at PDE, probability, and analytic number theory to see this is the case (to give just a few examples). All are highly mature fields and essentially non-algebraic. This is not to say that ideas from algebra are not occasionally useful, just that it should be obvious to anyone who opens an introductory graduate text in PDE that the subject has not "turn[ed] into algebra."
> The killer app for Haskell is parallelism. To this day it has the best implementation of parallelism; one can achieve a 7x speedup on 8 cores by adding a handful of lines to a program. This ease is a consequence of the functional model, and of considerable effort by Haskell's developers.
I don't think this statement is entirely wrong, but I don't think it's entirely right either. There are plenty of languages where one can get parallelism with just a few extra lines of code; Julia comes to mind immediately. Easy parallelism is hardly a Haskell-specific, or even FP-specific, feature. I would of course agree that designing things without mutable state makes writing parallel code easier, but this can be done in many high-level languages these days.
But that is a nitpick. A more substantive objection is that the sole purpose of parallelism is performance, and idiomatic Haskell is generally speaking slower, sometimes a lot slower, than the same algorithm written in C++ or similar languages. (The adjective idiomatic is important here; I'm aware that one can, with enough contortions, write C-like code in Haskell, but that response undercuts the point of highlighting Haskell-specific features like immutability, laziness, etc.) In particular, a few years ago it seemed like the people implementing parallel Haskell features didn't understand or care about fundamental things like cache locality. Maybe this has changed?
There is also an amusing story about the "ease" of writing parallel quicksort in Haskell [1].
> The real point of lazy evaluation is that it best supports this algebraic reasoning, as carbon best supports life.
It's worth noting that Simon Peyton Jones is on record as saying that any "Haskell 2" should not be lazy [2].
Good point... The realization that chaotic systems are ubiquitous, and that many physical phenomena (motion of the planets, how chemical interactions work at the electron level, etc.) can't be modeled analytically (so far as we know), was a big upset to mathematics and science, and still isn't addressed well in school curricula.
To expand on your main point: A loose, but appropriate analogy is that FP appeals to programmers in the way analytic solutions appeal to mathematicians and scientists: Neat, ideal, beautiful... but a poor fit for many practical problems.
> In particular, a few years ago it seemed like the people implementing parallel Haskell features didn't understand or care about fundamental things like cache locality.
That's the main issue which FP-for-parallel-execution proponents don't seem to get: Identifying independent (and therefore parallelisable) computations isn't the hard part, but planning the data layout and partitioning so that all your parallelism isn't eaten up by synchronization and data movement between cores.
That is misleading. Although you're technically correct [1], the computational models on which computational complexity classes are based are so abstract and idealized, that they offer almost no information whether your program could make efficient use of an actual, physical 8-core CPU.
The NC complexity class assumes a PRAM machine, in which all processors can access all locations of a central memory in constant time. That's not how a multi-core CPU works in practice: If your parallel algorithm doesn't make proper use of cache-locality, you've already lost.
> Second, and more substantively, it is not the case that mathematical fields inevitably turn into algebra.
Not to fully justify the OP (though it rings both true and not quite true to me), but I think topology is a good positive example. It's impressive how much of continuity can be captured purely algebraically. For example, topologies are a kind of lattice, which cross both algebra and order theory. And then you get literal algebraic topology, which captures useful and interesting properties of individual topological spaces effectively.
> A more substantive objection is that the sole purpose of parallelism is performance.
I disagree a lot more strongly here, though I will twist your words a little bit. Yes, parallelizing a program is about making it run faster; but parallelism is also a base fact of many problem domains, where you have multiple agents (up to and including humans) collaborating and interacting simultaneously.
Abstracting over the speed-up aspects of parallelism gets you to concurrency -- the independence of knowledge held by distinct agents that is not directly knowable by others -- which is far more fundamental than simply making programs run faster. In my experience, most properties of modular systems can be stated in terms of concurrency. Parallelism is an exploitation of that concurrency structure to schedule things efficiently, but it is by no means the only application.
The fact that "one can achieve a 7x speedup on 8 cores by adding a handful of lines to a program" tells me that Haskell is extremely good at letting you expose the concurrent structures of your problem domain.
> but parallelism is also a base fact of many problem domains, where you have multiple agents (up to and including humans) collaborating and interacting simultaneously.
I don't think parallelism is the word for that. More like concurrency
Parallel computing is closely related to concurrent computing—they are frequently used together, and often conflated, though the two are distinct: it is possible to have parallelism without concurrency (such as bit-level parallelism), and concurrency without parallelism (such as multitasking by time-sharing on a single-core CPU).[5][6] In parallel computing, a computational task is typically broken down into several, often many, very similar sub-tasks that can be processed independently and whose results are combined afterwards, upon completion. In contrast, in concurrent computing, the various processes often do not address related tasks; when they do, as is typical in distributed computing, the separate tasks may have a varied nature and often require some inter-process communication during execution.
Certainly, algebraic topology is a huge win for algebraization!
> Yes, parallelizing a program is about making it run faster; but parallelism is also a base fact of many problem domains, where you have multiple agents (up to and including humans) collaborating and interacting simultaneously.
Maybe I should say what I meant more clearly. The claim "one can achieve a 7x speedup on 8 cores by adding a handful of lines to a program" implicitly has a "and this is good because being fast is good" at the end. Why care about a speedup if you don't care about speed in the first place?
It follows that when evaluating a language's ability to gain speed through parallelization, we should also evaluate how fast that language is in the first place. If you can get a 7x speedup on 8 cores from a few lines of code, but the original algorithm is, say, 10x slower than a similar implementation in C/Rust/Go on one core, then who cares? The ultimate criterion is always absolute speed, not relative improvement.
I agree that being able to deal with concurrency is important, but as the Go people always remind me, concurrency and parallelism are distinct concepts.
Probability has surely morphed into an algebra, and what you study in modern introductory books is the conversion of the algebra into "everyday language" so that students don't get scared. It has very little similarity to what probability looked like when it was being developed.
Also, you seem to be overstating the maturity of PDE (people are pretty much still trying to turn it into algebra, but it may not be possible) and number theory (that's an active research area).
I have never seen that saying either, but it seems quite correct, with the exception of the areas where an algebra is impossible, so the field matures without becoming one.
On what basis do you say that probability has "morphed into an algebra"? Consider all of the popular research topics today: the KPZ equation, Schramm-Loewner evolution, spin glasses, percolation and critical phenomena, rigorous statistical mechanics in general [1], random matrix theory, large deviations, stochastic analysis and (P)SDEs.
Obviously, I'm forgetting some, but none of these are primarily algebraic. Some algebraic methods are used (orthogonal polynomials in random matrix theory, and determinants and a whole host of other things to study integrable models in the KPZ universality class, etc.), but clearly groups/rings/fields are not playing a major role.
For a more systematic approach you could look at recent issues of Annals of Probability, Annals of Applied Probability, and similar journals. There's not going to be a lot of "modern algebra" (of the flavor you see in algebraic geometry) there.
The same comments apply to PDE and analytic number theory. Both are obviously mature fields (worked on for a long time by many people, with a lot of great discoveries), but again algebra does not play a central role in either. In particular I am not aware of any PDE specialists whose research agenda consists of "trying to turn it into algebra."
I know vaguely from a few friends that there is some cross-pollination between algebraic geometry and PDEs, but both fields are so huge that this doesn't characterize either field. I think the gist of it is that moduli spaces of PDE solutions are frequently interesting as varieties.
I strongly identify with this view. Have always loved math. I remember writing a paper in middle school on how all of nature is built on math, which ties closely to the "algebraic sense of wonder" quoted above. My B.S. degree is in mathematics and I loved it.
And yet... I very strongly - no, extremely - dislike functional programming. When building concrete things, I love pragmatism above all else and to me functional is the diametrical opposite of that. A CPU has registers, instructions and addressable memory. Any abstraction that strays too far from that reality is a distraction (and maintenance burden) I don't want to deal with.
The primary difference between computer science and mathematics proper is an overt concern for performance. Mathematical elegance and computational efficiency are not the same thing.
Haskell is a pretty awesome language, but it doesn't make it easy to incrementally optimize programs.
Not all aspects of programming can be described by math.
Mutations and IO are very critical parts of programming and the two areas where FP breaks down. Even the IO Monad leaks the imperative nature of the program over to the programmer.
The only way pure FP can sort of work is if there are heavy frameworks abstracting IO and mutation away from the programmer. If you're not doing IO or mutating something, then your caching framework, database, or Haskell runtime is doing it for you. Additionally, like I said earlier, even if your database is handling mutation for you, you still end up embedding mutation commands into the strings of your pure FP function. Updating a database in Haskell still necessitates the Haskell user placing the mutation command in a SQL string.
My point is that math is not the complete solution to the programming problem. What FP allows the programmer to do is use the framework to segregate combinatorial logic away from mutation and IO. Your combinators will always be more composable and modular, but your IO and mutation functions will be less modular, and they still have to exist.
>Not all aspects of programming can be described by math.
This is completely untrue. You may not be familiar with the math, but that doesn't mean it doesn't exist. If you can model something well enough to understand it, then you can model it mathematically. There really isn't any domain of knowledge that math is unsuitable for, except when you don't know the relevant math.
Technically everything in the universe can be modelled by math. If it isn't modelled yet we can make something up to model it. Math is just axioms and theorems so yeah, you're not wrong.
I'm speaking in less technical terms. For example in general mathematical equations or axioms represent immutable concepts. In programming, variables mutate and change... very different from what math traditionally represents. Haskell is an attempt to segregate the immutability (the math part) away from the less "mathy" part (the mutations/IO).
Maybe math is too broad of a term. I probably meant to say "algebra" can't model all of programming, or whatever more suitable word that may or may not exist.
Mathematicians have no trouble modeling change. There are many ways to do so. Some are algebraic, some are not. There is nothing wrong with modelling mutability using immutable structures: that is how you probably think about history, after all.
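For example, here is change modelled with nothing but immutable values, sketched in Haskell with made-up names: the "mutation" is a fold over a history of events.

    data Event = Deposit Int | Withdraw Int

    apply :: Int -> Event -> Int
    apply balance (Deposit n)  = balance + n
    apply balance (Withdraw n) = balance - n

    -- Every intermediate state is kept; nothing is overwritten.
    history :: [Int]
    history = scanl apply 0 [Deposit 100, Withdraw 30, Deposit 5]
    -- [0,100,70,75]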
Either way, it is unclear what you're actually trying to say. Haskell has methods for modelling change of state through pure objects, but you're talking about that as though it were an inherently flawed or invalid approach, rather than one of many equally valid approaches to modelling state transformations.
>but you're talking about that as though it were an inherently flawed or invalid approach,
This is just your bias. I never said this. I feel some people worship a paradigm so much that they see everything as an attack. FP is great; however, it is not a one-size-fits-all solution. There are limitations. This is literally what I said.
>Mathematicians have no trouble modeling change. There are many ways to do so. Some are algebraic, some are not. There is nothing wrong with modelling mutability using immutable structures: that is how you probably think about history, after all.
You can model change with purity, but the program in the end actually has to carry out the change, not merely model it. The application has to eventually perform real-world actions, and the purity of your program cannot protect you from the potential pitfalls of imperative-style errors/mistakes.
You have a database. The purity of haskell does not remove the necessity of mutating data in that database.
What you can do is segregate dealing with mutation/IO to a framework or external service. This is what haskell does, but you see this is just shifting the problem to somewhere else. Someone somewhere still had to deal with the issue of mutation. Modelling mutation with purity does not eliminate the problem it only moves the problem to another location.
Segregation of mutation/IO into a framework is a good thing. It makes it so that the problem can be solved one time, rather than solved many times. However, the main point of my post is to say that "math" or "algebra" is not a one-size-fits-all solution. You cannot model everything this way, and moving the problem into a framework does not make the problem disappear. Someone still had to use imperative primitives to deal with the issue. Think about the complexity of a SQL database.
You said FP "breaks down" when handling mutability, and you attributed that to some vague sense in which "mathematics" is the cause of it.
I have no bias for FP. I just don't understand what you're getting at.
>the purity of haskell does not remove the necessity of mutating data in that database.
That's great, because the purity of Haskell does not inhibit mutability. It just constrains it to lie within some mutable context.
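Plain mutable references are in the standard library; the only difference is that their use shows up in the type. A minimal example:

    import Data.IORef (newIORef, modifyIORef', readIORef)

    main :: IO ()
    main = do
      counter <- newIORef (0 :: Int)                 -- allocate a mutable cell
      mapM_ (\_ -> modifyIORef' counter (+ 1)) [1 .. 10 :: Int]
      readIORef counter >>= print                    -- prints 10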
>Modelling mutation with purity does not eliminate the problem it only moves the problem to another location.
Location? What is a location? It's like you're saying you can't truly add 3 + 3, because someone still has to add 1s under the hood. It's just a different model of the same problem.
Honestly, it sounds to me like you've never used the language, and your criticisms come off a bit like standing on an aircraft carrier shouting about how iron boats will never float.
>That's great, because the purity of Haskell does not inhibit mutability. It just constrains it to lie within some mutable context.
Haskell does inhibit mutability within your Haskell program. Your Haskell program does not mutate. What it does is perform IO operations, and the mutations happen externally. It can also model mutation without doing actual mutation, but in the end there's no point in a program modelling mutation if the program can't actually do mutation or IO.
>Location? What is a location? It's like you're saying you can't truly add 3 + 3, because someone still has to add 1s under the hood. It's just a different model of the same problem.
Location meaning outside of Haskell, like your database. I'm saying that within Haskell you have a variable:
x = 3
You can never mutate that variable in haskell. However you can mutate the state of the console without ever mutating any state within haskell.
print "hello"
The above triggers no mutation in haskell. A runtime outside of the haskell universe analyzes the IO instructions and mutates the console. What I am saying is that the thing that mutates the console has to do mutation. Whoever wrote that thing HAS to write imperative primitives. They are moving the imperative nature of programming INTO a framework. They are not eliminating the problem.
This is the same thing as a database string. UPDATE. You are moving all the imperative errors that have to deal with threading and mutations to the database. But your haskell sql string is still pure.
Again my argument is just saying that this thing that is doing the UPDATE or mutating the console cannot be built using haskell style code or immutable algebraic concepts. Imperative primitives need to exist and someone needs to use those primitives to do the actual mutations.
The OP is basically saying algebra is the future and it can replace everything. I'm saying it CAN'T.
>Honestly, it sounds to me like you've never used the language, and your criticisms come off a bit like standing on an aircraft carrier shouting about how iron boats will never float.
And honestly you sound like the guy standing on the iron boat. The person I'm shouting at is you, but you're just dismissing me.
>You're just trolling at this point. Please reconsider your confidence in this material, because you are egregiously mistaken.
Why the heck would I write such a long exposé just to troll you and be mistaken at the same time?
>How does this Haskell program write to stdout if it doesn't mutate memory?
Let's not be stupid here. Every program on the face of the earth must mutate memory, because that's how computers work. Assembly instructions mutate things. We're not even talking about that. We're talking about application-level programming, where we only deal with primitives that the application programmer is aware of. I am saying that at the application level, within the category Hask, nothing is mutated.
In your example, tell me what Haskell primitive... what variable or data was mutated within Haskell? That is what I'm referring to.
Try to implement IO or ST in another lang using only purely functional primitives. Use your algebra to make it work. You'll find it's impossible. What this means is that imperative primitives must exist for any programming to work.
>Then it doesn't work. Every Haskell program does nothing, because mutation of program state does not occur.
Obviously I'm operating on a certain layer of abstraction here. In X = 6, X is obviously immutable in haskell. A runtime is obviously executing your haskell program and mutating the console but your haskell code itself is pure. But you know this. It's quite obvious you're the one that's trolling.
>What variable or data was mutated within haskell?
The stdout buffer.
Just because mutation isn't explicit doesn't mean it isn't there. Programming languages are not syntax devoid of meaning: they have semantics. What happens at runtime is part of what a programming language does. (Arguably, that is the most important part of what they do.)
>What this means is that imperative primitives must exist for any programming to work.
That's completely untrue. Imperative languages can be implemented as a subset of functional ones[1] and vice versa. Again, they're just different models. No language can do anything if it isn't implemented in a machine. A machine isn't "imperative"[2], it's a pile of atoms that do what atoms do, without paradigm or instruction. You absolutely could implement a pure functional assembly language. The reason nobody has is that it doesn't matter: any Turing-complete language can be used to implement any other language[3].
Try to implement `volatile` in C without using another language. Does that mean C fails to model real hardware? No, because it has `volatile` to get volatile semantics! Just like Haskell has IO to get I/O side-effects. Or ST to get mutation semantics.
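A small example of the ST point: in-place mutation on the inside, a perfectly pure function on the outside.

    import Control.Monad.ST (runST)
    import Data.STRef (newSTRef, modifySTRef', readSTRef)

    -- Sums a list with a mutable accumulator. runST guarantees the
    -- mutation cannot leak, so sumST is an ordinary pure function.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc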
> Use your algebra to make it work. You'll find it's impossible.
Don't assert it; prove it. Show me one computable function that cannot be computed using boolean algebra.
The stdout buffer is not part of the Haskell language, it is part of the OS. The Haskell runtime reads the Haskell language and accesses the buffer. Neither the runtime nor the buffer is part of the Haskell language, get it? That's why Haskell is called "pure". Category Hask: https://wiki.haskell.org/Hask
>Just because mutation isn't explicit doesn't mean it isn't there.
So? I never said it wasn't there. I'm basically saying that, as far as the programmer is concerned, when operating within the Haskell language no Haskell language primitive is mutating. The stdout buffer is not a Haskell primitive... it is an OS primitive.
>That's completely untrue. Imperative languages can be implemented as a subset of functional ones[1] and vice versa.
This is true theoretically, but physically you can't actually build a functional machine. Lisp isn't actually a functional language, and you'll see from the instruction primitives that the Lisp machine is more or less a Turing machine that mutates memory.
>No language can do anything if it isn't implemented in a machine.
So? Never said this wasn't true.
>A machine isn't "imperative"[2], it's a pile of atoms that do what atoms do, without paradigm or instruction.
The machine you build is limited by what you build it with. You have a limited set of atoms. Therefore you can only build a machine with limited amount of state. In order to use the state efficiently the state must be mutable. Mutable state means imperative instructions. You can imitate functional programming with such a machine and you can sort of solve the memory problem with garbage collection. But with what paradigm do you implement the garbage collector? Imperative primitives.
> The reason nobody has, is because it doesn't matter: any Turing complete language can be used to implement any other language[3].
No, the real reason is also that it's physically impossible. A physical realization of an actual lambda machine cannot be achieved. What they can make are register-based machines that are more efficient at compiling certain functional languages; that's it. All machines we build have some sort of state that changes.
>Don't assert it, Prove it. Show me one computable function that cannot be computed using boolean algebra.
Sure I can prove what I said. But you're changing the problem from IO and ST to a computable function which I assume is algebraic. So of course all of algebra can be used to create all algebraic functions. I'll just prove what I said rather than what you changed it to.
Assuming mutation is an axiomatic operation that cannot be built from immutable operations, you will see that no mutation operation exists in algebra, indicating that mutation cannot ever exist in any theorem of algebra:
You will see that no algebraic operation involving mutation exists in the above document.
>Try to implement `volatile` in C without using another language. Does that mean C fails to model real hardware? No, because it has `volatile` to get volatile semantics! Just like Haskell has IO to get I/O side-effects. Or ST to get mutation semantics.
No but I can implement volatile with imperative primitives from other languages. All I am saying is you cannot implement ST and IO with functional primitives.
Personal attacks are against site rules on HN. You are clearly across the line here. Moderators ban people for repeated violations, so if you want to continue here, you should stop posting abuse.
You can do better, then. Read nendroids latest reply to me, and help them understand that it's the "use of statements" that makes a language imperative, not the "modification of state."
I won't be responding, and they seem to think they're quite the expert in this sort of thing.
"In computer science, imperative programming is a programming paradigm that uses statements that change a program's state."
The above quote is ripped straight out of wikipedia's definition of imperative programming showing that what I said wasn't a misunderstanding but an official definition.
The definition of imperative programming must include mutation otherwise it's isomorphic to functional programming. Because functional programming is simply statements without mutation.
Case in point: this guy turned what was just fact-checking into something personal. See, it's not about being civil. That's just the way people like to think they are. The reality is that most people can't accept being wrong, and they can't accept opinions they disagree with, and the irony is that everyone believes they're above this base behavior.
"In computer science, imperative programming is a programming paradigm that uses statements that change a program's state."
You will see from the quotation above that the very act of changing state is imperative style by definition. The purpose of mutable state is for it to change. So mutable state = imperative instructions.
>You don't have to listen to me, but you should seek out a second opinion from a competent person who can get through to you.
I'll throw that advice back at you. But you don't need to find that person. I'm right here in front of you telling you how it is.
>Everything I said in my last post is basic, well-understood computing knowledge. If you want me to disagree with it, you need to find yourself a competent computer scientist to aid you in framing your ideas in a way that is comprehensible with respect to the subject matter.
Yeah, but you didn't account for the practical parts of computing. The theoretical parts often deal with machines that can't be realized in reality. It's pretty much common sense. How do you represent a function call without mutable state? How can you have a machine do an algebraic operation without mutable state? The very act of holding that information in state requires a state change, meaning that even loading a lambda machine with a program requires an imperative instruction.
We're also in a corner of computer science that isn't formally well defined. A language can be formally defined as pure but there's no formal theory for systems design and how the system overall influences the content of a pure SQL string in Haskell.
I define a Haskell SQL string to have syntactically correct SQL. The external requirements of my database are forcing me to define the string this way; is that a side effect? There are no formal rules in the literature, so it's just raw talking points... you won't be able to find an official source stating who's right or who's wrong.
The interpretation of the IO monad? That's another story.
Once you realize that mutation itself is a side effect, then why not use the single most powerful idiom we have for abstracting over side effects? (Monads with do notation; for blocks in F#.)
For the record, IO is ST with an opaque RealWorld type. And ST is fully pure, with a neat type trick to prevent leaking of ST references.
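Roughly what that looks like, paraphrased from GHC's own definitions (GHC.Types and GHC.ST); the types here are renamed to avoid clashing with the Prelude:

    {-# LANGUAGE MagicHash, UnboxedTuples #-}
    import GHC.Exts (State#, RealWorld)

    -- ST is a state-passing function over an abstract state token s;
    -- IO is the same shape with the token fixed to the opaque RealWorld.
    newtype MyST s a = MyST (State# s -> (# State# s, a #))
    newtype MyIO   a = MyIO (State# RealWorld -> (# State# RealWorld, a #))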
I never said it's not pure. Just like how a SQL string in Haskell is fully pure. Doesn't change the fact that you're using pure primitives to control a process that is fundamentally impure. The concept leaks across the boundary.
It does. You're completely and utterly wrong. You don't understand.
I'll reiterate my example a SQL string is pure. Just like the IO monad is pure. However when you're coding the sql string in your "pure" haskell program you have to account for imperative side effects related to the SQL itself.
sqlString = "UPDATE X SET X.Y=2 WHERE X.Z = 1"
sqlString is technically "pure" but that doesn't mean you can treat the UPDATE command in the string as a pure concept.
It doesn't matter how "pure" your language or sqlString is... the concept of a mutation leaks over into the language and the programmer still has to deal with the concept.
Like your original post said, the interpretation of the monad is different, and the programmer still needs to account for this in how he composes things together. The concept leaks across boundaries.
edit>> this whole karma thing is unfair. Posters can't vote down responses. I simply state my opinion, the person responds and then votes me down because he disagrees. What's the point of even having a discussion?
This is a philosophical distinction, not an objective fact.
I do not consider sqlString to be impure. It’s a perfectly valid string. I consider `executeQuery sqlString :: IO Result` to be an indication of impurity, since I can do `let x = executeQuery sqlString in “bar”` as a valid bit of Haskell but it’s clearly 100% pure.
If you want to think that sqlString is “impure” outside of the context of execution (i.o.w. a Monad...) then sure, that’s valid, but so is my assertion that it is pure since it is referentially transparent. It exists in the void as just another string until the programmer decides to make it into an IO value that’s executed for its impure side effects (the only reason we do anything in computing, right?)
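To spell that out (executeQuery and Result are stand-ins for whatever database library you'd use):

    data Result = Result

    -- Hypothetical signature; the body never runs below.
    executeQuery :: String -> IO Result
    executeQuery _ = error "would talk to the database here"

    sqlString :: String
    sqlString = "UPDATE X SET X.Y=2 WHERE X.Z = 1"

    -- Binding the IO action is referentially transparent: x is just a
    -- value describing a query; nothing is executed, nothing is mutated.
    demo :: String
    demo = let x = executeQuery sqlString in "bar"

    main :: IO ()
    main = putStrLn demo    -- prints "bar"; the database is never touched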
I think you’re getting downvoted (I can’t) because of your first statement.
>I think you’re getting downvoted (I can’t) because of your first statement.
Nobody is reading this stuff anymore. It's just you and me. You have over 800 karma. You CAN downvote and you ARE. There are no theatrics; you're just voting me down. Stop.
>I do not consider sqlString to be impure.
It's not "impure." But it doesn't change the fact the way you you write your SQL has imperative side effects within the database. You can trigger a deadlock in the database from within your pure haskell code if you wanted to.
It's not a philosophical thing. You absolutely have to consider imperative side effects even in your pure program.
That is reality. The philosophical part is whether you can call it "pure" or "impure."
>If you want to think that sqlString is “impure” outside of the context of execution (i.o.w. a Monad...) then sure, that’s valid, but so is my assertion that it is pure since it is referentially transparent. It exists in the void as just another string until the programmer decides to make it into an IO value that’s executed for its impure side effects (the only reason we do anything in computing, right?)
Yeah so? I never said your assertion was wrong. I never said that it was "impure." But I did say that you have to account for side effects in your program. Example:
Is a valid "pure" string, but will trigger a syntax error in your database. You have to account for all of this within your "pure" program. Haskell eliminates side effects in the category Hask but does not actually eliminate the need for YOU to deal with those side effects. This part is an objective fact.
Here's a better way to put it. For this specific example, the impurity of the real world leaks into your pure Haskell program by affecting the contents of the string. The type itself can be separate from the real world, but the contents of the string reflect knowledge and impurity from the real world.
I have over 14,000 karma. I can't downvote (direct) replies to my posts. So as far as HN mechanics go, you are falsely accusing nimish.
Other people - not nimish - are downvoting you. And they're doing so because you're starting to cross the line from "disagreeing" to "aggressive and rude".
Now for what it's worth, I'm kind of on your side of the actual dispute. I just think you're pushing the line in trying to be more, um, "expressive".
No it's not that. Even in posts where I just state logical reasons for disagreement I get voted down in this thread.
People vote down what they disagree with; if they agreed with me, most people's biases would usually find my attitude appropriate.
If I said something like logically I feel a certain race is inferior genetically. People will vote that statement down purely out of disagreement and misinterpret it as an emotionally charged statement and illogical.
It's just a statement with no logic behind it. It's deadpan with nothing. You can't even find erroneous logic in it, because the logic wasn't even spelled out. You can only technically disagree with the statement. But people will subconsciously add all sorts of embellishment.
That's how people work. Maybe Nimish didn't vote me down, but they certainly aren't voting me down because I'm crossing some sort of line. They're voting me down because they disagree. That's the majority of it.
You'll find that, more than anything, the majority of what I write is just deadpan responses, with maybe 5% of the sentences being "expressive." In fact, a great number of things I write that get voted down are just deadpan responses with zero "expressiveness."
It's because people can't tell the difference between someone disagreeing with them and an actual attack. That's human nature. We all think we're above it, but basically none of us are. You'll find that even you are like this.
The reason why I get voted down is because my opinions tend to be different than most people. So people interpret this disagreement as an attack.
I've been here for over 10 years. I've had accounts with lots of karma generated by simply agreeing with everything I read or just commenting with useless side suggestions.
I've experimented a lot. Without changing tone just disagreeing with a popular opinion is all that you need to get voted down. I've even experimented with emotionally charged impolite popular opinions. People will vote you up just because they agree.
If I wanted a shitload of karma, I know how to get it. Removing one or two of the more "expressive" sentences above will, in my experience, not do much in a thread that is very, very biased in favor of FP. FP is great, I prefer it, but most people cannot maintain such neutrality about their favorite paradigm. Or favorite anything for that matter.
Not to mention if you recall a while back someone posted a GPT-3 AI generated story that got voted to the front page. Most commenters didn't realize it was AI generated. A select few were able to figure it out and they posted comments that were vehemently voted down for not being in agreement with the general sentiment. HN is a technical crowd but they are not above common mob behavior.
Scroll down to the picture. The person stated facts but his facts were misinterpreted to be attacks and he was told to be civil by someone who thought of himself as level headed. The reality is that the levelheaded person is as biased as one can get.
You think you're helping me. You think I'm putting up stubborn resistance to your help. Is that the case or am I just simply describing to you some of the insight I've gotten from messing around with HN for over a decade? Did I in fact put up stubborn resistance or am I just conversing with you and stating a disagreement? Hard to say.
Your immediate assumptions are no different than the "levelheaded" dude from earlier. Let's just be clear, I'm fully aware of the "expressive" areas in my posts.
You will also note that many of the posters assumed I'm attacking FP and that I don't have much experience with FP. All wrong assumptions. I have lots of experience and I prefer FP over other paradigms. I am simply stating that I disagree with the fact that algebra is the future of programming and can replace all imperative programming.
You've been here for over 10 years, under various accounts, but you don't know that people can't downvote direct replies? Not quite sure I believe that.