I really don't get it, what does this guy mean by 100% pure functional programming?
Because we all know already that a program needs to print values or "do" something to be useful, so in that way no program is ever 100% pure - is that what he's talking about?
Or is he talking about 100% pure functional languages which allow functional ways of writing IO code, like Haskell?
In Haskell can't you still program nearly 100% imperatively, just by wrapping stuff in the IO monad?
mysquare :: Int -> Int
mysquare x = x * x

-- An imperative-looking wrapper: print a message, then return the pure result.
mysquarer :: Int -> IO Int
mysquarer x = do
  putStrLn "Hey look, it squared"
  return (mysquare x)

main :: IO ()
main = do
  putStrLn "Starting program..."
  x' <- mysquarer 9
  putStrLn (show x')
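For what it's worth, running that prints "Starting program...", then "Hey look, it squared", then "81", top to bottom, in exactly the order an imperative programmer would expect.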
Can't he just write his program like that?
Or is he just arguing that we should write programs in functional languages, but in an imperative style (no point-free code, and simple use of IO)?
What he means is that purity is more of a curse than a blessing. That the pure approach is not a good way to think about the problem. That you should use purity as a tool, but not dogma.
What I'm arguing is that you can still think and write with an imperative style using the IO monad if you want, so I'm not sure what the issue is.
Edit:
As an aside, I already know that "Monad" doesn't mean impure (having implemented an IO monad in my spare time to find out) and that was implied when I said "100% pure functional [...] like Haskell".
They are implemented purely; nonetheless, they do allow side-effecting (impure) programming.
His issue is precisely with things like the IO monad. At least that's how I parse this sentence:
"what's often a tremendous puzzle in Erlang (or Haskell) turns into straightforward code in Python or Perl or even C." (http://prog21.dadgum.com/54.html)
In my limited experience, the type-checking constraints in Haskell make programs using monads unnecessarily hard to change. I just assumed I didn't know what I was doing; it's interesting that James Hague, with a lot more knowledge and experience, is agreeing.
Please correct me if I'm wrong, but my understanding is that most Haskell code doesn't generate values, but "thunks" that evaluate to values if needed.
When you write
x = 4 + 5
you aren't setting x to 9, but creating a thunk that evaluates 4 + 5 at runtime. An Integer is a thunk that returns an integer and has no other effects. An IO Integer is one that does some I/O before returning that integer.
As I understand it, the only function that has the power to "do" anything (computation or IO) under normal circumstances is main, which always has type IO ().
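You can actually watch this happen in GHCi with the :sprint command (a quick sketch; the exact output can vary a little between GHC versions):

  ghci> let x = 4 + 5 :: Int
  ghci> :sprint x
  x = _        -- still an unevaluated thunk
  ghci> x
  9
  ghci> :sprint x
  x = 9        -- printing it forced the thunk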
This is true, but not really relevant to the original article or the comment you are replying to.
When you sequence computations with >>=, like the grandparent does, you generally evaluate the left side of the operation before running the computation. That is the point of monads: sequencing computations and controlling the order of evaluation. Since the rhs depends on the lhs, the sequence is "evaluate lhs completely", "evaluate rhs completely", and so on.
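To make that concrete, here is roughly what the do-block from the earlier comment desugars to (a sketch, not the exact code GHC produces):

  -- main rewritten with explicit (>>=) instead of do-notation
  main :: IO ()
  main =
    putStrLn "Starting program..." >>= \_  ->
    mysquarer 9                    >>= \x' ->
    putStrLn (show x')

Each lambda can't run until the action to its left has produced a value, which is exactly what pins down the evaluation order.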
I think you are correct that the language does think of all expressions in terms of thunks, but I'm guessing `4 + 5` won't actually be run at run-time!
Since it's pure, a decent compiler should be able to simply replace any `x` with the number 9 - correct me if I'm wrong :)
Any primitive (C code), however, won't run until run-time, which is probably the pressing reason putStr doesn't run until run-time (and that's a good thing :)
I feel like I'm missing the punchline here somewhere. Functional programming "doesn't work" at 98% but if you turn some imaginary slider to 85% then everything is awesome? I honestly don't get this.
Can anyone shed some light on the WTF-ness of this article? (Other than the author disliking purely functional development and "98% functional" development, whatever that means).
What's so hard to understand? I think he means that if you use functional programming as a religion and try to shoehorn everything into the functional paradigm (I hate that word), then you are making life hard on yourself.
But by using the right mix of functional and imperative you get the best of both worlds.
I don't necessarily agree with that but I have no problem understanding it.
The more I learn about 'functional' programming, the more I'm beginning to see that it is not so much a simple technique as a wholly different approach to solving problems, which often leads to tremendous insights that in turn lead to huge optimization possibilities.
Witness the 'hashlife' link that I posted last week; I never thought such an optimization was even possible.
The author explained in his first post that purely functional development is hard to do. I don't understand the point of his follow-up; all he says is that there's some magical barrier at which functional programming is suddenly okay, and that magical barrier is somewhere between 98% and 85% on the mystical "functional programming is hard, let's go shopping" scale.
I don't know why you're taking his post so literally.
Forget about the 98 and 85 numbers.
The point of the follow-up was to merely say that functional programming works as long as you don't try to rigorously and dogmatically apply it to everything. There's a sweet spot where most of your code is functional but you also allow certain inconsistencies in, in order to make the whole system palatable.
This was actually what he was trying to say in his first post, but he didn't really get it across clearly, hence, the follow-up.
The main problem I have is that pure is a binary condition. Functions are pure or are not. There's no '85% pure.'
But that's really a wording issue, and not one with his argument. He's just saying that being purely functional is not worthwhile, 'just functional' is good enough.
It's true. But he's talking about the structure of programming languages, not functions. Like I said, it's a small wording issue, not a problem with his argument.
There is nothing to see here even when the light is on. The author makes a single example of something he thinks can only be solved impurely. Of course, it is trivial to solve purely. He just doesn't know how, and therefore functional programming sucks. If you only use the parts he knows about (the "85%", presumably), then everything is fine.
Whenever I hear "functional programming sucks", it's from someone who doesn't know what functional programming is or how to do it. Whenever I hear "object oriented programming sucks", it's from someone who doesn't know what object oriented programming is or how to do it. Do you detect a pattern?
Bait I can't resist! I know what object-oriented programming is and how to do it, and I say it sucks. "Sucks" isn't the word I'd have used, but it'll do.
Of course you're free to disbelieve that I know how to do it. (I did program professionally in that style for years and did a lot of consulting/mentoring to teams in OO practices if that helps, not that it should.)
The reactions to Hague's previous article reminded me of allergic responses - that is, in a programming community that has become utterly soaked with "more FP = better" commentary, any statement to the contrary must be shrugged off as irrelevant or incorrect. And so we see people falling over themselves to say that something must be wrong with Hague or his arguments.
He was clear enough in the first article that he was advocating compromise over purity, specifically addressing pure FP's faults as a tool for software development. The real problem is not with him; it's that FP has hit the point where the hype has induced people to start bellowing various forms of the "USE FP EVERYWHERE, FP > IMPERATIVE" meme from the rooftops. See the memes around TDD, design patterns, etc.
In conclusion, this is a prime indicator that the "interesting new stuff" in programming practice has gone elsewhere. Functional style is quickly entering the mainstream, it has its zealots, and it's well on its way to becoming just another tool. So the question now is: what's next?
The barbarians are at the gates. Hordes of Java programmers are being exposed to generics and delegates; hundreds of packages have been uploaded to Hackage; the Haskell IRC channel has nearly hit 500 users; and it's only a matter of time before Microsoft seals that multi-billion dollar bid for Hayoo.

The time has come to retreat and climb higher into our ivory tower: we need to design a language that is so devious, so confusing, and so bizarre, it will take donkey's years for mainstream languages to catch up. Agda, Coq, and Epigram are some approximation of what functional programming might become, but why stop there? I want strict data, lazy codata, quotient types, and a wackier underlying type theory.
I don't claim to be very experienced in functional programming, but doesn't his whole argument simplify to "State is hard in a pure functional setting" ?
I was under the impression that this was a simple and fairly well understood problem. Either use a functional abstraction for state (a reduce/fold that threads state, or perhaps a state monad if the state needs to be carried through many kinds of operations), or manage it in an impure way, perhaps using STM with a global state.
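To illustrate the pure options, here is a sketch in Haskell with made-up names (the State version assumes the mtl package):

  import Data.List (foldl')
  import Control.Monad (forM_)
  import Control.Monad.State (State, evalState, get, put)

  -- Explicit state threading: an accumulator carried through a fold.
  runningMax :: [Int] -> Int
  runningMax = foldl' max minBound

  -- The same accumulator, carried implicitly by the State monad.
  runningMaxS :: [Int] -> Int
  runningMaxS xs = evalState (go xs) minBound
    where
      go :: [Int] -> State Int Int
      go ys = do
        forM_ ys $ \y -> do
          acc <- get
          put (max acc y)
        get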
I don't see how this is any knock against the functional programming paradigm. I always understood the rule of thumb to be "use purity as much as possible, and be impure only when it's explicitly needed." Modifying a global state seems to be a pretty obvious case for using an impure function.
>I don't see how this is any knock against the functional programming paradigm. I always understood the rule of thumb to be "use purity as much as possible, and be impure only when it's explicitly needed." Modifying a global state seems to be a pretty obvious case for using an impure function.
This is pretty much exactly what he's saying.
His response was targeted at people who think functional programs should wholly avoid state and side-effects. (I don't actually know of too many people who believe this, but apparently he does)
I have experience with Lisp, Java, Scheme and Clojure. I am surprised at how vehemently people are pushing back on James (the author). I tried coding in Clojure and found myself working very hard at things that would have been straightforward in Lisp. Could it have been done in Clojure? The answer is "yes!". But I didn't want to get sidetracked thinking about how to make the programming language work for me rather than actually work on the task.
Thing is, Clojure is not at all a 'pure' functional programming language; it has all kinds of facilities to mutate state (and very well-thought-out ones, in my opinion). If you had to categorize it, it would be more like the 80% functional language James is talking about.
If you are struggling in Clojure (I know this is a common argument, but I think it's true), it's probably because you haven't coded enough in it for this style of programming to become natural.
Moreover, and on a side note, I'd very much like an example of something that was very hard in Clojure and straightforward in Lisp, whatever Lisp that is; I assume CL. It would be interesting to know what kind of code seems hard to produce at first in Clojure.
Agreeing with @pwnstigator, I think the important thing is to separate the pure functional code from the impure code with side effects. And be very explicit that this code is impure and does change state, so when you use it you must take this into account. If this is done properly, you generally find that the impure code need only be a small portion of the whole program.
Maybe put another way: some algorithms are more simply represented using an imperative approach than a functional one, and sometimes the benefits of easily understandable code outweigh the benefits of having no side effects.
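A tiny Haskell sketch of that separation (the file name and function names are hypothetical):

  -- The pure core: all of the logic, no side effects, trivially testable.
  summarize :: [Int] -> String
  summarize xs = "count=" ++ show (length xs) ++ ", total=" ++ show (sum xs)

  -- The impure shell: a thin layer that only does I/O and delegates to the core.
  main :: IO ()
  main = do
    contents <- readFile "numbers.txt"  -- hypothetical input file
    let nums = map read (lines contents) :: [Int]
    putStrLn (summarize nums)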
(The answer is yes, because it often follows expectations. This is why dynamically-typed languages work. Although there is no explicit structure, you can generally see what type the author wants something to be by understanding what the code is trying to do. With imperative programming, the same is true. The "+" function probably doesn't depend on your fooBarBaz global variable, so you can pretend it doesn't. But there is no guarantee -- and programming based on guarantees is safer than programming based on hoping the author is doing what you think he's doing.)
Functions should have side effects if and only if that's what the function is designed to do: e.g. perform IO, write to disk, et cetera. Then "side effect" is a misnomer, because the alteration to state is intended.
The hatred for "side effects" comes largely from a prehistoric tendency of programmers to write highly optimized in-place operations that destroyed the original data. An example would be a matrix multiplication that destroys one of the original matrices, or Common Lisp's NCONC (a faster APPEND that destroys some of the lists passed as arguments).
The general guiding principle of good code is that visible state changes occur only when requested. (For optimization, private state changes can be used, such as caching/memoization, but these are behind a layer of abstraction and don't violate the referential transparency of the API.)
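Memoization in Haskell is a nice illustration of that: the runtime quietly overwrites thunks in a shared data structure, but callers only ever see a pure function. The standard lazy-fibs trick, for example:

  -- The cache is a lazily built list; its cells start as thunks that the
  -- runtime replaces with computed values: private, invisible state changes.
  fibs :: [Integer]
  fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

  -- From the outside, fib is referentially transparent.
  fib :: Int -> Integer
  fib n = fibs !! n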
>Functions should have side effects if and only if that's what the function is designed to do: e.g. perform IO, write to disk, et cetera. Then "side effect" is a misnomer, because the alteration to state is intended.
In medicine, they have a saying, "There are no side effects, only effects." I first heard that from a psychiatrist, discussing how he initially selects antidepressants for his patients based on side effects, such as weight loss or gain. Viagra was originally a heart medication.
What the discussion really comes down to is, what is the intent of the programmer? If the reason for a function call is a return value, but there's also a state change, then a problem will almost always arise because you want one but not the other. That's why change of state and global variables are usually maligned, because it couples disparate behavior, sometimes in unintentional or obfuscated fashion.
Eiffel is apparently designed such that anything with a return value causes no change of state, and vice versa. It's an interesting idea because it means that a change of state, like you said, is never a side effect, but always the intention of the caller.
Like pretty much almost everything, anywhere, ever, it's probably better as a general programming practice rather than a language-enforced restriction. And that's basically what the author was saying about purity--it's better as a guideline than a commandment.
I generally agree with the principle of command-query separation. One point where I would differ is that, when the command's effects aren't entirely known to the caller, it's often useful to return this information.
For example, a function that creates a user account with a unique numerical user ID, that isn't known until the account is created, can return the user's ID, e.g.
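Something like this hypothetical sketch (createUser, the IORef counter, and all the names are mine, not from the parent comment):

  import Data.IORef (IORef, newIORef, atomicModifyIORef')

  type UserId = Int

  -- A command whose full effect (the freshly allocated ID) can't be known
  -- by the caller in advance, so the command returns it.
  createUser :: IORef UserId -> String -> IO UserId
  createUser nextId name = do
    uid <- atomicModifyIORef' nextId (\n -> (n + 1, n))
    putStrLn ("created user " ++ name ++ " with id " ++ show uid)
    return uid

  main :: IO ()
  main = do
    counter <- newIORef 1
    uid <- createUser counter "alice"
    print uid  -- prints 1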