Exactly. The book can't trademark a name in general use, but it's still crappy hygiene to reuse the exact name. This has to be taken as deliberately drafting in the wake of the book, which I cannot respect. Nor can the authors feign ignorance. They decided to do this.
Wow, if that matures, i.e. gets some real-world adoption by serious companies, it would be the best thing around.
Much like using Rust and Haskell for the web, I'd much prefer deferring as much as possible to something like Vue and the server APIs (no server-side rendering for that yet, so in real life I mostly just use nuxt.js for toy projects). Something Servant (Haskell) does a clean job of. But you can never completely do away with static heavy lifting.
This one takes a more modernized Rails or Elixir's Phoenix approach which is really interesting and looks highly usable.
Community size and quality are everything for real-life software (something I remember Clojure having early on and Rust now has in spades: best-of-breed libraries to choose from), so I hope this gets enough adoption to see where it goes. I'll give it a shot and toy around with it when I get some time.
> Make sure you have good internet and can wait up to 30 minutes to complete the download [of version 16.10.2020]. We highly suggest to make sure your coffee machine is working before starting the update.
If you have a good internet connection it usually downloads a lot faster. We switched the nixpkgs version in that release, so a lot of tooling had to be redownloaded. E.g. in our getting-started video we install it on a fresh, empty MacBook: https://www.youtube.com/watch?v=PLl9Sjq6Nzc&feature=youtu.be The full download there takes around 9 minutes on a slow home-office internet connection (it's sped up in the video, but you can see the real-world time at the top right).
> Additionally Nix is very hard to install. This is a stumbling block for a lot of people. Especially when using certain macOS systems you cannot even install nix at all. Luckily most of the nix problems, like the macOS issues will be solved with time.
It's a complete end-to-end solution with an opinionated approach and unstructured documentation.
That is, the learning curve is very steep.
That said, for new projects I'm doing packaging and containerization in nix, because I don't have the time or mental energy to deal with half-assed solutions anymore.
I looked at nix a while ago and concluded the versioning/checksumming is muddled: a given checksum should represent kind of the same thing, but can mean really anything.
There was even a post on how to have a stable version for a tracking branch.
Now, I wouldn't mind if it were explained exactly what guarantees the hashing provides, but there seemed to be a pervasive misconception about what it provides.
+1 for Elm to learn Haskell. It is indeed way simpler, and has very friendly error messages. It lets you concentrate on grokking the basics: no side effects, pattern matching, recursion, currying.
After some time you start wondering why you have to type `List.map` and `Maybe.map` and `Set.map` when they all do the same thing. Then you'll be ready for some Haskell :)
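To make that concrete, here is a minimal sketch of how Haskell unifies those maps: the Functor type class abstracts over the container, so the single function `fmap` replaces `List.map`, `Maybe.map`, and friends (the names `bumpAll`, `demoList`, and `demoMaybe` are just illustrative):

```haskell
-- In Elm, every container ships its own map:
--   List.map  : (a -> b) -> List a  -> List b
--   Maybe.map : (a -> b) -> Maybe a -> Maybe b
-- In Haskell, the Functor type class abstracts over the container,
-- so one function, fmap, works for all of them.
bumpAll :: Functor f => f Int -> f Int
bumpAll = fmap (+ 1)

demoList :: [Int]
demoList = bumpAll [1, 2, 3]    -- [2,3,4]

demoMaybe :: Maybe Int
demoMaybe = bumpAll (Just 41)   -- Just 42
```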
Then PureScript also deserves a mention. It has several frameworks (some akin to the one built into Elm). If you want to do this in Haskell (which is a lot harder) you may want to look at Miso.
I've not used Elm since early 2014 and hear it's changed a bit since then.
Elm was a great Trojan horse. Its limited feature set made it quick to learn and get familiar with the syntax.
When sufficiently proficient in Elm, you then find yourself missing some of Haskell's features, i.e. higher-kinded types. At that stage PureScript / Haskell is a natural transition, as they feel familiar but offer more functionality.
Thanks for the suggestion. I've seen people suggest Elm on reddit, but in terms of learning materials there isn't much compared to Haskell. Still, I think that shouldn't be much of a problem.
I know that Elm works great for front end but how is it for desktop and backend stuff?
> I know that Elm works great for front end but how is it for desktop and backend stuff?
It isn't. It's explicitly designed as a purely web front-end language, that's its scope.
Of course, many Elm users like to run another ML-family language in the backend as well. Haskell's Servant backend library, the one used in this article, has a project to automatically generate Elm client code [1]. Elixir is popular as well.
But in general, I'd say that any backend framework that can provide an OpenAPI spec will work well, as you'll be able to auto-generate most if not all of the Elm client code (which is normally dull work and one of the main complaints about Elm).
Learning Haskell sideways is a good approach. There are so many use cases in JS as well, and TypeScript has made it all the better (though not nearly as restrictive).
Recursion is equivalent to iteration. You can solve any problem you like with it.
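As a sketch of that equivalence (the names here are illustrative): any loop can be rewritten as tail recursion by threading the loop variables through as arguments.

```haskell
-- Imperative version:  total = 0; for x in xs: total += x
-- Tail-recursive version: the loop state becomes a parameter.
sumIter :: [Int] -> Int
sumIter = go 0
  where
    go total []       = total              -- loop exit: return the state
    go total (x : xs) = go (total + x) xs  -- loop body: update state, continue
```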
The problem with those tutorials is that they’re trying to do two things at once: 1) teach you Haskell and 2) advocate for Haskell.
I’m currently working as a full time assistant for one of the first year computer science courses at my university. This course is taught in a teaching dialect of Racket which does not allow mutation or side effects of any kind.
That doesn’t stop the profs from creating assignments with real world applicability such as graffiti stroke recognizers and decision tree learning algorithms. The key is that they’re not trying to make Racket look good, so the patterns of recursion can get pretty ugly at times.
I found something like splitting a string into words not very intuitive...
Also the classical (recursive) fibonacci function everyone shows has accidental complexity of O(2^n). If you want a sane version it looks definitely more complicated than the iterative version.
I never saw a recursive version of Bresenham's line plotting algorithm, so this might be a good candidate as well ;)
> I found something like splitting a string into words not very intuitive...
Just use a couple of accumulators (one for the current word, one for the list of completed words).
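A sketch of that two-accumulator approach (`splitWords`, `go`, and `emit` are hypothetical names, and this favors clarity over efficiency):

```haskell
-- `cur` accumulates the current word, `ws` the completed words.
splitWords :: String -> [String]
splitWords = go "" []
  where
    go cur ws []         = reverse (emit cur ws)   -- input done: flush and fix order
    go cur ws (' ' : cs) = go "" (emit cur ws) cs  -- space: close the current word
    go cur ws (c : cs)   = go (cur ++ [c]) ws cs   -- extend the current word
    emit "" ws  = ws        -- don't emit empty words (repeated spaces)
    emit cur ws = cur : ws
```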
> Also the classical (recursive) fibonacci function everyone shows has accidental complexity of O(2^n).
That's moving the goalposts. If you want a really fast fibonacci function then the classical iterative algorithm [which is O(n)] is not what you want either. Better to use matrix exponentiation which you can implement nicely with recursion using a divide-and-conquer approach similar to mergesort, giving you fibonacci with time O(log(n)).
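A sketch of that divide-and-conquer version, assuming the standard identity [[1,1],[1,0]]^n = [[F(n+1),F(n)],[F(n),F(n-1)]] (the names are illustrative):

```haskell
-- 2x2 matrices [[a,b],[c,d]] represented as 4-tuples.
type M = (Integer, Integer, Integer, Integer)

mul :: M -> M -> M
mul (a, b, c, d) (e, f, g, h) =
  (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

-- Exponentiation by repeated squaring: the divide-and-conquer step,
-- structurally similar to mergesort's split, giving O(log n) multiplies.
pow :: M -> Integer -> M
pow _ 0 = (1, 0, 0, 1)  -- identity matrix
pow m n
  | even n    = let h = pow m (n `div` 2) in mul h h
  | otherwise = mul m (pow m (n - 1))

fib :: Integer -> Integer
fib n = let (_, f, _, _) = pow (1, 1, 1, 0) n in f
```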
> I never saw a recursive version of bresenham's line plotting algorithm so this might be a good candidate as well ;)
There's nothing special about Bresenham's line algorithm that would make it particularly challenging to implement with recursion.
> splitting a string into words not very intuitive
It seems okay?
    split :: String -> [String]
    split s = w s "" where
      w [] ""       = []                   -- end of input, nothing pending: done
      w [] a        = [reverse a]          -- end of input: emit the last word
      w (' ':ss) "" = w ss ""              -- skip repeated spaces
      w (' ':ss) a  = reverse a : w ss ""  -- space ends a word: emit it
      w (c:ss) a    = w ss (c:a)           -- otherwise extend the current word
If you're done then you're done (and dump the accumulator), if it's a space get rid of it (and dump the accumulator), otherwise add it to the accumulator. That the accumulator is stored backward is a bit weird, but that seems more a consequence of linked lists than of recursion per se.
Yeah it's ok ;) Going back to my original comment, this code is exactly what I meant by "patterns to deal with other problems". You clearly used a pattern (accumulator) with which you are familiar.
Do you agree that most imperative programmers would not come up with this solution on their own, if you didn't teach them the "accumulator pattern"?
For me personally a tutorial with a lot of code like that would be interesting.
Showing me another version of a recursive descent parser doesn't teach me anything, because I know how to write it (with recursion!!!) in an imperative language...
> Do you agree that most imperative programmers would not come up with this solution on their own, if you didn't teach them the "accumulator pattern"?
Background: I’m currently working as a full time assistant for the first year computer science course at my university. We have around 1800 people enrolled in the class including over 1000 first year CS students. Many of our first years were exposed to imperative programming in high school, either at home or with a high school CS class.
The course we teach is based on the book HTDP and uses a purely functional teaching dialect of Racket. Many of our students struggle to figure out how to use the different recursion patterns such as accumulative recursion and mutual recursion.
I’m not the GP but I will answer your question anyway: yes, I agree that most imperative programmers would not come up with that solution. I have witnessed it myself first hand.
Where I disagree is with any implication or suggestion that they should, given no prior functional programming experience. Accumulative recursion is a standard pattern in functional programming but if you’re an imperative programmer you may not have ever needed to learn it. Therefore, I think it would be unreasonable to expect you to come up with it on the fly.
> You clearly used a pattern (accumulator) with wich you are familiar.
No, I didn't, and please refrain from imputing thought processes without evidence. I didn't even think the word "accumulator" until I got to the last paragraph and needed a term to describe the second argument to w.
> Do you agree that most imperative programmers would not come up with this solution on their own
If we assume most imperative programmers are idiots as a special case of most people are idiots, sure. I'm not sure that's a good assumption or particularly relevant, though.
> Showing me another version of a recursive descent parser doesn't teach me anything
Ah, I think I misunderstood your point, then:
> > Which problems are hard to solve with recursion instead of iteration?
> I found something like splitting a string into words not very intuitive...
I assumed you meant that "splitting a string into words" was an example of a problem that was hard to solve with recursion (which I found rather puzzling, because as my example shows, it really isn't), rather than that it wasn't a good/intuitive example to illustrate recursion in the general case (which makes quite a bit more sense, and which I'm afraid I don't have an immediate solution to, sorry).
> There were at least two other replies talking about accumulators. I call that a pattern, like it or not.
It's true that you use this way of programming so often in functional languages that it earned itself a name, so that it's easier to talk about it. However calling it a 'pattern' feels weird (to me), even though it technically is one.
The equivalent of accumulators in imperative languages would be the 'pattern' of having some variables outside of the loop and mutating them from within the loop. Maybe you can appreciate that it feels weird calling this a 'pattern' — and now you can better understand how I feel about accumulators being called a pattern.
Skipping back to your original question: I write Haskell for a living and teach two high-school classes in another functional language, and the only two 'patterns' regarding recursion I can think of are accumulators and mutual recursion. So, you know almost everything there is to know already! :-) [0]
Let me add that you use recursion very rarely in day-to-day programming; mostly you try to spare yourself writing the recursion explicitly and you instead use map, filter and foldr/foldl (sometimes called reduce) to do the recursion for you. Especially the folds are super-powerful (AFAIK you can write any recursive function using folds, should you wish to do so), and often under-appreciated.
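To illustrate (`myMap`/`myFilter` are just illustrative names): both map and filter fall out of foldr directly.

```haskell
-- foldr f z replaces each (:) in the list with f and the final []
-- with z, so the explicit recursion disappears into the fold.
myMap :: (a -> b) -> [a] -> [b]
myMap f = foldr (\x acc -> f x : acc) []

myFilter :: (a -> Bool) -> [a] -> [a]
myFilter p = foldr (\x acc -> if p x then x : acc else acc) []
```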
> Haskell is a genius language but I don't like the community
IMHO the community around Haskell is (in general) great; the people are always eager to help. I'm a self-taught Haskeller and I couldn't have done it without the community. Come join us at r/haskell or the IRC and see for yourself :-)
[0]: Why do I say this? I remember my old days when I was learning Swift and encountered one new design pattern every day. Factory, Facade, ... I decided to get 'em all (why reinvent the wheel?), I was always anxious that I was coding something that could be better served by a ready-made design pattern. So I just wanted to let you know it's nothing like this with recursion, and that you can save yourself the anxiety (if you are anything like me).
Hey, thanks for the nice reply. I know you can't generalize like that. There are probably a lot of friendly Haskellers around ;)
I was just angry when I wrote my last comment, because I had to read this:
>> as a special case of most people are idiots
which is so low in multiple ways. It's also just wrong, because by definition most people have average intelligence. Even if you think this is true, a smart person would not use it as an argument^^
It's on my todo list to dive a bit into Haskell when I find time.
> It's true that you use this way of programming so often in functional languages that it earned itself a name, so that it's easier to talk about it. However calling it a 'pattern' feels weird (to me), even though it technically is one.
> The equivalent of accumulators in imperative languages would be the 'pattern' of having some variables outside of the loop and mutating them from within the loop. Maybe you can appreciate that it feels weird calling this a 'pattern' — and now you can better understand how I feel about accumulators being called a pattern.
That's a fair point, actually. I read "patterns" as "design patterns" a la the book of same name, based on the implied^Wlater stated title "accumulator pattern", but I agree it does fit a weaker notion of "pattern" such as "less-than-maximally-entropic (aka compressible in the information-theoretic sense) feature of code".
> While I don't mind the toy problems, all those toy problems have something in common...
> They can be easily solved with recursion.
As a sibling mentions, recursion is pretty much required in pure functional programming. There is no notion of time or sequencing: everything we write is a definition. If we don't call another function, then our program will finish; if we don't use recursion (either directly, indirectly via functions like `map`, or via a chain of mutually-recursive definitions) then our program's behaviour will be fixed by the structure of our code (e.g. the number of calls we write). That's fine for simple calculations, but most useful programs depend on the structure of their data (e.g. processing lines in a file, etc.); that requires recursion, since we can't hard-code the right number of calls ahead of time.
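A tiny sketch of that point (`countLines` is an illustrative name): with recursion, the number of calls tracks the data, not the source code.

```haskell
-- One recursive call per line of input: the call count is determined
-- by the data's structure, which no fixed chain of calls can match.
countLines :: String -> Int
countLines = go . lines
  where
    go []       = 0
    go (_ : ls) = 1 + go ls
```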
General recursion is equivalent to goto. It is nicer when recursion can be wrapped up in a higher level combinator (map, fold, etc.) or other even higher level abstraction. That's one thing the array languages (APL, J, etc.) got right.
Yes, but goto requires a whole bunch of machinery in order to make any sense. It (a) needs a notion of time/sequencing, (b) needs a notion of statement and (c) needs a notion of labelling.
(a) and (b) are often taken as given in imperative programming languages (e.g. C, assembly, etc.), but Haskell has none of them. It's hard to ascribe any meaning to "goto" in Haskell; other than some embedded language like ST or something.
> It is nicer when recursion can be wrapped up in a higher level combinator (map, fold, etc.) or other even higher level abstraction.
I agree, but I was counting those as (indirect) recursion for the purposes of explaining the parent's observation.
In my brief experience of going through the Haskell examples, I am awestruck by the power that Haskell enables, yet scared by the huge type-level complexity underlying the libraries that enable it.
Just to query a database, the example used half a dozen language extensions.
I could probably learn the base Haskell language, but it seems that's not enough for real-world usage.
Now these language extensions of course reduce boilerplate and enable a nicer syntax, but they dramatically increase the cost of using Haskell in production.
My understanding of the ecosystem is that there is a divide between people who want to stick to Simple Haskell concepts and others who want to advance the state of the art with every library.
I worked with Haskell for a while but never had it fully 'click'. That being said, I think that once someone is fully comfortable with algebraic data types, category theory, and functional programming, most of these extensions and fancy constructs become pretty shallow and easy to understand.
(Would appreciate if any bonafide haskellers care to comment)
Cat theory isn't needed at all. It just gives you another smell test for recognizing that you maybe designed an API nicely. (Aka: oh cool, my API code has some nice mathematical/logical property, maybe it's not just mud on the wall.)
Agree. You don’t need to know category theory to use Haskell or the libraries.
Knowing category theory, or just being aware of it, gives you a deeper understanding of the way things work.
I don't know category theory but am aware of it. The way I look at it: in C-style languages you have design patterns. These are loosely defined patterns with no formal definitions. That makes it hard to reason about or test things.
In category theory you have mathematical laws. Laws are far stronger than loosely defined design patterns, so using these laws it's far easier to reason about code.
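For instance, a Functor instance must satisfy two equational laws, and because they're equations you can actually check them on sample data (`lawIdentity`/`lawComposition` are illustrative names):

```haskell
-- Functor laws:
--   fmap id      == id
--   fmap (f . g) == fmap f . fmap g
lawIdentity :: Bool
lawIdentity = fmap id xs == xs
  where xs = [1, 2, 3] :: [Int]

lawComposition :: Bool
lawComposition = fmap (f . g) xs == (fmap f . fmap g) xs
  where
    xs = [1, 2, 3] :: [Int]
    f  = (+ 1)
    g  = (* 2)
```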
Also piping in to say you don't need to understand category theory to successfully write production Haskell. It's interesting to learn and does help solidify understanding of some concepts, but by no means required.
Highly agree that most common extensions are pretty shallow and easy to understand. The tricky ones are usually enabled because some library told you to, or you've genuinely leveled up and recognize where to use them yourself.
I'm not convinced that arguing that "think[ing] day-to-day tasks like running a web app are difficult or impossible in Haskell! ... isn't true!" is well-served by a series that introduces (in the first post!) Persist and Template Haskell and then has
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE QuasiQuotes #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE RecordWildCards #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE OverloadedStrings #-}
    PTH.share [PTH.mkPersist PTH.sqlSettings, PTH.mkMigrate "migrateAll"] [PTH.persistLowerCase|
      User sql=users
        name Text
        email Text
        age Int
        occupation Text
        UniqueEmail email
        deriving Show Read
    |]
as literally the third block of code in the series. That's more of a proof of the opposite, don't you think?
Python web app tutorials routinely introduce MongoDB/pymongo, Flask, and @-annotations for routing, and follow that up with 2-5 `from __future__ import` statements. These are pretty much exactly analogous to the steps in the current series. The only difference is that the Haskell standard changes less frequently, so more "from __future__" imports (LANGUAGE pragmas) are required. (Note that the latest Haskell standard is from 2010, while the latest Python is from 31 days ago; see how many future imports you need to get a modern Python web app to work with a ten-year-old Python release.)
This is not evidence that day-to-day tasks like running a web app are difficult or impossible in Python.
Sure. But you're begging the question in assuming those are somehow evidence that Python is easy. I disagree that they would be, especially for a novice, or for someone who already has a familiar and comfortable workflow in a different language. I think such a user would think "wait, this is what they think is easy entry-level practical programming? Riiiight."
> The mentioned pragmas are not defined by the 2010 standard.
That's exactly the point. GHC implements the most recent Haskell standard plus a number of extensions to the standard language, which can each be enabled or disabled using compiler directives (LANGUAGE pragmas). These are analogous to Python future import statements, which are compiler directives saying that a particular module should be compiled using syntax or semantics that are not part of the current version of the Python language.
Once in a blue moon, the Haskell standard changes, and certain features that you used pragmas for (e.g. PatternGuards, EmptyDataDecls, RelaxedPolyRec) will no longer require them. This depends on the standard, not the latest GHC release. Unlike Python, core Haskell changes slowly and conservatively, so you should not be surprised that the average Haskell code uses pragmas more frequently than Python uses future imports.
You do recall that your initial thesis was "A lot of people think day-to-day tasks like running a web app are difficult or impossible in Haskell! But of course this isn't true!"? Right?
I think this is considered normal for real-world Haskell usage.
Most libraries, and even frameworks like ihp that try to favor ease of use, rely on these language extensions.
You can of course place these directives once in the cabal file, but there's no getting around learning what they do.
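For reference, a minimal sketch of that cabal approach, using the real `default-extensions` field (the library stanza shown is illustrative):

```cabal
-- in your .cabal file: extensions enabled for every module in the stanza
library
  default-extensions: OverloadedStrings
                      GADTs
                      TypeFamilies
```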
It comes with the territory for a fast-moving language with new extensions like -XDerivingVia[1] and -XLinearTypes[2], more often than not backed by a publication where their interaction with other features is considered. Some extensions are purely syntactic and quickly understood: -XLambdaCase[3], -XBlockArguments[4], -XMultiWayIf[5]. The compiler will suggest missing extensions. And there is enough in Haskell 98 for a lifetime of study (see the work of Graham Hutton[6] and Jeremy Gibbons[7]).
Why a turnoff? "Extension" just means "language feature that is thoroughly well-specified, well-tested and widely used but hasn't made it into the language standard (partly because there hasn't been one for ten years)".
Yes. They're normal here, and you can consider them about as significant as an import statement, some context when reading the code, but nothing crazy.
The extensions are very well vetted, and new versions of the language standard can be summed up as "enable extensions a, b, c by default." Plus library changes, of course.
Are you looking at production code or pedagogical examples? For the former, it's totally normally to use compiler-specific features. If you are scared off by that, you'd be better served reading a book that introduces things little by little.
I use github.com/obsidiansystems/obelisk/ at work (I work at the place that makes it). It does its own thing for user-visible routes (for both frontend and backend, because of automatic prerendering). It does a different thing for requests and live queries, all of which go over websockets for simplicity.
Our routes stuff should be pulled out as a generic dual parsing/pretty-printing library (it and https://hackage.haskell.org/package/tomland embody the same basic idea; one just needs to remove the concrete HTTP / TOML specifics to get to the commonality).