Why I prefer scheme to Haskell (slidetocode.com)
103 points by steeleduncan on April 9, 2012 | 70 comments



> This has always been my problem with Haskell; it is beautiful, but it is useless for hacking.

Don't confuse a lack of experience and skill with an advanced tool's (Haskell in this case) lack of practical value. It's one thing to say "Haskell takes a long time to become useful for the average person" and another to say "Haskell is useless for hacking (as in quick prototyping as I understand from your article)".

I, for one, can produce quick mock-ups and prototypes way faster in Haskell than in any other language I know; and I consider myself an advanced user of Ruby and Python. You need to, of course, absorb and internalize a large number of use cases and concepts in Haskell before you get to this point, but it is quite extraordinary once you're there. High expressiveness, modeling to fit the domain even for small, one-off tasks (thanks to algebraic data types) and many other elegant aspects all become second nature and really really fast once you're expert enough.

dons' slides come to mind as a presentation on a related topic if anyone's interested[1].

[1]: http://donsbot.files.wordpress.com/2009/01/semicolon.pdf


But that is I think OP's point, Haskell forces you to "model", versus something like scheme or python where you can just sling dicts around.

Haskell absolutely forces you to do a non-trivial amount of thinking up front, which is counter-productive when you're trying to throw something together in 10 minutes.


You can sling dicts around in Haskell too if you want to. The reason people don't is that the alternative is easier.
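For what it's worth, here is a minimal sketch of "slinging a dict" in Haskell with Data.Map (from the containers package, which ships with GHC); the `conf` map and its keys are made up for illustration:

```haskell
import qualified Data.Map as M

main :: IO ()
main = do
  -- an ad-hoc string-to-string "dict"; no data declaration needed
  let conf = M.fromList [("host", "localhost"), ("port", "8080")]
  print (M.lookup "port" conf)  -- Just "8080"
  print (M.lookup "user" conf)  -- Nothing
```

The difference from Python is that this map is homogeneous in its key and value types, which is exactly the trade-off discussed below.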


+100. That's one of the things I love about Haskell - it makes the "right" things easy and the "wrong" things hard.


You can sling dicts around in Haskell too if you want to. The reason people don't is that the alternative is easier.

This is extraordinary if it is true -- it would mean something in Haskell has succeeded in making the designed/intended "right way" in a programming language the "easy way."

I'm not so sure finding the right way to design a language/environment is right for a sole design focus anymore. A programming language/environment should be designed to unify and leverage the power of a community, in a way that elevates it above the level of "pop culture." (Where a pop culture is defined as one where the rate of change far outstrips the growth of actual value/knowledge.)


But, if I remember correctly, the haskell map is typed, isn't it?

In python you can mix the types of both keys and values in a single dict freely.


You can easily define your own type that is "everything". But how useful is a dict in python with keys and values of varying types? I've certainly never used one.


It's very useful.

Use it like you'd use a bare object in Javascript, as a freeform data container to hold whatever properties you need. A bit like a key-value NoSQL store, actually.


Values, yes, but keys?


Sure, why not. I mean, I can't think offhand of a time that I've really used mixed keys, but I also can't come up with any compelling reason why it's a bad idea. Ad-hoc data comes up all the time.


It's possible in Haskell:

  {-# LANGUAGE ExistentialQuantification #-}

  import Data.Maybe
  import Data.Typeable
  import Data.Dynamic

  import Data.Map (Map)
  import qualified Data.Map as M

  data Orderable = forall a. (Ord a, Typeable a) => Orderable a

  instance Eq Orderable where
    (Orderable a) == (Orderable b) =
      case cast b of
        Just b' -> a == b'
        Nothing -> False

  instance Ord Orderable where
    (Orderable a) < (Orderable b) =
      case typeOf a `compare` typeOf b of
        GT -> False
        LT -> True
        EQ -> a < fromJust (cast b)
    a > b = b < a
    a <= b = not (a > b)
    a >= b = not (a < b)

  toOrd :: (Ord a, Typeable a) => a -> Orderable
  toOrd = Orderable

  fromOrd :: (Ord a, Typeable a) => Orderable -> Maybe a
  fromOrd (Orderable a) = cast a

  m1 = M.empty
  m2 = M.insert (toOrd "hello world") (toDyn (4 :: Int)) m1
  m3 = M.insert (toOrd True) (toDyn "foo") m2
  m4 = M.insert (toOrd (5 :: Int)) (toDyn False) m3

  main = print ((fromJust (fromDynamic (fromJust (M.lookup (toOrd "hello world") m4)))) :: Int)


As elegant and appealing as Haskell's purely functional foundation is, it prohibits simple, but crucial, impure tasks such as writing to files and communicating over networks. [...] but they make it neither intuitive nor easy.

Actually, such tasks have become much simpler thanks to the recent surge of packages such as conduit. These make typical I/O work easy, and very much resemble a UNIX pipe. You have a pure or impure source that produces data (e.g. reading a file), conduits that manipulate data, and sinks that consume data (e.g. writing data to a file, or storing it in a list).

In fact, I have started to like conduit so much, that I/O in other languages feels quite kludgy.

If you wish to debug a function in Haskell you can't insert a printf to inspect its inner workings unless that function happens to be on the IO monad.

Actually, you can, using Debug.Trace.trace, which takes a String and an expression. The string is first printed, then the expression is evaluated. No need to be in the IO monad:

http://www.haskell.org/haskellwiki/Debugging#Printf_and_frie...
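A minimal sketch of what that looks like (the `square` function is made up for illustration; `trace` prints to stderr as a side effect of forcing the expression):

```haskell
import Debug.Trace (trace)

-- a pure function instrumented with trace; note there is no IO in its type
square :: Int -> Int
square x = trace ("square called with " ++ show x) (x * x)

main :: IO ()
main = print (square 6)  -- prints the trace message, then 36
```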

After a frustrating hour with my Haskell version I was still struggling to understand the monadic API required to work with the XML parser, and I gave up.

I had to parse some XML data and used hexpat-pickle. Once you get the hang of it, it is pretty simple. But I agree that there is a learning curve involved.

With such tasks, it usually takes longer to come up with an initial Haskell solution. But then that solution is usually very clean and elegant. If you have to write something quickly, it is not ideal, but often the things that we wrote quickly end up being used for years anyhow ;).

By the way, my experience is exactly the opposite. I wrote some code in Clojure (which I do like very much, since it manages to leverage the Java platform while staying very clean). But I noticed that my productivity dropped - lots of things that Haskell detected through static type checking I had to verify myself by hand and write tests for.


it usually takes longer to come up with an initial Haskell solution. But then that solution is usually very clean and elegant

The big thing that jumps out at me like that in my experience is: it takes longer to come up with an initial Haskell solution that compiles, but when it compiles, it almost always works first-time; while, in a less rigorous language, I can get it compiling a bit sooner, but any time saved there is lost many-fold when it doesn't actually work.


A bit off topic, but a question I've been wondering:

How do conduits differ from arrows? They seem to occupy a similar space. Is one more general than the other?


They actually occupy radically different spaces. Arrows are a generalized computational tool, along the lines of applicative functors and monads. Conduits are basically the next step from iteratees. Iteratees are essentially a functional/compositional way of dealing with I/O. Prior to iteratees, you essentially had to choose between using lazy I/O and ignoring the problems or writing C in Haskell if you needed performance and determinism. Conduits take the benefits of iteratees, simplify the coding and give you more concrete guarantees about when resources will be acquired and released.

In fact conduits/pipes/iteratees are all more useful than arrows outside academia. The only library you're likely to encounter arrows in is HXT and there are other XML libraries. You're likely to encounter one of the conduit/pipe/iteratee libraries using any Haskell web framework and probably any other system doing a lot of I/O where guarantees need to be had.

I haven't yet seen evidence that arrows are actually useful at all outside very limited circumstances. The difficulty of learning them and using them is exacerbated by the fact they have something like 11 laws you must obey to create your own arrow versus three for a monad. There's also a lot more going on in the pretty arrow syntax than in do-notation, which desugars pretty easily by comparison. It would all be justified, I suppose, if using arrows enabled crazy functionality that isn't available via simpler abstractions, but that just doesn't seem to be the case.
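For the curious, the humblest Arrow instance in base is plain functions, which at least gives a feel for the style; `meanOf` here is a made-up example using the fan-out combinator:

```haskell
import Control.Arrow ((&&&), (>>>))

-- (->) itself is an Arrow: (&&&) fans one input out to two computations,
-- (>>>) is left-to-right composition
meanOf :: [Double] -> Double
meanOf = (sum &&& length) >>> \(s, n) -> s / fromIntegral n

main :: IO ()
main = print (meanOf [1, 2, 3, 4])  -- 2.5
```

Whether this buys you much over writing the lambda directly is, as the parent says, debatable.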


OP wasn't aware of Debug.Trace: http://www.haskell.org/ghc/docs/latest/html/libraries/base/D...

Otherwise, this seems pretty light on technical details -- and big on hyperbole -- "Scheme shares Haskell's unsuitability for production code" -- I'm very wary of people who make claims like that, despite all the evidence to the contrary...


> Scheme shares Haskell's unsuitability for production code

It's rather unfortunate something like that gets voted to the front page, though I tend to learn something from the comments to these kinds of submissions (scripting in Haskell! your slides). Cognitive dissonance.


how do the hundreds of KLoC of Haskell in production at a certain bank headquartered in Singapore feel about Haskell being unsuitable for production?



read this, in Hungarian

http://gergo.erdi.hu/blog/2011-05-16-i%27m_leaving_on_a_jet_...

google translate:

    rapid help deszantosokként trapped in the trade analysts,


Languages and frameworks unsuitable for production have been used in production thousands of times. It doesn't prove anything.


Fortunately, repeatedly saying the same things

  "Scheme shares Haskell's unsuitability for production code"

  "it prohibits simple, but crucial, impure tasks such as writing to files and communicating over networks"

  "it is beautiful, but it is useless for hacking"
doesn't make them true. Plenty of people successfully use Haskell in production. I wonder what kind of motivation one can have to say with such emphasis that a tool is useless when they simply can't use it properly.


Haskell is not a language that you can expect to pick up and be writing awesome stuff by lunch time.

There's no substitute for humbling yourself and learning to program all over again. I understand that it's difficult; you think you've mastered the art of writing programs, but suddenly you're back at square one.

No language is perfect, but if you persist and stick with it long enough to become productive, you will have learnt a huge amount and that knowledge is useful in the real-world.


Haskell is very fun, and well tutorialized now (the 2nd half of Thompson's book is a very good tutorial (2nd ed, $5)), but there are small speed bumps:

I needed a little better glossary than the book provides for "invariant", "witness", and other terms folks sling around. It took me a while to figure out hoogle, and I needed to ignore blog posts about bijective vs surjective functions and category theory as a beginner. Also the GHC instance extensions are probably the most thinly documented corner:

    TypeSynonymInstances, UndecidableInstances, FlexibleInstances, OverlappingInstances, etc.


While we're at it, let me throw out a shoutout for Learn You a Haskell For Great Good. One of the best programming books I've read, Miran is excellent at explaining programming concepts.


There are good tutorials for Haskell, but it seems to me that Haskell best practices still evolve fairly quickly. After "Real World Haskell" and "Learn You a Haskell", I still did not know what most of the GHC extensions are for (especially those related to generalizing types seem interesting), how to write idiomatic IO-code (conduits now, apparently? Or is it Pipes?), or how I should choose between the many ways of computing on many cores. Is Data Parallel Haskell mature enough? STM? The par combinator? accelerate? I don't think they mention Template Haskell (which seems powerful). The recommended way to connect to SQL is also in a state of flux (possibly "persistent"?).


Lisping (written by the author of the article) purchaser here - a really interesting product.

Now, nit-picks with this article: mostly that Haskell and Scheme are not suitable for production.

I am fairly weak with Haskell. I have bought three books, read through parts of them, and played with a lot of little bits of code. Reading people's and companies' success stories with Haskell convinces this non-expert that either the language is fit for production or the people who use Haskell are great developers and can ship with anything. Probably both possibilities are true.

I do have a lot of experience with Scheme (I wrote a (not very good) Springer Verlag Scheme book in ancient times, once ported all of OPS5 from Common Lisp to Scheme in about 10 hours flat, and lots of projects...). I find Gambit Scheme great for writing small and efficient utility programs and the Racket system certainly has a lot of available libraries for getting stuff done.


If you wish to debug a function in Haskell you can't insert a printf to inspect its inner workings unless that function happens to be on the IO monad.

Sure you can.

    import System.IO.Unsafe (unsafePerformIO)

    f x =
        let intermediate = 2*x
            intermediate2 = unsafePerformIO $ do
                              putStrLn ("intermediate value is " ++ (show intermediate))
                              return intermediate
        in
          (intermediate2 * intermediate2)
Output:

    intermediate value is 6
    Result is 36
It's a little uglier, since you need to make sure intermediate2 is actually evaluated. The better way to go would be:

    f x = 
        let intermediate = intermediateComputation x
        in 
          (intermediate*intermediate)

    intermediateComputation x = 2*x
And then you just don't expose intermediateComputation to stuff outside the module. This has the advantage that you can use quickCheck on intermediateComputation. If the value isn't what you think it is, quickCheck will tell you what it actually is.


Your unsafePerformIO version is essentially what Debug.Trace does.


It took me two months to really comprehend monads. Now I find them very intuitive and easy. However, they are definitely not easy to learn. It is like you are learning programming for the first time. Initially, they seem weird. Haskell requires patience. It is not a language where you can jump in and immediately see results, like jumping from C to Python. Someone who starts learning vi might be frustrated too. You need to learn ten keys before it becomes as productive as notepad.

> In Ruby or a similar object oriented language you expect to find three APIs/gems, all with a similar object oriented syntax, but for three Haskell DSLs designed for three different tasks to share syntax implies that their authors failed to optimise them for those tasks, hence instead of five minutes with API documentation you have hours of DSL tutorials ahead of you before you can begin work.

You need to learn monads only once. I can use new monads without any trouble.
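To illustrate the "learn once" point, the same do-notation drives any monad; only the instance changes. A base-only sketch (the names are made up):

```haskell
-- list monad: nondeterminism, every combination of choices
products :: [Int]
products = do x <- [1, 2]; y <- [10, 20]; pure (x * y)

-- Maybe monad: failure propagation
total :: Maybe Int
total = do a <- Just 3; b <- Just 4; pure (a + b)

main :: IO ()
main = do
  print products  -- [10,20,20,40]
  print total     -- Just 7
```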


What makes it difficult to "really comprehend monads"? Why did it take two months for you to really comprehend them?


I can't speak for the poster you're replying to, but in my personal opinion, monads exist at a higher level of abstraction than most people, even experienced imperative programmers, are accustomed to. The formal definition of a monad is quite simple, but it can be challenging for a newbie to extrapolate from the literal code that defines a monad to see how and why that pattern is widely applicable to so many seemingly distinct, unrelated problem domains. Eventually, through usage, you develop an intuition for it and it seems plainly obvious. But despite the proliferation of "monad tutorials", there are no magic words that bring understanding. Newcomers want to have it explained to them without having to get their hands dirty by writing code, but the only way to really acquire an intuition for something this abstract is through experience.

The other problem I see is that monads are really hyped (over-hyped, in my opinion) as a huge stumbling block for new Haskellers, and this turns out to be something of a self-fulfilling prophecy. By the time you reach the monads chapter of whatever book you're learning from, you've already read 5000 blog posts and HN comments about how difficult and confusing monads are going to be, so you go in expecting to be confused. And this just contributes to the confusion. I think that it's much easier to learn this sort of concept if you can go into it without any preconceived notions about what to expect. But since monads are something that all Haskell users like to talk about, publicly, it's next to impossible to introduce a newbie to the concept without them having already heard about the difficulties they're about to face.


This is so true. I managed to covertly teach monads to a coworker who had never heard of monads or functional programming before, all without uttering the word monad. Luckily they were familiar with LINQ notation (which is pretty much do notation) and I used analogies such as wrapping and unwrapping. I used a custom c# maybe type as the concrete example. They were able to intuitively grasp it in a couple of days (whereas it took me about 2 months stumbling through online tutorials learning it myself).


- I made many mistakes in the learning process. Monads should be done after you know core language (higher order functions, ADTs and pattern matching, typeclasses). I was attempting to learn many things at once and perfectly, instead of climbing Wittgenstein's ladders.

- Getting used to nested functions, \x -> f >>= (\y -> g >>= h x y) is not that easy.

- The type of bind, m a -> (a -> m b) -> m b, is rather complex for a noob. You've got polymorphism, and m can be different depending on the monad. Monads form a higher-kinded typeclass (* -> *), unlike most other typeclasses (*). I discovered Functor rather late. You need to realize Reader r is a monad, not Reader, not Reader r a. Partial application on types.

- It took me some time to read and rederive all standard monads - [], Maybe, Reader, Writer, State, Cont. Those things enter the mind slowly, especially state and continuations. Now I can reimplement all of them given a scratch of paper.

- If you do not know IO, you can only use GHCi to test expressions. I thought I needed to know monads very well to do IO. This is not true, but I survived a month by loading modules and interacting with GHCi only. On the other hand, after all this trouble IO clicked immediately. Fortunately Haskell has enough concepts you can learn for a month without writing a complete interactive program. Look at http://www.haskell.org/haskellwiki/Blow_your_mind for example.

- After you know (>>=), return you need to learn standard library functions such as join, mapM, sequence, forever etc. Then transformers.

- When I was learning, there was no Learn You a Haskell or Typeclassopedia yet. I found sigfpe's famous http://blog.sigfpe.com/2006/08/you-could-have-invented-monad... very late. I found a lot of useless buzz and opinions but rarely with details or instructive code.

I am happy I had a good curious attitude and did not resign.
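The rederivation exercise described above can be quite small. A sketch of a Maybe clone (the `Opt` and `safeDiv` names are made up for illustration):

```haskell
-- a from-scratch Maybe clone; on modern GHC, Monad requires
-- Functor and Applicative instances too
data Opt a = None | Some a deriving (Show, Eq)

instance Functor Opt where
  fmap _ None     = None
  fmap f (Some a) = Some (f a)

instance Applicative Opt where
  pure = Some
  None   <*> _ = None
  Some f <*> x = fmap f x

instance Monad Opt where
  None   >>= _ = None   -- failure short-circuits the chain
  Some a >>= f = f a    -- success feeds the next step

safeDiv :: Int -> Int -> Opt Int
safeDiv _ 0 = None
safeDiv a b = Some (a `div` b)

main :: IO ()
main = print (Some 10 >>= safeDiv 100 >>= safeDiv 1000)  -- Some 100
```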


To expand tmhedberg's first paragraph, take the sequencing behavior of any imperative language (represented by the ";", say) and generalize it.


I was once one of the people who was incorrectly told, and therefore believed, that simple tasks like IO in Haskell were hard. It couldn't be further from the truth.

    import System.IO

    main = do handle <- openFile "filename.txt" ReadMode
              contents <- hGetContents handle
              putStr contents
Now that simple bit opens a file, reads its entire contents, and prints it to stdout.

Look at something similar in Java:

   FileReader file = new FileReader("filename.txt");
   //Then I have to choose how to read the file in, how to
   //buffer it, etc.
   String contents = file.read(...);
   //...
   System.out.print(contents);
In my opinion the Haskell approach is much cleaner, closer to reading a file in Python or Ruby, and more intuitive than trying to figure it out in Java.


If you use Windows, or if Mono is an option, F# is an alternative to Haskell. F# compromises on purity for practicality, and adds a couple of niceties of its own over the ML core. It also allows convenient OOP, but in no way forces you to use it if your whole code base is F#.

And if you are using Scheme, Racket is a Scheme derivative which provides a nice and extensive library.


Does F# let you debug using printf? I stopped reading the linked post at that point.


Yes, you could do that. Won't judge that approach - that's probably a cultural thing. But you have access to side effects and could dump stuff to the console (even, if you're so inclined, using the .Net framework standard way, calling Console.Write/WriteLine - although the print* stuff in F# is really better suited if you decided to go down that path).


Yes, printf is easy in F#. The following example works in F# and OCaml:

  let rec fact n =
     Printf.printf "%i\n" n;
     if n = 0 then 1 else n * (fact (n-1))


For the record, this was sarcasm. It's 2012. I really hope you guys aren't debugging using printf still.

You'd think someone into esoteric languages would've heard of a debugger.


Print/Trace statements are invaluable in several situations.

I don't trust debuggers for multi-threaded applications or applications with open, time-sensitive resources (open sockets etc..)

Installing and firing up a debugger is not an option in a customer's system.

> It's 2012

I expect to be using Printf's in 2022 and beyond.


I'm pretty good at Haskell and prefer print to debuggers in pretty much every language. I'd rather have the computer produce output for me to read than to have to hand-hold the computer through evaluating my program. It's easier.


The one doesn't replace the other. I wrote a JS debugger for Emacs so as to have a debugger when I wanted one. I still use print statements more.


OCaml, on which F# is based (if I understand correctly), allows you to do that easily, but it's not really natural to do so in these programming languages.


For debugging Haskell, use trace to print things inside pure functions. http://stackoverflow.com/questions/3546592/how-to-debug-hask...


In addition to the Debug.Trace facility that others have mentioned, it's also worth noting that GHCi has a built in debugger. I find that I need it pretty infrequently, since you can inspect anything defined at the top level simply by loading your module in GHCi and evaluating the expression in question. But if you need to inspect more local bindings, such as functions defined in a `where` clause in the body of a function, the debugger works really well.

The only funny thing about it is that, since Haskell's syntax isn't organized in the "one statement per line" manner like most imperative languages, you often have to specify breakpoints not just by the line where execution should pause but by the column as well. For this, I recommend using an editor that can easily display the column number that the cursor is in.

Most of the author's other comments I can understand, even if I don't share his opinions. There is certainly a cognitive barrier to entry inherent in a lot of the language's concepts. All I can say about this is that as I've used the language more and more, these barriers seem to have melted away to the point where, simply by browsing the types of names exported by a library module, I can pretty much intuit how it works. A lot of libraries are very well-documented, and some are definitely not, but even in the latter case, the type signatures alone can go a long way to helping me understand.

I'd probably be a Scheme addict myself if Haskell hadn't convinced me of the indispensable power of strong static types. I love how elegantly simple Scheme is, and how you can construct so much from such humble beginnings. It feels like it distills functional programming down to its very essence. SICP is a beautiful thing as well. But Haskell naturally guides me to writing better, more robust, more correct code in ways that dynamic languages simply aren't capable of, and I just find myself getting frustrated when using other languages because I always end up wasting time fixing bugs that I know GHC would have caught for me up front.

Someone in the Haskell community (I can't recall who it was) has said: "I think of types as warping our gravity, so that the direction we need to travel to write correct programs becomes 'downhill'." I couldn't agree more.

Edit: I also have to state my disagreement about monads being a "wobbly crutch". While it is true that they are more or less necessary in order to introduce practical impurity into Haskell, being necessary doesn't make them a shim or a hack. Not only are they an elegant solution to many problems, but they are really quite simple to use. If people spent as much time actually writing monadic code as they do reading the zillions of silly, confusing monad tutorials, they'd quickly reach the point where they no longer see what the big fuss is about. There is nothing truly scary here; the worst part about monads is the name.


Assuming that Debug.Trace offers similar functionality to "observe", I have to briefly mention that this tracing might not be overly helpful, precisely because of the lack of control over evaluation order, as you write later on. I had to write a parser using both combinator and monadic style, and debugging was a major turn-off because I could not easily see what went wrong (though I seem to remember that with observe the output was in reverse order to the actual program flow, at least the one I have in my head).


I usually use `trace` like so:

    f _ _ _ | trace "print something profound" False = undefined
    f normal arguments here = ...
That way the trace output will be printed whenever the function is evaluated, which typically mimics a lexical call stack unless you use laziness in some particularly interesting way.
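A complete, runnable version of that pattern, using a made-up `fib` for illustration (trace output goes to stderr, so it doesn't mix with the program's stdout):

```haskell
import Debug.Trace (trace)

-- the guard is always False, so the first equation never matches,
-- but its trace message fires every time the function is entered
fib :: Int -> Int
fib n | trace ("fib " ++ show n) False = undefined
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (fib 10)  -- 55, after a cascade of "fib n" trace lines
```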


The success of Parsec and its ilk has filled Hackage (the Haskell module repository) with hundreds of DSLs covering any task you care to mention.

This is exactly why I've grown wary of DSLs. They have their place, but in a really expressive language like Ruby or Haskell I'd often just prefer to code in the native tongue.


DSLs have a significant up-front cost. If all goes well, the cost gets amortized over many usages and you come out ahead. But it takes a while. One-offs are not worth it. This is probably true of abstractions in general and not just DSLs.

Another thing about DSLs is that you have to nail both the D and the L in order to get it right, so they are hard to make. Language design is not at all like program design.


Do you believe this about bottom-up programming in general, or is your concern specific to parsed DSLs?


Hmm, good question. I'd say it's less true of bottom-up programming because there you're generalizing code you've already written. It's possible to get that wrong, i.e. introduce abstractions that capture what was there before but don't make it easier to write the program going forward, in which case the best thing to do is rip them out and replace them with their compiled equivalents until a better idea emerges. But it's all very incremental and the cost of getting any individual construct wrong is relatively low.

Classical DSLs are harder because of the D. You have to nail the domain/language fit or it's more trouble than it's worth. Even then you don't necessarily win enough to justify the cost: a specialized language needs to be significantly better than "just coding your domain model in your general-purpose language", which is a high bar. There isn't only the cognitive cost of learning a specialized language, there's the interoperability hit (can a general program call the DSL and vice versa), lack of tool support and so on. It doesn't make sense for most domains, especially business domains that don't lend themselves to technical notation.

The thing that bottom-up programming has taught me, not that I'm in any way sophisticated at it, is that good language constructs need to be an order of magnitude more lightweight than good application constructs. Application constructs are what you pack in your car for a road trip, language constructs are what you put in your backpack for a hike. Or another metaphor: application constructs are factory equipment, language constructs are hand tools - if you have to think about how to use it, it sucks. I find it's hard mentally to go back and forth between these two design levels. I'm spending a few days at the language level right now so these things are on my mind.


I would pose a harsher criticism of Haskell than the one posed in this article. I believe that logical relations are a better foundation for declarative programming than functions. As such, even when it comes to declarative programming I'd much rather use Prolog than Haskell.

Fortunately, there are Prolog systems compatible with the JVM and most scheme environments so you can embed Prolog in Clojure and Scheme for declarative programming tasks.


It would be dishonest not to point out that despite its utility to hackers, Scheme shares Haskell's unsuitability for production code. The difference is that Scheme is limited by its minimalistic standard library, not by a flaw in the language, and you can "upgrade" to a syntactically similar, but heavyweight, cousin such as Common Lisp, or Clojure, to get work done.

I wonder if there's something to this notion. Would it be useful to study the "standard" libraries of various languages to determine what constitutes "suitability for production code?"

Most likely, everything is relative to the particular application a shop wants to build.


"I was trying to express my frustration at not being able to insert a little impure code temporarily to tweak behaviour and help with debugging."

Why would you want to do that? Haskell is a functional language. You evaluate expressions, which can be done nicely in GHCI.

When I first learned Haskell, I was also trying to apply my procedural programming habits. Looking back, the pain was a sign of doing it wrong. Once I learned to look at solutions 'functionally', it turned out not to be a big deal, and the resulting code is arguably better. However, it does sometimes become painful going back to procedural languages.


One obvious reason would be if the input to your expression is relatively large or complex (e.g. a file).


That would not be obvious to me. I often have to process quite large and messy XML files and a functional approach works well. I'm not even sure how inserting print statements would be helpful.


There may be far better ways of doing it, but for a Haskell beginner (and I'm certainly one of those), being able to quickly display what each function was seeing as its input/returning as its result as it chugged through a complex file would have made life a lot easier for me when I first tried this kind of parsing in Haskell.


I used to have the same opinion about print debugging, but after a few months exclusively using pdb, I can say I'm a debugger convert.

If Haskell's debugger is good, that point is moot.

The other points still stand, of course.


I love Haskell, but parsing XML with it is a huge pain. Someday I want to write a better XML parsing library, but for now I use HXT. I wrote a blog post a while back that shows some sample usage: http://adit.io/posts/2012-03-10-building_a_concurrent_web_sc....


I use hexpat and it works nicely. I wasn't able to get very far with either haxml or hxt before having grief.


I use HXT to parse HTML. AFAICT, Hexpat doesn't do much besides parse the XML file into a tree. It doesn't have the niceties that Nokogiri or BeautifulSoup do. For example, I can use Nokogiri to get all the links on a page like so: page.css("a").

HXT allows me to come close to this:

tree >>> getXPathTreesInDoc "//a"

But I haven't seen a single Haskell XML parsing library that is as nice as Nokogiri.


In my work, I read in XML, parse its elements, attributes, and data, producing new XML. Along with Parsec, Hexpat is well-suited to the task.

I haven't had to parse HTML in Haskell. I use BeautifulSoup for that. I wouldn't be surprised if the Haskell libraries aren't as useful for that kind of thing.


I wrote up a guide to working with HTML in HXT: http://adit.io/posts/2012-04-14-working_with_HTML_in_haskell...

You might find it handy if you decide to give HXT another go :)


Does hexpat work on Windows?

I am asking since it is linking to expat dynamic library..


The article was linkbait that rehashes tired old arguments.

But I do agree on the XML. It is just too hard for beginners to grok how to process XML in Haskell.


That bank headquartered in Singapore uses something different from Haskell. It is almost Haskell in most aspects... but uses strict evaluation.


A little typo, I think you mean Backus-Naur form?


He seems to prefer ruby.



