Hacker News
OCaml: what you gain (roscidus.com)
198 points by edwintorok on Feb 13, 2014 | 118 comments



I have very fond memories of OCaml. It was OCaml that introduced me to functional programming, way back in 2000 in my freshman year at the uni. I'm a Lisper/Clojurian these days, but I think warmly of OCaml's type system, the speed, the self-containedness of the distribution, and the fact that getting the code to compile tends to mean getting it to actually work.

My #1 gripe with OCaml is the fact that its strings are composed of single-byte characters. I know there's Camomile, but not having it as part of the language core creates a sense of disintegration akin to PHP and Python 2.
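For the curious, the byte-oriented behaviour is easy to demonstrate. A minimal sketch (assuming a UTF-8 source file):

```ocaml
(* String.length counts bytes, not characters: "héllo" is 5 characters
   but 6 bytes, because 'é' takes two bytes in UTF-8. *)
let () =
  let s = "héllo" in
  Printf.printf "bytes: %d\n" (String.length s)   (* prints "bytes: 6" *)
```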

There's also this: http://www.podval.org/~sds/ocaml-sucks.html which I mostly agree with.

But, all in all, OCaml rocks.


> There's also this: http://www.podval.org/~sds/ocaml-sucks.html

I also mostly agree with it for base OCaml (and I write OCaml every day), but it's worth noting a lot of the complaints--even the ones outside of the 'standard library' section--can be fixed by using a better standard library. (Like Jane Street's Core/Async, which is what I'm most familiar with.)

I do disagree with several of those complaints though (not auto-converting between numeric types is a feature, not a bug!), and several of them are the kind of things that seemed like a big deal when I first started using OCaml and now no longer do. (Like the record field names issue...if you're writing well-organized code each type almost always has its own module anyway, so this doesn't matter much.)
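For readers who haven't seen the convention: a sketch of the module-per-type style (the Person/City names are hypothetical), where qualified labels keep same-named record fields from colliding:

```ocaml
(* Each record type lives in its own module, so both can have a
   [name] field without ambiguity. *)
module Person = struct
  type t = { name : string; age : int }
end

module City = struct
  type t = { name : string; population : int }
end

let () =
  (* Qualifying the first label picks the right record type. *)
  let p = { Person.name = "Ada"; age = 36 } in
  let c = { City.name = "Paris"; population = 2_200_000 } in
  Printf.printf "%s / %s\n" p.Person.name c.City.name
```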


The record field names collision issue has since been fixed in 4.01! That single issue was causing me to cling to SML, but now I'm happy that I get to join the growing OCaml community and benefit from the great mindshare and tooling. For example, the excellent Vim/Emacs plugin Merlin, which provides in-editor type information and completion.


> I know there's Camomile

Camomile is great, but it exhaustively covers the whole Unicode standard, which you might not need for more casual uses. There are other libraries bringing lighter support for UTF-8 strings (ocamlnet, for instance, I believe). Nevertheless, Camomile does a pretty good job if you're concerned about compliance.

> There's also this: http://www.podval.org/~sds/ocaml-sucks.html which I mostly agree with.

It's a pretty old page. I'm not sure its content is still accurate. I'll have to look into that.


> created: 2007-01-31
> updated: 2013-02-11

not that old, even though OCaml's metabolic rate has increased a lot in recent years


Well, I don't know. The "No polymorphism" section is misleading (though polymorphism doesn't carry the same meaning from one language to the next): there's a comment there on higher-order functions not being supported which surprises me. See sections 7.13[1] and 7.14 of the OCaml documentation. These features have been available since version 3.12 of the distribution.

[1]: http://caml.inria.fr/pub/docs/manual-ocaml-4.01/extn.html#se...


Except it's no longer accurate (e.g. native backtraces work just fine). Would someone really think that OCaml sucks and maintain an accurate page about that fact for 6 years? That sort of nonsensical dedication should at least prompt you to check its facts.


>It's a pretty old page. I'm not sure its content is still accurate

Half of it never was as it is just "ocaml isn't lisp and I can't handle that".


if you've not looked at ocaml recently, the community has really taken off in the last year. it's an exciting time to be using the language.


If you program in OCaml, be sure to check out Batteries [1], an extension of (and partial replacement for) the standard library which eliminates most of the criticisms I had against OCaml. Also, don't try to avoid opam [2]; I didn't really want to use it at first because I always prefer to use my system's package manager (I'm on Debian), but it makes your life so much easier.

[1] http://batteries.forge.ocamlcore.org/

[2] https://opam.ocaml.org/


If you haven't, also check out Jane Street's Core/Async, which is an almost-complete replacement (and extension) of the standard library. (Info here: http://janestreet.github.io/.) I work for JS so I'm biased, but I think it's quite a nice set of libraries. Because it's been developed mostly internally, we've had a lot of freedom to fix bad decisions, so it hasn't built up as much cruft as might be expected. (Although it is still obviously far from perfect.)


Do people try to avoid using package managers? Why? On my system I regularly use homebrew and opam but also have occasional need for pip, cabal and more recently npm. Not everything is packaged upstream (nor would I expect it to be).

I can't imagine how the OCaml ecosystem worked before opam as it's rapidly become a fundamental tool.


I don't know, my instinct tells me that I should avoid having many independent package managers in order to keep everything in the same place and avoid conflicts. But I don't really have any reasons to think that those are better ways to do things.


That's a pretty reasonable thing to want. One area where we'd like to improve OPAM is to output Debian and RPM spec files. This requires some surgery in various build systems, but would make upstreaming binary packages and releases so much easier...


My instinct agrees with you, but for one small matter. Ubuntu only just started packaging Ruby 1.9.3 - a version that came out years ago and which is to be deprecated next year - in the latest version.

I don't know about OCaml but trying to do Ruby or Python development without using pip or gems (not to mention virtualenvs or RVM gemsets etc.) seems like an exercise in futility and perpetual brokenness.

I'd like it if system package managers could actually keep up with the needs of developers, but they seem to have failed. Until they can get their shit together, we now have the irritation of managing all these many, many language-specific package managers rather than just using apt-get like sane people. If the system level package manager were able to talk to the language level package managers, that'd be a big improvement.

The number of times I've typed `sudo gem install nokogiri` and then had to fumble around and work out exactly which variant of libxml-dev-0 or whatever is needed before I can install a RubyGem with a native dependency shows we've got some brokenness that needs fixing.


> I'd like it if system package managers could actually keep up with the needs of developers, but they seem to have failed.

I would argue that they fail at this because it's a goal that's in direct conflict with their primary goal: to serve end users. The needs of developers are dramatically different. Having multiple versions of a package is a must for developers, but would just cause weird unpredictable issues for end users. (Unless you are very clever and do something like Nix, of course.)

I wrote more about this topic here: http://technomancy.us/151


Conveniently, opam uses the same upgrade description format as apt < http://www.mancoosi.org/cudf/ > which should make integration achievable.


OS package managers can't keep up with programming language packages. Use the language community's package manager.


If I may repeat an old comment of mine, lightly edited for freshness. (I don't mean to hijack this thread. It is not often that I get an opportunity to interact with people on HN who are fond of OCaml or are likely to have an interest in it. This is the exact demographic who might appreciate Felix.)

    --
For the early adopters and experimenters amongst you, you might like Felix http://felix-lang.org/share/src/web/tut/tutorial.fdoc

The site is undergoing a major reorganization right now, so some links will break.

It is a whole-program-optimized, strongly typed, polymorphic, ML-like language that can interact effortlessly with C and C++ and has coroutines baked in. It has type classes as well as modules. Functions written in it may be exported as a CPython module, which might be useful if one wants to gradually transition from a Python-based source tree.

Its own demo webserver is based on coroutines; this is the same webserver that serves the language tutorials. It uses a mix of lazy and eager evaluation for performance and compiles down to C++. Execution speed is comparable to hand-written C++, mostly better. Its grammar is programmable in the sense that it is loaded as a library, so in the same way that languages may acquire libraries, Felix may acquire domain-specific syntax.

Analogies are always inexact, but Felix is to C++ what F# is to C#, or to some extent what Scala is to Java.

It is also mostly a one man effort but with a feverish pace of development so it comes with its associated advantages and disadvantages.

Tooling info is here http://felix-lang.org/share/src/web/ref/tools.fdoc

The author likes to call it a scripting language, but it really is a full-fledged statically compiled language with a single push-button build-and-execute command. http://felix-lang.org/ The "fastest" claim is a bit playful and tongue-in-cheek, but it is indeed quite fast and not hard to meet or beat C with.

  --
Removing comment and putting it here to conserve real estate.

@zem I think you just blew your cover :) As I said its been mostly a one man effort, so there are areas that need work. Its an exciting language, welcome aboard.

@jnbiche Oh no! I am by no means the creator, just someone who finds it exciting.


Assuming you're the creator, I'd highly suggest putting some code samples on the language home page, so people interested in the language can see what it looks like at a glance without wading through documentation.

I'm sure you've got a million other things to do, but showing more code could help speed up adoption.


Please respond to comments directly rather than trying to "preserve real estate". It makes it much more difficult to follow conversations.


Noted.

I had participated in a rather contentious thread today where the nesting had gotten way out of hand. That, and I did not want my comment to be the dominant conversation on this post.

I got downvoted here, so it's likely that my within-post conversation annoyed some. Yes, I know you didn't downvote. It's also possible I came across as a conversation hijacker; that was certainly not my intent. I have to say the blog posts are strikingly well written. In fact I'm quite surprised how a Python aficionado takes to OCaml like a duck to water so quickly. (Duck-typing pun narrowly avoided.)


i've been glancing at felix every so often for years now, but this week i finally took the plunge and worked through all the installation issues. it's a promising language; definitely not production ready yet, and the combination of incomplete docs and unhelpful error messages can be pretty frustrating, but the creator is helpful and enthusiastic, and so far i'm having fun. now that i'm getting more into c++, the ability to interop with c++ libraries is a strong selling point.


You might like F# even more (well, except that the CLR is not as good outside of Windows, but still pretty decent).

As my coworker at the time put it: if OCaml and Python had a baby together, it would be F#.


I'm a big fan of F# as well. I haven't tried it outside of Windows, on Mono. I've always wondered how well supported Mono was and what the library ecosystem was like (I could just google that, and probably will).

Any personal experience with it?

Right now I'm diving into haskell, partly because I want a static, type-inferred, functional language on linux (my home platform of choice) and partly because I like the idea of using a language that has really pure (functionally and generally) ideas it's trying to implement. Digging it a lot so far as well.


F# has a huge following on macs. The Xamarin guys drive it, and practically every major open source .Net library is available for mono.

I haven't tested it on linux, but I've found it works pretty well on macs.


The Seattle F# user group just had an event last night, and the first talk was 1 hour on how to create iOS apps with F# using Xamarin/Mac. If anyone is interested, a recording is at http://www.youtube.com/watch?v=MriHEnq5MR4


It's getting better and better. They even have an F# programmer working on stuff at Xamarin now. Check this out: http://7sharpnine.com/posts/danger-unstable-structure/


The performance on Mono is a bit less but I imagine it can do pure things just fine.


OCaml looks really good; I've heard good things about it for a while now. I'd like to learn more, but I'm currently learning Haskell and want to focus on that for a while.

Does anyone know why OCaml gets compared to Haskell so often? It sounds like OCaml doesn't force a purely functional paradigm on you, so how is it any more functional than a number of good scripting languages (Python, Ruby, Perl, etc.)?


OCaml has a similar type system to Haskell's. It has algebraic data types. It also has all the normal higher order functions and combinators "done right". It also commits strongly to using first class modules as an organizational component atop the functional core.

I don't think "functional" means a whole lot. Many languages with lightweight, first-class lambdas like to jump on that bandwagon by implementing map/fold/scan/take combinators on lists, sometimes even lazy ones, but that whole game is just a sideshow to the real meat of what makes Hindley-Milner typed languages with ADTs quite fun to work in.
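To make that concrete, a small sketch of the ADT-plus-combinator style in OCaml (a hypothetical tree type and a fold over it):

```ocaml
(* An algebraic data type and a higher-order fold over it: the pattern
   match is checked for exhaustiveness by the compiler. *)
type 'a tree =
  | Leaf
  | Node of 'a tree * 'a * 'a tree

let rec fold f acc = function
  | Leaf -> acc
  | Node (l, x, r) -> fold f (f (fold f acc l) x) r

let sum t = fold (+) 0 t

let () =
  let t = Node (Node (Leaf, 1, Leaf), 2, Node (Leaf, 3, Leaf)) in
  Printf.printf "%d\n" (sum t)   (* prints 6 *)
```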


HM type inference is one possible aspect of functional programming, but there's also the Lisp family of languages which are characterized by macros and homoiconicity, as opposed to purity and static typing. Functional programming doesn't seem to have a single definition, but the languages do share a family resemblance: instead of one or more properties shared by all members, each pair of members shares some properties.


I'm more in the camp that "functional" is just too much an umbrella term to mean much. I think it's completely fair to say that anything with light syntax for lambdas and the regular "list combinator zoo" of (map/fold/zip/unfold/scan/take) is "functional" enough for me.

All the other features are orthogonal and specialized.


Yes, I think the main selling point of ML languages is the type inference mechanism. Haskell doesn't support exactly the same mechanism, but it has typeclasses, which is very cool too.


Haskell has a very similar core type but adorns it with type classes instead of modules.


and type classes desugar to regular types except with allowance for higher-ranked types


Typeclasses don't interfere with type inference.

    > :t (+ 5)
    (+ 5) :: Num a => a -> a


Well, they do:

    show . read


Or rather they can.


No they don't.

    > :t read . show
    read . show :: (Read c, Show a) => a -> c


That's not what I typed: I did show . read, which has type String -> String and has an internal, ambiguous type.


Oops. So, I'm even more confused now. You know show . read is inferred just fine, and that means typeclasses break inference?


> You know show . read is inferred just fine

The "internal, ambiguous type" matters when you try to actually invoke (show . read) on some String. If you want to actually use that function, you're stuck inserting a type annotation, which would lead many people to say that (show . read) is not "inferred just fine."


>The "internal, ambiguous type" matters when you try to actually invoke (show . read) on some String

There is no internal ambiguous type. It is not a concrete type, it is a type variable. But it is inferred just fine. It is (Read a => a).

>which would lead many people to say that (show . read) is not "inferred just fine."

Even if you were to erroneously say that, it isn't type classes that is the problem.


> Even if you were to erroneously say that, it isn't type classes that is the problem.

I'm pretty sure it is. For example, see section 3.7 of "Type classes: an exploration of the design space" by SPJ et al.


I guess it depends on what we both mean by "breaks inference". I think the fact that it readily infers a type for a function made ambiguous by lost information is a flaw of inference. The algorithm works, but I think this demonstrates that HM doesn't come through unscathed with the inclusion of typeclasses.


Hindley-Milner is fun until you need nominal subtyping or mutable assignment, then the whole "semi-unification is undecidable in general" comes up to bite you.


Weird points to make.

Subtyping isn't a good organizing principle for code reuse from a type-safety or human-sanity perspective. Typeclasses and composition are better.

You should give Haskell a whirl.


Haskell doesn't use Hindley-Milner.

Edit: Also, sub-typing tends to make a lot more sense in a language with value semantics, such as C++, than it does in a language like Haskell and isn't necessarily about organizing code.


What do you mean by value semantics here?


C++ gives you the freedom to choose between composition and subtyping to suit your problem without having to worry about an extra indirection if you choose composition.

For example, the base member of Composed is not a pointer as it might be in a language with reference semantics:

  struct Base
  {
      int i;
  };

  struct Derived : Base
  {
      bool b;
  };

  struct Composed
  {
      Base base;
      bool b;
  };


How does HM get in the way of mutable assignment?


Polymorphism (more precisely, generalization of unknown types) is incompatible with references. For example:

    let r = ref []
Here r has type 'a list ref (reference to a list of 'a where 'a is unknown).

    r := ["hello"]
r has type 'a list ref, := has type 'a ref -> 'a -> unit, and ["hello"] has type string list, so it works (if we instantiate 'a with string).

    let n = 1 + (List.head !r)
1 has type int, + has type int -> int -> int, and we can type (List.head !r) as int since we can type !r as int list since we can type r as int list ref (this time instantiating 'a by int).

So, we just added an int and a string and the program will probably crash. What's wrong is that every line is correct but because of the mutable store, the program as a whole is incorrectly typed.

The solution is to generalize only a subset of all syntactic constructs. The historic solution in OCaml is to never generalize expressions that may allocate memory (that's the "value restriction"). For example, a function application (like f x, or ref []) may allocate, but a variable or a constant (like x or 1 or []) cannot. That's why [] has type 'a list but ref [] has type '_a list ref ('_a can be unified only once, so in the above example an error would occur at the "let n").
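The weak type can be seen directly in a compiled sketch (recent compilers print the once-unifiable variable as '_weak1, older ones as '_a):

```ocaml
(* [ref []] is an application, so it is not generalized: r gets the
   weak type '_weak1 list ref instead of 'a list ref. *)
let r = ref []

(* The first use fixes the weak variable to string... *)
let () = r := ["hello"]

(* ...so a later int use would be rejected at compile time:
   let n = 1 + List.hd !r   -- Error: string <> int *)
let () = List.iter print_endline !r
```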


Ahh, that's sort of obvious in retrospect, but I was blinded thinking in terms of Haskell where IO protects against such generalization.


The equivalent Haskell program is

    import Data.IORef

    main :: IO ()
    main = do
        r <- newIORef []
        writeIORef r ["Hello"]
        x <- readIORef r
        print $ x + 1
        return () 
The reference is bound by "r <-", i.e. a lambda (as this desugars to ">>= \ r ->"), and lambdas are never generalized (even in OCaml).

I think the reason why it works is that there is no way to let-bind r except with a toplevel unsafePerformIO.


Yup, that's precisely what I forgot to think about.


Assignment is essentially similar to subtyping, if one says:

A := B

Then all values that could be bound to B could be bound to A also (so B <= A), but if the other way was also taken (B = A), then

A := C

Would imply that B = C, which leads to all sorts of safety problems...so you don't do that. At the end of the day, to make HM work with assignment, you need something like ML's "value restriction" that prevents full unification between A and B.


Nitpick: it's generalisation that is prevented by the value restriction. Unification works just fine:

    let test r =
      r := `A;
      r := `B;
      r := `C
r gets unified with all of `A, `B and `C: no problem.


Yep, and also overloading. Overall, HM just isn't a good choice for a programming language.


Care to list the reasons why you believe HM just isn't a good choice for a programming language?


It limits the type system in order to achieve global type inference. HM is often touted as a "sweet spot" in static type systems because you don't have to specify types anywhere, but in reality it tends to be more verbose and less powerful than local type inference.


That hasn't been my experience, could you show me a small example?


In an HM system there is no overloading so you are essentially naming type constraints on every function call instead of once in the function definition.

C++ might look like this:

    void do_something(bar_t & bar)
    {
        ...
        bar.foo(...);
        bar.baz(...);
        auto something = bar.get_something();
        ...
    }
The equivalent Ocaml might look like this:

    let do_something bar =
        ...
        Bar.foo bar ...;
        Bar.baz bar ...;
        let something = Bar.get_something bar in
        ...
The situation is similar when dealing with structs and tends to be even worse when dealing with generic code (although C++ only has duck-typed generics until concepts come along).


> Does anyone know why OCaml gets compared to Haskell so often?

I seem to recall (incorrectly, apparently) that there is a common thread going back to Standard ML for both OCaml and Haskell. It appears Haskell goes back to Miranda, which borrows from SML -- but it's not entirely incorrect to say that both OCaml and Haskell are built on the shoulders of SML. My take is that OCaml was seen as a way to make a "useful" MetaLanguage, fit for more than experimental language implementations -- while Haskell was conceived by some crazy people who didn't think it was hard enough to get any useful work done in SML ;-)

Now, Haskell has evolved into something that actually is useful, and the biggest difference AFAIK is that Haskell leans towards lazy evaluation while OCaml leans towards strict.

As they are both "modern" MLs, it's natural to compare them -- why Haskell isn't compared more often to SML might just be that there are a few programs actually written in OCaml that people are aware of ;-)


OCaml is not strictly pure, but it is definitely more strongly functional than Python or Perl. What matters is not that a language completely disallows mutation so much as that it helps you write in a functional style. The functional path is usually the happy path in OCaml, whereas in Ruby mutation and statefulness just come naturally.

The basis of OCaml is functions that transform one thing into another. The basis of Ruby is objects that encapsulate some state and send messages to other objects to get them to change their state.


Does OCaml do any sort of structural sharing behind the scenes? This is always the thing that bothers me about the functional JavaScript movement, since mutable state everywhere makes such sharing infeasible, meaning that I have to worry a lot more about space complexity when I'm doing something like a map over a collection. How does OCaml fare in that regard?


Sharing is both common and expected, as it arises naturally as a result of writing functions over variants. The language also provides an identity test (==) by which sharing can be observed, so it can't be said to be "behind the scenes".

Several of the data structures in the stdlib make good use of sharing to reduce the cost of operations - Set.union is a particularly good example. map, however, is not the sort of operation that leads to sharing.
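For illustration, the physical-equality test makes sharing directly observable (a minimal sketch with lists):

```ocaml
(* Consing onto an immutable list shares the tail rather than copying:
   (==) is physical equality, so it witnesses the sharing. *)
let () =
  let tail = [2; 3; 4] in
  let a = 1 :: tail in
  let b = 0 :: tail in
  assert (List.tl a == List.tl b);   (* same heap cells *)
  print_endline "tails are shared"
```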


Both are descended from ML, but have found a lot more "RealWorld" usage than ML. If you know ML it makes it a lot easier to learn Haskell or OCaml.


> Does anyone know why OCaml gets compared to Haskell so often?

Because both belong to the ML family branch of programming languages.


Two reasons. First, they are both languages with useful type systems. There's not many languages in that group.

Second, ocaml is a multiparadigm language for real, with a functional-first focus. Python, ruby, perl, etc. are procedural/OO languages that have map and filter. It is a huge pain and tons of extra code to write a program in a functional style in python; it is lacking basically everything. It is normal to write a program in a functional style in ocaml.
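For a flavour of what "normal" looks like, a tiny sketch of the pipeline style OCaml encourages (|> has been in the standard library since 4.01):

```ocaml
(* Sum of the squares of the odd numbers, written as a pure pipeline:
   no loops, no mutation. *)
let () =
  [1; 2; 3; 4; 5]
  |> List.filter (fun x -> x mod 2 = 1)
  |> List.map (fun x -> x * x)
  |> List.fold_left (+) 0
  |> Printf.printf "%d\n"   (* prints 35 *)
```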

If you learn haskell first, ocaml will serve no purpose for you. But for someone coming from imperative land, learning ocaml can be a helpful half-way house on the way to haskell.


As always, it depends on the job. For some systems, OCaml is superior; for others, Haskell is superior.

One big question informing the choice is "do I want to program primarily lazily or primarily eagerly?" Another is "is time to achieve system performance a big factor in success?"

If eager, performance-eager -> OCaml. If lazy, performance-lazy -> Haskell.


Having spent 6 years with ocaml, I can honestly say I never wrote anything where ocaml offered a benefit haskell does not, and cannot even think of such a program.

You pose two questions, one of which is effectively irrelevant, and the other of which I believe is based on an incorrect assumption. Evaluation strategy matters for maybe 5% of code. It makes no difference whether you are adding strictness for 2% of your code or adding laziness for 2% of your code.

The performance question I think is a misconception. Do you have any evidence to suggest it is easier to write fast code in ocaml? I've never found a person who has used both languages and felt that was the case. I've only found people without haskell experience believing that via second (or third, or fiftieth) hand rumors.

Ocaml offers two things over haskell. One, imperative constructs. This matters for people making a transition to functional programming, but not for people who've finished that transition. Second, the compiler is fast. I don't mean generates fast code, ghc does that too. I mean it generates code quickly. This is an annoyance with ghc, no question.

Haskell offers a lot over ocaml. A useful standard library. Cabal and hackage. Parallelism done right, out of the box (super cheap green threads multiplexed over a pool of OS threads). Tons of libraries that don't exist in ocaml, everywhere from attoparsec and aeson, to binary and lenses, pipes-concurrency and STM. A community that is ten times the size of ocaml's.

There are good reasons it has gone from ocaml having the larger community to haskell dwarfing ocaml in community size, language usage, library availability, etc. in the last decade.


> Do you have any evidence to suggest it is easier to write fast code in ocaml?

I find it much easier. To give one example: I recently had to write some code that shuffles arrays based on a PRNG. In OCaml, one can simply use a mutable array and a PRNG that uses a mutable variable.
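For context, the OCaml version is roughly this (a sketch, not the parent's actual code): an in-place Fisher-Yates shuffle using a mutable array and the stateful Random module.

```ocaml
(* In-place Fisher-Yates shuffle: mutation and the global PRNG state
   are used freely, with no monadic plumbing required. *)
let shuffle a =
  for i = Array.length a - 1 downto 1 do
    let j = Random.int (i + 1) in
    let tmp = a.(i) in
    a.(i) <- a.(j);
    a.(j) <- tmp
  done

let () =
  Random.self_init ();
  let a = [| 1; 2; 3; 4; 5 |] in
  shuffle a;
  Array.iter (Printf.printf "%d ") a;
  print_newline ()
```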

In Haskell, efficient array mutation is possible, but you have to do it in the ST or IO monad. So, in a pure function you'll want to runST an expression in the ST monad. But then you need to thread the 'state' of the random number generator in function calls, or use a State-like monad. If you use a state monad, you have two monads and will probably use a monad transformer.

Writing the equivalent Haskell code is much more work. But on the other side, Haskell keeps you more honest. The end result is that in Haskell, the result is guaranteed to be pure (if you don't use unsafePerformIO, of course). In OCaml it's only by convention.

Which is better depends on what you want, convenience or safety. Personally, if I have to choose between OCaml or Haskell, I'll choose the latter. There are already plenty of languages that are pragmatic and less strongly typed. For me, OCaml provides relatively few advantages over strongly-typed imperative languages that support closures, etc. Haskell, on the other hand, provides an amount of type-safety that those languages do not provide, while still being usable (both as a language and in terms of the library ecosystem).


>I find it much easier

Ok, but your anecdotes + my anecdotes = still just anecdotes. To support the claim that it is easier to write fast code in ocaml, we need to find evidence.

>Writing the equivalent Haskell code is much more work

I have not experienced anything like that, despite having much more experience with ocaml than with haskell. You are imposing limitations on yourself when using haskell and then saying haskell is imposing them. Every ocaml function is in IO, it just doesn't provide type information to tell you that. So doing the same thing in haskell is doing it in IO, which requires no special effort at all. Even if you choose to add a purity restriction on yourself, adding "runST $" does not seem like a lot of work to me.


"Do you have any evidence to suggest it is easier to write fast code in ocaml? I've never found a person who has used both languages and felt that was the case. I've only found people without haskell experience believing that via second (or third, or fiftieth) hand rumors."

The first post in this blog series had a simple test (really just testing start-up time), where Haskell and OCaml were very similar. But in the second post (http://roscidus.com/blog/blog/2013/06/20/replacing-python-ro...) with a slightly longer test-case, OCaml was twice as fast as Haskell.


But both were fast. And if he had enabled optimizations as was suggested, the difference would be smaller. And if he had used attoparsec instead of a regex, it would have been easier, less error-prone, and faster. He literally didn't learn haskell and copied and pasted random stuff from stackoverflow and irc.


> He literally didn't learn haskell and copy and pasted random stuff from stackoverflow and irc

But that's the point right. He did the same for OCaml and got faster code. Obviously you can write fast code in Haskell, but that is not the same as it being easy.


>He did the same for OCaml and got faster code

No he didn't. He took the time to learn ocaml.

>Obviously you can write fast code in Haskell, but that is not the same as it being easy.

The closest thing to evidence I can find suggests that it is no more difficult to write fast code in haskell:

http://benchmarksgame.alioth.debian.org/u64/benchmark.php?te...


> No he didn't. He took the time to learn ocaml.

Not before writing his initial benchmarks, which is what we're talking about.

> The closest thing to evidence I can find suggests that it is no more difficult to write fast code in haskell:

> http://benchmarksgame.alioth.debian.org/u64/benchmark.php?te....

That measures ridiculously highly tuned implementations of the benchmarks, often written by the creators of the languages themselves. It contains absolutely no evidence of how hard it is to write fast programs in a language.

Besides, I'm pretty sure citing the language shoot-out is an automatic disqualification in any argument about language speed.


(blog author here)

I certainly tried to treat them equally. If I cut-and-pasted from the web more often for the Haskell, it's because I got stuck more often.

For example, I didn't need to search the web for how to read argv[0] in OCaml. It's just Sys.argv.(0). Easy. Would you really expect a beginner to figure out the Haskell version on their own?


No, I would expect a beginner to use getProgName. I would also expect anyone else to use getProgName also.


That doesn't work. It only gives you the leaf (basename), not the path.


getExecutablePath


Thanks - good to know that's been fixed now. I don't know why all the previous requests were marked as "wontfix".


Huh? Are you sure they weren't marked as duplicates of the original request made in 2010, which was closed as "fixed" in 2012? The functionality was available as a library on hackage starting in 2009. It was added to the standard library in 2012 as part of ghc 7.6.1. Your website says you were testing with 7.6.3.

https://www.haskell.org/ghc/docs/latest/html/users_guide/rel...


I don't remember exactly how I searched for it, but the top hit on Google for "haskell argv[0]" today is https://ghc.haskell.org/trac/ghc/ticket/3199, which is marked "wontfix" and includes a workaround (which I used).

The second link is to Environment.hs, which would have solved my problem, if I'd known what I was looking for. I think I'd have just clicked Back when I saw it was library source code, though.

The third link is http://hackage.haskell.org/package/system-argv0-0.1 - which says "Use this instead of System.Environment.getProgName if you want the full path, and not just the last component."

(note: technically, my code was still the correct solution, since 0install targets Ubuntu 12.04, but yes ideally I would have found that newer versions did support it more simply)


>Not before writing his initial benchmarks

Yes, he did. Try talking to him.

>That measures ridiculous highly tuned implementations of the benchmarks

So, fast code. Which is what is at question.

>It contains absolutely no evidence of how hard it is to write fast programs in a language.

There are both the line counts and the code itself to look at. Both make it appear that writing fast code is no more difficult in haskell than in ocaml.

>I'm pretty sure citing the language shoot-out is an automatic disqualification in any argument about language speed.

I'm pretty sure the language shoot-out is of greater value as evidence than personal anecdote is.


> So, fast code. Which is what is at question.

No, the question is about whether fast code is easy to write; if it has to be highly tuned then it is not easy to write.

> I'm pretty sure the language shoot-out is of greater value as evidence than personal anecdote is.

Actually, I think that the shoot-out causes a lot of harm to people's attempts to understand whether a language is fast or not.

When people ask: "Is language X fast?" what they mean is "if I write my programs in it will they be fast". They do not mean "is the highly tuned code of an expert fast".

The question of whether a language is fast is about whether code written in the natural idioms of the language is fast, not about whether you can coerce the compiler into producing the precise assembly code you are aiming for. If you are going to do that you might as well write the assembly code directly and call it using an FFI. Many implementations on the shoot-out bear no resemblance to the code people naturally write in those languages.


You could just look at the code instead of making up a strawman to try to dismiss the only evidence presented. Poor evidence > no evidence.


It's not a strawman. Just look at the code for reverse-complement in Haskell. It is full of strictness annotations (unidiomatic) and uses inlinePerformIO (unsafe). It also uses a few inlining pragmas (unidiomatic).

The fact that this code is fast tells you nothing about whether idiomatic Haskell is fast. wrong evidence < no evidence.


Strictness annotations are idiomatic in Haskell.


Yes, it is a strawman. Strictness annotations are not unidiomatic any more than using laziness is in ocaml. The ocaml code is effectively using unsafePerformIO all over, ocaml doesn't offer the safety you are complaining that the haskell example gives up. And both of your invalid complaints are completely irrelevant to the question at hand.

The question was about difficulty, not a subjective idea of what idiomatic code looks like. Are you seriously claiming that the average haskell example there is more difficult to write than the average ocaml example? If so, just say that. Trying to redirect the discussion back to your subjective preferences is not productive.


Note that using `unsafePerformIO` is more unsafe than that, because the compiler assumes that functions are pure and will make transformations which alter the semantics in the presence of side effects. The OCaml compiler does not do these optimisations because it has to assume everything is impure.

And `inlinePerformIO` is much more unsafe, because it requires that you do no memory allocation inside it. This property is specific to the Haskell implementation, and is certainly not easy to guarantee.


OCaml has camlp4 which is vastly superior to TH for extending the language's syntax.


I thought that lenses were only needed due to deficiencies/limitations in Haskell? (A kind of workaround).


Nope. Lenses aren't needed at all, they are just helpful. Lenses are just combined getter+setter that are composable. So you can use (lensA . lensB . lensC) to view, update or replace part of a data structure that is part of a data structure that is part of a data structure.


Sure, but how is that an advantage of Haskell? It seems possible to do the same in any language (C++, Java, ...): a wrapper class which provides access to a part of another class.


I'm no expert, but I believe it's an advantage of Haskell because of the generality and level of abstraction you are capable of while maintaining type safety. I also think I just described Haskell.


Because haskell has such a library (a few actually) and ocaml does not. So in ocaml you either have to give up safety, or write a bunch of verbose boilerplate. There's no reason someone couldn't write a lens library for ocaml of course, but the much smaller community makes for far fewer libraries.
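For what it's worth, the core idea is small enough to sketch directly in OCaml (a hypothetical toy, not a real library; all names invented for illustration):

```ocaml
(* A lens packages a getter and a functional setter, and composes. *)
type ('s, 'a) lens = {
  get : 's -> 'a;
  set : 'a -> 's -> 's;
}

let compose outer inner = {
  get = (fun s -> inner.get (outer.get s));
  set = (fun a s -> outer.set (inner.set a (outer.get s)) s);
}

(* Example nested records *)
type address = { city : string }
type person = { name : string; addr : address }

let addr_lens = { get = (fun p -> p.addr); set = (fun a p -> { p with addr = a }) }
let city_lens = { get = (fun a -> a.city); set = (fun c _ -> { city = c }) }

(* Analogous to Haskell's (addr . city) composition *)
let person_city = compose addr_lens city_lens

let () =
  let p = { name = "Ada"; addr = { city = "London" } } in
  let p' = person_city.set "Paris" p in
  print_endline (person_city.get p')   (* prints "Paris" *)
```

The verbosity point stands: without deriving support, you write `addr_lens` and `city_lens` by hand for every field, which is exactly the boilerplate a lens library generates for you.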


> It is a huge pain and tons of extra code to write a program in functional style in python, it is lacking basically everything

I've had the exact opposite experience. I write my Python code in a functional style by default. I use mutation as an exception rather than the rule. I find my code to be much more terse overall. When used effectively I find functional Python requires less code, not more as you imply.


Some pythonistas get very annoyed when I do the same.


I suspect the difference of opinion is due to a different understanding of what a functional style is. Python does not have tail call elimination, so you literally cannot write many normal functional programs in it. Python doesn't have currying and requires explicit extra code to create partially applied functions. This makes functional programming painful and verbose.

I think you are writing what I would call imperative code that uses some functional constructs where it is shorter. No doubt that is shorter than sticking entirely to imperative code, but it is a far cry from writing a program in functional style which requires that you not use imperative constructs.
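To illustrate the currying point, a toy OCaml sketch; the equivalent Python needs `functools.partial` or a lambda wrapper:

```ocaml
(* Every OCaml function is curried by default: applying fewer
   arguments than it takes yields a new function. *)
let add x y = x + y

let add10 = add 10          (* partial application, for free *)

let () =
  [1; 2; 3]
  |> List.map add10
  |> List.iter (Printf.printf "%d ")   (* prints: 11 12 13 *)
```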


I'm planning to learn ocaml in the coming weeks, as I want to try ocsigen (http://ocsigen.org/). Seems to me there is good activity in the Ocaml community lately (eg new books came out recently)


I like OCaml but really, Ocsigen is crap. The people making it have no idea how today's web works.

Some parts of it (lwt, js_of_ocaml) are nice and potentially useful pieces of code, but you don't want to use the framework as a whole to develop a webapp.

The problem with Ocsigen is that it reinvents the meaning of everything, even the HTTP verbs, to suit its internal needs. For example, it is not possible (or it must be very painful) to make an API for your Ocsigen app so that other apps can interact with yours. First, you have to forget PUTs, HEADs, and DELETEs, but also GETs and POSTs are used incoherently by the resulting code to update parts of the webpage, submit forms, etc.

If you want to make a site/webapp that only focuses on itself and never has to interact with anything other than its visitors, then Ocsigen might be a suitable tool for the job; otherwise I wouldn't use it.

Disclaimer: I only used Ocsigen for one day, at JFLO, whose French name would translate to "French Ocsigen Lectures Day". What I know of it, and what I'm saying here, comes from the day-long courses and hands-on tutorials given by the Ocsigen developers and from discussing these problems with them afterwards.


New books as well as a lot of new tooling and libraries. A lot of it being coordinated from OCaml Labs in Cambridge UK (http://OCaml.io) and OCamlPro in France.


> Another example is the Config object. When I started the Python code back in 2005, I was very excited about using the idea of dependency injection for connecting together software modules (this is the basis of how 0install runs programs). Yet, for some reason I can’t explain, it didn’t occur to me to use a dependency injection style within the code. Instead, I made a load of singleton objects. Later, in an attempt to make things more testable, I moved all the singletons to a Config object and passed that around everywhere. I wasn’t proud of this design even at the time, but it was the simplest way forward.

That's called a "god object", unless I'm mistaken. It's an abomination in any language. I have the misfortune of maintaining a codebase with this "feature". Never, ever, do anything like this.


Never used or inherited such a design, but I'm curious about what practical problems you encountered. Care to illustrate?


Once you have implemented this design pervasively, you can forget about reusing your code in another context. Because it is impossible to tell which part of the god object is needed in a given part of the code, you have to initialize everything in the god object. With a bit of luck, you're working in Java and mixing this anti-pattern with some Spring dependency injection, in order to make this mess really impossible to untangle.

And obviously it encourages mixing up layers. Why have modularity when you can access your data access components from anywhere?

And in general, circular dependencies are a code smell. Whenever I thought I needed to use them, I came to realize it was the wrong decision.
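A hypothetical OCaml sketch of the contrast (the `config` fields and function names are invented for illustration):

```ocaml
(* The anti-pattern: every function drags the whole Config along,
   so the signature can't tell you which parts are actually needed. *)
type config = { network : string; cache_dir : string; gui : bool }

let fetch_feed_god (config : config) url =
  (* only needs config.network, but you can't see that from the type *)
  Printf.sprintf "GET %s via %s" url config.network

(* Passing just the required dependency keeps the real coupling visible
   and lets you reuse the function without building a full Config. *)
let fetch_feed ~network url =
  Printf.sprintf "GET %s via %s" url network

let () =
  let cfg = { network = "default"; cache_dir = "/tmp"; gui = false } in
  print_endline (fetch_feed_god cfg "http://example.com/feed");
  print_endline (fetch_feed ~network:cfg.network "http://example.com/feed")
```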


OCaml is a great language, but IIRC the creators didn't talk much to the community, about where they were going, etc. Contrast with Clojure, Golang, Python... So many people steer away and go bet on something else.


The conversation between the OCaml creators and the community is a lot more focussed. Back in 2005 when we were developing the Xen toolstack using OCaml [1], I subscribed Xensource to the Caml Consortium [2]. This let us meet with the main team once a year, and air our concerns in a structured way. This is so much easier to handle than a firehose of e-mails. Nowadays I still attend the annual Consortium meeting, but more in a capacity of reporting on our activities at OCaml Labs and there are still a healthy set of industrial users who like this mode of interaction.

[1] http://anil.recoil.org/papers/2010-icfp-xen.pdf [2] http://caml.inria.fr/consortium/


caml-light was a marvellous language for teaching computer science (I learned it in 1992). It was interpreted bytecode. Later, "Caml Special Light" improved its speed, and later still OCaml added objects. There is a good history at http://caml.inria.fr/about/history.en.html#idp1416640 .

The purpose of the language was research and teaching. I think it was never intended to be "batteries included". The fact that so many people use it for real work and that so many libraries have been developed has a simple explanation: the difficulty of finding a good language to get the job done. OCaml is a very good language.

I think it is easier to learn OCaml than Haskell. Scala is more recent than OCaml and is more "batteries included". I think that Scala will progressively replace OCaml and is a better fit for professional work.


But what you lose seems to be readability, going by this code sample:

let menu = GMenu.menu () in let explain = GMenu.menu_item ~packing:menu#add ~label:"Explain this decision" () in explain#connect#activate ~callback:(fun () -> show_explanation impl) |> ignore; menu#popup ~button:(B.button bev) ~time:(B.time bev);


If you combine many lines of code into one, you lose readability in any language ;).

Probably you would actually format this as follows:

    let menu = GMenu.menu () in
    let explain = GMenu.menu_item ~packing:menu#add ~label:"Explain this decision" () in
    explain#connect#activate ~callback:(fun () -> show_explanation impl) 
    |> ignore;
    menu#popup ~button:(B.button bev) ~time:(B.time bev);
Which is rather nicer.


I don't know how other similar languages deal with GUI. It's always sort of hairy, obscure code when you look at it closely. Mainstream languages (like Java) usually give you GUI building tools which generate all the boilerplate stuff, but if you look at that generated code, it doesn't look so appealing either imho.

If you know the GUI libraries inside out, then the code becomes much easier to grasp though. The lablgtk library (which provides gtk support) is quite well written, providing both a modular interface and objects, but indeed there's a learning curve.

Lastly, you might find it more palatable with a little bit of formatting:

  let menu = GMenu.menu () in 
  let explain = GMenu.menu_item ~packing:menu#add ~label:"Explain this decision" () in
  let callback () = show_explanation impl in
  explain#connect#activate ~callback |> ignore; 
  menu#popup ~button:(B.button bev) ~time:(B.time bev)


> I don't know how other similar languages deal with GUI. It's always sort of hairy, obscure code when you look at it closely.

The best languages for GUIs I've seen are specialized declarative DSLs. Things like QML, XAML, JavaFX FXML, XUL, and even HTML. Anything else very much sucks, in my experience.


And Qt pre-QML. The user interface is declaratively described in XML (which can be edited visually with Qt Designer). You then use a utility (uic) to generate e.g. a C++ class, which you can use through composition and inheritance. UI events are wired to code through Qt's signal/slot system.


Not knowing how to read a language you don't know isn't a great readability metric


Title made me think of this http://www.youtube.com/watch?v=B9MgWFU3JhM



