Why We Use OCaml (esper.com)
169 points by ALee on July 16, 2014 | hide | past | favorite | 137 comments



Hey HNers, I'm the CEO of Esper - there have been a lot of questions regarding the business case for OCaml, and I think it'd be helpful to respond here:

1) It's practical - our team had developed in it before and had deployed a large system in it, so we could get to where we wanted quickly.

2) We think it's a competitive advantage (see the OP and PG's "Beating the Averages" essay) - additionally, since we deal with large amounts of data, OCaml was particularly helpful in that regard too.

3) OCaml is a good filter - the thing that matters most to us is a person's elasticity of learning - their ability to learn new things. We use the right tool for the right job, so we also use JavaScript, Java, Objective-C, etc. OCaml is a pretty good filter and test for elasticity of learning.

Lastly, not a primary reason, but an advantage - languages get popular because the companies that use them are popular (i.e. gain market traction and can pay developers). That doesn't mean it's the most efficient way to get things done. We'd like to be part of the group that helps make the industry more efficient.


How do you feel about the fact that OCaml has no capacity for "real" multithreading? It seems that such capability will only become more important in the future.


Is it obvious that you need threads instead of CSP?


Because there are no libraries for concurrency...

Not built in does not mean not possible or available...


I'd love to hear about point (2), in particular how OCaml helped with dealing with large volumes of data.


There are many additional reasons that impact the maintenance of established code bases and code reuse. OCaml's module system is good in this regard.

In addition, there are many other companies that use functional languages, and my (subjective) experience has been that such orgs are able to achieve far more with smaller teams (cf. WhatsApp and Erlang). Facebook and Bloomberg are also users. I find it somewhat bizarre that people are making an issue of the business case for using OCaml in a startup. How many people did that before using Python/Ruby/etc back in the day?

Edit: and here's a clicky link to PG's essay http://www.paulgraham.com/avg.html


> languages get popular because the companies that use them are popular ... We'd like to be part of the group that helps make the industry more efficient.

It seems true that technology can only go mainstream if there are companies taking the risk to use it in production.


Oh. Ocaml again. Well, I'll chirp on the opposite side of the discussion. The syntax is completely <->:%^&#$^ $% %^&*% up. Unreadable. Yes, it may be somewhat pleasant to write code in such syntax, but readability sucks. Just like Perl. And it's as ugly as Perl, too. When will you people learn the lessons from C, Python and Haskell? That _readability_ is _the most important thing_ for any language. Now about design. Ocaml allows a mix of imperative+functional+oop! Which means that there are uncountably many ways to screw up the design, and only a few to get it right. And each OCaml primadonna developer thinks that his way is the right way. And the rest can't read his code. Fuck that.

To the CEO of Esper: Interesting. How many decades of experience do you have? How many projects have you successfully shipped? And how many of those used this approach of non-mainstream languages?


> Oh. Ocaml again.

Nothing forces you to read the posts you don't like.

> Well, I'll chirp on the opposite side of the discussion. The syntax is completely <->:%^&#$^ $% %^&*% up. Unreadable. Yes, it may be somewhat pleasant to write code in such syntax, but readability sucks.

Readability is mostly a matter of experience.

> Which means that there are uncountably many ways to screw up the design

And many ways to make it fit a given problem. It's again a matter of experience.

> And each OCaml primadonna developer thinks that his way is the right way. And the rest can't read his code. Fuck that.

Glad you give your opinion. Apparently you have an axe to grind against the OCaml community though. You could probably replace OCaml with any sufficiently expressive language and still be correct - assuming there are "primadonna"s in the OCaml community just as in any other. Or do you mean that this happens only with OCaml?


>> Readability is mostly a matter of experience.

"It is totaly bogus claim." A single example of Perl, Brainfuck or Ocaml can prove that you are wrong. "The only people that can't read their own Perl after 6 months are the people that don't really know Perl." "Replace "perl" with "APL" or "BrainFuck" (or any language with baroque syntax) and the above sentence is as (in)valid."


> OCaml includes many features that are not available in the more mainstream programming languages ... and we believe this gives us a competitive advantage. ... The purpose of this post is to explain the benefits of OCaml and compare it to other languages. We hope to convince readers, especially other developers, to consider adopting OCaml for their projects as well.

The authors did a great job explaining the benefits of OCaml and why I as a developer should think about using the language. I actually want to play around with it now in a side project.

As a start-up manager though, I think the business case is strongly against them and don't believe there is any "competitive advantage" from using OCaml... in fact, I think the opposite. Here's why:

* There is not a strong community behind OCaml. You can't take advantage of the numerous libraries and references created by the community like you can with Ruby, Python, Javascript, Java, or any of the other mainstream languages.

* OCaml is hard (and so are most functional programming languages). You will always need exceptionally talented developers to work on your code base and that will get expensive.

* OCaml is not popular. You will have to pay an additional premium for OCaml developers due to the lack of experienced talent.

* OCaml is a risk. If you're starting a consumer-oriented service, you need to prove there is a market for it. Will OCaml help you get to market faster? I don't know. Will Ruby-on-Rails? Yes. I'd pick RoR any day just to eliminate the risk of getting to market late.


What about the risk of hiring schlubs who can watch enough Railscasts to duct tape things together, but don't know anything about writing solidly-engineered, maintainable code?

The hiring pool is absolutely swimming with these guys because demand for Rails developers is so high right now. You are more likely to find a better OCaml developer for cheaper than you can find an equivalently skilled Ruby developer just by virtue of being a place offering professional work in OCaml. To attract a similar caliber of Ruby developer you need to offer some crazy perks like sponsored open-source work or some really interesting problems which 90% of early-stage startups don't actually have.

You might think it's foolish to worry about the quality of code when you are just trying to rush to market, but as soon as you get to market you need to start iterating, and that's where you find yourself immediately saddled with technical debt. Worse—and I say this as a professional Ruby developer for nearly a decade—Ruby provides very little in the way of guarantees that developers won't do really stupid things. If you don't have a solid test suite you are dead in the water for maintaining a large app because the interpreter by itself gives you nothing. All those awesome libraries which exist for Ruby but not OCaml? The half-life on those things is like 18 months in the Rails community, meaning you are in constant maintenance mode to keep up with the flavor of the month or else face maintaining an old stack yourself once the original creators abandon it and the community moves on.

Not that this doesn't come with benefits, but you're overselling them to justify picking the modern no-one-ever-got-fired-for-buying-IBM choice compared to the risk of using something less well-known but certainly with critical community mass and much better maintainability characteristics.


People say functional programming is hard, but it feels just like a knee-jerk reaction. In practice, it doesn't seem to be much of an issue. For example, IMVU gave an experience report at BayHac about switching from PHP to Haskell; they found that onboarding a new employee (without significant Haskell experience) took about as much time as it did for their particular flavor of PHP (conventions, frameworks... etc)[1]. They found Haskell became much easier to teach given a strong set of opinions on code style and library choice.

Jane Street has had a similar experience with OCaml. In fact, they even teach all of their new traders—largely non-programmers—OCaml in a few months! [2] Indeed, one of the main reasons they stuck with OCaml was that it made it easier for their domain experts to review and read code.

The same happens for hiring: it's not nearly as difficult as people seem to think. Sure, the supply is relatively low in absolute terms—but demand is even lower, proportionally! If anything, it's probably easier to hire quality developers in a language like OCaml because it serves both as a filter (applicants really self-select) as well as an additional perk to people interested in the language.

[1]: https://www.youtube.com/watch?v=gl3expkos4Q#t=483 [2]: https://www.janestreet.com/ocaml-bootcamp/


I've always been curious about Jane Street. It seems like a too-good-to-be-true story. Picking an obscure but powerful alternative to C++ and Java for a highly-competent core team makes sense, and has been done by several banks. But hacking OCaml does not fit the personality profile of (m)any traders I've met — most will happily whip up a spreadsheet to help their work, but writing extensive software and learning Hindley-Milner type systems falls outside their ordinary needs or interests.


> learning Hindley-Milner type systems

Well, HM is about type inference and they can just assume it works correctly. And I don't think they are expected to write extensive, complex programs in the language, but rather to put together pieces from their "vast trove of proprietary libraries", which are most likely designed precisely to facilitate use of the language by users who are not (primarily) software developers.


I'm not sure you can realistically wire together pieces of functionality written in a statically typed language without understanding its underlying type system. Sure, by the time it compiles without errors the code probably needs less debugging than in a dynamically-typed language, but getting to that point can be difficult and frustrating. (Speaking from experience as a TA.)


You don't have to write extensive software or have a deep interest in type systems to use ML languages. You just have to know enough of the syntax to understand the API and write some functions to work with provided DSLs.


> There is not a strong community behind OCaml. You can't take advantage of the numerous libraries and references created by the community like you can with Ruby, Python, Javascript, Java, or any of the other mainstream languages.

There certainly seems to be a strong community behind OCaml. There may be languages with stronger communities, but that doesn't particularly distinguish OCaml from the other languages you list, especially within individual domains.

> OCaml is hard (and so are most functional programming languages).

Functional programming languages may be initially hard, compared to unfamiliar imperative languages, for people who have deep imperative programming experience. I don't see any reason to believe they are objectively hard, and plenty of people learn functional programming (even if not in a pure functional language) early on in their programming education and are familiar enough with it that functional programming languages aren't categorically difficult for them.

> OCaml is not popular. You will have to pay an additional premium for OCaml developers due to the lack of experienced talent.

This assumes that "not popular" means "low current supply" but not also "low current demand". If the benefits are underrecognized among firms, then the current supply of OCaml developers could well be underpriced compared to value.

> OCaml is a risk. If you're starting a consumer-oriented service, you need to prove there is a market for it. Will OCaml help you get to market faster?

Faster than what?

> I don't know. Will Ruby-on-Rails? Yes.

In the Ruby-on-Rails case, do you really know, or are you just following accepted wisdom? And what is "faster" being compared against in the first place?

> I'd pick RoR any day just to eliminate the risk of getting to market late.

If RoR really could eliminate that risk, and had no other costs for doing it, that would be a no brainer. However, I don't see the basis for concluding that this is genuinely and universally the case.


While your mileage will vary depending on the nature of your startup, I can point you to a paper we wrote summarising our experiences using OCaml to build the XenServer toolstack in a startup environment [1].

You should bear in mind that OCaml represents almost twenty years of continuous development (see the history chapter in Real World OCaml [2]), with a community steeped in some of the most cutting edge technology in modern programming languages (e.g. the Coq theorem prover or the CompCert certified C compiler).

I was the startup manager at XenSource that took the risk, and none of your points held true for us:

- by joining the Caml Consortium (very cheap), we got access to the core developers and a yearly face-to-face. We're talking about Xavier Leroy and Damien Doligez here. Do you get that with Java?

- the compiler and runtime are utterly rock solid and engineered for a modern Unix environment. We found more serious gcc bugs (5+) than OCaml bugs (one, to do with compiling functions with >32 arguments, which was already fixed in trunk with a backport available inside a day).

- Hiring OCaml developers gave us the cream of the crop even for entry level jobs, and I work with several of the original team that we assembled to this day. See the paper [1] for details on several of the responses from within the company and how we responded.

- The OCaml community is very pragmatic, since the language is used in quite a few large codebases like Coq, CompCert, Why3, XenServer, Pfff (Facebook) as well as closed-source codebases such as Jane Street and Lexifi's. Most language evolution discussions are evaluated against these large users. I consider OCaml rather unique in that, despite being a small community, the ones that do use it do so seriously and often at scale.

I find it amusing that so-called "risk-averse" startup managers discount a 20-year old language with legendary stability in favour of relatively new languages. Nothing beats having developers that deeply understand their language and libraries, and that's only imparted with age and stability.

I would also note that being based in Cambridge, it's quite easy to find grads that know ML (and the same is true of many US universities that teach OCaml, like Cornell, Harvard, Princeton and Yale).

[1] http://anil.recoil.org/papers/2010-icfp-xen.pdf

[2] https://realworldocaml.org/v1/en/html/prologue.html#a-brief-...


> I find it amusing that so-called "risk-averse" startup managers discount a 20-year old language with legendary stability in favour of relatively new languages.

While I mostly agree with your post, the four languages mentioned as opposed to OCaml in the grandparent post (Ruby, Python, Javascript, and Java) are all older than OCaml (which is 18 years old) -- Java, JavaScript, and Ruby are all 19 years old, and Python is 23 -- so they aren't "relatively new languages" compared to OCaml.


Yeah, although I guess it depends if you count Caml or not (1987). The intellectual history of the language can easily be traced back to 70s via the various LCF implementations, although the module system evolved significantly since then.

Either way, you're correct that all of these languages bear the proud scars of being tested for decades...


20 years for language X do not bring the same amount of benefits as 20 years for language Y, as can be clearly seen if we compare Javascript and OCaml.


Not sure which language looks better here... I'd say that even with its greater popularity, JavaScript is worse in pretty much every aspect than OCaml.


Yes, I meant OCaml looks better, in light of all the javascript traps.


> by joining the Caml Consortium (very cheap), we got access to the core developers and a yearly face-to-face. We're talking about Xavier Leroy and Damien Doligez here. Do you get that with Java?

What do you find most useful about talking to core OCaml devs? If you were developing in Java, do you think you would see a similar benefit from talking to core Java devs?

How do you find the OCaml standard library compares to standard libraries from other languages? Is it expansive? Do you find yourself reaching for 3rd party libraries frequently (or writing your own)?


> What do you find most useful about talking to core OCaml devs? If you were developing in Java, do you think you would see a similar benefit from talking to core Java devs?

OCaml (and Java, and Python, and any other "old" language) evolves in small steps to avoid breaking existing code. A forum like this allows language developers making those design decisions to check them with big users, as well as to gather the qualitative feedback that only comes from trying to use a feature in a production system.

For example, OCaml's recent move in 4.02 towards immutable strings has generated a lot of debate [1] about the right tradeoffs to make with respect to backwards compatibility, and a number of the module system improvements such as module aliases have been driven by big libraries such as Core (to reduce compilation time and binary sizes).
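
For readers who haven't followed that change, here is a rough sketch (my own, not taken from the debate) of what the new split looks like, written for a modern compiler where strings are immutable by default:

  (* "hello".[0] <- 'H'  -- mutating a string is now a type error *)

  (* Mutation goes through the separate bytes type instead. *)
  let shout s =
    if s = "" then s
    else begin
      let b = Bytes.of_string s in
      Bytes.set b 0 (Char.uppercase_ascii (Bytes.get b 0));
      Bytes.to_string b
    end

  let () = print_endline (shout "hello")   (* Hello *)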

> How do you find the OCaml standard library compares to standard libraries from other languages? Is it expansive?

I consider the OCaml "standard library" to actually mean the compiler standard library, since it really exists for the core toolchain to use. There are several alternatives that are one "open" statement away -- I've co-written a book about the Core library (see https://realworldocaml.org) for instance, which is extremely expansive. The OPAM package manager makes it trivial to use third-party packages now (http://opam.ocaml.org), so the distinction between standard library or not is pretty moot now.
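
To give a feel for "one open statement away" (a minimal sketch; "open Core.Std" was the idiom of that era, newer Core releases use "open Core" instead):

  (* after: opam install core *)
  open Core.Std

  let () =
    [ 1; 2; 3 ]
    |> List.map ~f:(fun x -> x * x)        (* Core's labelled-argument List *)
    |> List.iter ~f:(fun x -> Printf.printf "%d\n" x)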

> Do you find yourself reaching for 3rd party libraries frequently (or writing your own)?

I've written my own operating system in OCaml, so I'm possibly the wrong person to ask about that...

[1] http://blog.camlcity.org/blog/bytes1.html


Java has the JCP.


Fully agree, doing what everyone else is doing is not managing risk, it's just taking on the same risk as everyone else.


But there is a strong community behind OCaml. A lot of undergraduate programs use a dialect of ML for their core classes.

While most OCaml implementations have poor standard libraries, Jane Street Capital has released several open source libraries that make it easy to implement high quality, performant applications.


> But there is a strong community behind OCaml. A lot of undergraduate programs use a dialect of ML for their core classes

I don't understand your response. Are you saying the OCaml community primarily consists of undergraduates?

> While most OCaml implementations have poor standard libraries

That right there should tell you something.


>I don't understand your response. Are you saying the OCaml community primarily consists of undergraduates?

What I am saying is that a sizable portion of academia uses a variant of ML. Maybe it hasn't caught on in industry, but that doesn't mean that there isn't a community that uses it.

> That right there should tell you something.

A lot of development of core tools for mainstream languages comes from the companies that use them. Clang and LLVM, for example, received support from Intel, Google and Apple just to name a few.

Jane Street has reimplemented much of the OCaml standard library for their own purposes. Facebook has already started implementing tools in Haskell. Once functional languages gain more traction, I suspect that the tools will improve substantially.


Not really disagreeing with you, but:

> What I am saying is that a sizable portion of academia uses a variant of ML. Maybe it hasn't caught on in industry, but that doesn't mean that there isn't a community that uses it.

Academic communities tend to be scattered when it comes to software, and the primary focus is on publishing papers rather than producing software that is useful to others. Also, I know a number of skilled academics that use functional languages, but they tend not to have an interest in moving into industry, preferring to stay at university. So I wouldn't depend on them as a source for recruiting.

Of course there are exceptions, e.g. OCaml Labs seems to be keen on producing well-written, usable software that can be used in industrial settings. The development of Mirage is a pretty exciting area.


Yeah. The movers and shakers in software are always the big companies. But there are quite a few institutions that feature functional programming in their undergraduate curriculum.

One of the main gripes that people mention with functional languages is that it can be difficult to find developers. Several good universities are pouring out competent programmers.


>> While most OCaml implementations have poor standard libraries

> That right there should tell you something.

I wouldn't use that as the main criterion to judge a language. C has a poor standard library (for a definition of "poor" relative to, say, Python's) and so does C++. But they are certainly useful tools in their domains.

In any case, OCaml has a couple of standard library replacements/augmentations that reduce the gap a bit. (Batteries Included [1] and Jane Street's Core [2])

[1] https://github.com/ocaml-batteries-team/batteries-included/

[2] https://github.com/janestreet/core and https://github.com/janestreet/core_kernel


I think it comes down to whether your company has strong enough technical leadership to retain an OCaml engineering team. If there is dissonance between how engineering thinks about the business and how the leadership thinks about the business, you'll not reap the benefits of OCaml and the team will leave. In practice I think this means the founding team has to include an OCaml expert for OCaml to work (or any other non-mainstream tech choice). You can't just hire one and hope for the best.


You speak of "the business case" as if there is only one kind of software business in the world, one that focuses on getting to market quickly with inexpensive developers. Reasons like "OCaml is hard" are short-sighted unless you are only planning for the short term. You wouldn't have to hire for OCaml, just for functional programmers.


That's stupid - all software businesses are (or at least should be) focusing on getting to market quickly, and there's no justification for the view that "a swarm of middling developers" is commonplace.


Parent was worried about availability of OCaml developers, and would "pick RoR any day just to eliminate the risk of getting to market late." Never mind that RoR is a web framework, this kind of thinking, that some technology is a silver bullet, or that another is unusable because it is less popular, or that six so-so developers are worth as much as two or three good ones because they're easier to find, is a real problem. It's immature to say out-of-hand that some language, for any application, is "too risky" when it has a good industry track record.


I was trying to write an eloquent reply but I remembered that PG did a better job than I could.

http://www.paulgraham.com/avg.html


and especially this:

"when you're writing software that only has to run on your own servers, you can use any language you want"


Having substantial experience running a startup on Scala (Not OCaml, but many of the same aspects apply) I'm going to throw my 2c in.

> There is not a strong community behind OCaml. You can't take advantage of the numerous libraries and references created by the community like you can with Ruby, Python, Javascript, Java, or any of the other mainstream languages.

You'd be surprised how good the community is around FP languages.

> OCaml is hard (and so are most functional programming languages)

I'm going to strongly disagree on this one. FP is different to what you're used to. Once you've learned it I actually think FP is substantially easier, especially when maintaining large codebases. The amount of cognitive effort required to write code in an FP style is substantially lower.

> You will always need exceptionally talented developers to work on your code base and that will get expensive

I work with Scala, not OCaml personally, but I've been able to train Junior Java devs up on Scala (In a pure-fp style) with minimal effort. In addition, I find there's a very large talent pool of excellent developers wanting to work with FP in a commercial environment, and are willing to work for less for the opportunity.

Hiring has actually been substantially easier since I switched to Scala, and I'm getting better developers for less.

> OCaml is a risk. If you're starting a consumer-oriented service, you need to prove there is a market for it. Will OCaml help you get to market faster? I don't know. Will Ruby-on-Rails? Yes. I'd pick RoR any day just to eliminate the risk of getting to market late.

I'm sure people were saying the same thing about PHP vs RoR back in the day.

Any new technology is a risk if you're new to it.

For those that have taken the 'risk' of using an FP language, the payoff has been worth it. From my experience with FP Scala, the go-to-market time has been substantially quicker than with any other platform I'd used previously. The defect rate has been 90%(!) lower, productivity is higher, the codebase is easier to work on, and it's easier to find top-tier developers.

Mind you, I can understand not wanting to bet the farm on it from day one - this is why Scala was an easier choice for us, because we could fall back to Java if we had problems (we didn't).


I think the issue is expressive power: since FP languages have more power, people find them daunting, like calculus. However, also like calculus, once you understand it, many formerly complex problems become quite simple.


The fact that OCaml is not an easy language might be a good way to find great developers.


It is. For the most part, the developers who use an esoteric language and understand its benefits are usually of higher skill than your average developer, and those who are excited enough about it to want to work in it are usually even better still. It's the whole "Beating the Averages" thing that PG talks about. There are risks, but they can be managed, and the benefits are great if you know what you're doing!


I often hear that using less mainstream and more difficult to learn languages (as in, requiring some mind warping if you're coming from a more mainstream language) acts as a filter that leaves more capable programmers to choose from, even if there are fewer of them.

Does anyone know if there have been studies to back this up? Or studies that back up the above comment.

My experience is that programmers who enjoy warping their minds with something different tend to be more capable, or at the very least, up for a challenge. Would be nice to see something else than anecdotes.


I would agree that in my personal experience, those people do tend to be quite intelligent. However, I have not found them to be more productive (and in some cases, they seem to be less productive, because they spend so much time fiddling and tweaking instead of just finishing things).


This is similar to the PhD filter. You have to find folks who have the right balance between theory and practice.


>There is not a strong community behind OCaml. You can't take advantage of the numerous libraries and references created by the community like you can with Ruby, Python, Javascript, Java, or any of the other mainstream languages.

For a project of moderate-to-high complexity, the advantage I gain as a developer by using a more powerful, if more esoteric language, hugely outweighs the time I spend building tooling/libraries etc..

RoR et al might win out for low-complexity projects, but as the project grows the power of the language quickly eclipses the advantages of pre-existing libraries and easy-to-find "talent".

>You will always need exceptionally talented developers to work on your code base

You should strive to hire these people anyway!

This is another non-issue, if you're hiring The Right Way: hiring smart people. I just started a job writing Java having never written Java before in my life (coming from a background in Perl, JS & Erlang), and in a previous position picked up Erlang on-the-job. It's pretty much a non-issue if you hire talented engineers.

Basho's experience with Erlang illustrates these points well: http://basho.com/erlang-at-basho-five-years-later/

> Will OCaml help you get to market faster? I don't know.

They know; this article is basically them explaining that they feel OCaml gives them a competitive advantage. PG said the same thing about Viaweb using Lisp vs. Perl/C+CGI.


From the "About" page: "We want to see a world where everyone gets the mental luxury of an assistant, including the assistant. We are making this future happen. [...] Oh, did we mention every team member gets an assistant?"

So ... there's some kind of cycle of assistants, where X is my assistant, Y is X's assistant, and I'm Y's assistant? Esper currently employs infinity people? Assistants are employees but not, you know, team members?


The assistants mentioned in the About page are electronic, not real persons, and I suppose that it's the product Esper is building. I'm not affiliated with them in any way but I like the concept.


Good read. I really liked how the author concretely defined the things they use in OCaml, instead of just saying "functional programming" and waving their hands.


"Algebraic data types!" [waves hands vigorously]. Seriously though, I agree. It's a very solid run-down of all the main ways OCaml is cool. I'll probably link people here if they ask my about why they should bother with OCaml.


Interesting list. I'm curious if the author has looked at Rust and how they think it stacks up. Rust is obviously still pre-1.0, and it doesn't have an identical feature list, but it seems to me to perhaps be a lot more practical for a lot of work than OCaml (largely because Rust can basically be used anywhere C++ can be, and it has good support for C FFI).


I think that, having used both of them, Rust is going to feel a lot lower level than OCaml. The single biggest thing is that it's pretty hard (slash near impossible) to write Rust code without thinking about memory allocation, which adds non-trivial mental overhead to the work. That isn't to put down Rust - I think they are the first language that actually has a static, type-checkable story about memory allocation, and that's phenomenal, _but_, the reason for this is to be able to write soft-realtime code (for example, browser engines, games, etc). Web code, at least in the early stages, probably usually has lower performance requirements, and most people would probably trade some performance (and I don't mean to say that OCaml is slow, any more than Go is slow, etc) for not having to even think about that stuff.


Eh, I don't see that as a downside. And actually, I'm not sure that's even true. It's already apparent that a lot of people are using a lot of completely unnecessary allocation in Rust, precisely because they aren't thinking about what they're doing (and so are doing things like using heap-allocated String objects when they really just need &str slices, or using Box unnecessarily; this was one of the reason cited for moving away from the ~ sigil, as it was considered to "hide" allocation too much).

And speaking personally, I almost never have to consciously think about allocation, unless I'm doing really performance-sensitive work. The straightforward approach is usually correct.


Why do you think Rust is applicable, here? It isn't obvious to me that these people would benefit from a language that can be used anywhere that C++ can be.


Given that C++ is in fact used pretty much everywhere, I'm not sure what you mean by that.

And I think Rust is applicable because it hits all of the big features the author is touting as good in OCaml, such as first-class functions, immutable values, strong static type checking and type inference, ADTs and pattern matching.

It also has benefits that OCaml doesn't, such as full memory safety, no required garbage collector, and good C FFI (I'm no OCaml programmer but glancing at the beginning of the Real World OCaml's chapter on FFI, it appears OCaml uses dlsym() to look up functions at runtime, whereas Rust can outright link to them like any C program would). I'm sure there are others too, but I'm not familiar enough with OCaml to list them. I'm also not sure what the performance of OCaml is like (a glance at the Computer Language Benchmarks Game suggests it's often slower than C++), but Rust aims to have C++-equivalent performance.


> I'm no OCaml programmer but glancing at the beginning of the Real World OCaml's chapter on FFI, it appears OCaml uses dlsym() to look up functions at runtime, whereas Rust can outright link to them like any C program would

This is not a limitation of OCaml but a deliberate choice of the authors of Real World OCaml to use the ctypes library. The OCaml implementation also supports directly linking with C code. What you need:

  * Your ordinary C code
  * Some C stubs that do the conversion between regular C types and the C types the OCaml runtime uses. Usually they are quite trivial, just use the few macros that OCaml ships with.
  * Some type signatures to tell the type system what types your C stubs expect as that can't be inferred.
Generally I think the system is pretty easy to understand and use. Overall it is pretty neat to have two alternatives on how to do FFI, so you can pick the type of FFI (dynamic/static) exactly as your project requires.
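
To make the static route concrete, here is a hedged sketch with invented names - the OCaml side is just an external declaration, and the referenced stub lives in a C file that gets compiled and linked alongside it:

  (* OCaml side: give the C stub an OCaml type signature. *)
  external gcd : int -> int -> int = "caml_my_gcd"

  (* The matching stub, in a separate .c file, would look roughly like:

       #include <caml/mlvalues.h>

       CAMLprim value caml_my_gcd(value va, value vb) {
         long a = Long_val(va), b = Long_val(vb);
         while (b) { long t = a % b; a = b; b = t; }
         return Val_long(a);
       }
  *)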


You have to write C stubs? I'm glad it's possible, but that's still quite unfortunate. On Rust's side, you usually have to write wrappers around your C FFI functions in order to do any type translation and to add any necessary safety, but those are at least written in Rust (and are not actually necessary to call C, just necessary to provide a safe idiomatically-correct Rust API; you could just vend the C FFI functions directly if you wanted to).


> Given that C++ is in fact used pretty much everywhere, I'm not sure what you mean by that.

1. It isn't a given, from what I've read, that their product would need a language that could be realistically used everywhere. A lot of applications can get away with technology that is more "limited" in that sense. And being "limited" can be a plus.

2. Even if a language is used everywhere, that doesn't mean that it is/was an appropriate choice in all those cases. Look at people implementing stuff in Python, then reimplementing it in some static language later, giving a performance boost and fewer bugs, and being almost/just as productive (though having the experience of doing it for the second time probably helps). Turns out that they didn't really need the dynamicity of Python after all. In C++'s case, maybe someone started developing an app and found out that they didn't really need the performance that a no-cost abstraction language is able to give, and so wouldn't need to pay the cost of dealing with the complexity of C++.

> And I think Rust is applicable because it hits all of the big features the author is touting as good in OCaml, such as first-class functions, immutable values, strong static type checking and type inference, ADTs and pattern matching.

And you also have the added features of smart pointers, managing pointers, heap/stack allocation, explicit use of views vs allocated memory (string allocated on heap vs string slice, for example), etc. These are all features, or burdens, depending on your application area. But why would they be features, in this context?

> It also has benefits that OCaml doesn't, such as full memory safety,

Doesn't OCaml have full memory safety?

> no required garbage collector,

Why is a garbage collector problematic, in this context?

> , and good C FFI (I'm no OCaml programmer but glancing at the beginning of the Real World OCaml's chapter on FFI, it appears OCaml uses dlsym() to look up functions at runtime, whereas Rust can outright link to them like any C program would). I'm sure there are others too, but I'm not familiar enough with OCaml to list them. I'm also not sure what the performance of OCaml is like (a glance at the Computer Language Benchmarks Game suggests it's often slower than C++), but Rust aims to have C++-equivalent performance.

It boils down to fine-grained control over performance, I guess. But, again, I don't see how that is a plus in this context. It can also be a burden, hence all the languages that deliberately do not let you have explicit control over memory - it makes for less stuff in the language, hence less complex language overall, or perhaps more room for other stuff that might be relevant to the application of the language.

I don't get the apparent attitude of "it has all the features of OCaml, plus all this other stuff". More stuff is not necessarily good, and can be a burden if you don't really need it.


> Doesn't OCaml have full memory safety?

I have no idea. Note here that "full memory safety" in Rust includes data shared between multiple threads, and includes protection against data races. Garbage collectors help avoid referencing free'd data, but if OCaml lets you share mutable data between two threads, then I doubt it's fully memory-safe (at the very least that suggests you can get data races).

> Why is a garbage collector problematic, in this context?

What, in the context of server-side software? I don't know if it necessarily is, but it's definitely problematic in other contexts. And I do know that there have been issues with other languages causing unpredictable performance on servers because of garbage collection, e.g. ending up with a big GC pause after some N requests. I hope OCaml doesn't have that problem, but I don't know.

> It boils down to fine-grained control over performance, I guess. But, again, I don't see how that is a plus in this context.

I'm not sure what specifically you're referring to in that second sentence. Just the general ability to have better control over performance? Having that ability typically is a plus even in the context of server-side software because it means you don't need to change languages just to write the performance-critical aspects of your software. Of course, using the same language is only a good idea if the language is also a good choice for the parts of the software that aren't performance-critical. My claim is that Rust is indeed suitable for the rest of the program too.


For a more in-depth list of reasons to use OCaml, I recommend “OCaml: What You Gain” at http://roscidus.com/blog/blog/2014/02/13/ocaml-what-you-gain....

It is part of the series of posts linked in “Python to OCaml: Retrospective” at http://roscidus.com/blog/blog/2014/06/06/python-to-ocaml-ret... (HN story: https://news.ycombinator.com/item?id=7858276). I learned a lot about OCaml from that series of posts.

For more comparisons between OCaml and other languages, see the first two posts in the series. In the first post, http://roscidus.com/blog/blog/2013/06/09/choosing-a-python-r..., the author compares OCaml to many languages, including Python, Go, and Haskell. The second post http://roscidus.com/blog/blog/2013/06/20/replacing-python-ro... summarizes his conclusions about the languages – he decided that either Haskell or OCaml would meet his needs best. (He chose OCaml after that, obviously.)


A bit surprised F# was not even mentioned. I guess they are hardcore meta-programming users?


F# is an ML without any of the things which make ML good (modularity). It's a breath of fresh air if you're on a Microsoft platform, but if you can use OCaml, it is superior.


Hey Jon, being an F# user who is not very familiar with OCAML, I'd love to know a little more about what specifically F# lacks that makes OCAML superior. I want to know what I'm missing out on.


Where Ocaml is better:

- Single Core Speed (Ocaml is fast)

- native compilation without requiring some installed runtime

- Polymorphic Variants (Last I read, to be used only when regular variants are not sufficient)

- Modules and Functors (there's a proof of principle for F# supporting these)

- GADTs (allow for richer more expressive types, much more flexibility than regular algebraic data types)

- camlp4

- more pervasive structural typing, higher kinded types...type system is not weighted down by a foreign runtime

Where F# is better:

- better support for not-sequential programming in all its forms: actors with lightweight threads, parallel, async, gpu (many production ready choices), reducers (and Go style channels if they accept joinads)

- Active Patterns are a dark horse

- Type Providers are curious. They seem like dumbed down metaprogramming at first but it's one of those cases where constraints benefit creativity. Although you could certainly do what they provide (and more easily at times) with metaprogramming, I've never seen metaprogramming used that way before. And especially with the proliferation of APIs, stuff like json inference makes going to a language without them like going from 3 monitors to one.

- Units of Measure

- More libraries and Better cross platform support via Xamarin and unity3d

- #light. F# syntax is a tiny bit cleaner and surprisingly close to Python at times.

- Computation expressions/do notation are not quite monads and can be more flexible. Tomas Petricek argues the case here: http://tomasp.net/blog/2013/computation-zoo-padl/

Why the above do not matter: The MLs tend to be more pragmatically focused than other functional languages and espouse using as little fancy code as possible. The core of both languages are the same, so much of the time and ignoring library choices, you won't be seeing many differences between F# and OCaml. It's more like Portuguese vs Spanish than English vs German.


I'm surprised you mention camlp4 as an advantage over F#. It is being removed from the official distribution due to the problems that it causes[1], to be replaced with extension points[2].

We use camlp4 a bit at Red Lizard Software and we are eagerly looking to move to extension points as soon as they are released.

[1]: https://blogs.janestreet.com/ocaml-4-02-everything-else/ [2]: https://blogs.janestreet.com/extension-points-or-how-ocaml-i...


Thanks for the great comparison! Just want to note that OCaml's GADT stuff is really great, but you can even do a final encoding of GADTs if all you have is signatures and structures (as in SML). I'm not familiar enough with F# to say—is this also possible in F#?


I've never used F#, but I found an SO question [0] which provided the following:

- Functors (https://realworldocaml.org/v1/en/html/functors.html)

- OCaml-style objects (https://realworldocaml.org/v1/en/html/objects.html)

- Polymorphic variants (https://realworldocaml.org/v1/en/html/variants.html#polymorp...)

- The camlp4 preprocessor (https://realworldocaml.org/v1/en/html/the-compiler-frontend-...)

- Stronger guarantees from type system. F# allows null values, it seems, while you would need to use an Option type in OCaml.

[0] http://stackoverflow.com/questions/179492/f-changes-to-ocaml


Thanks, this was super helpful. I'll have to spend the time going through functors this weekend to try to grok them. But also F# doesn't allow null values, it uses the Option type as well.


F# most definitely allows null values. They might not be encouraged, you can't always assign a literal null to a type, but null is very much a first class concept in F#.

You're far less likely to run into them in F# code compared to C#, though.


Jon mentioned modularity, so what I imagine he's talking about is functors. They have a scary name (and no relation to Haskell functors), so you might rather call them parametric signatures; they allow one signature to depend upon a previously defined one. Ultimately that means that you can decompose signatures into constituent, reusable parts, which is nifty-sounding but transformative in how you express APIs.


Functors are not parameterized signatures: they do not allow a signature to depend upon another signature, but rather a structure to depend on another structure. (However, SML/NJ has an extension called `funsig` which does what you have described).


Oomph. That's what I get for talking about OCaml after not using it for a really long time, s/signature/struct/.


How is it that they have no relation to Haskell functors?

> Functors are, roughly speaking, functions from modules to modules (https://realworldocaml.org/v1/en/html/functors.html)

Functors in haskell have functions (fmap) that take a value (a function) from one category into another.

Are they not at least the same Functor as in Category Theory?


Well, they're related at that level. Many, many things are category theoretic functors, though. It's a very general idea of a structure-preserving map between structures.


Functors (also known as parameterized modules). See: http://ocaml.org/learn/tutorials/modules.html
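
A minimal sketch, with made-up names, of what that looks like:

  (* The functor's argument is described by a signature. *)
  module type ORDERED = sig
    type t
    val compare : t -> t -> int
  end

  (* A functor: a module parameterised by another module. *)
  module MakeMax (O : ORDERED) = struct
    let max a b = if O.compare a b >= 0 then a else b
  end

  (* Applying it yields an ordinary module. *)
  module IntMax = MakeMax (struct
    type t = int
    let compare = compare
  end)

  let () = Printf.printf "%d\n" (IntMax.max 3 7)   (* 7 *)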


Hey James, I know that most .NET/Windows programming has ALL-CAPS names but you should still write "OCaml" with small letters a-m-l.


Apart from the reasons already mentioned, there's also the OS dividing line (Mono is still treated like a redheaded stepchild on Unix) and possibly memories of some incessant trolling/spamming a few years ago.

I'm actually more surprised that SML seems to be completely restricted to academia nowadays. Standardized language, several decent compilers available, used in introductory books...


Or maybe the support for OCaml on their platform of choice (Linux, Java via OCamlJava, don't know) is better than of F#. Or maybe they started off from an existing (older) codebase that was started before F# existed.

Many possible reasons.


Why would they? They don't mention other close cousins like Standard ML, second cousins like Haskell, or distant relatives like Scala either. This is targeted squarely at the 1% of languages that control 60% of mindshare wealth.


What are some advantages of OCaml over Haskell?


Haskell uses lazy evaluation by default, which makes it hard to reason about the space usage (or termination) of a particular program. OCaml on the other hand is not lazy by default (but supports it if you need it). At least that is one of the reasons why I chose to learn more OCaml than Haskell.
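
For reference, the opt-in laziness looks like this (a tiny sketch):

  (* lazy builds a suspension; nothing runs until it is forced. *)
  let answer = lazy (print_endline "computing..."; 6 * 7)

  let () =
    print_endline "before force";
    Printf.printf "%d\n" (Lazy.force answer);   (* runs the thunk *)
    Printf.printf "%d\n" (Lazy.force answer)    (* memoised, not re-run *)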


Why do you think termination is easier to reason about in eager languages? I think it's quite the opposite: in lazy languages functions compose, in strict ones they don't necessarily. For example, you cannot compose a "take ten values" and "square all elements" function in a strict language if the argument you apply their composition to has infinite length (e.g. is cyclic).


With a lazy language it's not always obvious if a certain expression needs to be evaluated now or not. In particular I was writing some list comprehensions in Haskell, trying to solve some of the Project Euler problems, and when I introduced a bug I got an infinite loop. When I fixed the bug I got the correct answer thanks to lazy evaluation, but the buggy code and the correct code looked awfully similar. Maybe it is just my inexperience with lazy languages that caused trouble (I came from C), and learning two radically new concepts: functional programming and lazy evaluation was too steep of a learning curve. Or perhaps list comprehensions aren't really supposed to be (ab)used like that. Unfortunately I don't have that code anymore; it would've made it more clear what I'm referring to...


Maybe it is because you didn't show an example, but that doesn't seem like something caused by lazy evaluation. It is pretty easy to get into infinite loops due to logic errors in Project Euler type problems even when using imperative loops.


Because I can walk through a strict program in my head (or on paper), and see every value, what is being calculated, and what is being stored. With a lazy language, I can't necessarily do that. The entire memory of my program can quickly fill up with thunks. The optimizer determines when things actually get calculated, and I need a much deeper understanding of my code to figure out the space constraints.

Just writing code? I guess I can agree that lazy evaluation makes it easier, but performance still needs to be considered.


The eager vs. lazy distinction is not relevant here. There are some programs that under eager application semantics will not terminate, but under lazy application semantics they will, and vice versa. What complicates reasoning about termination is the presence of mutation. In a lazy language, you'll never have mutation so you don't have to worry about those complications. You could have an eager language with no mutation (e.g., Elm[0]), but for the most part eager languages include some form of mutation and so you may have a harder time proving termination.

[0]: http://elm-lang.org


I don't think that's right. There is a basic theorem in (untyped) lambda calculus that says that if a term has a normal form, then any evaluation strategy, including strict and non-strict, will reduce that term to that normal form. Since non-strict evaluation can "skip" arguments that may have no normal form, there are expressions in non-strict languages that terminate while their strict counterparts don't. The opposite scenario does not exist: if a term has to be evaluated, it has to be done in both the strict and non-strict versions. (And in simply typed lambda calculus, all reduction sequences terminate.)


For (actual) programming languages without mutation and with more than just lambdas and application for control flow, it is right:

    f x y = y
    f (raise Done) Ω
Under lazy application semantics this program will never terminate. Under eager application semantics it will.


It doesn't necessarily have to be that way.

You can have both composable functions and non-wasteful semantics without turning the whole language into a lazy mess.

Note that this approach will also allow you to abstract over different data sources more easily.
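
As an illustration of that middle ground (a sketch using the stdlib Seq module, which needs OCaml 4.14+ and so postdates this thread), the "take ten, square all" composition from earlier works fine in a strict language when only the sequence itself is on-demand:

  (* An infinite, on-demand sequence of naturals. *)
  let nats = Seq.unfold (fun n -> Some (n, n + 1)) 0

  let () =
    nats
    |> Seq.map (fun n -> n * n)    (* nothing computed yet *)
    |> Seq.take 10                 (* still lazy *)
    |> List.of_seq                 (* forces exactly ten elements *)
    |> List.iter (fun n -> Printf.printf "%d " n)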


I've used Haskell to build JavaScript tools and analyses[0], among other things[1], and have been using OCaml for the past nine months to develop a software-defined networking controller called frenetic[2]. Off the top of my head, here are a few areas where OCaml has an edge on Haskell:

1. The module system. OCaml's module system is a language in and of itself. It not only allows you to define modules that export certain identifiers and types, it also allows you to write functors, which are essentially functions in the module language that can take a module as an argument and produce a new module as a result. This is a great way of reusing code in a project as well as defining external APIs in a general but natural way. OCaml's module system is the closest I've seen to realizing the dream of building software by taking some modules from here or there and composing them together.

2. Mutation. Unlike Haskell, OCaml allows mutation. Specifically, OCaml allows value mutation in certain contexts, while both Haskell and OCaml do not allow variable mutation. What this means is that in both languages, if you have an identifier, you cannot change the value that the identifier points to; you can only shadow the identifier with a new binding. But in OCaml, you can declare certain fields of your types to be mutable. You can also wrap values in a "box" which allows you to get the same feel that you would out of variable mutation. While in general it's a good idea to limit your use of mutation, sometimes you know it's ok and you just want to do it. OCaml lets you do that, but it requires you to be explicit about it rather than just letting you do it willy-nilly.

3. gdb-able. If you know how to use gdb (and even valgrind I believe), you can use it to debug your OCaml programs. If you try and use these tools with Haskell, you will get nothing but nonsense until you learn how to read the matrix. This fact by itself for some people will make OCaml a candidate for systems programming over Haskell.

4. Subtyping. Certain features of OCaml's type system allow you to do subtyping, complete with type variable annotations to indicate covariance or contravariance. This is a feature that Haskell's type system does not have, so in a sense this is a strength of OCaml. However in my experience, this feature of the type system is hard to use and reason about, and I've seen little (maybe no?) code in the wild that takes advantage of it, with the exception of some simple inference the type system can do in this respect related to polymorphic variants[3].

All that being said, Haskell's still my hobby language of choice. But for building real systems, I'm warming to the idea of OCaml as a viable candidate language.

[0]: http://www.cs.brown.edu/research/plt/dl/adsafety/v1/

[1]: https://github.com/seliopou/typo

[2]: https://github.com/frenetic-lang/frenetic

[3]: https://realworldocaml.org/v1/en/html/variants.html#polymorp...


One more thing: OCaml is strict by default, Haskell is lazy by default. The latter allows for some nice code idioms[0], but makes reasoning about performance and memory characteristics very difficult[1].

[0]

  indexedList :: [a] -> [(Integer,a)]
  indexedList l = zip [1..] l

[1] http://www.reddit.com/r/haskell/comments/15h6tz/what_isnt_ha...

http://www.haskell.org/pipermail/haskell-cafe/2013-September...

http://stackoverflow.com/questions/2064426/reasoning-about-p...


Or even:

indexedList = zip [1..]

Love how ML-like languages let you express just the bare essence of an operation.


Yea, I didn't do that just to make it clearer: if you aren't used to ML, I imagine that this would be very confusing.


Neat reply, thanks.

In regards to #2, this sounds like the ST monad, are they comparable at all?

What makes OCaml easier to debug with GDB opposed to haskell specifically? I don't have experience doing either, but that's a curious statement, I would have assumed they were similar (both native code w/ some sort of GC...)


OCaml allows you to use mutation pervasively without marking your type. Haskell requires you mark your type with `IO`, `ST`, `State`. You can see Haskell as advantageous because it means that if you have a type without one of those markers you can be certain there is no observable mutation occurring. You can also see Haskell as disadvantageous because those markers are a little annoying.

The ST monad lets you transition from regions which allow mutation to pure regions and then back again.


Basically, all OCaml code runs in IO. You can still only modify things that you've marked as modifiable (analogous to an IORef), but there is no constraint on where you can modify them from.

(In case it's unclear, I'm agreeing with tel and rephrasing.)
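
Concretely, a small sketch of the two forms of explicit mutation (types invented for the example):

  (* A ref cell is the "box" - roughly the IORef analogue. *)
  let counter = ref 0
  let () = incr counter                  (* same as counter := !counter + 1 *)

  (* Record fields are immutable unless declared mutable. *)
  type connection = {
    host : string;                       (* cannot be changed *)
    mutable retries : int;               (* can be updated in place *)
  }

  let c = { host = "example.com"; retries = 0 }
  let () = c.retries <- c.retries + 1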


One advantage that gives you is that you can easily define a monad on top of that and not worry about monad transformers. Jane Street's Async library does that in its Deferred module https://ocaml.janestreet.com/ocaml-core/111.17.00/doc/async/...


But I like transformers... a lot!


Your Haskell program doesn't use the C stack, so using gdb may tell you something about the Haskell runtime you're using or some C library you called into, but it won't tell you much about the state of your actual program.


It has polymorphic / open variants and a very powerful module system. It is also an impure functional programming language, so you can write straightforward and reasonably fast imperative code in it, if you need to.


A few mentioned in this article are true modules and polymorphic variants. Both can be partially modeled in Haskell, but it's tougher. I feel that almost nobody would disagree that these are advantages of OCaml.

Most people also suggest that strict evaluation and impurity are good traits of OCaml. This is more a contested point, however, as lazy evaluation and strict evaluation are more like duals than one being definitely better than the other. Furthermore, unrestricted side effects are a major tradeoff between convenience and safety—it's up to your use case to decide what's best.

Finally, OCaml obviously has an "O"bject system in it. My understanding is that serious OCamlers shy away from it for reasons of complexity and low value. The major stuff is provided by the module system and doesn't have anything particularly "object" about it.
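To make the polymorphic-variant point concrete, a minimal sketch (the tags and functions are made up): the same backtick tags can be used by different functions without first declaring a shared type, and the compiler infers which sets of tags each function accepts.

    let to_int = function
      | `Red   -> 0
      | `Green -> 1
      | `Blue  -> 2

    (* reuses `Red and `Blue and adds a tag to_int knows nothing about *)
    let describe = function
      | `Red  -> "warm"
      | `Blue -> "cool"
      | `Gray -> "neutral"

    let () =
      print_endline (describe `Blue);
      Printf.printf "%d\n" (to_int `Blue)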


I would say OCaml code is much easier for a beginner to understand than Haskell. Even with context, I've never been able to just look at some production Haskell and get the gist quickly. OCaml takes work, but I at least have an idea of what's going on after giving a chunk of code a once-over.

Here's an example: http://llvm.org/docs/tutorial/OCamlLangImpl1.html


Haskell tries to be exceedingly (some might argue excessively) clever. OCaml strives to be pragmatic and predictable.


Do you have any specific examples? I ask as someone that has landed on the Haskell boat but still keeps an eye/ear on the OCaml one ;)


Very small and seemingly innocent changes in a Haskell program can vastly change the runtime characteristics, especially if they introduce a space leak or alter the compiler's strictness analysis.

That isn't a factor at all in OCaml - in fact the compiler is fairly "dumb".


The MLs seem to have better module systems. Or, they have a module system.


I think the HTML example in the blog post is excellent. It's a real, recognizable problem, and the solution is simple and concise.

Many FP blog posts get too fundamental/abstract about this stuff, but here it really shows how static typing and pattern matching make simple something that would be much more involved in e.g. Python or C#.
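Not the post's exact code, but a minimal sketch in the same spirit (the type and function names are mine): a small variant type for markup plus one recursive pattern match to flatten it to text.

    type node =
      | Text of string
      | Element of string * node list     (* tag name, children *)

    let rec text_of = function
      | Text s -> s
      | Element (_, children) ->
          String.concat "" (List.map text_of children)

    let () =
      let doc = Element ("p", [ Text "Hello, "; Element ("b", [ Text "world" ]) ]) in
      print_endline (text_of doc)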


Could someone comment on the kinds of situations where you would typically apply lambdas and closures in real-world code?

I figure that they are at least more convenient than callbacks with a *userData parameter like in C.


Here's an example I -just- had, actually, in production code (not in OCaml; below is pseudocode). It's not super powerful, but it made me happy because it turned what would have been a good 30 minutes to refactor and re-test into a quick 1 minute task.

I had written a synchronous interface for some functionality, that had quite a bit of input data. It called an external web api twice, once to post some data, then a recursive check to periodically ping the API until some changes took effect (yes, none of this was ideal, but I couldn't change the API).

I later realized that the code calling this interface needed to do some work in between these two calls. To refactor it into two calls would be a lot of work, requiring a lot of bookkeeping, passing variables around or recalculating them, etc., and bloating the code.

Instead, I just wrapped the second call in a closure, changing the interface; now rather than returning the result of that second function, it just returned that second function, which the calling code could invoke after it did its work.

That is, I went from

  calling_func() ->
    Val = interface();
    ...

  interface() -> 
    ...//Do stuff to calculate vars
    do_work1();
    do_work2(Var1, Var2, ...);
to

  calling_func() ->
    SynchFunc = interface();
    ...//Do whatever needs to happen between the two calls
    Val = SynchFunc();
    ...

  interface() -> 
    ...//Do stuff to calculate vars
    do_work1();
    fun() -> do_work2(Var1, Var2, ...) end;


I could also have done (provided I just needed side effects, not values) -

  calling_func() ->
    Val = interface(fun() -> ... end);

  interface(Func) ->
    ...//Do stuff to calculate vars
    do_work1();
    Func();
    do_work2(Var1, Var2, ...);
to achieve the same effect, depending on how I want the interface to behave. I could also keep all existing calls working if my language supports multiple function arities, with

  interface() -> interface(fun() -> pass end)
or similar. The thing that closures give you, that I love, is that utility. I can minimally touch a function to inject entire chunks of functionality, without having to do major re-architecting.
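For comparison, here's the same trick as real, compilable OCaml rather than pseudocode (the do_work functions are stand-ins for the real work):

    let do_work1 () = print_endline "posted data"
    let do_work2 a b = Printf.printf "polled with %d and %d\n" a b

    (* interface runs the first step now and hands back the second
       step as a closure over the calculated vars *)
    let interface () =
      let var1 = 1 and var2 = 2 in
      do_work1 ();
      fun () -> do_work2 var1 var2

    let () =
      let sync = interface () in
      print_endline "caller does its own work in between";
      sync ()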


This is hard to answer because an honest answer is "practically everywhere". First class functions, used properly, will take over every aspect of a program.

Here's a neat example from a paper which tried to compare programming speed between functional, oo, imperative languages [0]. We'd like to build a "shape server" which allows you to build geometries of overlapping shapes and query as to whether a given point (in longitude/latitude) is covered by your shapes. The idea was to model a radar or early engagement system or something like that.

The obvious way might be to build a whole nest of objects which communicate among one another to consider the formation of the geometry. Another method is to just use functions from points to booleans which model the eventual question "is this point covered".

    type Lat  = Double
    type Long = Double

    type Geometry = (Lat, Long) -> Bool

    type Radius = Double
    type Length = Double

    circle :: Radius -> (Lat, Long) -> Geometry
    circle rad (x0, y0) (x1, y1) = sqrt (dx*dx + dy*dy) <= rad where
      dx = x0 - x1
      dy = y0 - y1

    square :: Length -> Length -> (Lat, Long) -> Geometry
    square width height (top, left) (x, y) =
         y < top
      && y > top - height
      && x > left
      && x < left + width
So here we build our geometry straight out of lambdas. A Geometry is just a function from (Lat, Long) to Bool and we generate them through partial application. We can also combine them

    union :: Geometry -> Geometry -> Geometry
    union g1 g2 pt = g1 pt || g2 pt

    intersect :: Geometry -> Geometry -> Geometry
    intersect g1 g2 pt = g1 pt && g2 pt

    minus :: Geometry -> Geometry -> Geometry
    minus g1 g2 pt = g1 pt && not (g2 pt)
and then using all of these "combinators" build a sophisticated geometry which describes the final question "is a point covered by this geometry".

The ultimate modeling tool was just lambdas. They are used so pervasively here I'd have a hard time pointing out each and every application.

[0] The comparison itself is sort of stupid, but the paper is still neat http://cpsc.yale.edu/sites/default/files/files/tr1049.pdf


This example is not very different from an object oriented approach (an opaque interface with a "contains" method). That said, in a functional setting the tail recursion is great for functions like union and intersect.


Sure, and functions can feel a lot like OO. I often think of it as though OO were blown apart into all of its constituent parts and those parts were made available. Then, further, those parts "hang together" better than the variety of OO formalisms ever did anyway.


One instance would be function composition. If functions are values in your language, you can define function composition in the language; that is, given functions f : a -> b and g : b -> c, you can define their composition g . f : a -> c as

g . f = \x -> g(f(x))

(Here \ denotes lambda)

Why would it be useful to have function composition in your language? Well, it gives you similar power to "method chains" in an object oriented language, without being tied to specific classes, especially if the language also supports polymorphic functions. It also interacts nicely with other abstractions usually found in functional languages: for example, consider map, of Map-Reduce fame

map : (Functor f) => (a -> b) -> (f a -> f b)

then one has

map (g . f) = map g . map f

Now imagine that map would cause the function to be sent to thousands of nodes in a cluster; then the above identity tells you that instead of doing that twice, once for f and once for g, you might as well take g . f and send it out once. Also, say you somehow knew that f . g = id, the identity function; then

map id = id,

so you would not need to do anything. This might appear trivial, but if you can teach the compiler about those cases, you can do interesting stuff with it. In the case of GHC (the Glasgow Haskell Compiler), it is able to use such rules in its optimization phase, which allows people to write apparently inefficient but declarative code and let the compiler eliminate intermediate values. See for example https://hackage.haskell.org/package/repa.
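In OCaml the definition looks much the same; a minimal sketch (the ( % ) operator name is my own choice), with the map identity checked on a small list rather than derived by the compiler:

    (* composition as an ordinary value *)
    let ( % ) g f = fun x -> g (f x)

    let () =
      let f x = x + 1 and g x = x * 2 in
      let xs = [ 1; 2; 3 ] in
      (* map (g % f) traverses the list once; map g % map f traverses it twice *)
      assert (List.map (g % f) xs = (List.map g % List.map f) xs)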


Why do you need lambdas/closures in order to have function composition? Don't you just need higher order functions?

The thing about map id = id etc. probably has more to do with equational reasoning (can use equals to substitute terms, since there are no side effects, at least in Haskell), but I don't see the connection to lambdas/closures.


The function returned by 'compose' is a closure because it captures references to its local environment (the two functions passed to 'compose'). If it did not close over these variables, it would not work. It might be possible to define a limited 'compose' operator in a language without closures that worked at compile-time/define-time, but you wouldn't be able to choose functions to compose at run-time like you could with a capturing 'compose.'

Nitpick: Lambdas and closures are different things. A closure is a semantic notion: a function that captures its local environment. A lambda is a mostly-syntactic notion: defining a function without giving it a name. Whether a lambda is a closure depends on the language's scoping rules.


What you need is that functions are values in your language. Lambdas are just a notation for function values. Typed lambda calculus is the internal language of cartesian closed categories and function values are then called internal morphisms. The composition above is then the internal composition of internal morphisms. It would be possible for external composition to be already defined by the language; take the Unix shell, for example, with its built-in "|" operator. But if you want to be able to define function composition within the language, you need to have something like lambda.


> What you need is that functions are values in your language.

Well yeah, that's what I meant by higher order functions.

> Lambdas are just a notation for function values.

But regular (named) functions can still be used as function values. So this doesn't explain why you need things like lambdas in order to implement function composition.

> Typed lambda calculus is the internal language of cartesian closed categories and function values are then called internal morphisms. The composition above is then the internal composition of internal morphisms.

Ok bud.

> But if you want to be able to define function composition within the language, you need to have something like lambda.

Well I could implement function composition without the syntactic construct lambda:

(.) g f x = g (f x)

I am not using any lambdas, in the sense of anonymous functions or closures. To implement function composition with a lambda is more of a stylistic choice, in this case. Granted, maybe functions-used-as-values are also lambdas, for all I know.


(.) g f x = g (f x)

Interesting. I might say that partial application is a kind of closure. Certainly, it winds up the same - "function carrying some data that it uses internally". I think you are correct that compose and apply do not require closures of any sort.


Well (.) g f x = g (f x) in Haskell is just sugar for (.) g f = \x -> g (f x), which ultimately is turned into (.) = \g -> \f -> \x -> g (f x)


Each time you want to write such a callback, you'll need a special userData struct, right? Otherwise you'll have a generic bag and lose typechecking.

Let's say closures are free, typed, anonymously defined structs.


As an argument to a 'map' function, for example.


the "with-" pattern (originally from lisp, i believe, but ruby did a lot to bring it to the masses), where something like a filehandle manages its own lifecycle, and calls your closure in between. so rather than the C-like

    let f = open-file-for-writing(filename);  
    for line in array {  
      write-line-to-file(f, line);
    }
    close-file(f);
you can do

    with-open-file-for-writing(filename) {|f|
      for line in array {
        write-line-to-file(f, line);
      }
    }
where the definition of with-open-file-for-writing() would look like

    def with-open-file-for-writing(filename, closure) {
      let f = open-file-for-writing(filename);
      call-closure(closure, f);
      close-file(f);
    }
the benefit of having this be a closure rather than just a function pointer can be seen in the write-array-to-file example above: the "array" variable is in the scope of the calling function, but when with-open-file-for-writing calls your closure, the closure can still make full use of the caller's local variables.
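here's what a minimal OCaml version of that wrapper might look like (the file name and contents are just for illustration). Fun.protect (OCaml >= 4.08) closes the channel even if the closure raises, which also covers the exception-safety point made elsewhere in this thread:

    let with_open_file_for_writing filename f =
      let oc = open_out filename in
      (* Fun.protect guarantees the close, even if f raises *)
      Fun.protect ~finally:(fun () -> close_out oc) (fun () -> f oc)

    let () =
      let lines = [ "one"; "two"; "three" ] in
      with_open_file_for_writing "out.txt" (fun oc ->
        List.iter (fun line -> output_string oc (line ^ "\n")) lines)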


Of course, you can build your own closure:

    void do_stuff_with_file(struct relevant_data *, FILE *);

    ...

    {
        struct relevant_data data = { ... };
        with_open_file_for_writing(do_stuff_with_file, &data, filename);
    }

IMO, the biggest downside there is how far it typically pushes the definition of that function from the call site. Small functions - a good practice anyway - ameliorate that a bit.


you can, but it's sufficiently clunky that it simply doesn't feel like a natural thing to do in the language. good language design is a lot more about the things it makes easy and natural than the things it makes possible.


"you can, but it's sufficiently clunky that it simply doesn't feel like a natural thing to do in the language."

It does to me, but I've done enough functional programming that I easily reach for concepts from that space.

"good language design is a lot more about the things it makes easy and natural than the things it makes possible."

Of course. I don't know where you got the idea I was saying closures aren't a good thing to have language support for. I said precisely the opposite.


You can also wrap the call-closure with an exception handler to make sure that 'f' is always closed when you leave with-open-file-for-writing.


right. and the beautiful thing is that once you realise that, you only need to do it once, not everywhere you open, write to, and close a file.


One simple use I like a lot is using tail recursion as a replacement for gotos. It's great for state machines and other "algorithmy" tasks. You get the benefits of gotos (the code you write is the same as the code you think) but the end result is actually manageable; there's a small sketch after the links below.

http://www.lua.org/pil/6.3.html

Lambda the ultimate goto: http://library.readscheme.org/page1.html
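A minimal sketch of that style in OCaml (the machine itself is made up): each state is a function, every transition is a tail call, so it runs in constant stack space just like gotos between labelled blocks.

    (* accepts strings over 'a'/'b' that end in "ab" *)
    let ends_in_ab (s : string) : bool =
      let n = String.length s in
      let rec q0 i = if i = n then false
                     else if s.[i] = 'a' then q1 (i + 1) else q0 (i + 1)
      and     q1 i = if i = n then false
                     else if s.[i] = 'a' then q1 (i + 1) else q2 (i + 1)
      and     q2 i = if i = n then true
                     else if s.[i] = 'a' then q1 (i + 1) else q0 (i + 1)
      in
      q0 0

    let () =
      assert (ends_in_ab "aab");
      assert (not (ends_in_ab "aba"))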


I have to confess to some surprise that this question is still being asked in 2014.


They mention they use OCaml for server-side tasks. Is there a popular solution in OCaml for writing web pages or REST APIs?


You can have a look at

http://github.com/MLstate/opalang


I don't know about its current popularity, and I'm sure other stuff has come along since I last kept up with what's going on in OCaml-land, but there's certainly http://en.wikipedia.org/wiki/Ocsigen


It is quite new compared to what others posted: https://github.com/rgrinberg/opium


Cool tech, but ...

I went to the About page, and this is the message I got out of it:

"We all want to spend our time dooing meaningful things so we ... bla bla ... have to make an app for that".

Come on, I think this is just bullshit. Yes, the premise is correct: people are working too much on stuff they don't like (and we have to fix that). But solving this problem via some time-management-assistant-whatever-app (oh, there are surely some contrived ML problems to solve here) is just hilarious.



