
> Why guarantee 0 instead of ⊥? I'd say it's due to a general agreement that ⊥ has the lowest desirability: if we have the option of returning ⊥ or something else, we should pick that something else.

Then `⊥ || True` would return True, but it doesn't. The reason Haskell specifies how it evaluates values is simply that any language which doesn't is broken. A language with no evaluation order is strictly less usable than one with (any) specified order of evaluation.
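
For concreteness, in GHCi (using `undefined` as a stand-in for ⊥):

    ghci> undefined || True
    *** Exception: Prelude.undefined
    ghci> True || undefined
    True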

> Imagine a situation like this

I am really confused about what you're confused about. If head throws an exception E, and assuming getName and getDOB are nontrivial, getUserById reduces to `MkUser id E E` (this is not a language level statement). There is no jumping or impurity or any such thing happening.
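
A minimal Haskell sketch of the sort of definition I have in mind (the DB representation, field names and accessors here are made up purely for illustration):

    type DB = [(Int, (String, String))]   -- id -> (name, dob); purely hypothetical

    data User = MkUser { getId :: Int, userName :: String, userDOB :: String }

    getUserById :: DB -> Int -> User
    getUserById db i = MkUser i (getName row) (getDOB row)
      where
        row     = head [r | (j, r) <- db, j == i]  -- head throws E when there's no match
        getName = fst                              -- stand-ins for the "nontrivial" accessors
        getDOB  = snd

Calling `getUserById [] 1` gives a value whose name and DOB fields are exception-throwing thunks, i.e. `MkUser 1 E E`; nothing jumps anywhere until one of those fields is actually demanded.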

> Note I said "attractive"

To academia, yes.




> Then `⊥ || True` would return True, but it doesn't

This claim is a little strong out of context. If you're just talking about Haskell, with the Prelude definition of `||` and no rewriting shenanigans, then you're right. That doesn't mean ⊥ is desirable though; it's just unavoidable in this case, due to constraints imposed by other desirable properties of the language.

Haskell's designers found the semantics of lambda calculus desirable enough to use as a base for Haskell, even though it removes their ability to define such a "parallel or" function.

This is similar to the desirability of strictness: most languages find it compelling, even though it removes the ability to avoid some ⊥ results like in `fst (0, ⊥)`.
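
For example, in GHCi (again with `undefined` standing in for ⊥):

    ghci> fst (0, undefined)
    0
    ghci> let (x, y) = (0, undefined) in y `seq` x   -- forcing both components, as a strict language would
    *** Exception: Prelude.undefined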

> The reason Haskell specifies how it evaluates values is simply because any language which doesn't is broken.

Haskell only constrains evaluation order to be "non-strict". Implementations are free to use any non-strict evaluation order they like, although I agree that any "serious" language implementation should document what users should expect to happen. Note they should also specify what not to expect, e.g. it might say that the evaluation order of arguments is undefined, even if it just-so-happens to work in some predictable way at the moment!

In any case, in your `⊥ || True` example it's not the evaluation order that triggers the ⊥, but the data dependency in the definition of `||`:

    x || y = if x
                then True
                else y
If the language semantics allows something like Conal Elliott's `unamb` operator ( http://conal.net/blog/posts/functional-concurrency-with-unam... ) we could define `||` in a non-strict way but, as I said, Haskell's designers preferred to pick lambda calculus semantics over parallel-or semantics.
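
A sketch of what that could look like, assuming the `unamb` package's `Data.Unamb.unamb` (the `parOr` name is mine):

    import Data.Unamb (unamb)  -- from Conal Elliott's unamb package

    -- A "parallel or": semantically, parOr ⊥ True = True = parOr True ⊥,
    -- because whichever disjunct can produce an answer first wins the race.
    parOr :: Bool -> Bool -> Bool
    parOr x y = (x || y) `unamb` (y || x)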

> If head throws an exception E, and assuming getName and getDOB are nontrivial, getUserById reduces to `MkUser id E E` (this is not a language level statement).

That's the first reduction step. The question is whether or not we should reduce it any further, to get `E`. Strict languages would do this, non-strict ones wouldn't.

If we do perform this (strict) reduction, we'd trigger some ⊥ and exception results unnecessarily, e.g. `getId (MkUser id E E)` would give `E` rather than `id` (and likewise for ⊥ instead of E).
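
Concretely, with the `User` sketch from upthread and `undefined` standing in for E:

    ghci> getId (MkUser 42 undefined undefined)
    42

The selector only demands the `Int` field; a strict reduction of the whole `MkUser` value would have raised the exception before we ever got to ask for the id.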

It's when we don't do this strict reduction that things get tricky, since we'll end up passing potentially-exceptional values around our program. This is just like Haskell passing around potentially-⊥ values.

The tricky part is handling these exceptions. If we define a handler at the point they're triggered, we'll have to put handlers all over our pure functions. For example the following is a pure function, but if we call it with the `MkUser id E E` value we got from `getUserById` we end up needing a handler for the EmptyList exception:

    isMillenial user = dob > 1989-12-31 && dob < 2010-01-01
      where dob = try (getDOB user) (handle-exception-here)
Alternatively, we could define a handler at the point they're thrown, e.g.

    safeGetUserById db id default = try (getUserById db id) (exception-handler-goes-here)
Yet `getUserById` doesn't throw an exception (in our non-strict setting), so this handler won't be invoked; we'll just have `MkUser id E E` like before, with the exceptions potentially being triggered elsewhere.
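
GHC can actually demonstrate this: a handler wrapped around the call only sees the result evaluated to WHNF, so it never fires. A sketch, reusing the hypothetical `User`/`getUserById` from upthread:

    import Control.Exception (SomeException, evaluate, try)

    -- The handler around the *call* never runs: evaluate only forces the MkUser
    -- constructor, not the exception-throwing fields hiding inside it.
    safeGetUserById :: DB -> Int -> User -> IO User
    safeGetUserById db i def = do
      r <- try (evaluate (getUserById db i)) :: IO (Either SomeException User)
      return (either (const def) id r)

So `safeGetUserById [] 1 someDefault` hands back the `MkUser 1 E E` value rather than the default; the exceptions only surface later, when somebody forces the name or DOB field.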

Alternatively, we could "attach" the handler to the result, so if the exceptions get triggered that handler will be invoked. That's the "jumping" I was talking about.

The other difficulty is where do we return to after handling an exception? If our handler's triggered during `isMillenial`, then it had better return a DOB; if it's triggered during `greetUser` then it had better return a UserName, etc.

We then have to consider what to do if a value contains multiple exceptional values, all with different handlers...

> > Note I said "attractive"

> To academia, yes.

Not sure what you're getting at here? "Attractive" doesn't mean "a solved problem which everyone would be mad to avoid", just a nice source of inspiration. Heck, Google's MapReduce is clearly inspired by functional programming ideas like confluence, and that's been out of academia for so long that it's become deprecated!


> This claim is a little strong out of context.

The context is whether exception handling restricts a language beyond how it is naturally restricted, not the particulars of any one language, so the claim is exactly as strong as it needs to be.

Haskell made its choice, yes, and there are other options like the one offered by `unamb`, sure, but the point is that both of those are a choice. Nobody leaves it up to chance. (You might mention C's lack of specified evaluation order in certain cases, but you should note that this doesn't break the guarantees you're looking for, since C never offered them, it just relaxes a different one.)

No doubt there are counterexamples with total languages, but we should keep that special case separate.

> Haskell only constrains evaluation order to be "non-strict".

This is an internal detail; the two claims (Haskell is lazy/non-strict) differ only in that the latter allows more implementations, not that there is any difference at the language level. Since one tends to assume the as-if rule, even that minutia goes away most of the time.

> That's the first reduction step.

No, in a lazy language it's most likely the last reduction step, if it ever happens at all. (In strict languages it would be first, but at the same time strict languages don't tend to perform reduction on programs, so there's never a `MkUser id E E` anyway.)

Importantly, this means it's senseless to talk about exceptions "being triggered". You either reduce to one or you don't. The most it makes sense to do is `deepseq` it, but as you'll note that is explicitly enforcing order of evaluation so that, not exception handling, is the thing that you should be complaining about!
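
For instance (a self-contained sketch; the record type here is made up just to illustrate):

    {-# LANGUAGE DeriveGeneric, DeriveAnyClass #-}
    import Control.DeepSeq (NFData, deepseq)
    import GHC.Generics (Generic)

    data User = MkUser { getId :: Int, userName :: String, userDOB :: String }
      deriving (Generic, NFData)

    -- Plain field selection never reduces the exceptional thunks:
    shallow :: Int
    shallow = getId (MkUser 42 (error "E") (error "E"))   -- 42

    -- deepseq forces every field first, so this *does* hit the exception,
    -- but only because we explicitly imposed that evaluation order:
    deep :: Int
    deep = let u = MkUser 42 (error "E") (error "E")
           in u `deepseq` getId u                          -- throws E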

> Not sure what you're getting at here?

I'm not trying to be subtle here: the advantages of pure, lazy evaluation with regards to automatic parallelisation of code are of interest only to academics.

> Google's MapReduce is clearly inspired by functional programming ideas like confluence

I don't agree remotely. They claim to take inspiration from the map and reduce functions from Lisp, which is strict, and whose map and reduce functions are correlated with, rather than intrinsic to, functional programming as a whole; even C++ has a particularly imperative one in its standard library.


> > This claim is a little strong out of context.

> The context is...

I was merely pointing out that you made the statement "X doesn't reduce to Y", but didn't specify which language you're talking about. I wrote the subsequent stuff under the assumption that you're talking specifically about (GHC) Haskell.

> Nobody leaves it up to chance.

Of course. I'm not saying that languages or implementations shouldn't pick (and document) an evaluation strategy. I'm saying that confluence is a useful feature, that exception handlers are a useful feature, that simplicity/predictability is a useful feature, but surprisingly (to me) we can only seem to pick two.

> No, in a lazy language...

Yes, in a lazy language. I'm talking more generally, about whether different (arbitrary) strategies would lead to different behaviour.

> Importantly, this means it's senseless to talk about exceptions "being triggered". You either reduce to one or you don't.

Whether you reduce to an exception or not depends on the evaluation order. Again, I'm not talking specifically about Haskell, but about confluence.

By "triggered", I meant attempting to evaluate an `E`. In the case of `fst (0, E)` a call-by-name strategy would produce 0 without "triggering" (attempting to evaluate) the exception; call-by-need would do the same; call-by-value would trigger it; etc.

Note that I'm assuming E behaves like _|_: if we "trigger" an E in any sub-expression, then the whole expression becomes E, and this propagates upwards until either the whole program is E, or we reach a `try x y` expression. In that case an exceptional x would cause this expression to reduce to y.

That's why we break confluence: `try (fst (0, E)) 1` would reduce to 0 under call-by-name and 1 under call-by-value.
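
Spelling that out as reduction traces (in the hypothetical E/`try` language, not Haskell):

    -- Call-by-name/need: project first, E is never touched
    try (fst (0, E)) 1  ->  try 0 1  ->  0

    -- Call-by-value: evaluate the pair's components first, so E swallows
    -- the pair, then the fst, and try catches it
    try (fst (0, E)) 1  ->  try (fst E) 1  ->  try E 1  ->  1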

> the advantages of pure, lazy evaluation with regards to automatic parallelisation of code are of interest only to academics.

You're the one who's talking about lazy evaluation and automatic parallelisation :)

I'm talking about confluence, which is useful whether a language is strict, lazy, serial, parallel, concurrent, etc. I'm saying it's especially useful for concurrency (and concurrency is a prerequisite for parallelism).

"Useful"(/"attractive") doesn't mean "automatic parallelisation": it's a whole bag of stuff, including easier to understand code, having the same semantics for both serial and concurrent uses (e.g. no need for the "thread safe"/"not thread safe" notes which litter Java's class reference), etc. Yes, automatic parallelisation is in that bag, but if you think it's only of academic interest then why bother mentioning it at all?

> > Google's MapReduce is clearly inspired by functional programming ideas like confluence

> I don't agree remotely. They claim to take inspiration from the map and reduce functions from Lisp, which is strict, and whose map and reduce functions are correlated with, rather than intrinsic to, functional programming as a whole; even C++ has a particularly imperative one in its standard library.

Lisp is a direct descendant of lambda calculus (e.g. McCarthy's "Recursive functions of symbolic expressions and their computation by machine, Part I" uses lambda notation, famously calling the s-expression alternative 'rather formidable for humans to read' ;) ). That makes it pretty functional in my view.

So what if Lisp is strict? It also has dynamic scope. That has nothing to do with whether or not it's functional. A more prudent argument against Lisp being functional would be the inclusion of impure, side-effecting operations, which makes programming in e.g. Common Lisp not much different than other impure, non-functional, imperative languages (except for the macros).

Still, map and reduce are not imperative in this way, and in fact they're often used to 'remove imperativeness' in many languages (e.g. defining map, filter and reduce with loops, then avoiding loops elsewhere). In fact, reading through "Recursive functions of symbolic expressions and their computation by machine, Part I" I see that a `maplist` function is used as an example :)

PS: It looks like you're refuting an argument like 'Haskell is the best language, it invented map/reduce and can automatically parallelise all code, if only people stopped catching exceptions'. That's not at all what I'm saying :)


> Of course. I'm not saying that languages or implementations shouldn't pick (and document) an evaluation strategy. I'm saying that confluence is a useful feature, that exception handlers are a useful feature, that simplicity/predictability is a useful feature, but surprisingly (to me) we can only seem to pick two.

My point is basically that you end up making this trade-off far earlier than that, as evidenced by Haskell, so it's not really right to blame exception handlers; `try (fst (0, E)) 1` isn't any more important than `fst (0, E)` and programmers won't let language writers leave the latter unspecified.

> Note that I'm assuming E behaves like _|_: if we "trigger" an E in any sub-expression, then the whole expression becomes E, and this propagates upwards until either the whole program is E, or we reach a `try x y` expression. In that case an exceptional x would cause this expression to reduce to y.

I feel this gets to the crux of the issue: if you assume this, then a lot of what you said does hold, but you don't have to assume it and many languages (e.g. Haskell) don't. Why are you assuming this anyway?

> You're the one who's talking about lazy evalution and automatic parallelisation :)

You brought it up, not me!

> having the same semantics for both serial and concurrent uses (e.g. no need for the "thread safe"/"not thread safe" notes which litter Java's class reference)

Thread safety of this kind just requires a lack of shared mutability (see Rust); purity and more extreme forms like being agnostic to order of evaluation don't really add anything in practice.

> [On Lisp]

I'm not saying Lisp isn't functional (it is), but arguing against your claim that "MapReduce is clearly inspired by functional programming ideas like confluence".



