Worth noting that I wrote this as a result of a discussion on #haskell on Freenode/IRC. I have to say it’s quite (happily) surprising to me that people on HN enjoy it after all those years :-D
My source really was people talking about it on #freenode and playing with LambdaBot at the time. There’s a good chance the linked blog post is a transitive source though!
Haskell, Ruby, COBOL, FORTRAN, and MISRA C need a yearly Monty Python "I'm not dead yet" doodle. tcl, Forth, and Squeak... that's enough out of them. Never Expect dejagnu.
I've been very excited about this myself before, but unfortunately, it's something of a stupid trick. The big problem is circular dependencies: they produce infinite loops and there's no particularly good way of detecting them. Regular old catamorphisms with `Either CycleDetected a` are probably a better idea in practice.
Can you elaborate on what kind of circular dependencies you're referring to, and how catamorphisms with `Either CycleDetected a` would help handle them?
Okay, imagine you've got a list of securities and their underlyers. You've got a flat file that represents this information. You try to represent it as
Security
  Underlying Security (Security)
Where that's the Security object, not the Security ID.
This works great for moeb, until the day that someone accidentally introduces a cycle. Instead you want a representation that looks like:
Security
  Underlying Security (Either CycleDetected a)
You can call traverse to turn that into an `Either CycleDetected (Security a)`.
You can't detect all possible cycles statically, but you can definitely detect them when they happen at runtime. When calculating a value, track its dynamic dependencies. If it ends up depending on itself, you have a cycle.
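A minimal sketch of that runtime detection for list-shaped dependencies (the names `Formula`, `evalAll`, and `CycleDetected` here are mine, not from the thread): every lookup carries the set of indices currently being computed, and hitting one of them again means a cycle.

```haskell
import qualified Data.Set as Set

data CycleDetected = CycleDetected deriving (Eq, Show)

-- A cell is a function from a (cycle-aware) lookup to a result.
type Formula a = (Int -> Either CycleDetected a) -> Either CycleDetected a

evalAll :: [Formula a] -> [Either CycleDetected a]
evalAll fs = map (go Set.empty) [0 .. length fs - 1]
  where
    go seen i
      | i `Set.member` seen = Left CycleDetected  -- i is already being computed
      | otherwise           = (fs !! i) (go (Set.insert i seen))

cells :: [Formula Int]
cells =
  [ \_   -> Right 1
  , \get -> fmap (+ 1) (get 0)
  , \get -> fmap (+ 1) (get 1)
  , \get -> fmap (+ 1) (get 3)  -- depends on itself: a cycle
  ]
-- evalAll cells == [Right 1, Right 2, Right 3, Left CycleDetected]
```

Unlike `loeb`, this recomputes shared dependencies instead of sharing them lazily; it trades that sharing for the ability to fail cleanly on a cycle.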
The bare type forall a b. Either (a -> b) a is particularly dangerous because we can pick b to be uninhabited (Void):
forall a. Either (a -> Void) a
which is the law of excluded middle: for any type, we can either immediately summon an example, or prove that no example exists (since having a function (a -> Void) would let us produce a value of the uninhabited type from any value of a).
But forall r a. Cont r (Either (a -> r) a) is fine. Let's see what happens when we set r = Void and unwrap the Cont
lem : forall r a. Cont r (Either (a -> r) a)
runCont lem @Void : forall a. (Either (a -> Void) a -> Void) -> Void
It turns into a weird statement, a double-negation, that there exists no proof that (Either (a -> Void) a), the true LEM type, is uninhabited. Not the same as actually having such a value.
Consider the challenge of constructing (Either (A -> Void) A -> Void) for different types A.
Let's say you have a positive capability of generating values of type A. This is constructive proof that (A -> Void) cannot exist, so you know that any caller of your function will give you (Right value) if they are capable of calling you at all. That's not particularly helpful, since now you're on the hook to produce a value of the uninhabited type Void.
Alternatively, let's say you lack a method to construct values of A. Now either you (a) never get called, (b) get called with a parameter like (Right value), proving to you that values of A exist but asking you to use one to produce a value of type Void (impossible!), or (c) you get a value of type (Left fn), which constructively proves that no values of type A can be constructed.
So in this double-negative formulation, allowing for the potential that you simply never get called, nothing about this LEM-like type is problematic.
But it sure looks close to something dangerous.
---
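For concreteness, the term under discussion, with its Cont wrapper still intact, can be written like this (a sketch using `Cont` from the transformers package):

```haskell
import Control.Monad.Trans.Cont (Cont, cont, runCont)

-- The "double-negation LEM": for any r, we can promise the caller an
-- Either (a -> r) a, by handing them a Left whose function bounces any
-- counterexample straight back through the same continuation as a Right.
lem :: Cont r (Either (a -> r) a)
lem = cont $ \c -> c (Left (c . Right))
```

Specializing `r` to `Void` gives exactly the double-negation statement discussed above.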
You can see how this construction works by stripping away the Cont wrapper.
runCont (cont f) = f
runCont (cont (\c -> (c . Left) (c . Right))) = \c -> (c . Left) (c . Right)
\c -> (c . Left) (c . Right)
lem :: forall a . (Either (a -> Void) a -> Void) -> Void
lem c = c (Left (\x -> c (Right x)))
So we see that our callback, discussed above, is always called with a value of shape (Left fn), which appears to be a constructive proof that no values of type `a` exist. It works via a trick, though. If you were to attempt to disprove the statement of that function (a -> Void) by offering it a value of type `a`, that function will just use the value you constructed and hand it back to you as (Right value), forcing you to yourself prove (a -> Void).
This is messy circular logic, but common in this kind of construction. The only way to "construct a value" of Void is to somehow pass the buck and force someone else to do it.
And so this formulation of "LEM" does just what it says on the tin: shows that it's impossible to prove that LEM is impossible. Any such proof can be turned against itself.
is a CPS computation that produces an intermediate result of type "a", within a CPS computation whose final result type is "r"
In your example of type "Cont c (Either (b -> c) b)", it produces a final value of type "c". (There is no Either in the final result, i.e. there is no "excluded middle")
It produces an intermediate result of type (Either (b -> c) b) by feeding values through Left and Right constructors, calling the continuation multiple times, once for Left and once for Right.
I don't see what you mean by "thin air" because the Left and Right constructors in the definition are creating the Either, first creating the Right, and then feeding that result to the Left.
If you think the (Either (b -> c) b) is the final result, then you don't understand or are misrepresenting the meaning of the definition of
Cont r a
(don't make the mistake of thinking the final result is of type "a". it is of type "r")
There is no excluded middle here. It just uses the Either as a way to encode & pass 2 different functions to the continuation.
The naming of the function as "excludedMiddle" is disingenuous.
Actually I called it that because it's derived from the law of excluded middle: "A OR not A" is always true.
You can encode logical statements in the Haskell type system as follows:
a OR b: Either a b
a AND b: ( a, b )
not A: a -> Void
a implies b: a -> b
With this encoding, having a well-formed value for a given type is a proof of its validity.
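As a sketch, that dictionary can be written down directly in Haskell (all the names below are mine, chosen for illustration):

```haskell
import Data.Void (Void, absurd)

-- "not a" is a function from a to the uninhabited type.
type Not a = a -> Void

-- "a AND b implies a": the proof is simply a total function of that type.
andElim1 :: (a, b) -> a
andElim1 (x, _) = x

-- Ex falso quodlibet: from a contradiction, anything follows.
exFalso :: Void -> a
exFalso = absurd

-- Double-negation *introduction* is perfectly constructive:
dni :: a -> Not (Not a)
dni x k = k x
```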
The one thing we can't use directly is double negation, because we would have to make A appear out of nowhere:
((a -> Void) -> Void) -> a
However, this is exactly the type of Cont Void. This means we can use the Cont Void monad to make logical proofs, including double negation. The type:
Cont Void (Either (a -> Void) a)
Encodes the law of excluded middle, and the value I wrote is a proof of it. It so happens that you can use a type parameter instead of Void, because a fully abstract type parameter can't be used for anything either.
There is nothing "out of thin air", since `a` is the input, not the output, unless you just misread the type. But the same could be said of any higher-order type.
You can read those lower case letters as preceded by "for all", and it's up to the caller to choose which concrete type the letter should represent, as the function promises to return an implementation "for all" concrete types.
So if I wrote the function foo,
foo :: Int -> b
and you happened to need a BufferedReader, you could just get it by calling (foo 3). That's what the "thin air" comment is. Where could the implementation of BufferedReader possibly have come from?
I love type signatures like this. Not hard to implement - it's like a game. But then it's like... wtf does this do? And you can kind of guess by reading the type about as well as you can by reading the body.
The use here is as a section, not to change precedence: `($ x)` could also have been written as `\f -> f x`, which I think is even less readable.
As for the usage on the lower code snippets like `for […] $ \x -> …`, then it saves some parentheses. Opinions on this go both ways, with some people preferring lisp-style syntax, others use more Python-like indentation code formatting. Both are okay, and I’ve switched my preference multiple times.
Where I agree $ is terrible is in expressions like `foo x . bar y z $ abc`, often (!) much harder to read than just adding parentheses.
> Where I agree $ is terrible is in expressions like `foo x . bar y z $ abc`, often (!) much harder to read than just adding parentheses.
This case is actually valuable for the clear delineation between the left-hand side, which is relatively fixed and can be moved to an outer scope (or even the top level), and the right-hand side, which is contextual and depends on a more dynamic value.
If I'm not mistaken, it reads from bottom to top. I've hated the direction of function application my whole life, even in math (I read and write from left to right, so f applied to x should be x f, and g applied after that should be x f g), but it's especially bad in Haskell. Transforming, filtering, and piping data should be where it excels thanks to its lazy data structures, yet the bottom-to-top piping order is annoying.
The & operator in Data.Function--which is just flipped $ with slightly higher, but still very low, operator precedence--at least gives you a way to deal with this in base; it's unfortunate it isn't more widely used in tutorials and example code (at least from what I've seen).
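A quick sketch of the two directions on a made-up pipeline, just for contrast:

```haskell
import Data.Function ((&))

-- ($)/composition style reads right-to-left, bottom-to-top:
rightToLeft :: [Int]
rightToLeft = map (* 2) $ filter even $ [1 .. 10]

-- (&) style reads left-to-right, top-to-bottom:
leftToRight :: [Int]
leftToRight = [1 .. 10] & filter even & map (* 2)

-- both are [4, 8, 12, 16, 20]
```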
For chains of function applications, `$` is more readable than like 5 nested parens, especially if the thing at the end of your `f $ g $ ...` is a do-block, which will span multiple lines.
Pretty hard. The key to this implementation is Haskell's laziness, so you have to implement laziness in Javascript, which will probably not be very ergonomic. You can write
let loeb = (structure => {
  let inner = () => structure.map(item => item(inner()));
  return inner;
})
let fs = [ _ => 1
         , x => x[0] + 1
         , x => x[1] + 1
         , x => x[2] + 1
         ]
console.log(loeb(fs)())
and get an infinite loop, or
let loeb = (structure => {
  let inner = () => structure.map(item => item(inner));
  return inner;
})
let fs = [ _ => 1
         , x => x[0] + 1
         , x => x[1] + 1
         , x => x[2] + 1
         ]
console.log(loeb(fs)())
which prints `[ 1, NaN, NaN, NaN ]`, because each cell receives the unevaluated thunk itself rather than the evaluated array.
You can typically simulate laziness by lifting variables into functions. A 0-arity function works fine, but for lists it's slightly more elegant to define them as functions from an index to a value (in that case the functor map is just composition, which is neat).
So the following works
let loeb = (structure => {
  let inner = (i) => structure(i)(inner);
  return inner;
})
let fs = (i) => {switch(i) {
  case 0: return _ => 1
  case 1: return (x) => x(0) + 1
  case 2: return (x) => x(1) + 1
  case 3: return (x) => x(2) + 1
}}
If javascript was sensible you could just do
let fs = [ _ => 1
         , x => x(0) + 1
         , x => x(1) + 1
         , x => x(2) + 1
         ].at
but alas that doesn't work, because a detached `Array.prototype.at` loses its `this` binding. You can do
let fs = (i) => [ _ => 1
                , x => x(0) + 1
                , x => x(1) + 1
                , x => x(2) + 1
                ][i]
though. In that case javascript may end up rebuilding the list every time, I'm not quite sure.
def done(x):
    return (type(x) != type(lambda: None))

def allof(f, *xs):
    return all(map(f, xs))

def isnt(f):
    return lambda x: not f(x)

def myprint(xs):
    print("\t-> ", "\t".join([str(x) if done(x) else "_" for x in xs]))

def lazy(f, *deps):
    "Evaluate f(xs), only if dependencies (listed by index in xs) are resolved."
    def lazy_f(xs):
        if allof(done, *[xs[d] for d in deps]):
            return f(xs)
        else:
            return lazy(f, *deps)
    return lazy_f

def loeb_unsafe(fs):
    "Loeb for lists, but doesn't detect dependency cycles."
    xs = fs[:]
    # THE IMPORTANT PART!
    while not allof(done, *xs):
        # 'x if done(x) else x(xs)' is the Python-specific part: it forces a
        # function into a value, since Python doesn't evaluate thunks lazily
        # the way Haskell does.
        xs = [x if done(x) else x(xs) for x in xs]
    return xs

def loeb_safe(fs):
    "Loeb for lists, stops when it detects a dependency cycle."
    xs = fs[:]
    # THE IMPORTANT PART!
    while not allof(done, *xs):
        myprint(xs)
        # 'x if done(x) else x(xs)' is the Python-specific part: it forces a
        # function into a value, since Python doesn't evaluate thunks lazily
        # the way Haskell does.
        new_xs = [x if done(x) else x(xs) for x in xs]
        if list(filter(done, new_xs)) == list(filter(done, xs)):
            print("CYCLE DETECTED!")
            break
        xs = new_xs
    return xs

fs = [
    lazy(lambda xs: xs[1] + xs[2], 1, 2),
    lazy(lambda xs: xs[3] + 1, 3),
    lazy(lambda xs: xs[1] + 1, 1),
    1,
    lazy(lambda xs: xs[5] + 1, 5),
    # unsafe!
    # lazy(lambda xs: xs[4] + 1, 4),
    lambda xs: 100,
    200
]

print("safe:")
myprint(loeb_safe(fs))
print("\nunsafe:")
myprint(loeb_unsafe(fs))
Unfortunately, it's nothing new. America's mainstream historical ethos derives from a deep anti-intellectual sentiment of an expat merchant class that frowned upon academia and the Old World. The Puritans at least left a positive legacy in Boston, despite generally shunning ideas and people who dared to think differently. The American frontier acted as a risk-tolerance filter on the peoples who emigrated from around the world. There are systemic problems of economic and policing inequality, and under-socialism, in the US.
How does America receive those from distant lands now?
What future does America have without prestigious, selective academic institutions except a brain drain exodus?
More likely than not, the author was just trying to make a joke—possibly even a self-deprecating one. From Haskell programmer to Haskell programmer, this specific joke might even be kind of funny: What's ahead is clearly not "normal" Haskell, and will be quite puzzling at first glance. Haskell has lots of these zingers that Haskell programmers have come to either enjoy or respect. (The "zygohistomorphic prepromorphism" joke is another old one.) The joke being made, while in jest, may also serve as a sort of hook; the reader may respond internally "I am smart enough!" and proceed to try to crack the puzzle.
However, the joke might feel tone-deaf to the non-Haskell community, and feel absurdly pretentious. To many, Haskell itself is already quite unapproachable. Even with efforts to make Haskell seem like a "normal programmer's language", it still has a reputation for being academic, mathematical, buried in abstraction, and otherworldly. To a non-Haskell programmer, the joke comes across as pretentious: I will show you one simple line of code that I guarantee you are not intelligent enough to understand. It's a joke the author really savors too, spending a couple of paragraphs on it.
The HN crowd is probably one of the best audiences to share something interesting about Haskell, because curiosity is a sort of "core tenet" of the culture here. But I think those who are curious like learning from people who want to share their knowledge with empathy and compassion. Haskell, deserved or not, is so steeped in its reputation of unapproachability—something many HNers are carrying with them into the article—that the final stomp of "you're not smart" is just a perfect way to turn somebody off and away.
I do think that, in this case, the general academic feeling of Haskell is a critical component to understanding why this joke may be a turn-off to many. If it were a goofy piece of JavaScript, where everybody and their dog knows that it's "Bad JavaScript" (TM), it wouldn't have the same feeling of lacking compassion for the reader, I think.
> However, the joke might feel tone-deaf to the non-Haskell community, and feel absurdly pretentious. [...] To a non-Haskell programmer, the joke comes across as pretentious: I will show you one simple line of code that I guarantee you are not intelligent enough to understand.
How's that different from a humorous basketball/hockey/football dribbling demonstration by a pro player on a YouTube video, put up for their audience's awe and admiration? I bet no one treats it as "pretentious" in those moments; in fact, many younger folks regularly get inspired by this kind of performance to start in the sport.
These types of deprecating and other-party-blaming comments tend to come from ego-offended senior developers who're used to thinking that they can always pick up a new programming language in 20 minutes or so, because all of them resemble their favourite flavour of $java (not really).
> a humorous basketball/hockey/football dribbling demonstrated by a pro-player on a youtube video for their audience's awe and admiration
Oh, you can totally do that in a pretentious and insulting way. There's lots of "(someone) took over and amazed the crowd" style videos where it seems like the pro is inserting themselves into the situation to awe people in a very cringe way. The worst ones in that style are the videos of that one guy joining buskers to show off his boogie piano skills while they sometimes enjoy it and sometimes just look like they wait for him to leave. (can't find the channel, I blocked him)
I think the critical difference is that said basketball player doesn't open their demonstration by challenging the audience's intelligence explicitly.
I don't want to personally say whether the joke in the article is in good taste or not, and I don't want to assess whether programmers' egos are too fragile or sensitive. I don't think that's productive.
I do, however, want to emphasize that when read by a broader audience than just the community of Haskell programmers, due in part to Haskell's reputation, the joke lands as off-color, as the G*P comment demonstrates.
I wanted to add though that, to me personally, the author's comment doesn't register as "you're not smart" at all. Rather, I read it as a message along the lines of "don't worry if you don't understand this piece of code, I didn't either". And in this instance the joke works pretty well, because the code resists almost all the normal mental tools a programmer has developed to analyse and understand an unknown piece of code.
The joke reads as a gentle challenge to try to understand it without help, but not to try too hard: no one can reasonably expect you to reverse-engineer this by yourself.
I think the curse of knowledge is at work with both monads and C preprocessor hacks: they both make sense only within a domain-specific context. If you're living and breathing in that context, you can appreciate the techniques. At the same time, if you never step outside it, you can forget that programming is a tool for humans, and start thinking your tools, systems, and languages are infallible.
Compare Löb’s theorem with the signature of `loeb` with some renaming to make them match up:
> in any formal system F, for any formula P, if it is provable in F that "if P is provable in F then P is true", then P is provable in F
To see it even better, look at the section that introduces the modal logic operator (the square that operates on formulae). See more on the Löb-spreadsheet equivalence here: http://blog.sigfpe.com/2006/11/from-l-theorem-to-spreadsheet...
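To make the correspondence concrete, here is the standard `loeb` definition next to the theorem (the annotation and the sample sheet are mine):

```haskell
-- Löb's theorem:  □(□P → P) → □P
-- with □ read as the functor f and P as a:
loeb :: Functor f => f (f a -> a) -> f a
loeb x = go where go = fmap ($ go) x

-- The spreadsheet reading: every cell is a function of the finished sheet,
-- and laziness ties the knot.
sheet :: [[Int] -> Int]
sheet = [const 1, \xs -> head xs + 1, \xs -> sum (take 2 xs)]

-- loeb sheet == [1, 2, 3]
```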