There is a story that one of the designers of Java was pushing very hard in favour of Java references being non-nullable by default. But Gosling rejected the idea, claiming that it would be too confusing for programmers and would kill any chance of Java achieving popularity.
Here is a relevant paper, "Non-null References by Default in Java: Alleviating the Nullity Annotation Burden" (http://www.springerlink.com/content/f197143t80671627/):
"we conducted an empirical study of 5 open source projects (including the Eclipse 3.3 JDT Core) totaling over 700 KLOC. The results allow us to confirm that in Java programs, at least 2/3 of declarations of reference types are meant to be non-null, by design."
> There is a story that one of the designers of Java was pushing very hard in favour of Java references being non-nullable by default. But Gosling rejected the idea
Is there a source for that? And has Gosling apologized to the world since then? (I mean Hoare apologized, Gosling should grovel and beg for forgiveness)
I don't have a source and I don't quite remember where I heard it. It's probably not true. Although this post from Lambda the Ultimate mentions a similar sentiment:
It's still a bit weird, I think: a lot of the migration to Java was from enterprise C++, and while C++ has null pointers it doesn't have null references. So that wouldn't have been much of a change.
I've always thought it bizarre that many languages with static typing allow arbitrary values to be null. It's par for the course with dynamic typing, of course. Damn you, C.A.R. Hoare!
I like Objective-C's solution, where one can just send messages to nil. The practical result when writing code is pretty close to using the Maybe monad applicatively (i.e., write your code like the value is never nil, only check for nil when it's part of the (bigger) logical flow in your code.)
The problem is that, though operations flow better, it's a good way to hide a bug until you've completely lost its context, and then tracking down its source is a pain unless you have a backtracking debugger.
An option type, especially one which allows you to lift operations into it (as Haskell's does) is a much better solution as it gives the programmer the choice of behavior: does he not care and just want to keep his option, or does he want to know and unwrap the value within the container?
Objective-C allows half of this, but it's generally the wrong half.
Most other statically typed languages allow neither solution and are therefore broken.
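To make the contrast concrete, here is a minimal Haskell sketch of those two choices; lookupAge and the other names are made up purely for illustration.

-- Hypothetical lookup: the Maybe in the type records that a user may be absent.
lookupAge :: String -> Maybe Int
lookupAge "alice" = Just 30
lookupAge _       = Nothing

-- Choice 1: don't care yet; keep the option and lift the operation into it.
agePlusOne :: String -> Maybe Int
agePlusOne name = fmap (+ 1) (lookupAge name)

-- Choice 2: care now; unwrap the value and decide what absence means here.
describeAge :: String -> String
describeAge name = case lookupAge name of
    Nothing  -> "age unknown"
    Just age -> "age " ++ show age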
Although you eliminate null pointer exceptions, lazy evaluation does introduce another subtle and very similar class of errors.
Consider a program that takes a leaf-labelled tree of Ints and replaces each int with the maximum found in the whole tree. You can do this in two passes over the tree - one to find the max value, one to re-write the tree.
But you can also do it in a single pass using lazy evaluation:
----------------
data Tree = Branch Tree Tree | Leaf Int

label :: Tree -> Tree
label tree = t2
  where (t2, k) = thread tree k

thread :: Tree -> Int -> (Tree, Int)
thread (Leaf v)     k = (Leaf k, v)
thread (Branch l r) k = (Branch l' r', max lv rv)
  where
    (l', lv) = thread l k
    (r', rv) = thread r k
----------------
The idea here is that `thread` runs through the tree replacing the values with some input value `k` and also calculates the max value of the tree. The function `label` then uses the output of thread as its own input.
This all works fine as long as you never look at the value of `k` inside `thread`. If you do, you go into an infinite loop as `k` then depends on itself.
This is similar to `k` being a pointer to an Int that doesn't get initialised until the function `thread` returns. If you examine it before then, you get a null pointer exception.
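A quick illustration of both the happy path and the failure mode, written as a hypothetical GHCi session (it assumes `deriving Show` is added to Tree so results can be printed):

-- ghci> label (Branch (Leaf 1) (Branch (Leaf 5) (Leaf 3)))
-- Branch (Leaf 5) (Branch (Leaf 5) (Leaf 5))
--
-- But make thread strict in k, e.g.
--
--   thread (Leaf v) k = k `seq` (Leaf k, v)
--
-- and the same call never returns (GHC's runtime usually reports <<loop>>),
-- because k must now be evaluated before thread can produce the very result
-- that k is defined from.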
> Although you eliminate null pointer exceptions, lazy evaluation does introduce another subtle and very similar class of errors.
But those are completely different features: option types don't imply lazy evaluation, nor the other way around. The OP didn't talk about lazy evaluation once. There might be issues with lazy eval, but that's completely not the subject.
I don't know why the author mentioned that Ruby allows any value to be null. Languages like Ruby and JavaScript don't have static type checking, so their handling of null is "not even wrong" compared to Haskell and OCaml, which let the compiler eliminate run-time null pointer exceptions.
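A tiny sketch of what that compile-time guarantee looks like in Haskell (all names here are made up):

greet :: String -> String          -- a plain String argument can never be "null"
greet name = "hello, " ++ name

maybeName :: Maybe String          -- possible absence must be declared in the type
maybeName = Nothing

-- greet maybeName                 -- rejected at compile time (type mismatch), so there
--                                 -- is no dereference left to blow up at run time

main :: IO ()
main = putStrLn (greet (maybe "stranger" id maybeName))   -- caller must handle Nothing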
IIRC, some Java dialects re-introduce the idea of a variable that cannot be null, and it is possible to do fancy tricks with annotations to produce the same effect.
In Ruby, of course, you have raganwald's andand, which is sort of the same as the Maybe monad. It should really be built into the language like in Haskell, I think (and be allowed a shorter name, like .&& or maybe something else that doesn't upset the parser).
Maybe is not built into Haskell. Haskell has syntax sugar that turns do { x <- foo; bar x } into "foo >>= \x -> bar x" for any Monad. Maybe is just a particular instance of Monad that is included in the Prelude. But you can write your own that is exactly identical in every way.
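For instance, here is a sketch of a home-made Maybe clone (the names Perhaps/Nope/Got are invented) that works with do-notation exactly like the Prelude's version; note that modern GHC also requires Functor and Applicative instances alongside the Monad one:

data Perhaps a = Nope | Got a deriving Show

instance Functor Perhaps where
    fmap _ Nope    = Nope
    fmap f (Got x) = Got (f x)

instance Applicative Perhaps where
    pure = Got
    Nope  <*> _ = Nope
    Got f <*> x = fmap f x

instance Monad Perhaps where
    Nope  >>= _ = Nope
    Got x >>= f = f x

safeDiv :: Int -> Int -> Perhaps Int
safeDiv _ 0 = Nope
safeDiv x y = Got (x `div` y)

example :: Perhaps Int               -- the do-block desugars to >>= exactly as described
example = do
    x <- safeDiv 10 2
    y <- safeDiv x 0                 -- this step is Nope...
    return (y + 1)                   -- ...so the whole computation is Nope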
I guess I'm missing what's specifically monadic about it. Isn't it just an algebraic data type with two alternatives? In ML you declare the morally-equivalent "option" type as:
datatype 'a option = NONE | SOME of 'a
It's possible that the Haskell Maybe monad is doing some additional magic, but I didn't get that from this article if so (does it have something that lets you avoid manually pattern-matching on the two cases and/or using accessors?).
Haskell has that exact same definition for its Maybe type here [1]:
data Maybe a = Nothing | Just a
After which it defines the monad operations on that type
instance Monad Maybe where
[...]
This says "the Maybe type is a monad", but the Maybe type could exist and be used without this, as I gather you're familiar with from ML.
When you use it as a bare type, every time in your code you use a Maybe you have to manually check for Nothing and extract the value in Just. And you can't compose functions involving Maybe without either letting Maybe bleed into the input types of functions where it isn't necessary, or writing some clever wrappers (which will probably end up defining a monad anyway).
Using Maybe in this way has a common structure:
f :: Maybe a -> Maybe b
f Nothing  = Nothing
f (Just x) = Just (g x)   -- where g :: a -> b is whatever the function actually does
Pass Nothing on when given Nothing; apply the function's internal logic when there's Just something. We can abstract this out, meaning we can write functions of type (a -> Maybe b) and compose them, letting the composition take care of the Just/Nothing check. We can also do other things, like take functions with no knowledge of Maybe and "lift" them from (a -> b) to (Maybe a -> Maybe b). Here's the simplest expression of this abstraction:
wrapMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
wrapMaybe Nothing f = Nothing
wrapMaybe (Just x) f = f x
... this is bind for Maybe, the dreaded (>>=) that scares people away from Haskell.[2] It's half the definition of the Maybe monad:
instance Monad Maybe where
    Nothing  >>= f = Nothing
    (Just x) >>= f = f x
    return x = Just x
It's known as a monad because monads are a further abstraction, expressing a structure that Maybe has in common with other patterns:

class Monad m where
    (>>=)  :: m a -> (a -> m b) -> m b   -- AKA bind
    return :: a -> m a
Just by defining these two functions (return and bind/>>=) for a type, in this case Maybe, you get a bunch of higher-order functions essentially for free: the "lifting" I mentioned, composition of monadic functions analogous to composition of ordinary functions, ways of stacking the effects different monads represent, and so on, all derived from those two functions by taking advantage of the monadic structure they have in common.
What seems trivial with Maybe as a monad is that "return", the other part of defining a monad, is just the data constructor Just, whereas in other monads it's more complicated.
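As a small illustration of those derived functions, using the Prelude's Maybe (half and quarter are made-up examples): liftM and Kleisli composition (>=>) come for free from bind and return.

import Control.Monad (liftM, (>=>))

half :: Int -> Maybe Int                 -- a made-up monadic function
half n = if even n then Just (n `div` 2) else Nothing

quarter :: Int -> Maybe Int
quarter = half >=> half                  -- composition of monadic functions,
                                         -- defined once for every Monad via >>=

main :: IO ()
main = do
    print (liftM (+ 1) (Just 41))        -- Just 42: (a -> b) lifted to (Maybe a -> Maybe b)
    print (quarter 12)                   -- Just 3
    print (quarter 6)                    -- Nothing: half 6 = Just 3, half 3 = Nothing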
---
EDIT: I had previously defined: (Just x) >>= f = Just (f x) -- this is actually fmap for Maybe, which is also the lifting function I mentioned
[2] Forgive me, it is at this point that I realize you may actually know all about monads and were just wondering what's monadic about Maybe. But I was already too deep into Writing A Monad Tutorial to throw it away. Incidentally,
NB: I've never actually exposed my tentative understanding of monads in public before, so before trying to follow anything I've said here, wait on someone on surer ground to pipe up with approval (or criticism, which I would very much appreciate).
WARNING: This is the blind leading the blind. I was already well-upvoted for my previous post before realizing (thanks to you, tkahn) that my definition of bind was actually fmap (I think). Crying out for authority here; help us out HN.
---
You are not, strictly/pedantically speaking, because fmap is the single function that defines a Functor to begin with [1]:
class Functor f where
    fmap :: (a -> b) -> f a -> f b
and Monads are a special case of Functors; so, as the defining relation of a Functor, it doesn't a priori have anything to do with Monads.
You are right in seeing that fmap is lifting -- liftM, which is defined in terms of just bind and return for Monads, is (I believe... see warning above) identically fmap. fmap = map for Lists as seen as Functors, for example, which lifts a function (a -> b) to ([a] -> [b])... "into" the Functor/Monad, I suppose.
All Monads are, in theory if not in practice, Functors (there happens to be an instance of Functor defined for Maybe). fmap is not itself a functor but an arrow, in this case a function, that lifts another function into the context of the Functor.
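A quick sanity check of that claim, showing liftM and fmap agreeing on Maybe and on lists (take it with the same grain of salt as the rest of this subthread):

import Control.Monad (liftM)

main :: IO ()
main = do
    print (fmap  (*2) (Just 21))    -- Just 42
    print (liftM (*2) (Just 21))    -- Just 42: liftM and fmap agree for Maybe
    print (fmap  (*2) [1, 2, 3])    -- [2,4,6]: fmap is map for lists
    print (liftM (*2) [1, 2, 3])    -- [2,4,6]: and liftM agrees again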
(which evaluates to nil if there's no person, or if they have no occupation, etc), my colleague pointed out the other day that you can pass it a block:
This could be fixed in Java by two simple changes.
x = func(null); // func is ignored and x is null
proc(null); // proc is ignored and an NPE is thrown.
and so implicitly
y = a(b(c(d(null)))); // a b c d are ignored and y == null
This would mean you could treat null like Nothing, threading it through non-side-effecting functions and only testing the result. If you tried to use it with a side effect, it would halt.
The compiler would be free to omit these checks where x is provably not null. For example:
foo (x);
y = bar(x);
The assignment to y does not need a null check, because a null would have thrown NPE in foo.
Side effect of this design: code will never receive a null in parameters. Parameter values can be assumed to be definitely not-null by coder and compiler alike.
EDIT: one more change would be needed: prim = func(ref) is illegal unless ref can be proved not null. Also, add a no-op built-in, notnull(x), that does nothing except throw an NPE if x is null. That could be used as proof to the compiler.
Acceptable — barely — in dynamically typed languages, but utterly moronic for a strongly typed language where you can make references null-safe statically and at the type level.
Hiding nulls under the carpet is only a recipe for painfully hard to discover bugs in the long run.
This design is type level. Properties and primitives are not null, references used after a guard procedure are not null, other references are Maybe types. Functions are Maybe typed if they are operating on nullable references. Those types are not syntactically top level, but that's because I was considering an alteration, not a rewrite.
The first 11 pages really helped me understand the motivations for monads and it explains bind and return. It's written really clearly and doesn't assume much prior knowledge. I think it may be the original paper introducing monads to Haskell but I could be wrong.
(For some reason it's a .pdf when it should be .ps.gz)
I got to page 8, where he introduces an upside-down A as (I guess) a symbol meaning "the set of all", and I do not understand, even after three readings, anything that is said on page 9.
You guessed pretty much right: ∀ is usually read as for all. It and others he uses (like ⊢ for is derived from) are defined in http://en.wikipedia.org/wiki/List_of_logic_symbols . Hope that helps; as a beginner in Haskell and strong type systems, it looks like I’m going to have trouble with this too.
∀ is the standard symbol for the universal quantifier: some proposition is true for all elements in the universe being quantified over. This is stated explicitly in the paper: "The type given for runST above is implicitly universally quantified over both s and a."
We could thus expand the type given for runST as follows: runST is a function which, for every instance of the type a, and every instance of the type s, accepts a state transformer (with type ST s a) and returns a value of type a.
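For reference, this is roughly how it looks in GHC's Control.Monad.ST, where the fact that s is quantified only inside the argument is what keeps state from escaping; sumST is just a made-up example:

import Control.Monad.ST
import Data.STRef

-- runST :: (forall s. ST s a) -> a

sumST :: [Int] -> Int
sumST xs = runST $ do
    acc <- newSTRef 0                          -- mutable cell, private to this state thread
    mapM_ (\x -> modifySTRef acc (+ x)) xs
    readSTRef acc                              -- only the plain Int gets out

-- leak = runST (newSTRef 0)                   -- rejected by the type checker: the result
--                                             -- type (STRef s Integer) would mention s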
If you want to attain a better understanding of the logical concepts and notation used in the article, try getting hold of an introductory logic text. I recommend Enderton's 'A Mathematical Introduction to Logic'.
Both your reply and the previous one indicate that the blanks I filled in (the "for all" and "derived from" symbols) were indeed as I suspected, and yes, I understood the definition of runST to be as you describe it. But I still don't comprehend their discussion of how defining runST this way somehow provides safety, i.e. how it automatically detects that the example they give is disallowed.