The problem with Haskell monad tutorials is that they focus on one thing that monads can do--in your case state. Monads can do more than that, though; nondeterministic computation comes to mind, as does simple error handling through the Maybe monad, and you can do continuations through a monad. It's a very abstract concept, and because of that people have a lot of trouble with it.
Please correct me if I am wrong, but all the examples you cite are about hiding mutable state using monads.
1. Nondeterministic computation -> remembering the last values of m and n. I assume you are talking about pseudo-random generators here.
2. Error handling with Maybe -> remembering the last value or Nothing. Almost like passing along a single error-code variable.
3. Continuations -> remembering what state your computation variables and the PC were last in.
If I am seeing this wrong, then I will be happy to take a second look at monads :)
It is about state, not necessarily mutable state. Monads are rather abstract and not really worth all the attention they get. A monad is simply: a type constructor, a function that puts a value into the type constructor, and a function that pulls the value out of the type constructor and passes it on to the next function (often referred to as return and bind, respectively). Then, depending on how you define the type constructor and the functions to push and pull, you get different monads. Monads are no different from objects or design patterns: they are a code-organization technique that happens to fit functional programming well.
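In Haskell that interface is literally a typeclass; here's a simplified sketch (the real class in modern GHC also has Applicative as a superclass, with return defaulting to pure):

    class Monad m where
      -- put a plain value into the type constructor
      return :: a -> m a
      -- "bind": pull the value out and pass it to the next function
      (>>=) :: m a -> (a -> m b) -> m b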
There's no "remembering" the last value or Nothing; it immediately short-circuits, and not because Haskell is lazy, but because that's how monads are defined.
If you don't understand why "monads are programmable semi-colons", take that second look. See also the list monad; the key is not the "storing in a list" but the way it does "nondeterministic computations".
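To be concrete about the "programmable semicolon" slogan: every line of do-notation is really a call to bind, and the monad instance decides what happens between the lines. A sketch, where action1 and action2 stand in for arbitrary monadic actions:

    -- reads like imperative code with semicolons/newlines...
    example = do
      x <- action1
      y <- action2 x
      return (x + y)

    -- ...but desugars to explicit binds; each (>>=) is the "semicolon"
    -- whose meaning the monad instance defines
    example' =
      action1 >>= \x ->
      action2 x >>= \y ->
      return (x + y)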
No, I'm talking about the definition of Monads themselves. They don't work the way most people think they do, especially most of the critics. Monads have that function application step, and at every point that function can be applied zero, one, or many times. There is no "remembering" of a Nothing; the Monad definition is such that it calls the next application function zero times and immediately returns Nothing, which short-circuits the remainder of the monadic computation. That is, it doesn't "keep running" the way you might expect an imperative language would (or might, anyhow).
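You can see that in the Maybe instance itself (essentially the standard library definition, lightly simplified):

    instance Monad Maybe where
      return = Just
      -- on Nothing the next function is called zero times: the rest of
      -- the monadic computation never runs, Nothing propagates immediately
      Nothing >>= _ = Nothing
      -- on success it is called exactly once
      Just x  >>= f = f x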
This is why I also point out the list monad; fully understanding that is necessary to be sure you understand monads. I have seen numerous "monad" implementations in $YOUR_FAVORITE_LANGUAGE that can't do the list monad correctly: they can do zero or one application of a function, but not arbitrarily many, because the people implementing the Monad interface didn't actually understand it. (In fact when I see such an implementation the first thing I look for now is the list monad, and so far of the four or five I've seen only one has gotten it right.)
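For reference, the list instance is what those implementations fail to reproduce (again, essentially the standard definition): bind applies the function once per element, i.e. zero, one, or arbitrarily many times, and flattens the results:

    instance Monad [] where
      return x = [x]
      -- f runs once per element; an implementation hardwired to
      -- "zero or one" applications cannot express this
      xs >>= f = concatMap f xs

So [1,2,3] >>= \x -> [x, 10*x] gives [1,10,2,20,3,30], and [] >>= f gives [] without calling f at all. A port that can't reproduce this hasn't implemented the interface.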
They're no more about hiding mutable state than imperative languages are about hiding the purely-functional representation of state transfer semantics :)
All you've done is list alternative ways of implementing things, which is obviously possible in a range of paradigms (albeit with semantics that are usually harder to reason about in an imperative setting).
Well, it runs on a CPU with mutable state. So you could claim that everything is all about mutation of state. But that's not a very useful position to take, because reasoning about, reading and writing code at that low level is too concrete.
When I say nondeterministic computation I do not mean PRNGs. I mean "give me all possible solutions to the N-queens problem". I mean "solve this sudoku puzzle with no explicit loop". Constraint solvers. Some classes of search algorithm. Proof systems. That kind of nondeterministic computation.
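To make that concrete, here's a sketch of N-queens in the list monad (my example, nothing canonical): each <- draws from every possibility at once, and an empty list prunes a branch.

    -- one queen per column; every row choice is explored
    -- nondeterministically, conflicting branches die off as []
    queens :: Int -> [[Int]]
    queens n = go n
      where
        go 0 = [[]]                -- one way to place zero queens
        go k = do
          rest <- go (k - 1)       -- every valid (k-1)-queen board
          row  <- [1 .. n]         -- try every row for the next queen
          if safe row rest then return (row : rest) else []
        -- a new queen at `row` clashes with a queen d columns away at
        -- row r if they share a row or a diagonal
        safe row rest =
          and [ row /= r && abs (row - r) /= d | (d, r) <- zip [1 ..] rest ]

queens 8 enumerates all 92 solutions with no explicit loop; the list monad's bind does the iteration and the backtracking.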
You'll probably respond claiming all this is state. Sure, fine, whatever makes you happy. Everything in programming is state. Everything is also lambda applications. Everything is Horn clauses. You can think about these in some very low-level dungeon, but that doesn't mean you ought to.