I want to elaborate on "values" since I've been thinking about them all morning. They're just a simplification of all of programming.
Everyone remembers the first time they learned how messy IEEE floating-point math actually is. You were probably trying to treat Floats like the reals, ℝ, and then the abstraction leaked all over you. The same thing happens with int32/int64 types when they roll over on you or (worse) throw exceptions. There's a mismatch between your mental model of these objects and their actual behavior. Anyone who's ever used BigInts knows the sigh of relief you get when you make the opposite tradeoff: "these might be arbitrarily slow, but I also know that my computations will work exactly how I expect, pretty much forever".
(Yes, you can flat out overflow your memory if you want, but even if it has to swap to your disk for 30 years, the value is intact.)
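To make that concrete, here's a minimal Haskell sketch (the literals are arbitrary, just chosen to trip the usual leaks):

```haskell
import Data.Int (Int32)

main :: IO ()
main = do
  -- Floats are not the reals: the textbook rounding leak.
  print (0.1 + 0.2 == (0.3 :: Double))  -- False
  -- Fixed-width ints silently roll over at the edge.
  print (maxBound + 1 :: Int32)         -- -2147483648
  -- A bignum Integer trades speed for exactness: the value stays intact.
  print (2 ^ 100 :: Integer)            -- 1267650600228229401496703205376
```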
That sigh of relief is the thing we should be buying with our massive CPUs.
That's also exactly the premise of immutable values. They get computed and then they stay computed, letting you keep an extremely low-impact mental model of their behavior. They work platonically: once you compute them, they just exist to be inspected.
Better still, the values you compute are often finite or efficiently infinite (lazy) structures, so you rarely have to worry about arbitrary slowdowns at all. It's relatively easy to play within the "easily computable" playground.
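For instance, here's the standard Haskell idiom for an "efficiently infinite" value: the entire Fibonacci sequence is a single immutable structure, and laziness means you only ever pay for the prefix you actually inspect.

```haskell
-- An infinite value, defined once; elements are computed on demand.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]
```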
Nobody, not even the purest Haskeller, thinks that everything ought to have value semantics. Instead, values form a significantly simpler base language, which leaves less room for the semantics of your language to be complicated. They only get complicated when you legitimately want them to: if you want mutable skiplists, you can have them by creating an environment where values have mutable properties. And that environment, ST, is itself a value that you can pass around. (That's half the point of Monads.)
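A small sketch of that escape hatch using the standard ST machinery (`sumTo` is just an illustrative name, nothing canonical): the mutation is completely local, and from the outside the function is value-in, value-out.

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Mutation happens inside ST, but the result escapes as a plain value.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0                             -- a genuinely mutable cell
  mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
  readSTRef acc                                 -- read it back out as a value

main :: IO ()
main = print (sumTo 100)  -- 5050
```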