This is the first one-liner I've read that actually explains something useful I can do with higher-order type inference. Preventing errors at compile time doesn't count; doing something is, by definition, a run-time activity.
As an easy-to-follow (though questionably idiomatic) example, consider the typeclass Default:
class Default a where
  def :: a
and an unwise pair of instances:
instance Default Int where
  def = 7

instance Default String where
  def = "foo"
This means I can say:
3 + def + length ("c" ++ def)
and get back 14. Note that which "def" is used depends on the type expected where it appears.
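If you want to try it, the whole thing fits in one file; the only wrinkle is that GHC wants the FlexibleInstances extension for the String instance:

{-# LANGUAGE FlexibleInstances #-}

class Default a where
  def :: a

instance Default Int where
  def = 7

instance Default String where
  def = "foo"

-- def resolves to the Int instance on the left and to the String
-- instance inside the parentheses, purely from the expected types.
main :: IO ()
main = print (3 + def + length ("c" ++ def))  -- prints 14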
---
It's not limited to values, either. Consider the "return" function in the Monad typeclass:
return :: Monad m => a -> m a
which is a function that takes something and returns that something wrapped in a monad. Which monad? Whatever is expected at the call site.
[1, 2, 3] ++ return 4
gives us [1, 2, 3, 4] because list is a monad and return for lists gives a single element list, whereas
putStrLn "foo" >> return 4
gives us an IO action that, when executed, prints "foo" and yields a 4.
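Both run as-is with nothing but the Prelude; stitched into a tiny program:

main :: IO ()
main = do
  print ([1, 2, 3] ++ return 4)    -- list monad: prints [1,2,3,4]
  n <- putStrLn "foo" >> return 4  -- IO monad: prints "foo", binds n to 4
  print (n :: Int)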
---
A more complex example is the variadic function printf, with the type
printf :: PrintfType r => String -> r
PrintfType can be instantiated at String or IO (), giving you something like C's sprintf or printf depending on the call site (which is itself cool), but it can also be instantiated at a function that takes a PrintfArg and returns some new PrintfType, in an interesting intersection with currying.
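A rough sketch of both result types, using printf from Text.Printf in base (the format strings and arguments here are just made-up examples):

import Text.Printf (printf)

main :: IO ()
main = do
  -- result type IO (): acts like C's printf, writing to stdout
  printf "%s is %d years old\n" "Alice" (30 :: Int)
  -- result type String: acts like sprintf, building a value instead
  let s = printf "%s is %d years old" "Bob" (25 :: Int) :: String
  putStrLn s

The extra arguments are swallowed one at a time by the "function" instance of PrintfType, which is where the currying comes in.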