I am not sure Future is designed for doing small bits of CPU-bound computation concurrently. This kind of performance benchmark, IMO, doesn't represent real-world applications, where people either parallelize computationally heavy operations or hide IO latencies.
I would not write Future(n+1) in any real application. Future.successful is the right way to construct a future with no real work in it.
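To spell that out, a minimal sketch (the value n is just a placeholder):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val n = 41 // placeholder value

// Future.apply submits its body to the implicit ExecutionContext,
// paying scheduling overhead even for a trivial expression.
val scheduled: Future[Int] = Future(n + 1)

// Future.successful wraps an already-computed value directly,
// with no ExecutionContext round trip.
val immediate: Future[Int] = Future.successful(n + 1)
```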
What better way is there to measure two Liskov-substitutable[1] approaches?
Factoring out extraneous computation in order to measure the performance difference between two abstractions allows an empirical analysis of the implementations themselves.
> This kind of performance benchmark, IMO, doesn't represent real-world applications, where people either parallelize computationally heavy operations or hide IO latencies.
That is not what this benchmark measures. It measures the performance difference between scalaz.Task and scala.concurrent.Future. The conclusions are drawn from the author's measurements of these two substitutable monads.
What those containers eventually schedule is moot in this specific context.
You (and the article) are right: scalaz's Task can minimize context-switching overhead. Trampolining can further reduce it.
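For what it's worth, a minimal sketch of that difference (assuming scalaz 7.2's scalaz.concurrent.Task, where unsafePerformSync forces the result; in 7.1 it's run):

```scala
import scalaz.concurrent.Task

// Task.now wraps an existing value; Task.delay suspends a thunk.
// Neither submits anything to a thread pool, and flatMap is
// trampolined, so the whole chain stays on the calling thread.
val task: Task[Int] =
  Task.now(41).flatMap(n => Task.delay(n + 1))

// Forces the computation synchronously (scalaz 7.2 API).
val result: Int = task.unsafePerformSync
```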
GP's right: In most usage scenarios, this gain is irrelevant.
I had the same gut reaction as the GP but refrained from commenting...until now. What bothered me most about the post is the final claim that a 3-orders-of-magnitude speedup is possible, reducing a computation's runtime from weeks to hours. That claim only holds up in a very contrived example where scheduling time dominates.
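To be concrete about what "contrived" means here, a hypothetical sketch (not the author's actual benchmark) of that kind of micro-benchmark, where scheduling overhead swamps the arithmetic:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Chain 10,000 trivial steps; each flatMap re-submits to the pool,
// so scheduling overhead dominates the negligible n + 1 work.
val chained: Future[Int] =
  (1 to 10000).foldLeft(Future(0)) { (acc, _) =>
    acc.flatMap(n => Future(n + 1))
  }

val result: Int = Await.result(chained, 1.minute)
```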