"Like many techniques in machine learning, the simplest strategy is hard to beat." is a thoroughly ridiculous statement.
It should instead say "Like many techniques in machine learning, the simplest strategy is easiest to implement", as the title of the post (20 lines) makes clear.
For many problems in machine learning, k-nearest-neighbors with a large dataset is very hard to beat in terms of error rate. Of course, the time to run a query is beyond atrocious, so other models are favored even when kNN has a lower error rate.
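To make the query-cost point concrete, here is a minimal brute-force kNN sketch in numpy (illustrative only; the function name is mine). Fitting is free, since there is no training step, but every single prediction scans the whole training set, so query time is O(n*d) per point:

    import numpy as np

    def knn_predict(X_train, y_train, x, k=5):
        # Brute force: compute the distance from x to every training point.
        # This full scan *is* the query -- there is nothing to precompute.
        dists = np.linalg.norm(X_train - x, axis=1)
        # Indices of the k nearest neighbors.
        nearest = np.argsort(dists)[:k]
        # Majority vote among their labels (assumes integer class labels).
        return np.bincount(y_train[nearest]).argmax()

Tree- or hash-based indexes (k-d trees, LSH) trade some of that query time away, at the cost of no longer being the "simplest strategy".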
According to [1], k-NN is pretty far from being the top general classifier. Admittedly, all the data sets are small, with no more than 130,000 instances. When does it start becoming "very hard to beat", and what are you basing that on?
The vast majority of the data sets are very small (on the order of 100 to 1000 examples!). In fact, in the paper they discarded some of the larger UCI datasets.
It's not surprising that they found random forests perform better, even though the conventional wisdom is that boosting outperforms them (it's much harder to overfit with forests).
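If you want to see the overfitting point for yourself, here's a quick scikit-learn sketch (my own toy setup, not taken from the paper) on a small, noisy dataset roughly the size of the ones they used:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import (GradientBoostingClassifier,
                                  RandomForestClassifier)
    from sklearn.model_selection import train_test_split

    # Small, label-noisy dataset, in the size range the paper used.
    X, y = make_classification(n_samples=500, n_features=20,
                               flip_y=0.1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for clf in (RandomForestClassifier(n_estimators=500, random_state=0),
                GradientBoostingClassifier(n_estimators=500, random_state=0)):
        clf.fit(X_tr, y_tr)
        # A large train/test gap on the boosted model is exactly the
        # overfitting that the forest tends to avoid.
        print(type(clf).__name__,
              "train:", round(clf.score(X_tr, y_tr), 3),
              "test:", round(clf.score(X_te, y_te), 3))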
Not just easiest to implement -- literally "hard to beat".
Not "impossible to beat" mind you -- merely hard. Because the simplest strategies work so well in so many cases, that you need to go quite subtle and implement harder algorithms (doesn't matter if you just take them ready off of a lib for the argument) to see an improvement. 80-20, et al.