I'm not trying to be negative but this site literally does not do what it claims: "automatically and continuously track technical metrics". It doesn't.
I think you are correct: this site doesn't really do what it claims. I'm not sure how it's different from just searching arXiv.
I am pleased to see paperswithcode.com as a collection point for research. But the fact that it casts all of AI as one big set of contests on datasets, where "progress" is equated with small increments of classification accuracy, is, well, a disappointing view of the field.
Papers with Code does, however: https://paperswithcode.com/sota/image-classification-on-imag...
Papers with Code literally tracks progress on common ML tasks programmatically. Please correct me if there's something I missed!