I don't really understand the article. My understanding was that the mistake in the original DK paper was that the error bounds differ depending on the test score. A test score of 0 or 100 leaves room for an error of up to 100 points (in one direction only), whereas a score of 50 leaves at most 50 in either direction. So if you take a group of people who score 0-25 points, even if their self-assessment is completely random you'd still see an apparent bias toward overestimating their score, because people who would give themselves an even lower score if they could are unable to.
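To make that bounded-scale point concrete, here's a toy simulation (my own sketch, not the original DK data): true scores and self-assessments are drawn independently and uniformly on 0-100, so self-assessment carries zero information about skill, yet the lowest-scoring group still appears to overestimate and the highest to underestimate.

```python
import random

# Toy model (hypothetical, not the DK dataset): actual scores and
# self-assessments are independent uniform draws on [0, 100].
random.seed(0)
N = 100_000
actual = [random.uniform(0, 100) for _ in range(N)]
guess = [random.uniform(0, 100) for _ in range(N)]  # unrelated to skill

# Mean self-assessment error (guess - actual) in the bottom and top groups.
bottom = [g - a for a, g in zip(actual, guess) if a <= 25]
top = [g - a for a, g in zip(actual, guess) if a >= 75]

bias_bottom = sum(bottom) / len(bottom)  # positive: looks like overconfidence
bias_top = sum(top) / len(top)           # negative: looks like underconfidence
print(round(bias_bottom, 1), round(bias_top, 1))
```

With pure noise the bottom group's mean guess is ~50 against a mean actual score of ~12.5, so a "DK-style" gap of roughly +37.5 appears out of nothing but the bounded scale.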
The charts make it clear that people's self-assessment was (roughly) independent of their skill level. It wasn't obvious to me beforehand that students' self-assessments would be mostly random / unrelated to skill level, so for me that's a non-obvious result.
If people wander off into the verbiage of any article, into the parts where the chatter isn't supported by data, then sure, they'll tend to come away with speculation.
I don't really understand what you're saying. Are you saying the charts don't actually make it clear, or that they make it clear self-assessment is independent of skill but not necessarily uniform?
The charts make it clear that self-assessment was roughly unrelated to ability. That's not an artifact of autocorrelation; it appears to be a genuine experimental result.