I can't find a source now, but I recall reading something like:
The first measurement of the speed of light was too low. Subsequent experimenters assumed that when their measurement differed too much from the previous one, their experiment must be wrong. So they fiddled with it until it wasn't too far off the previous value, and then published.
I think this can be healthy. Being wrong is not necessarily a bad thing as long as you don't stop there. Science is an iterative process: questioning your methodology -- especially with unexpected results -- and then going back to tweak things or double-check is a healthy way to approach a problem. Sure, it takes longer to home in on an accurate result, but that's okay when it means we get a result with more confidence. Speed is nice, but it's not the goal, and a certain amount of skepticism (call it confirmation bias if you want) can serve the real goal if it is used as a tool to spur further investigation.
That chart doesn't really back up the story. There's an oil drop experiment shortly afterwards but presumably that just made the same mistake. Then there's one paper after that. And then they got it right.
Not the OP, but I am guessing something along the lines of the following.
If you assume that it's your calculation errors that lead to the observed discrepancies (in, e.g., the speed of light), you won't be able to measure the speed of light until you can get those errors below the observation threshold.
If you don't even assume that you can get those errors below the threshold, you won't try, and you will live happily in the belief that the speed of light is, well, infinite.