In my humble experience using OCR programs, there is always a considerable amount of inaccuracy. No matter what font or font size I use, I always end up either proofreading the scanned document or just typing it by hand. The letter "O" is almost always read by the OCR as a zero, or a zero is read as an "O". It can be pretty frustrating.
I used the ABBYY OCR engine to digitize printed documents (no idea why they couldn't just keep around the file used to print them), and it was quite accurate. At worst, one out of a couple hundred had enough errors to hurt readability.
Similar experience here, from building a mobile app that did OCR + translation. As long as the source image was in decent shape, ABBYY did very well. It's also incredibly expensive.
I wish there were a library that let you supply expected data (e.g. we expect to see zeros 20% more often than the letter O), so the interpreter could weigh that prior when deciding how likely each candidate character is. As it stands, with most libraries I'm aware of, you just get the raw output and have to run your own tests to decide whether a character should be an O or a zero.
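For what it's worth, if your engine exposes per-character candidate confidences (Tesseract's LSTM choice iterator can, for example), you can bolt this on yourself. Here's a minimal sketch of the idea; the confidence values and the prior frequencies are made-up placeholders, not numbers from any real engine:

```python
# Hypothetical re-scoring of ambiguous OCR candidates with a prior over
# expected character frequencies. The per-character confidences below are
# placeholders; a real OCR engine would supply them per glyph.

# Prior belief about how often each character shows up in this corpus,
# e.g. zeros appear somewhat more often than the letter O.
PRIOR = {"0": 0.55, "O": 0.45}

def rescore(candidates: dict[str, float]) -> str:
    """Pick the character that maximizes OCR confidence x prior (Bayes-style)."""
    scores = {ch: conf * PRIOR.get(ch, 1.0) for ch, conf in candidates.items()}
    total = sum(scores.values())
    posterior = {ch: s / total for ch, s in scores.items()}
    return max(posterior, key=posterior.get)

# The engine is 50/50 between "O" and "0"; the prior breaks the tie.
print(rescore({"O": 0.50, "0": 0.50}))  # -> "0"
```

It's not a replacement for a library doing this natively, since a per-column or per-field prior (serial numbers vs. prose) would matter far more than a single global frequency, but it shows how little machinery the "expected data" idea actually needs.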