No, that's just wrong. It's not just failures. Imagine if I verified they were all zero: then I wouldn't have to do a full parse. Or if I verified they had only a few digits, I would avoid bignum parsing. I think this proves what I'm saying - this simply isn't true in general.
It's not necessarily wrong, though what you say makes sense.
Any code that you write, you can move into the parser module. In your example, you have a function that checks the strings are all zero, and you wouldn't call the parser if it returned true. But you can simply declare this code to be part of the parser and just not call the rest of the parser. The difference is in the API: in the one case, you return false and in the other you return a parse error.
Now, you may say that, knowing this information, you're going to parse into ints instead of BigNums. But the parser can do this too.
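To make that concrete, here is a minimal TypeScript sketch (the names, the regexes, and the 15-digit cutoff are invented for illustration): the same all-zero check can sit outside the parser and return a boolean, or inside it, where the result type records what was learned and small values get a cheap representation instead of a bignum.

```typescript
// Validation style: a separate check, returning a boolean.
function allZero(strings: string[]): boolean {
  return strings.every((s) => /^0+$/.test(s));
}

// Parsing style: the same check is part of the parser, and the result type
// says what happened instead of the caller tracking it separately.
type Parsed =
  | { kind: "allZero" }
  | { kind: "amounts"; values: (number | bigint)[] }
  | { kind: "error"; badInput: string };

function parseAmounts(strings: string[]): Parsed {
  if (strings.every((s) => /^0+$/.test(s))) {
    // The same check as allZero, but now the rest of the parser simply never runs.
    return { kind: "allZero" };
  }
  const values: (number | bigint)[] = [];
  for (const s of strings) {
    if (!/^[0-9]+$/.test(s)) return { kind: "error", badInput: s };
    // Cheap representation when it fits exactly in a double, bignum otherwise.
    values.push(s.length <= 15 ? Number(s) : BigInt(s));
  }
  return { kind: "amounts", values };
}
```

The only real difference from the validator version is the API: the caller pattern-matches on the result instead of re-checking the strings.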
You might also say that you happen to know, due to some context, that all the numbers have 10 digits or fewer, and that therefore you can do better than the parser. But if you make the context and the strings the input to the parser, then you bring it level again.
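A hedged sketch of that version, again in TypeScript with invented names: the context (here just a maximum digit count) becomes an explicit parameter of the parser, so the representation choice happens inside the parser instead of in the caller.

```typescript
interface NumberContext {
  maxDigits: number; // e.g. 10: "all inputs have at most 10 digits"
}

function parseWithContext(strings: string[], ctx: NumberContext): (number | bigint)[] {
  // 15 decimal digits always fit exactly in a double.
  const smallEnough = ctx.maxDigits <= 15;
  return strings.map((s) => {
    if (!/^[0-9]+$/.test(s) || s.length > ctx.maxDigits) {
      throw new Error(`parse error: ${JSON.stringify(s)}`);
    }
    return smallEnough ? Number(s) : BigInt(s);
  });
}
```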
Or you might say that your application has some context that gives it an edge, but the parser is a general-purpose library that does not understand that context. In that case, your application does indeed have an edge. But that applies to any general-purpose library and is not a point about the merits of parsing versus validation.
What's maybe more interesting is the case where you only need some of the parsed results, or where the parsed results wouldn't fit into memory. But for that you just need an incremental or streaming parser.
This is actually pretty common in real-world parsers. For example, JavaScript parsers typically don't fully parse function bodies in their initial pass through the source.
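A streaming parser can be sketched as a generator (TypeScript again, names invented): each value is parsed only when the caller asks for it, so a caller that stops early never pays for the rest of the input, and nothing has to be materialised in memory up front.

```typescript
function* parseStream(lines: Iterable<string>): Generator<number | bigint> {
  for (const line of lines) {
    const s = line.trim();
    if (s === "") continue;
    if (!/^[0-9]+$/.test(s)) throw new Error(`parse error: ${JSON.stringify(s)}`);
    yield s.length <= 15 ? Number(s) : BigInt(s);
  }
}

// A caller that only needs the first value never parses the rest.
for (const value of parseStream(["12", "345", "99999999999999999999"])) {
  console.log(value); // 12
  break;
}
```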
> In your example, you have a function that checks the strings are all zero, and you wouldn't call the parser if it returned true. But you can simply declare this code to be part of the parser and just not call the rest of the parser.
Again -- like with the previous argument about "error" vs. "success", this is a red herring built around the fact that my example was so trivial and easy to describe this way. What if the logic was supposed to be "if the digits are all zero then don't store the numbers in the database, otherwise subtract them from my account balance"? You avoid a database hit if they're all zero, so are you going to say debiting your account is now "parsing" too?