Getting fewer errors does not mean better results -- sorry, I don't really want to continue this pattern of answering vastly general assertions backed by nothing with counter-assertions backed by nothing. My bad, since I started it. The real objection I have, which takes longer to articulate, is that this logic is way too simplistic and broad to be of much use.
I'll try to articulate it a bit here, though it probably needs expanding into more than a comment to have any chance at persuasion. When you start deriving logic chains that conclude something is "good", use that to further conclude other things are "bad", and base your decision making on those labels, you've made a mistake. Engineering should be focused on trade-offs, not some binary and local good/bad. Engineering should be focused on measurements, not platonic qualities like "good". The world isn't so coarsely binary. So what if your results are better? Are they in service of something good down the line, or are they just making you better at [insert something you find morally objectionable]? And given we have limited resources, is the size of the improvement worth the effort versus spending it on something else, or even worth it relative to a measured, good-enough steady state? Does the local change meaningfully impact the overall system at all? (You may be familiar with the semi-famous (around HN) article "The optimal amount of fraud is non-zero"; if not, I recommend it.)

Lastly, you need to actually look at what kinds of errors there are, how they surface, when they surface (e.g. during software development, during design prior to any code, during a test phase, or only once discovered by the end user), and what the consequences are when they do surface. You need to analyze whether something is actually less error-prone or just good at hiding its errors and carrying on. You need to look at whether the errors are loud or subtle.
All this high-level chatter is of course beside the point in this specific concrete context. As someone already pointed out, it's exceedingly unlikely for a developer to confuse authN and authZ in practice for any significant duration. You can't just import and write code for authN when you meant authZ and expect things to "work" while errors accumulate (silently or not), because the two terms express different concepts, different protocols, and different APIs. The code will simply not work, immediately. In a sense, this makes it quite error-prone if you typo which module you're importing to do your authN/authZ work. That isn't necessarily a bad thing, because you'll see the error and fix it before it ever has a chance of impacting anything important. Would it have been better not to make the typo in the first place? Sure, but not meaningfully so. Focus on more important problems.
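To make that concrete, here's a contrived sketch (the module and function names are entirely hypothetical) of why this kind of mix-up tends to fail loudly rather than silently: the two concerns expose different APIs, so the wrong import simply doesn't have the call you need.

```python
# Hypothetical modules: authn handles credentials, authz handles permissions.
from myapp import authn  # typo: meant "from myapp import authz"

def delete_post(user, post):
    # authn exposes things like verify_password(), not permission checks,
    # so this raises AttributeError on the first run or in the first test --
    # a loud, immediate failure rather than a quietly accumulating one.
    if not authn.can_delete(user, post):
        raise PermissionError("not allowed")
    post.delete()
```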
Meanwhile, to take a different concrete case: if you're writing crypto code and accidentally use ECB block cipher mode instead of CBC (a reasonable typo to be concerned about, even), everything will continue to appear to work -- but you're already FUBAR. Of course, there are many such subtle errors in cryptography; you can even choose the right cipher mode and still make huge mistakes that aren't immediately obvious, and not because of some trivial typo either. (Another semi-famous article you might want to check out, if you haven't, is "If You're Typing The Letters A-E-S Into Your Code, You're Doing It Wrong".) It's interesting that, in the face of how error-prone implementing crypto code can be, the industry's broadest recommendation is simply "Don't", rather than trying to somehow make it less error-prone.
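To illustrate the ECB-vs-CBC point, here's a minimal sketch using the pyca/cryptography library (the key, IV, and message are just for demonstration). Both variants encrypt and decrypt without complaint, so nothing visibly breaks; but ECB maps identical plaintext blocks to identical ciphertext blocks, silently leaking structure.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
iv = os.urandom(16)
plaintext = b"ATTACK AT DAWN!!" * 2  # two identical 16-byte blocks

def encrypt(mode):
    enc = Cipher(algorithms.AES(key), mode).encryptor()
    return enc.update(plaintext) + enc.finalize()

ecb_ct = encrypt(modes.ECB())    # the "typo"
cbc_ct = encrypt(modes.CBC(iv))  # what was intended

# ECB: the two ciphertext blocks are identical -- the repetition leaks.
print(ecb_ct[:16] == ecb_ct[16:32])  # True
# CBC: chaining hides the repetition.
print(cbc_ct[:16] == cbc_ct[16:32])  # False
```

Nothing in the ECB version throws or misbehaves in any test that only checks round-tripping; the only way to notice is to know what to look for, which is exactly the loud-vs-subtle distinction above.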