Well, I'd object to the idea that even a careful human team in production can hit 100%. In domains where the human "gold standard" is actually falsifiable (that is, not tautologically correct, as in some judgment-call situations), it always ends up being a numbers game, and eventually the humans can no longer provide 100% either.
It's kind of frustrating when you're trying to sell ML-based solutions to a skeptic. I've found that executives will often try to poke holes in the predictions, especially if the ML solution is risky or potentially threatening to them.
It helps a lot to frame things with a known human error rate and cost, as the Data Scientist in the story does, because then the conversation becomes win-win (how do we optimize for best outcomes) rather than unwinnable (why isn't your fancy ML algorithm right about this example X which I can plainly see myself is wrong??).
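To make that framing concrete, a back-of-the-envelope expected-cost comparison is often all it takes. The sketch below is just an illustration with made-up volumes, error rates, and costs (every number in it is an assumption, not data from the story); the point is the shape of the argument: neither pipeline is at 0% error, so the question is which total cost you prefer, not whether the model ever gets example X wrong.

    # Hypothetical numbers for illustration only -- substitute your own
    # measured error rates and costs.
    cases_per_month = 10_000

    human_error_rate = 0.04       # assumed measured human error rate
    model_error_rate = 0.06       # assumed measured model error rate

    cost_per_error = 250.0        # assumed downstream cost of one mistake ($)
    cost_per_human_review = 5.00  # assumed marginal labour cost per case ($)
    cost_per_model_review = 0.05  # assumed marginal compute cost per case ($)

    human_cost = cases_per_month * (human_error_rate * cost_per_error + cost_per_human_review)
    model_cost = cases_per_month * (model_error_rate * cost_per_error + cost_per_model_review)

    print(f"Human pipeline: ${human_cost:,.0f}/month")
    print(f"Model pipeline: ${model_cost:,.0f}/month")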
> I've found that executives will often try to poke holes in the predictions, especially if the ML solution is risky or potentially threatening to them
In particular, automation that reduces headcount reduces an executive's justifiable budget, and therefore their power within the firm, their salary and benefits, and their external status. As an example of the latter, Harvard Business School asks how many people you currently manage when you apply for an MBA.
This creates a strong incentive to block any attempt at automation or increased efficiency, especially when the inefficiency is not reflected in the KPIs used to gauge the executive's performance. Customer satisfaction and error rates are rarely measured well, nor refreshed often enough to serve as such a KPI. Blocking is easiest to do by seeding mistrust in the person attempting to build the automation, and in the automation itself.
Part of being an effective data scientist/big data engineer/whatever the buzzword du jour is figuring out which KPIs the executive wants to maximise and selling them on those instead. The good old "work on making your boss look good".