People keep offering this hypothetical 10% acceptable false positive rate, but the article says it’s more like 35%. Imagine if your workplace implemented AI and it created 35% more unfruitful work for you. It might not seem like an “unqualified good” as it’s been referred to elsewhere.
It depends on whether you do stuff that matters or not. If your job is meaningless, then an error detector with a 35% false positive rate is just extra work. On the other hand, if the quality of your output matters, 35% seems like an incredibly small price to pay, provided it also catches real issues.
Lots to unpack here, but I'll just say I think it would probably matter to a lot of people if they were forced to use something that increased their pointless work by 35%, regardless of whether their work matters to you or not.