Hacker News

Your scientific take is useful in the case where selection bias is unavoidable and needs to be corrected for.

This case is not like that; if the insurance company wants to dispute the 90% false denial rate, it would be trivial for them to take a random sample of _all_ cases, put those through the appeal process, and publish the resulting number, free of selection bias.

As long as that doesn't happen, the most logical conclusion for us outside observers is: the number is probably not so much lower than 90% that it makes a difference.






The insurance company may well have already done that; this claim is being put forward by someone who is suing them and looking for reasons that the AI bot is bad. The article is silent on what the company's response to the accusation was and, realistically, we'd expect the appealed denials to have a very high rate of error whether the denials were made by bots or humans. Few people indeed are going to waste time arguing a hopeless case against an insurance company - this is classic selection bias.
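The selection-bias point can be made concrete with a toy simulation (all rates below are made-up assumptions for illustration, not figures from the article): if wrongly denied claimants appeal far more often than correctly denied ones, the overturn rate among appeals will dwarf the underlying false-denial rate.

```python
import random

random.seed(0)

# Toy simulation of selection bias in appeal statistics.
# All parameters are illustrative assumptions.
N = 100_000                     # number of denied claims
TRUE_FALSE_DENIAL_RATE = 0.20   # assume 20% of denials are actually wrong
P_APPEAL_IF_WRONG = 0.50        # wrongly denied people often appeal
P_APPEAL_IF_RIGHT = 0.02        # correctly denied people rarely bother

appealed = overturned = 0
for _ in range(N):
    wrongly_denied = random.random() < TRUE_FALSE_DENIAL_RATE
    p_appeal = P_APPEAL_IF_WRONG if wrongly_denied else P_APPEAL_IF_RIGHT
    if random.random() < p_appeal:
        appealed += 1
        if wrongly_denied:      # assume the appeal process decides correctly
            overturned += 1

# Analytically: 0.2*0.5 / (0.2*0.5 + 0.8*0.02) ~ 86% of appeals overturned,
# even though only 20% of all denials were wrong.
print(f"overturn rate among appeals: {overturned / appealed:.0%}")
```

The observed rate among appeals is roughly four times the true false-denial rate, purely because of who chooses to appeal.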

What do you think the claim approval rate is? Less than 10%?

It stands to reason that the overwhelming majority of cases where the claim was approved were approved correctly. Unless the approval rate is well under 15%, it's impossible to reach the claimed "90% error rate" across all decisions.
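The arithmetic behind this can be sketched in a few lines (assuming, as the comment does, that approved claims are essentially always approved correctly, so errors can only come from denials):

```python
# Upper bound on the overall error rate across all claim decisions,
# assuming every approval is correct, so only denials can be errors.

def max_error_rate(approval_rate: float, false_denial_rate: float = 1.0) -> float:
    """approval_rate: fraction of claims approved (assumed all correct).
    false_denial_rate: fraction of denials that are wrong (1.0 = worst case).
    """
    denial_rate = 1.0 - approval_rate
    return denial_rate * false_denial_rate

# Even if literally every denial were wrong, a 15% approval rate caps the
# overall error rate at 85% -- below the claimed 90%.
print(round(max_error_rate(0.15), 2))  # 0.85
print(round(max_error_rate(0.10), 2))  # 0.9
```

So a 90% error rate over all decisions would require an approval rate of at most 10%, with every single denial being wrong.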


It's clear from the quoted paragraph that by "error rate" they actually meant "false denial rate". That's also the term I used in the comment you are replying to.

Did you comment because you take issue with misuse of the term error rate, or because you think that correct approvals make up for incorrect denials, and that therefore overall error rate is a useful metric?



