This story appears to be a sloppy, confusing summary of a Business Insider piece by Albert Cahn, the man the article mentions. The fact that SCNR makes no reference to the original piece is telling:

Cahn is the founder and executive director of the Surveillance Technology Oversight Project, or STOP, a New York-based civil-rights and privacy group, so he certainly has a dog in this fight, but he also has a horror story to back it up.

The source article on Business Insider contains important additional details:
> Travelers admitted that it screwed up. It never conceded that its AI was wrong to tag me. But it revealed the reason I couldn't find my cancellation notice: The company never sent it.
> Travelers may have invested huge sums in neural networks and drones, but it apparently never updated its billing software to reliably handle the basics. Without a nonrenewal notice, it couldn't legally cancel coverage. Bad cutting edge tech screwed me over; bad basic software bailed me out.
So basically, this comes down to a dispute over how much moss it takes to make a roof structurally unsafe. But it sounds like the process goes straight from "AI detects a problem" to "policy gets cancelled," with no human review in the middle. Perhaps a less error-prone approach would be for the AI's flag to trigger a human inspection of the home before any cancellation?
https://www.businessinsider.com/homeowners-insurance-nightma...