Have you considered that useful predictive models of reality may simply be incomprehensible to swollen savannah monkey brains? It's not magic. A model simply cannot both be explainable to a human and be worth anything.
By the Good Regulator Theorem, if models of reality were incapable of being made comprehensible, we would have gone extinct. What we need to figure out, then, is how to externalize what we know unconsciously.
Can a swollen savannah monkey brain drive a car safely on public roads, using nothing but its attached eyes and ears? Yes (most of the time). Can AI do it? Not yet...
Sure you can drive, but can you explain in detail HOW you do it? Probably not, and that's why a machine cannot do it yet (until it learns by itself, as we all did).
Not an AI in a publicly available release that can be obtained by Joe consumer.
Supposedly the development versions that we, the general public, can't get yet are getting pretty good and would meet your "most of the time" bar on public roads.
But we don't really know how good AI is at any given moment, because those leading in a given AI area sometimes have reasons not to tell us yet. Or because we don't believe them when they do tell us.