
I'm not an expert in AI, but as you can imagine I have to interact with a lot of people who are. What I do know is what it means to be able to do end-to-end verification, and it seems fairly antithetical to the basis of machine learning, which has an open-ended failure domain. If you could write an AI as a total program, it would cease to be probabilistic. It would become propositional, but it would also seem to cease to be writable and computationally tractable, as it would need to encode all possible states of all relevant features of all relevant domains, everything that could interact with each of those things, and what the effects would be.

Though I'm happy to be proven wrong here, as it would make me a lot less nervous about the future if there were a way to legitimately assure that the artificial driver in an AV won't fail in emergent ways, or at all.




I work on the formal verification of AI systems as well, though I focus on model-based reasoning systems rather than model-free ML systems. What you've described is exactly what terrifies me most about ML in safety-critical contexts: it can fail in a very large number of ways, and predicting those failures ahead of time is generally intractable.

I like to think of it in terms of quantifiers: for most systems it's fine to say that there exists a set of representative tests that the system passes. For critical systems, you really want "for all" in there instead.
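To put the quantifier point in symbols, here's a tiny sketch in Lean 4 (the names f, spec, PassesSomeTests, and Verified are all made up for illustration, not from any real development):

    variable {Input Output : Type} (f : Input → Output) (spec : Input → Output → Prop)

    -- Ordinary testing: there exists some finite test set the system passes.
    def PassesSomeTests : Prop :=
      ∃ tests : List Input, ∀ t ∈ tests, spec t (f t)

    -- What you want for a critical system: the spec holds on every input.
    def Verified : Prop :=
      ∀ x : Input, spec x (f x)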

Fortunately, model-based systems are much easier to reason about, even if they have difficulty modeling certain cases. For a model-free system, I think the starting point would be to precisely state a correctness criterion that could be used as a decision procedure on the output of the model; the combined system would then be "verified" to a certain degree. Unfortunately you can't guarantee that it would always be able to yield an answer (the model might keep producing bad outputs that get rejected). And the more fundamental problem: if you can express the correctness property in enough detail to verify against it, you can probably write an easier-to-verify implementation that doesn't rely on ML in the first place.
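To make that combined-system idea concrete, here's a minimal sketch in Python. Everything in it (checked_inference, is_acceptable, the toy square-root "model") is a hypothetical illustration, not any real framework: the unverified model proposes outputs, and a small, separately verifiable checker accepts or rejects them, so the system may return nothing but never returns an unchecked answer.

    from typing import Callable, Optional, TypeVar

    T = TypeVar("T")
    U = TypeVar("U")

    def checked_inference(model: Callable[[T], U],
                          is_acceptable: Callable[[T, U], bool],
                          x: T,
                          attempts: int = 3) -> Optional[U]:
        """Return a model output only if the checker accepts it, else None."""
        for _ in range(attempts):
            candidate = model(x)              # unverified, model-free component
            if is_acceptable(x, candidate):   # verified correctness criterion
                return candidate
        return None  # the model kept producing rejected outputs: no answer

    if __name__ == "__main__":
        import random

        # Toy "model": guesses an integer square root, sometimes wrongly.
        def sloppy_isqrt(n: int) -> int:
            return int(n ** 0.5) + random.choice([-1, 0, 1])

        # Easy-to-verify correctness criterion for the output.
        def isqrt_ok(n: int, r: int) -> bool:
            return r >= 0 and r * r <= n < (r + 1) * (r + 1)

        print(checked_inference(sloppy_isqrt, isqrt_ok, 10 ** 6))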

This stuff is hard, yo.


"What foolishness! What nonsense! .... For as children tremble and fear everything in the blind darkness, so we in the light sometimes fear what is no more to be feared than the things children in the dark hold in terror and imagine will come true. This terror therefore and darkness of mind must be dispelled not by the rays of the sun and glittering shafts of day, but by the aspect and law of nature."

LUCRETIUS, De Rerum Natura

:)



