> humans can't explain or understand how we drive, speak, translate, play chess, etc, so why should we expect to understand how models that do these work?
This implies there's no point in pursuing explainability, but many domains involve inferences whose significant predictors are much easier to abstract at a useful level than, say, the low-level mechanics of driving or speech.
For example, if a DLNN could suggest how to tune a greenhouse given certain yield objectives, then it's reasonable to pursue heuristic techniques that explain which parameters most significantly drove those suggestions.
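As an illustration of what such a heuristic might look like, here is a minimal sketch using permutation importance: shuffle one input at a time and measure how much the model's predictive skill drops. The feature names, synthetic data, and small `MLPRegressor` stand-in for a full DLNN are all assumptions for demonstration, not a real greenhouse model or dataset.

```python
# Sketch: permutation importance as one heuristic for estimating which
# greenhouse parameters most influence a model's yield predictions.
# Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
features = ["temperature", "humidity", "co2_ppm", "light_hours"]  # assumed

# Synthetic stand-in for logged greenhouse conditions and resulting yields.
X = rng.uniform(0.0, 1.0, size=(500, len(features)))
y = 3.0 * X[:, 0] + 1.5 * X[:, 3] + 0.2 * rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000,
                     random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn; a large drop in held-out score suggests
# the model leans heavily on that parameter for its suggestions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for name, drop in sorted(zip(features, result.importances_mean),
                         key=lambda t: -t[1]):
    print(f"{name}: {drop:.3f}")
```

The point isn't that this particular heuristic is the right one, only that in domains like this the explanation target ("which knobs mattered") is well defined enough for such techniques to be worth pursuing.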