AI that is not explainable is not so because it cannot log things; it is because the semantic interpretation of what it logs is hard. Starting from real-world input (which we understand), many algorithms progressively apply mathematical transformations until they reach an output. It is the real-world "meaning" of these transformations, or of what is eventually learned (the stack of these transformations), that is hard to grasp.
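As a minimal sketch of this point (a hypothetical two-layer network with made-up weights and input, not any particular model), every intermediate transformation below can be logged in full, yet the logged numbers have no obvious real-world meaning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume a 4-dimensional real-world input, e.g. simple sensor readings.
x = np.array([0.2, 1.5, -0.3, 0.7])   # we understand what these values mean

W1 = rng.normal(size=(8, 4))           # learned transformation 1
W2 = rng.normal(size=(3, 8))           # learned transformation 2

h = np.maximum(0, W1 @ x)              # hidden activations: fully loggable...
y = W2 @ h                             # ...but what does h[5] "mean" in the real world?

print("input  :", x)                   # interpretable
print("hidden :", h)                   # loggable, semantically opaque
print("output :", y)                   # interpreted only at the very end
```

Nothing in the middle is hidden from us; the difficulty is attaching real-world semantics to the intermediate numbers and to the stack of transformations that produced them.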