How important is the "knowing why" if the mistakes are still there? And conversely, we "know" GPT doesn't use a calculator unless it's specifically given one.
Floating-point errors creeping in are one reason we use quaternions instead of rotation matrices for 3D games: a drifting quaternion can be renormalised with a single divide, while a drifting matrix needs full re-orthogonalisation. Apparently. I'd already given up on writing my own true-3D game engine by that point.
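To make that drift concrete, here's a minimal pure-Python sketch (the `rot_z`/`quat_rot_z` helpers and the 100k-step loop are my own toy setup, not from any engine): composing the same tiny rotation many times lets floating-point error accumulate in a 3×3 matrix, while the quaternion stays unit-length for the cost of one divide per step.

```python
import math

def mat_mul(a, b):
    # Plain 3x3 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_z(theta):
    # Rotation about the z axis as a 3x3 matrix.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def quat_mul(p, q):
    # Hamilton product of two quaternions (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def quat_rot_z(theta):
    # Same z rotation as a unit quaternion (half-angle form).
    return (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))

step = 0.001
m = rot_z(0.0)                 # identity
q = (1.0, 0.0, 0.0, 0.0)       # identity quaternion
for _ in range(100_000):
    m = mat_mul(m, rot_z(step))
    q = quat_mul(q, quat_rot_z(step))
    # Renormalising the quaternion is one divide by its norm:
    n = math.sqrt(sum(c * c for c in q))
    q = tuple(c / n for c in q)

# How far has the matrix drifted from orthonormality? Max entry of |R.Rt - I|.
rt = [[m[j][i] for j in range(3)] for i in range(3)]
rrt = mat_mul(m, rt)
err = max(abs(rrt[i][j] - (1.0 if i == j else 0.0))
          for i in range(3) for j in range(3))
print(err)                                  # small but nonzero drift
print(abs(sum(c * c for c in q) - 1.0))     # quaternion norm stays at 1
```

With 64-bit floats the drift here is tiny, but real engines run at framerate for hours and in 32-bit, so the cheap per-step renormalisation is part of why quaternions win (alongside slerp interpolation and compactness).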
In some sense we "know why" humans make mistakes too — and in many fields from advertising to political zeitgeist we manipulate using knowledge of common human flaws.
On this basis I think the application of pedagogical and psychological studies to AI will be increasingly important.