Irreducibility may be a real thing, e.g. turbulence. We can easily build an ML model that recognizes images or speech, and we can explain every little detail of how it works at the level of individual operations, yet we can't explain the high-level emergent dynamics of the model, and so we can't really explain how it works. I believe we'll build a real AI soon, it'll exceed all expectations, and we'll still be puzzled by the complexity of its turbulent emergent dynamics.

In other words, if we could ask an oracle how cognition works, it would write down a bunch of differential equations followed by a million volumes of hard math theorems, and the complexity would be so irreducible that by the time we started reading volume 2, we'd have forgotten volume 1.

Yet another way to look at it: we can imagine a square because it's a simple object. But we can't imagine a 10-dimensional Calabi-Yau manifold no matter how hard we try: it has more complexity than fits into our brains. If the theory behind cognition is as irreducible as that manifold, we'll never "get" it, even though we'll be able to describe all its local properties.
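To make the first point concrete, here's a minimal sketch (Python, with toy made-up weights, not a real model): every local step is a transparent multiply-add we can fully trace, but that listing-level transparency doesn't scale into a high-level explanation of what a trained million-parameter version of the same loop is doing.

    import numpy as np

    rng = np.random.default_rng(0)
    # Tiny two-layer net with arbitrary random weights, for illustration only.
    W1 = rng.standard_normal((4, 3))   # layer 1 weights
    W2 = rng.standard_normal((1, 4))   # layer 2 weights

    def forward(x):
        h = np.maximum(0, W1 @ x)     # each entry: a sum of products, then ReLU
        return W2 @ h                 # again, just sums of products

    x = rng.standard_normal(3)
    print(forward(x))                 # every arithmetic step here is inspectable...
    # ...but tracing the steps is not the same as having a reducible
    # theory of the network's emergent behavior.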
This is a totally different debate though - 'Is materialism/physicalism/reductionism etc. correct?'.