I don’t understand your last point, but you’re exactly right that it’s hard to be clear about what a model means. The difficulty isn’t that it’s unclear what a model is, but rather that it’s unclear what the modeling language of thought is. Here’s where the algebra analogy breaks down. Pretty obviously, the model or models we are reasoning with in this discussion aren’t simple algebraic equations, but some sort of rich representations of cognitive science and computer science concepts. And, sure, there are NNs running those models, and NNs running the reasoning over them, but they have almost nothing to do with language in the sense of the syntax of sentences. Also, we didn’t get trained on eleventy zillion examples of AI discussions in order to form the models we are employing at this very moment.