At this point you have to start entertaining the question of what the difference actually is between general intelligence and a "sufficiently complicated" next token prediction algorithm.
A sufficiently large lookup table in a DB is mathematically indistinguishable from a sufficiently complicated next token prediction algorithm, which in turn is mathematically indistinguishable from general intelligence.
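To make the "indistinguishable" part concrete, here is a minimal sketch (the alternating-letter domain and all names are just an illustration, not anyone's actual model): a precomputed table and an algorithmic predictor that give identical answers over the whole input domain, so a black-box tester cannot separate them.

```python
# Toy sketch: table-driven vs. algorithmic next-token prediction.
# The domain (continuing an alternating "ab" pattern) is hypothetical.
from itertools import product

VOCAB = "ab"
CONTEXT_LEN = 3

def algorithmic_predict(context: str) -> str:
    # "Algorithm": continue the alternating pattern from the last symbol.
    return "b" if context[-1] == "a" else "a"

# The "DB" version: enumerate every possible context into a lookup table.
lookup_table = {
    "".join(ctx): algorithmic_predict("".join(ctx))
    for ctx in product(VOCAB, repeat=CONTEXT_LEN)
}

def table_predict(context: str) -> str:
    return lookup_table[context]

# Black-box test: over the entire domain the two are indistinguishable.
for ctx in product(VOCAB, repeat=CONTEXT_LEN):
    s = "".join(ctx)
    assert algorithmic_predict(s) == table_predict(s)
print(f"{len(lookup_table)} contexts checked; outputs identical")
```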
All that means is that treating something as a black box doesn't tell you anything about what's inside the box.
Of course it can. Reasoning is algorithmic in nature, and algorithms can be encoded as sufficiently large state transition tables. I don't buy into Searle's "it can't reason because of course it can't" nonsense.
We were talking about a "sufficiently large" table, which means it can be larger than realistic hardware allows for. Any algorithm operating on bounded memory can ultimately be encoded as a finite state automaton whose transition table defines all valid state transitions.
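A small sketch of that encoding (the divisibility-by-3 example is just an illustration of the principle, not anything specific to LLMs): the same bounded-memory computation written first as an ordinary loop, then as an explicit state transition table that enumerates every valid move.

```python
# Sketch: a bounded-memory algorithm re-encoded as a finite state automaton.

def divisible_by_3(bits: str) -> bool:
    # Algorithmic version: keep a running remainder in bounded memory.
    remainder = 0
    for b in bits:
        remainder = (remainder * 2 + int(b)) % 3
    return remainder == 0

# FSA version: states are the possible remainders {0, 1, 2}; the table
# defines all valid (state, input) -> state transitions.
TRANSITIONS = {
    (0, "0"): 0, (0, "1"): 1,
    (1, "0"): 2, (1, "1"): 0,
    (2, "0"): 1, (2, "1"): 2,
}

def divisible_by_3_fsa(bits: str) -> bool:
    state = 0
    for b in bits:
        state = TRANSITIONS[(state, b)]
    return state == 0

# From the outside, the algorithm and the table are interchangeable.
for n in range(64):
    s = format(n, "b")
    assert divisible_by_3(s) == divisible_by_3_fsa(s) == (n % 3 == 0)
print("algorithm and transition table agree")
```

The table here is tiny because the memory is tiny; the point is only that the construction generalizes, and for anything with nontrivial memory the table blows up far past what real hardware can hold, which is exactly why "sufficiently large" is doing the heavy lifting.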