It's more than that. The overall goal function in LLM training is judging predicted text continuations by whether they look ok to humans, in the fully general sense of that statement. This naturally captures all human capabilities that are observable through textual (and now multimodal) communication, including creating new abstractions and concepts, as well as thinking, reasoning, even feeling.
Whether or not they're good at it or have anything comparable to our internal cognitive processes is a different, broader topic - but the goal function on the outside, applying tremendous optimization pressure to a big bag of floats, is both beautifully simple and unexpectedly powerful.
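Concretely, for the pretraining stage that "beautifully simple" goal function is just next-token cross-entropy against text humans actually wrote. A minimal sketch (function and variable names are illustrative, and real models operate on logits over huge vocabularies, not toy dicts):

```python
import math

def next_token_loss(predictions, targets):
    """Average cross-entropy of the model's predicted next-token
    distributions against the tokens a human actually wrote next.

    predictions: list of dicts mapping token -> predicted probability
    targets: the actual next token at each position
    """
    total = 0.0
    for dist, target in zip(predictions, targets):
        # Penalize by how little probability was put on the true token.
        total += -math.log(dist.get(target, 1e-12))
    return total / len(targets)

# Toy example: two positions, most mass on the correct continuations.
preds = [{"cat": 0.8, "dog": 0.2}, {"sat": 0.9, "ran": 0.1}]
print(round(next_token_loss(preds, ["cat", "sat"]), 4))  # → 0.1643
```

That one scalar, minimized over trillions of tokens, is the "tremendous optimization pressure" applied to the bag of floats; everything else (RLHF etc.) is a comparatively small correction on top.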