Humans can look at a list of words separated by commas and call it a "list". We can also call it "not a CSV file".
Humans can look at a 5, and call it five. We can hold up a hand, and say, "this many".
This behavior is the subject of "semiotics": the act of using one thing to represent another.
LLMs are deliberately designed to avoid this approach. Instead of constructing an explicit grammar (the way a parser does), an LLM takes the opposite route: inference.
Inference makes it possible to model the ambiguous patterns that natural language is made of. Inference also makes it impossible to define symbols.
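To make the contrast concrete, here is a minimal sketch of the explicit-grammar side (in Python; the rule and function names are illustrative, not taken from any real parser library): a hand-written rule that maps input to a symbol deterministically, with no inference involved.

```python
import re

# A toy explicit grammar: a "list" is one or more words separated by commas.
# The rule is a fixed symbol definition; input either matches it or it doesn't.
LIST_RULE = re.compile(r"^\s*\w+(\s*,\s*\w+)+\s*$")

def classify(text: str) -> str:
    """Deterministically map text to a symbol using the explicit rule."""
    return "list" if LIST_RULE.match(text) else "not a list"

print(classify("apples, pears, plums"))  # -> list
print(classify("call me maybe"))         # -> not a list
```

An LLM carries no rule like this anywhere inside it. It can only infer, from patterns in its training data, that text shaped like this is usually called a list.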
Humans use both semiotics and inference. So far, it seems no one has quite cracked how to combine the two artificially.