This doesn't make any sense. Empathy itself is a quale; language is merely a medium for communicating it, and far from the only one (e.g. facial expressions are generally better at it).
As for LLMs "following the structural patterns" of empathetic language - sure, that's exactly what simulating empathy is.
I don't see what practical difference any of this makes. We can play word games all day long, but that won't convince Blake Lemoine or countless Replika users. To them, it's not "a character in a story", and that's the important point here.
A character in a story does not think or do anything beyond what the writer of that story writes them doing. The character cannot write itself!
That is the distinction I am making here.
Anyone using an LLM is effectively playing "word games", except the game pieces are tokens instead of words, and the rules are pre-modeled token patterns rather than game rules. Those patterns are wholly dependent on the content of the training corpus. The user only gets to interact by writing prompts: each prompt is tokenized, the tokens are run through the model, and the resulting continuation is printed back to the user.
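That loop is easy to see end to end. Here's a minimal sketch using the Hugging Face transformers library (the model name "gpt2" and the prompt are just illustrative choices, not anything specific to the argument above):

```python
# Sketch of the prompt -> tokens -> model -> text loop described above.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I understand how you feel, and"
inputs = tokenizer(prompt, return_tensors="pt")        # prompt -> tokens
outputs = model.generate(**inputs, max_new_tokens=20)  # tokens -> more tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # tokens -> text
```

Every word that comes back is produced by this pipeline; nothing happens outside of it.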
No new behavior is ever created from within the model itself.
All an LLM can do is follow those structural patterns and shuffle existing material around within them.
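To make "no new behavior from within" concrete: at inference time the weights are frozen, so the next-token distribution is a fixed function of the prompt. A sketch under the same assumptions as above (transformers, an illustrative "gpt2" checkpoint):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # weights are frozen; nothing is learned during inference

inputs = tokenizer("I understand how you feel", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for every possible next token
probs = torch.softmax(logits, dim=-1)

# The same prompt always yields the same distribution; any apparent
# "creativity" is just sampling from it.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx):>12}  {p:.3f}")
```

The model never steps outside this distribution; every output, empathetic-sounding or not, is drawn from patterns fixed at training time.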