Because of the way LLMs are trained, they naturally prefer to answer in the language the conversation was initiated in. I would guess that for the LLM, or more specifically the transformer inside it, the difference between human languages is not that big. But it would be interesting to see what happens if we gave transformers the possibility to answer in much denser spaces. What if we could extract the concepts themselves and let the LLM answer with those, instead of going through the human-language intermediate? Think of a compressed embedding.
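A toy sketch of the idea, with everything invented for illustration (the concept names, the vectors, the "decoder"): instead of decoding to tokens, the model emits a dense vector, and the receiving side maps it back to the nearest known concept by cosine similarity.

```python
import numpy as np

# Hypothetical toy concept space: each concept is a dense 8-dim vector.
# Names and vectors are made up purely for illustration.
rng = np.random.default_rng(0)
concepts = {name: rng.normal(size=8) for name in ["dog", "house", "run"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend the model "answers" with a raw vector (a noisy copy of one
# concept) rather than with a word in any particular human language.
answer_vec = concepts["dog"] + 0.1 * rng.normal(size=8)

# A reader-side decoder maps the vector back to the nearest concept.
decoded = max(concepts, key=lambda n: cosine(concepts[n], answer_vec))
print(decoded)
```

The point of the sketch is that the vector itself is language-neutral; only the final lookup step attaches a human-readable label to it.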