Why isn't someone providing a "meta model" that uses an LLM to choose between various fine-tuned models depending on the question, to get better overall results than GPT-4?
Founding AI Engineer at OpenPipe here. Using a fine-tuned "router LLM" to route between various specialized applied models (often, but not necessarily, fine-tuned themselves) depending on the input is becoming a common pattern in more modern "graph-like" LLM applications.
You can see how that "routing function" could include a call to a "Router LLM." And yes, fine-tuning is a great way to improve the routing intelligence of said Router LLM.
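To make that concrete, here's a minimal sketch of the pattern against an OpenAI-compatible API. The route labels, specialist model IDs, and the choice of router model are all made up for illustration:

```python
# Minimal router-LLM sketch. Assumes the `openai` Python client and an
# OPENAI_API_KEY in the environment; all model IDs below are hypothetical.
from openai import OpenAI

client = OpenAI()

# Hypothetical specialists: fine-tuned models for narrow tasks, plus a
# strong generalist as the fallback route.
SPECIALISTS = {
    "code": "ft:gpt-4o-mini:acme:code-assistant:abc123",
    "sql": "ft:gpt-4o-mini:acme:sql-writer:def456",
    "general": "gpt-4o",
}

def route(question: str) -> str:
    """Ask the router LLM which specialist should handle the question."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # could itself be a fine-tuned router model
        messages=[
            {
                "role": "system",
                "content": "Classify the user's question as one of: "
                           "code, sql, general. Reply with the label only.",
            },
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    label = resp.choices[0].message.content.strip().lower()
    return SPECIALISTS.get(label, SPECIALISTS["general"])

def answer(question: str) -> str:
    """Dispatch the question to whichever specialist the router picked."""
    resp = client.chat.completions.create(
        model=route(question),
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

print(answer("Find duplicate email addresses in a users table."))
```

The prompt-based classifier above is the naive version; fine-tuning the router on labeled (input, correct route) pairs is where the routing accuracy gains come from.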
Worth mentioning that you don't even need separate models to implement this. Dynamically loading LoRA adapters over a shared base model is much more efficient than serving separate full models, and is the approach Apple took.
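For instance, with Hugging Face's peft library you can keep a single copy of the base weights in memory and hot-swap per-task adapters. A rough sketch, assuming a Llama base model; the adapter repo names are hypothetical:

```python
# One base model, several LoRA adapters, switched per request.
# Requires transformers and peft (plus accelerate for device_map);
# the "acme/..." adapter repos are hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype="auto", device_map="auto"
)

# Load the first adapter, then attach more onto the same base weights.
model = PeftModel.from_pretrained(base, "acme/code-lora", adapter_name="code")
model.load_adapter("acme/sql-lora", adapter_name="sql")

def generate(prompt: str, adapter: str) -> str:
    """Activate the adapter the router chose, then generate as usual."""
    model.set_adapter(adapter)  # cheap switch; no second full model in memory
    inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
    out = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(generate("Write a query for the ten most recent orders.", adapter="sql"))
```

Each adapter is a small fraction of the base model's size, so switching between N specialized behaviors costs far less memory than serving N full fine-tuned models.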