A common reason is to reduce cost and latency. Larger models typically require GPUs with more memory (and hence higher cost), and they also take longer to serve each request (more matrix multiplications per token).
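
To make the memory side concrete, here's a rough back-of-envelope sketch in Python (assuming fp16 weights at 2 bytes per parameter, and ignoring activations, KV cache, and batching):

    # GPU memory needed just to hold the weights at fp16 (2 bytes/param).
    # Real usage is higher once activations and the KV cache are included.
    BYTES_PER_PARAM = 2

    for name, params_billions in [("3B", 3), ("13B", 13), ("30B", 30)]:
        gib = params_billions * 1e9 * BYTES_PER_PARAM / 2**30
        print(f"{name}: ~{gib:.0f} GiB just for the weights")
    # 3B: ~6 GiB, 13B: ~24 GiB, 30B: ~56 GiB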


Got it, that makes sense. Thank you. But what about quality, then? Can a 13B model match the quality of, say, a 30B model?


Flan-T5-XL is a 3B model that is comparable in quality to Llama 13B.

Moreover, you can fine-tune a model for your specific tasks, and a smaller model takes fewer resources to fine-tune.
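
For illustration, here's a minimal sketch of parameter-efficient fine-tuning with Hugging Face PEFT. The model name and LoRA hyperparameters below are placeholder assumptions, not a recipe:

    # LoRA fine-tuning sketch: train small low-rank adapters instead of
    # updating all of the base model's weights.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "google/flan-t5-xl"  # assumed base model; swap in your own
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    # Attach adapters to the attention projections (T5 names them "q" and "v").
    config = LoraConfig(r=8, lora_alpha=16, target_modules=["q", "v"],
                        task_type="SEQ_2_SEQ_LM")
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of all params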


As a general principle, larger models produce higher-quality output.

However, fine-tuned small models can outperform general-purpose large models on specific tasks.

There are also many lightweight tasks, like basic sentiment analysis, where small models can be accurate enough to be indistinguishable from large models.
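
For example, a quick sketch with the Hugging Face pipeline API, whose default sentiment checkpoint is a small distilled model (the output shown is illustrative):

    # Basic sentiment analysis with a small model via transformers.
    from transformers import pipeline

    # The default checkpoint is a DistilBERT fine-tuned on SST-2, orders of
    # magnitude smaller than a multi-billion-parameter LLM.
    classifier = pipeline("sentiment-analysis")
    print(classifier("The battery life on this laptop is fantastic."))
    # e.g. [{'label': 'POSITIVE', 'score': ...}]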


This is such an interesting direction for LLM research (especially because it's easy to imagine applicability in industry as well).

If all it takes is ~1k high-quality examples (of course, quality can be tricky to define) to tune an LLM successfully, then we should expect to see these tuned LLMs for many different narrow use cases.

Of course, the devil is likely in the details. Even in this paper, the prompts on which the model is evaluated were written by the authors and "inspired by their own interests or those of their friends." It can be tricky to make the jump from these prompts and answers to real-world LLM use cases, but it's super promising.
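
To make "~1k high-quality examples" concrete, here's a hedged sketch of what such a tuning dataset could look like on disk as JSONL. The field names are assumptions for illustration, not the paper's actual format:

    # Sketch of a small, curated instruction-tuning dataset stored as JSONL.
    # The "prompt"/"response" field names are illustrative assumptions.
    import json

    examples = [
        {"prompt": "Summarize this support ticket in one sentence: ...",
         "response": "Customer reports login failures after the 2.3 update."},
        # ... roughly 1,000 carefully curated pairs in total
    ]

    with open("tuning_data.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")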

