
According to this tutorial [1] by Google, part of why LLMs are so verbose is a technique called 'chain-of-thought reasoning'.

Basically, the LLM will formulate a better answer to the question if it talks itself through its reasoning process.
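
For illustration, here is a minimal sketch in Python of what that looks like at the prompt level. It doesn't call any particular LLM API; it just contrasts a direct prompt with a chain-of-thought prompt, using the well-known bat-and-ball question as an example.

    # Minimal sketch: how a chain-of-thought prompt differs from a direct one.
    # No real LLM call is made; this only builds the two prompt strings.

    question = (
        "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
        "than the ball. How much does the ball cost?"
    )

    # Direct prompt: models tend to answer tersely, and sometimes blurt out
    # the intuitive-but-wrong $0.10.
    direct_prompt = question

    # Chain-of-thought prompt: the added instruction makes the model spell out
    # intermediate steps, which is why the output gets verbose, but working
    # through the steps tends to lead it to the correct $0.05.
    cot_prompt = question + "\nLet's think step by step, then state the final answer."

    print(direct_prompt)
    print()
    print(cot_prompt)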

[1] https://youtu.be/zizonToFXDs?si=5f_IxvR7h0iJy2Db&t=678



