
This is not correct. You can get an LLM to improve its reasoning through iteration and interrogation. By changing the content in its context window, you can evolve a conversation quite nicely and get elaborated explanations, reversals of opinion, etc.
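
Concretely, that kind of interrogation loop can look something like this. This is a minimal sketch assuming an OpenAI-style chat API; the model name and follow-up questions are placeholders, not anything from the comment above. The point is just that each turn's answer and the next challenge both get appended to the context, so the model is reasoning over its own earlier output.

    # Minimal sketch of iterative interrogation (assumed OpenAI-style chat API;
    # model name and follow-up prompts are illustrative placeholders).
    from openai import OpenAI

    client = OpenAI()

    # The growing context window: start with the original question.
    messages = [{"role": "user", "content": "Is P equal to NP? Answer and explain your reasoning."}]

    # Follow-up challenges that push the model to elaborate or reverse itself.
    follow_ups = [
        "What is the strongest objection to the answer you just gave?",
        "Taking that objection seriously, would you revise your answer? Explain.",
    ]

    for prompt in follow_ups:
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})  # keep the model's answer in context
        messages.append({"role": "user", "content": prompt})       # interrogate that answer

    # Final pass over the accumulated conversation.
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(final.choices[0].message.content)

Nothing about the model's weights changes; what improves is the context it conditions on, which is exactly the "iteration and interrogation" being described.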



