
This is driven by the idea that, since LLMs are probabilistic engines at inference time, if the LLM "knows" or can "surmise" the correct answer, that answer will have a higher probability of being generated, and thus self-consistency methods can more reliably tease it out.
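A minimal sketch of that idea: sample several completions at temperature > 0 and take a majority vote over the final answers. Here sample_answer is a hypothetical stand-in for a real model call; it just simulates a model whose correct answer is the most probable one.

    import random
    from collections import Counter

    def sample_answer(prompt: str) -> str:
        # Hypothetical stand-in for one temperature>0 LLM call.
        # Simulates a model that emits the correct answer 60% of
        # the time and one of two wrong answers otherwise.
        return random.choices(["42", "41", "43"], weights=[0.6, 0.2, 0.2])[0]

    def self_consistency(prompt: str, n: int = 20) -> str:
        # Draw n independent samples and keep the answer that
        # appears most often (majority vote).
        answers = [sample_answer(prompt) for _ in range(n)]
        return Counter(answers).most_common(1)[0][0]

    print(self_consistency("What is 6 * 7?"))  # almost always "42"

Even though any single sample is wrong 40% of the time here, the majority vote is wrong far less often, which is exactly the effect the comment describes: if the correct answer is the modal one, repeated sampling surfaces it.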

