
They could, if you didn't have to expect them to make hidden mistakes that a learner isn't able to spot. Using an LLM when you are qualified to verify its output is one thing, but a learner often cannot do that, or only with extreme difficulty, which makes LLMs unsuitable for them.

Especially with math, most LLMs will happily explain to you a "proof" of something that isn't proven or is known to be false.
