
When ChatGPT’s output looks correct, it usually just means the model has already encountered the problem in its training data and “learned” the answer verbatim; it then applies some transformations to fit your context better, giving the illusion of something more than what a search engine would have done.

It sucks at 3-year-old-level novel logic, let alone math proofs.



