Hacker News

If we defined "LLM" as "any deep learning model which uses the GPT transformer architecture and is trained using autoregressive next-token prediction", and then we empirically observed that such a model proved the Riemann Hypothesis before any human mathematician did, it would seem very silly to say that it was "not intelligent and not capable of reasoning" because of an a priori logical argument. To be clear, I think that probably won't happen! But I think it's ultimately an empirical question, not a logical or philosophical one. (Unless there's some sort of actual mathematical proof that would set upper bounds on the capabilities of such a model, which would be extremely interesting if true! But I haven't seen one.)
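For readers unfamiliar with the term, the "autoregressive next-token prediction" objective referenced above can be sketched in a few lines: the model assigns a probability to each possible next token given the prefix, and training minimizes the summed negative log-likelihood over the sequence. The uniform `next_token_probs` model below is a hypothetical stand-in for illustration, not a real LLM.

```python
import math

def next_token_probs(prefix, vocab):
    """Toy stand-in for a language model: assigns every token in the
    vocabulary equal probability regardless of the prefix."""
    p = 1.0 / len(vocab)
    return {tok: p for tok in vocab}

def autoregressive_nll(tokens, vocab):
    """Negative log-likelihood of a token sequence under the toy model.

    This is the autoregressive objective: at each position i, score the
    probability the model assigned to tokens[i] given tokens[:i], and sum
    the negative logs. Training an actual LLM minimizes this quantity
    (averaged over a corpus) by gradient descent.
    """
    nll = 0.0
    for i in range(1, len(tokens)):
        probs = next_token_probs(tokens[:i], vocab)
        nll -= math.log(probs[tokens[i]])
    return nll

vocab = ["a", "b", "c", "d"]
seq = ["a", "b", "a", "c"]
print(autoregressive_nll(seq, vocab))  # 3 * ln(4), since each of the 3 predicted tokens gets p = 1/4
```

A trained model would assign higher probability to likely continuations, driving the NLL below this uniform baseline; the point here is only to make the training objective concrete.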


Let's talk when we've got LLMs proving the Riemann Hypothesis (or any open mathematical conjecture) without any such proofs in the training data. I'm confident in my belief that an LLM can't do that, and never will be able to. LLMs can barely solve elementary-school math problems reliably.



