
Reading anything Terence Tao writes is thought-provoking, and I doubt I'm seeing anything others haven't.

There's at least a "complexity", if not a "problem", in judging models that, to a first approximation, have been trained on "everything".

Have people tried putting these things up against serious mathematical problems that are well studied? With or without Lean hinting, has anyone gotten, say, the Shimura-Taniyama conjecture/proof out?
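
For what it's worth, my reading of "Lean hinting" is supplying the model with a formal goal statement and letting it search for the proof. A toy sketch of what such a goal looks like in Lean 4 (hypothetical theorem name; the real Shimura-Taniyama statement would need an enormous amount of mathlib machinery on modular forms and elliptic curves):

  -- Hypothetical sketch: a formal goal with the proof left as a hole
  -- ("sorry") for the model to fill in. Toy statement only.
  theorem add_comm_toy (m n : Nat) : m + n = n + m := by
    sorry  -- the model's job: replace this, e.g. with `exact Nat.add_comm m n`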




I believe this is the farthest anyone has gotten: https://deepmind.google/discover/blog/ai-solves-imo-problems...

No FLT yet, but as someone who was initially quite skeptical, I’m starting to be convinced!


Those are not serious mathematical problems. They are toy problems, crafted backwards from known facts and designed to be solved in under an hour; they are hard for most humans only because humans lack the memorization, recall, and search speed that the computer has.


Sure. But even many high-caliber research mathematicians can’t do Putnam problems in a heartbeat. If we get to the point where an LLM can solve any homework problem that appears in a textbook, including graduate textbooks, that would already be something like a “lemma prover” if not a full-blown “theorem prover”.

Anyway, I think five years ago I was skeptical that ML would even get to the point of being able to solve competition problems, and I was proven wrong, so my priors have been updated.
