I don't think it is dispositive, just that it likely didn't copy the proof we know was in the training set.

A) It is still possible a proof from someone else with a similar method was in the training set.

B) A proof similar to Erdős's, written for a different problem, was in the training set, and ChatGPT adapted it into a similar alternate solution here, which would be more impressive than A).



> It is still possible a proof from someone else with a similar method was in the training set.

A proof that Terence Tao and his colleagues have never heard of? If he says the LLM solved the problem with a novel approach, different from what the existing literature describes, I'm certainly not able to argue with him.


> A proof that Terence Tao and his colleagues have never heard of?

Tao et al. didn't know of the literature proof that started this subthread.


There is an immense amount of material out there on arXiv that no one has ever looked at.


Right, but someone else did (the "colleagues").


No, they searched for it. There's a lot of math literature out there; not even an expert is going to know all of it.


Point being, it's not the same proof.


Your point seemed to be that if Tao et al. haven't heard of it, then it must not exist. The now-known literature proof contradicts that claim.


There's an update from Tao after emailing Tenenbaum (the paper author) about this:

> He speculated that "the formulation [of the problem] has been altered in some way"....

[snip]

> More broadly, I think what has happened is that Rogers' nice result (which, incidentally, can also be proven using the method of compressions) simply has not had the dissemination it deserves. (I for one was unaware of it until KoishiChan unearthed it.) The result appears only in the Halberstam-Roth book, without any separate published reference, and is only cited a handful of times in the literature. (Amusingly, the main purpose of Rogers' theorem in that book is to simplify the proof of another theorem of Erdos.) Filaseta, Ford, Konyagin, Pomerance, and Yu - all highly regarded experts in the field - were unaware of this result when writing their celebrated 2007 solution to #2, and only included a mention of Rogers' theorem after being alerted to it by Tenenbaum. So it is perhaps not inconceivable that even Erdos did not recall Rogers' theorem when preparing his long paper of open questions with Graham in 1980.

(emphasis mine)

I think the value of LLM-guided literature searches is pretty clear!


This whole thread is pretty funny. Either it can demonstrate some pretty clever, but still limited, math skills, OR it's literally the best search engine ever invented. My guess is the former: it's pretty mediocre at web search, and I'd expect it to reproduce the easily retrievable, more visible proof method from Rogers (as opposed to some alleged proof hidden in some dataset).


> Either it can demonstrate some pretty clever, but still limited, math skills, OR it's literally the best search engine ever invented.

Both are true. It is a better search engine than anything else -- something you won't realize unless you've used the non-free 'pro research' features from Google and/or OpenAI. And it can perform limited but increasingly capable reasoning about what it finds before presenting the results to the user.

Note that no online Web search or tool usage at all was involved in the recent IMO results. I think a lot of people missed that little detail.


Does it matter whether it copied or not? How the hell would one even decide whether it's a copy or an original at this point?

At this point the only conclusion here is: the original proof was in the training set, and the author and Terence did not care enough to find the publication by Erdős himself.



