
> Also, this annoying "proof left as an exercise" is now "ask ChatGPT for the answer".

ChatGPT is horrendously bad at mathematical proofs (I've written about this before), so I fear that this is a rather dangerous approach: you could be learning things that are wrong without realising it.




I just opened Tao's real analysis PDF and copy-pasted one of its exercises (I didn't cherry-pick); it would be great if you could point out what is bad about the result.

Feel free to take another exercise from that PDF if you think you have a good counterexample; I would love to know its limits on honours-level undergraduate math texts.

https://math.unm.edu/~crisp/courses/math401/tao.pdf

https://chat.openai.com/share/6026ae38-e7f6-439f-b91f-110046...


Establishing a proof of a statement in mathematics means giving a series of steps of the form

A(1) && A(2) && ... && A(n) && B => A(n+1),

where each A(i) has been proved earlier, B is one of the axioms you are working with, and => means derivation using some fixed rules. The axiom list for B is context-dependent: a journal paper may use an extended set of the form "everything already known by the community, given a reference", etc., while a textbook will use the lower-level textbooks mentioned in its introduction as its list of contextual axioms.
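For concreteness, here is what one such step looks like when written out formally, say in Lean 4 (my own illustration, not taken from the exercise): A1 and A2 play the role of statements proved earlier, the library lemma Nat.le_antisymm plays the role of the contextual axiom B, and the theorem itself is the new statement A(n+1).

    -- A1, A2: previously established facts; Nat.le_antisymm: the "axiom" B.
    -- The conclusion a = b is the newly derived statement A(n+1).
    theorem a_next (a b : Nat) (A1 : a ≤ b) (A2 : b ≤ a) : a = b :=
      Nat.le_antisymm A1 A2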

IMHO, the biggest issue with this ChatGPT proof is that, even though it is correct in principle, it misses the context of your exercise: it does not know whether, e.g., the well-ordering principle has already been introduced, which exact definition of the natural numbers is being used (Peano axioms? an intuitive one?), etc.

As a result, the "proof" it provides is primarily name-dropping: although correct in principle, it still requires filling in the actual argument. So it might be helpful as a hint for a student, but the proof still has to be produced.


ChatGPT would have no problem introducing the well-ordering principle if prompted, so a self-learner can dive into the details if needed.

Hints are probably what you want if you're a self-learner and stuck, so ChatGPT is actually doing a good job of guiding self-learners through a text they're stuck on.

Being stuck on one statement for days is not a strategic way to learn.


> ChatGPT would have no problem introducing the well-ordering principle

Yes, well, that is part of what I was trying to say. The statement of the principle is not important outside of the structure you are building when following one particular proof, or reading a book (so, following several proofs).

You could do just as well with Zorn's lemma or the axiom of choice as with well-ordering; what if your course introduces one of these, but not the well-ordering principle, and then asks you to solve this particular exercise? In that case, the gist of the exercise would actually be to re-derive, say, the (axiom of choice) => (well-ordering) implication for the natural numbers, a point that would be thoroughly lost on ChatGPT without the course context.
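For reference, the well-ordering statement in question can be written out formally, e.g. in Lean 4 (my own formulation, with the proof deliberately omitted; this is not the phrasing used in Tao's text):

    -- Every inhabited predicate on Nat has a least witness.
    -- Statement sketch only; the proof is left out on purpose.
    theorem nat_well_ordering (S : Nat → Prop) (h : ∃ n, S n) :
        ∃ m, S m ∧ ∀ n, S n → m ≤ n := by
      sorry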


When ChatGPT's output looks correct, it usually just means that it has already encountered the problem and "learned" the answer verbatim, and it now applies some transformations to fit your context better, giving the illusion of something more than what a search engine would have done.

It sucks at novel logic at a three-year-old's level, let alone math proofs.


I don't have time right now to go through a textbook I'm not familiar with, but here's an example from a different area: https://news.ycombinator.com/item?id=37903860

This is about computability, not analysis, but I think the point still applies: ChatGPT is quick to give you an answer that sounds plausible but is actually complete nonsense.



