
Use ChatGPT.

Screenshot the math, crop it down to the equation, paste into the chat window.

It can explain everything about it, what each symbol means, and how it applies to the subject.

It’s an amazing accelerator for learning math. There’s no more getting stuck.

I think it’s underrated because people hear “LLMs aren’t good at math.” They are not good at certain kinds of problem solving (yet), but GPT4 is a fantastic conversational tutor.




Don't suggest this. While I agree it can be helpful, the problem is that if you're a novice you won't be able to distinguish hallucinations, which in my experience are fairly common, especially as you move into advanced topics. If you have good mathematical rigor it's extremely helpful, because often things are hard to search for precisely, but it's a potential trap for novices. If you have no better resource, though, I can't blame anyone; just give a warning to take care.


That’s kind of like telling people not to go online because you can’t believe everything you read on the Internet.

What proportion of the problems you’ve encountered were with the free version vs premium? It’s a huge difference and the topic here is GPT4.

Also, since it is fairly common for you, are there any real-world examples you can share?


> That’s kind of like telling people not to go online because you can’t believe everything you read on the Internet.

Uhhh... it's like telling people to trust SO over reddit, especially a subreddit known to lie.

> What proportion of the problems you’ve encountered were with the free version vs premium? It’s a huge difference and the topic here is GPT4.

Both. Can we stop doing this? This is a fairly well-established principle with tons of papers written about it, especially around math. Just search arXiv; there's a new one at least every week.


I’ll take that to mean it happens so infrequently with GPT4 that you have no illustrative prompts to share.

There have not been tons of papers written about this.

You seem to be conflating papers about GPT4 as a solver with it as a math tutor. It’s a completely different problem space.



I don’t get the relevance; those seem to be security related?

My main point consistently has been that GPT4 can be an invaluable resource specifically for learning math subjects.

I am not aware of any papers studying people using it as a conversational tutor for learning math and having problems with hallucinations.


> I don’t get the relevance; those seem to be security related?

And?

> My main point consistently has been that GPT4 can be an invaluable resource specifically for learning math subjects.

This can also be true. I use it a lot. Don't confuse openly discussing limitations with calling it a pile of shit. No need to have only two extremes.

> I am not aware of any papers studying people using it as a conversational tutor for learning math and having problems with hallucinations.

That's a very bad-faith requirement, unless you have good evidence that GPT hallucinates in many domains (as exemplified by said security report) but NOT in math tutoring. If you have this really strong evidence that math tutoring is specifically unique, then I suggest writing a paper. I'll help if you can really do it, and I'd be happy to give you first authorship and be proven wrong. But a much easier explanation is that math tutoring is not unique with regard to GPT generating hallucinations. If you truly believe you need an extremely specific example, you may need to pull the wool off your eyes. But I'm hoping you don't and are just arguing.


It works better than you think, as long as you use GPT 4. See my answer to the other person (https://news.ycombinator.com/item?id=38837646).

A lot of negativity comes from people who goofed around with 3.X for a while, came away unimpressed, muttered something under their breath about stochastic parrots or Markov chains that sounded profound (at least to them), and never bothered to look any further. 4 is different. 4 is starting to get a bit scary.

The real pedagogical value comes when you try to reconcile what it tells you about the equations with the equations themselves. Ask for clarification when something seems wrong, and there is an excellent chance it will catch its own mistakes.


That answer isn't very compelling, as it is one of the most well-known equations in ML. There are some very minor errors, but nothing that changes the overall meaning. But you even seem to agree with me in your follow-up: don't rely on it, but use it. My position is only slightly stronger than yours.

And stop all this 3.5 vs 4 nonsense. We all know 4 is much better. But there's plenty of literature that shows its limits, especially around memorization. You also don't understand stochastic parrots, though in fairness, it seems like most people don't. LLMs start from compression algorithms, and they are that at their core. But this doesn't mean it is a copy machine, despite the NYT article, and it also doesn't mean it is a thinking machine like the baby-AGI people claim. The truth is in between, but we can't have a real conversation because hype primed us to bundle people into two camps and make us all true believers.

Just please stop gaslighting people when they say they have run into issues. The machine is sensitive to prompts, so that can be a key difference, or sometimes they might just see mistakes you don't. It's not an oracle, so don't treat it like one. And don't confuse this criticism as saying LLMs suck, because I use them almost every day and love them. I just don't get why we can't be realistic about their limits and can only believe they are a golden goose or pile of shit. It's, again, neither.
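(To unpack the compression point with a toy example, assuming nothing beyond the standard cross-entropy objective: a model that assigns probability p to the token that actually occurs can encode it in -log2(p) bits with an arithmetic coder, so lowering the loss is literally building a better compressor. A quick sketch, with made-up probabilities:)

    import math

    # Hypothetical probabilities the model assigned to the tokens
    # that actually occurred in some text.
    token_probs = [0.5, 0.9, 0.1, 0.7]

    # Each token costs -log2(p) bits under an arithmetic coder.
    bits = sum(-math.log2(p) for p in token_probs)
    print(f"{bits:.2f} bits for {len(token_probs)} tokens")

    # Lower average cross-entropy = fewer bits per token = better compression.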


> You also don't understand stochastic parrots

You have a parrot that can paint original pictures, compose original songs and essays, and translate math into both English and program code?

I would like to buy your parrot. I'll keep it in my Chinese room. There used to be a guy in there, but he ran away screaming something about a basilisk.


> You have a parrot that can paint original pictures, compose original songs and essays, and translate math into both English and program code?

Kinda, kinda, yes, and yes.

I think there's far less originality than most people think, but that's not surprising when your job doesn't lead you to look at thousands of pictures a day. I have yet to see a generative model that isn't pulling heavily towards the training data, and you may have noticed the measured memorization rates are getting higher. But yes, a stochastic parrot isn't about memorization; it's about generalization and stability within a p-norm ball around the training data.

Btw, what's wrong with a stochastic parrot? They are absolutely fucking useful. I use them every day. Hell, I even use things that are complete memorization and pure compression every day. What's with everyone equating powerful statistical systems with uselessness? Anyone saying they aren't extremely useful is pulling the wool over their own eyes (but the same is true for anyone claiming baby AGI).

I'd also appreciate it if you discussed in good faith. The snarkiness is not appreciated.


I'm not being snarky! I genuinely feel I'm the one being gaslighted, by people telling me I shouldn't be utterly blown away by answers like the earlier example, or the one I just received:

https://i.imgur.com/JSWLFOi.png

I regularly get downvoted and criticized for suggesting this tool to other students, in defiance of what I can clearly see happening with my own eyes. I see a tool that, if developed further, will answer much deeper questions, including original ones, just as accurately and effectively. One that appears capable of taking humanity to the next level so fast it will make the monolith in 2001 look like an abacus by comparison.

Meanwhile, you tell me, "Don't suggest this to other students, it might hallucinate." Other people say, "Shut this down at once (or nerf it beyond any possible utility), it might hurt somebody's feelings." Another contingent warns, "Shut this down at once, it might start a nuclear war." Still other people say, "Shut this down at once, it violates copyright law." The objections just get dumber from there, yet gain traction by the day.

There's never been a time when standing in the way of something like this was right. Why should I think it's time to do so now? (And yes, I acknowledge that you're not personally 'standing in the way', but it really bugs me when people who claim they aren't 'standing in the way' of the technology tell other people not to use it.)

> I have yet to see a generative model that isn't pulling heavily towards the training data

When's the last time you saw a human mind that didn't work that way? (Or, for that matter, a parrot's mind.) The real truth behind the stochastic-parrot metaphor is that parrots, stochastic or otherwise, are nothing all that special, and neither are we. We're just better at using tools than the birds are, that's all.

Or at least we were up until now. But muh COPYRITE!!!11! ...


> I genuinely feel I'm the one being gaslighted, by people telling me I shouldn't be utterly blown away by answers like the earlier example, or the one I just received:

I think people in my camp (which is often confused with the Gary Marcus camp) aren't saying you shouldn't be blown away. Those people wouldn't say this:

> And don't confuse this criticism as saying LLMs suck, because I use them almost every day and love them. I just don't get why we can't be realistic about their limits and can only believe they are a golden goose or pile of shit. It's, again, neither.

Fwiw, I give those people an even harder time. They deny utility that is quite apparent. They also have these silly, contrived doomer arguments that don't make any sense, as if one day AGI is just going to unexpectedly appear out of nowhere and, as you suggest, somehow jump the airgap and get control of the world's nuclear weapons without anyone noticing. What an insane hypothesis, one that doesn't have any substantial evidence and is entirely based on "but what if!" It is conspiratorial and a distraction from the real harm these systems can do, which is far more subtle and not really an existential crisis (at least arguably not in the same way, but let's not get into that). Some of these people are shills and some are useful idiots/true believers. You're right not to pay attention to them.

I'll also mention that I too am blown away. But you can be blown away and still be critical and wary of a thing. The answer is quite impressive, without a doubt. I mean, we are literally putting lightning into rocks and making them capable of doing math and speaking human languages. If you're not blown away by any single one of those things, then it is simply a lack of imagination.

> When's the last time you saw a human mind that didn't work that way?

Quite frequently. Same with even my cat, and she's dumb as shit. She probably ran into too many walls while chasing toys, but I think that's just a feedback loop lol. She's dumb as shit, but I'm also absolutely blown away by her brilliance. It may be hard to see that both of those can be true, but that's the true state. But I disagree that there isn't anything special about stochastic parrots, any animal, or humans. They are all mind-bogglingly impressive; it's just that our brains are designed to normalize things so as not to be overburdened by the computational load (which itself is impressive!).

You are absolutely right, though, that there's a ton of exploitation that humans do (referring to exploration vs. exploitation). I said memorization is incredibly useful. But creativity is far more subtle. I should put it this way: chimps (very impressive creatures) are far better at memorization than most humans, but they are nowhere near as creative. Certainly some creativity is leveraging prior works for inspiration. But a subtle aspect is that the work we consider brilliant often crosses domains, which is something no ML model seems to even have the capacity to perform. This can be hard to know, though, because unless you have domain knowledge you may not have heard how Einstein was called a mathematician and not a physicist, or how Nash was said to have "just used topology". This type of lore is important if we're going to discuss actual intelligence, but not important for tools or our everyday lives. The devil is in the details when we care about details.

It can be really hard to understand these distinctions. You have to look REALLY closely at the details. One thing I'll mention is that I know I have looked at the datasets we use in our group far more than anyone else I know. This is unsurprisingly an uncommon thing, because it is boring to look at the raw data, and investigating things like LAION takes herculean effort (something I haven't even approached). But your example is actually remarkably relevant to this topic. You couldn't have picked anything better! Most people rely on distance measurements like cosine similarity or L2 to determine duplicates or near duplicates. But ask GPT this (you should get the right answer): "How does the curse of dimensionality relate to distance measurements in higher dimensions? Are there any problems this creates?" Or ask it another one, which blew me away the first time I heard it despite being absolutely obvious once I took a moment to think about it: "If I have an n-dimensional space, where n is very large, what is the expected angle between any two random tensors? What is it as n approaches infinity?" I'm positive it will again give you the correct answer.
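(If you'd rather check that second one numerically than take GPT's word for it, here's a minimal NumPy sketch; the dimensions and function name are just my own for illustration:)

    import numpy as np

    rng = np.random.default_rng(0)

    def angle_stats(n_dim, n_pairs=2000):
        # Draw pairs of random Gaussian vectors and compute the angle
        # between each pair from the cosine of the angle.
        a = rng.standard_normal((n_pairs, n_dim))
        b = rng.standard_normal((n_pairs, n_dim))
        cos = (a * b).sum(axis=1) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
        ang = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
        return ang.mean(), ang.std()

    for n in (2, 10, 100, 10_000):
        m, s = angle_stats(n)
        print(f"n={n}: mean angle = {m:.1f} deg, std = {s:.2f} deg")

    # The mean sits at 90 degrees and the spread collapses as n grows:
    # random high-dimensional vectors are almost always nearly orthogonal,
    # which is exactly why naive cosine/L2 thresholds for near-duplicate
    # detection get tricky in high dimensions.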

But you also have to realize that this is frequently written about and without a doubt in the training data. You can absolutely overfit models and have them be incredibly useful. The difference is that this won't be generalization, and it will be brittle. For a long time GPT was not able to correctly answer "Which weighs more, a pound of feathers or a kilogram of bricks?" because it was too sensitive to the expected answer: the classic riddle uses a pound of each, so it pattern-matched to "they weigh the same" even though a kilogram of bricks is plainly heavier (it'll work now, btw). It still has problems with a variation of the corn, goose, fox river-crossing puzzle if you change it to allow all items in the boat at once (at least when I checked a month ago).

But these are not the actions of sentient creatures, ones that can think and comprehend. You're going to have to think really hard about how you think, and especially about how you think really hard, to get a good understanding of this. It comes down to the reason someone can be absolutely brilliant while shockingly idiotic. This is not the quip from iRobot with the "can you?" about art and symphonies. There is something deeper, and truth be told, many animals do things for no good reason (reasons that can't be clearly defined by our perceived loss functions, which may accurately be called emergent behaviors). Every mammal is also able to run complex simulations in its mind, at incredibly low computational cost. Even the small rat will twitch its legs while it sleeps, or your dog may bark, unable to distinguish reality from a dream, just as you do. That is truly a world model. It's something we aren't remotely close to in AI, but that's okay. Why would it not be okay?

But in some way you are being gaslit, just not by what I intended to say (maybe by how you read it, for which I apologize; I am trying to work on communicating better, but it is hard with a diverse global audience carrying many different base assumptions, where I have to guess which ones to write for). There are plenty of people with a vested interest in selling you these tools as far more than they are.

I've written a few comments before saying that what's going on is as if we made a chocolate factory, one that sells the best goddamn chocolate you've ever tasted, but then started selling the chocolate as a cure for cancer. At that point, it doesn't matter how good the chocolate is; people will feel cheated. Some people respond by saying the chocolate tastes like shit, while others say it cured their cancer. Neither is true. It's damn good chocolate, but it isn't going to cure cancer. (ML will certainly be a very useful tool for tackling cancer; that was not the point of this analogy.)

I just think there's this fear people have that if something isn't a literal gift from god then it is a pile of shit, and I don't get it. Nothing we have fits that description, yet as humans we have made so many incredible things, such leaps and bounds, with these half-baked, incomplete things. There is nothing wrong with just-okay chocolate, but the chocolate we have is, without a doubt, better than just okay.

Does that make more sense?



