AI certainly can't completely replace teachers, but the potential gains for personal tutoring from SOTA LLMs still seem enormous to me.
And I'm not trying to make a general argument against in person training. But I think the details of how virtual learning happens matters quite a lot. AI can make it much more personalized and make tutoring relatively affordable. Don't you think?
AI has personally tutored me about obscure, deep linear algebra concepts. It's so great to get applied examples and be able to ask why/how something works, rather than reading a stuffy Wikipedia article or math textbook.
It's been extremely effective for me: reading a math textbook or Wikipedia article felt like too much effort, but a friendly conversation with my AI tutor was just fine.
How can you bring yourself to trust the AI? Just yesterday a friend and I asked ChatGPT a physics question, and for some reason his assistant asserted that the speed of light was 3,000 m/s, which is off by two orders of magnitude. We know that's wrong, so we can tell the AI to try again and get it right this time, but if it were explaining a concept we didn't already understand, I can't see how the output would be any more meaningful than asking a random stranger and trusting their response.
How can you bring yourself to trust a human teacher? Humans are wrong sometimes too, often with confidence.
The trick to learning effectively (with both LLMs and human teachers) is to recognize that you should learn from more than one source. Think critically about the information you are being exposed to - if something doesn't quite feel right, check it elsewhere.
I genuinely believe that knowing that an information source is occasionally unreliable can help you learn MORE effectively, because it encourages you to think critically about the material and explore beyond just a single source of information.
I've been learning things with the assistance of LLMs for nearly two years now. I often catch them making mistakes, and yet I still find them really useful for learning.
> How can you bring yourself to trust a human teacher? Humans are wrong sometimes too, often with confidence.
If humans/AIs are confidently wrong about a topic multiple times, I will stop trusting them to be experts in that topic. In my experience, many human experts in academia tend to be honest when they are not sure of the answer.
A human understands what they're saying. If a human teacher is working through a math problem and isn't sure of their work, they're able to stop and correct their mistake. An AI math teacher is trained on a corpus of data - probably very similar to the data that the human teacher was trained on, though I'm sure the AI was trained on far more data than any single human ever was - but can't do the introspective part. To put it another way, I think we agree that humans learn better by assessing multiple sources and thinking critically. An AI is very good at the former, but very bad at the latter, and I would rather have a teacher that can think critically about what it is saying to me.
If you can't trust a teacher or a textbook, then you are in big trouble. Especially if it is a brand new subject to yourself where you don't have an intuition about what is correct/incorrect. Part of a teacher/student relationship is obviously trust.
No, you aren't. You can listen to ideas and think about them and attribute them to the sources and come to (or not) your own conclusions.
The reason it's such a bad idea to "trust" the way you are suggesting is that many fields are quackery. Do you trust that fancy textbook and sophisticated sounding professor from first year macroeconomics?
Nitpick: Your number of orders of magnitude is off by a (binary) order of magnitude.
The speed of light is about 300,000,000 m/s. (In fact it's exactly 299,792,458 m/s, because that's how the metre is defined.) So 3,000 m/s is off by five (decimal) orders of magnitude, not two.
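For the curious, the gap is easy to check. A quick sketch in Python (the value of c is from the definition of the metre, as above):

```python
import math

# Actual speed of light vs. the 3,000 m/s the assistant claimed
c = 299_792_458  # m/s, exact by definition of the metre
claimed = 3_000  # m/s

print(math.log10(c / claimed))  # ~4.9997, i.e. five decimal orders of magnitude
```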
Trust but verify. If you're doing your homework, you should be able to notice things not lining up and ask the model about them. Human teachers can also make mistakes (though hopefully less often than an AI), and it's the same process for dealing with those.
In my opinion the best teachers just direct your questions in the direction where the answers you find give you the most useful information. I'm optimistic that AI could be an improvement over the average for scientifically minded learners, though I wouldn't expect it to be more effective than one-on-one time with a good teacher.
Ever since the step(s) beyond ChatGPT 3.5 I haven't noticed any huge errors like that, personally. Are you sure you were on a new model?
Also, how can you trust anyone? People are wrong. Teachers can be wrong. Web pages can be wrong. Books can be wrong. I think LLMs will probably soon be the least likely to be wrong out of any of those.
I just asked ChatGPT: "comparing 9.9 and 9.11, which is larger?"

and it responded:

> 9.11 is larger than 9.9.
>
> When comparing these two numbers:
>
> - 9.9 can be written as 9.90 to have the same number of decimal places.
> - 9.11 remains 9.11.
>
> Comparing digit by digit:
>
> - The integer part (9) is the same for both.
> - The first decimal place (9 vs. 1): 9 is larger.
> - The second decimal place (0 vs. 1): 1 is larger, which makes 9.11 larger overall.
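For what it's worth, the comparison itself takes one line to sanity-check (plain Python, no model involved):

```python
# 9.9 == 9.90 > 9.11; the first decimal place (9 vs. 1) decides it
print(9.9 > 9.11)  # True
```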
My dad, a lawyer, has been trying to use gpt-4o to assist in writing legal documents. He has said that the documents are well written and convincing, but the cases that are cited by 4o to support the document are more often than not completely made up.
A very easy way to get basically every current AI model to hallucinate:
1. Ask a highly non-trivial research question (in particular from math)
2. Ask the AI for paper and textbook references on the topic
At this point, many of these references may already be hallucinations.
3. If necessary, ask the AI where in these papers/textbooks you can find explanations of the question, and/or which aspect of the question or research area each reference focuses on.
This backs up what I mentioned in my other comment. My dad, an attorney, purchased both gpt-4o and Gemini Advanced to help write legal documents, which involves citing other legal cases. He says that he's found the legal cases that both models cite to almost always be completely fabricated.
This problem isn't exclusive to current implementations of AI.
I had a US business professor explain in one of my business classes that making a bit more money might push you over into the next tax bracket and cost you more in taxes than you made.
This guy had a PhD, had been teaching for decades and apparently didn't understand the marginal tax system.
> I had a US business professor explain in one of my business classes that making a bit more money might push you over into the next tax bracket and cost you more in taxes than you made.
He's not wrong. You are correct if you consider only income taxes. But there are other tax benefits that lead to discontinuities with respect to income.
As an example, in my state you can deduct up to $5000 of contributions to a 529 plan if your income is under $250K. Go a penny above that threshold, and you can deduct only $2500. That extra penny just reduced your refund by a few hundred dollars.
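A toy sketch of that kind of cliff (the threshold and deduction amounts are from the example above; the flat 5% state tax rate is an invented number for illustration):

```python
# Hypothetical state 529 deduction cliff: $5,000 deductible at or below
# $250K of income, only $2,500 above it. The flat 5% state rate is an
# assumed illustrative number, not any particular state's rate.
STATE_RATE = 0.05

def deduction_benefit(income: float) -> float:
    deduction = 5_000 if income <= 250_000 else 2_500
    return deduction * STATE_RATE

for income in (250_000.00, 250_000.01):
    print(f"income ${income:,.2f} -> benefit ${deduction_benefit(income):,.2f}")
# income $250,000.00 -> benefit $250.00
# income $250,000.01 -> benefit $125.00
# One extra cent halves the deduction's value; at higher state
# rates the jump is the "few hundred dollars" mentioned above.
```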
In the Netherlands we have a marginal tax rate, so every euro over X gets taxed at 10%, everything over Y at 15%, etc. (simplified numbers, obviously).
However, oftentimes it's better to stay at the top of a lower bracket because of tangentially related benefits, such as healthcare subsidies, rent subsidies and other things like that. If you go from tax bracket 1 to 2 because you get a 100 euro raise, sure, you'll get 100 euros more (well, more like 95, but whatever), but you can also lose more than that in other benefits.
My partner went through this recently, she got a raise at work, but as a result she actually lost the subsidized rent money she got from the gov't. She had to request her workplace lower her wage so she was under the limit, because otherwise she couldn't have afforded rent on her own, and if the raise was even 2 euros/hr higher, she might've even been kicked out of her social housing situation.
That's because the benefits aren't marginal; they work on a hard cut-off limit. Anything over X amount and you're just cut off, not gradually weaned off it until you're at a high enough income not to need gov't help.
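A minimal sketch of the difference, with all numbers invented: a marginal tax only ever takes a fraction of a raise, while a hard benefit cutoff can take more than the whole raise.

```python
# Invented numbers: 10% tax up to the bracket boundary, 15% above it,
# plus a rent subsidy that vanishes entirely past the same boundary.
BRACKET = 30_000
RENT_SUBSIDY = 3_000

def net_income(gross: float) -> float:
    tax = 0.10 * min(gross, BRACKET) + 0.15 * max(gross - BRACKET, 0)
    subsidy = RENT_SUBSIDY if gross <= BRACKET else 0  # hard cutoff
    return gross - tax + subsidy

print(net_income(30_000))  # 30000.0
print(net_income(30_100))  # 27085.0 -> a 100 euro raise costs ~2,915 net
```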
A likely truth no one wants to talk about: LLMs will only help people who want to learn. Those people are likely already in very good shape in life. The amount of help from LLMs is likely very high for such people - as you note, the ability to have a back and forth is very helpful.
For 99% of the population, they aren't going to do this. It is what it is.
Gotcha. So I guess the question is, can an AI run a Zoom meeting or interactive multiplayer learning game with a bunch of kids on it? Have to admit that might be a stretch.