Teachers aren’t interested in a recounting of the battle of so-and-so but in training you to gather knowledge, structure your thoughts, and express them clearly.
You can only learn that by doing. A chatbot bypasses the learning process, so you will have gained neither subject knowledge nor methodological skill.
> The calculator was meant to make computation more convenient for people who already knew about numbers. Now, it threatens to crash the intellectual order, assuming the role of an end, when it is only a means.
I'm pretty sure that a "sufficiently smart" chatbot (or maybe even an extra dumb one) is a useful tool "in training you to gather knowledge, structure your thoughts, and express them clearly". I've found it remarkably useful for clarifying my thoughts, considering alternative arguments, and general tomfoolery that can spark creativity.
The problem is that computing things is something most people don't do that frequently, while "structuring your thoughts and expressing them clearly" is a prerequisite for having any sort of meaningful conversation, or even an opinion.
I am mostly worried about young people who will grow up relying too much on ChatGPT: what will they do when they do not have a bot hand-holding them through some complicated idea? And if this kind of bot becomes ubiquitous, what is the place for humans?
When calculators became widespread, we calculated a lot more. When LLMs become widespread, we will... think more? I seriously don't know.
I'm very, very sure that a machine which requires you to structure your thoughts and express them clearly will not lower the capacity for that in the general public. LLMs are extremely prone to garbage in, garbage out - if you can't be precise in structuring and expressing your thoughts, your results will be likewise questionable.
I certainly benefit greatly already from using LLMs to accomplish a number of tasks. I think the answer to where the responsibility lies depends greatly on your view of the same sorts of questions around auteur theory - is the director responsible for the quality of the film? Or is it the writer of the screenplay? What about the cast, or the producers? Is Microsoft the author if you write a novel in Word, without inscribing the lines onto the page yourself? I think it's going to be very interesting to see how all of this plays out, and where the lines are drawn. I suspect that what is causing concern now will, in ten years perhaps, be normal, obvious and not even discussed.
> LLMs are extremely prone to garbage in, garbage out - if you can't be precise in structuring and expressing your thoughts, your results will be likewise questionable.
I agree, and that is the issue. People like us can use LLMs effectively because we are already capable of expressing our thoughts in a decent manner, and we can recognize when the output does not make sense. But to know whether the results can be trusted, you already need to be one level above that. If one is not capable of producing a coherent argument on their own, how can they evaluate whether an argument they hear is coherent? And if one, say out of laziness, relies on LLMs from childhood to fill in all the difficult steps, how will they learn to do it on their own? Practising has always been the best way to learn things.
> I think it's going to be very interesting to see how all of this plays out, and where the lines are drawn. I suspect that what is causing concern now will, in ten years perhaps, be normal, obvious and not even discussed.
Teachers aren’t interested in a recounting of the battle of so-and-so but in training you to gather knowledge, structure your thoughts, and express them clearly.
You can only learn that by doing. A chatbot bypasses the learning process, so you will have gained neither subject knowledge nor methodological skill.