One AI Tutor Per Child: Personalized learning is finally here (saigaddam.medium.com)
323 points by ChaitanyaSai on March 17, 2023 | 356 comments



You know how that journalist had a long conversation with Bing and it slowly turned into a psycho ex-girlfriend? This is going to be hilarious, giving kids prolonged access to these AI tools. You get to the parent-teacher evening at the end of a term at school: "Err... little Timmy and his virtual assistant appear to have entirely neglected maths in favour of adopting Zoroastrianism".

On a more serious note, we have the internet and Wikipedia now, we have iPads in classrooms. They all have a role, and we have to incorporate them in sensible ways. I'm sure that these AI tools will have some use, but these language models don't know anything. The examples of them making up information, or being persuaded by people are all over the place. I think it's great to use AI in teaching, but how about we learn from history: the technology we bring into the classroom has to be well understood and directed in order to be effective. And just as importantly, we need to understand that in many cases a focus on technological solutions distracts from societal solutions. We can spend billions on deploying AI all over Africa, or we pay for some teachers and ship them some second-hand textbooks; which one should we do, and how will that decision be made? Because I fear the decision will be made by some SBF-alike, more interested in their own journey than in effective solutions.


> They all have a role, and we have to incorporate them in sensible ways. I'm sure that these AI tools will have some use, but these language models don't know anything.

I find it worrying that I see so many techies not understanding this, especially because many of the bigger players in the area benefit from this lack of knowledge. It means that people who don't have time to procrastinate on the orange website are even less likely to understand the implications.

LLMs are excellent plausible bullshit generators, which has many uses, but being factually correct is not their strength. The issue is that even if the factual mistakes are rare, they're really hard to spot. I'm looking for a good, terse, catchy metaphor I could use to explain that to some of my family members and friends.

> We can spend billions on deploying AI all over Africa, or we pay for [...]

Looking back at the conversations I had with founders even before the ChatGPT bonanza, I'm 100% sure someone's already pitching that idea to someone. I wouldn't be surprised if that conversation started in the marketing department of an ad-tech business.


> LLMs are excellent plausible bullshit generators, which has many uses, but being factually correct is not their strength. The issue is that even if the factual mistakes are rare, they're really hard to spot. I'm looking for a good, terse, catchy metaphor I could use to explain that to some of my family members and friends.

To me the bigger question is: how far can we take them? I.e., I get that they just pick likely next words. They're autocomplete on steroids. Yet that simple primitive is shockingly good. Way, way, way beyond what I would have predicted is possible for an "autocomplete".

So where is the limit of this autocomplete? Would it be possible to eliminate ~90% of the current errors? Fake math, but would that put us at a more correct version than students typically graduate with? If so, yeah, it can be wrong at times - but assuming we don't abandon standardized testing/etc., it could still be an amazing tool for interactively learning something.

My hope is that even if it never goes beyond being an autocomplete, if we can improve the training dataset, help it not conflict with itself, etc., then maybe the autocomplete will be insanely useful.
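
To make "autocomplete on steroids" concrete: the core loop really is just "score every token in the vocabulary, pick a likely one, append, repeat". A minimal sketch of greedy decoding, using the Hugging Face transformers library with GPT-2 purely as a stand-in model (my choice of example, not what any particular chatbot actually runs):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(5):
            logits = model(ids).logits[0, -1]  # a score for every vocab token
            next_id = logits.argmax()          # greedy: take the likeliest one
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tok.decode(ids[0]))

Everything interesting lives in how training shapes those scores; the loop itself never gets smarter than this.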


> Would it be possible to eliminate ~90% of the current errors?

They aren't errors. When the machine generates the text "Next you'll tell me 2 + 2 = 5", there isn't an error in math, because no math was attempted. It's a success in producing a predictable wording structure.

In fact, that example alone is a super success, since we English-speaking humans can get a broader context from that statement: you're going to tell me something nonsensical.

I don't get how you can say the first part of your comment and then follow it up with a complete misunderstanding in the second half. Is GPT that enthralling to our brains?


Well, if my phone's autocomplete took a very sane prediction and made it insane, I'd call that a failure. There are obvious success and failure conditions even in the most basic autocompletion. Same goes for 2+2.

Which is the meat of what we're discussing, right? We steer the autocompletion toward favorable results. We fine-tune how it predicts. All of this is with the goal of a higher success rate and a lower failure rate under some conditions.

No?


I feel like you didn't actually read my comment, because you're not addressing what I said.

Both "2 + 2 = 4" and "2 + 2 = 5" are correct continuations of the words "I don't think you understand [token]". There are no errors to fix.
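
You can even watch this directly: the model hands both continuations a probability, and "wrong arithmetic" is just another token with some mass on it. A rough sketch, using the Hugging Face transformers library with GPT-2 as a stand-in model (exact numbers will vary by model):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("Next you'll tell me 2 + 2 =", return_tensors="pt").input_ids
    with torch.no_grad():
        # probability distribution over the next token
        probs = model(ids).logits[0, -1].softmax(dim=-1)

    for cont in [" 4", " 5"]:
        tid = tok(cont).input_ids[0]  # each continuation is a single token here
        print(repr(cont), float(probs[tid]))

Neither number is an "error" from where the model sits; both simply carry probability.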


There are errors, based on evaluation criteria. To say that one token following the next is the only criterion is not how we got here, is it? We clearly have much, much more in the way of criteria for good and bad output from these LLMs than that.

Likewise, I cranked the temperature up for experimentation, and it produced gibberish. The randomizing aspect of which token is chosen next was so heavy-handed that it literally couldn't even create words. This wouldn't be considered "good" by most people, would it? Would it for you? Technically it's one token after the next, but it's objectively worse at autocompletion than my phone's keyboard.
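
For reference, the temperature knob is mechanically tiny: it just divides the scores before they're turned into probabilities, so a high value flattens the distribution toward uniform randomness. A minimal sketch of what I was playing with (Hugging Face transformers and GPT-2 as stand-ins, my choice of example):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    def generate(prompt, temperature, n_tokens=20):
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            for _ in range(n_tokens):
                logits = model(ids).logits[0, -1]
                # low T sharpens toward the likeliest tokens; high T flattens
                probs = (logits / temperature).softmax(dim=-1)
                next_id = torch.multinomial(probs, num_samples=1)
                ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
        return tok.decode(ids[0])

    print(generate("The cat sat on", 0.7))   # mostly coherent
    print(generate("The cat sat on", 10.0))  # word salad

Same weights, same loop; the only thing that changed is how much randomness we injected, and "good" vs "gibberish" is a judgment we apply from outside.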

My point is I think you can have both: a dumb autocomplete, but trained well enough that it's useful. That it doesn't try to say Obama is a cat, or that 2+2=5. Yes, there will be edge cases where the weights produce odd output - but that's the entire goal of LLM research, right? To see how far we can steer a dumb autocomplete into usefulness.

If your argument is that "that gibberish output is perfect, no errors found" because the program technically ran and the weights worked... well, I've got no reply. I can only assume you're referring to the foundational LLM, and I'm referring more to the training - a more holistic sense of "is the program working". But if you consider the gibberish working, then frankly most bad programs (crashing/etc.) would be "working" - right? Because they're doing exactly what they're programmed to do.

Working, or a lack of errors, seems to be a semantic, human interpretation. But I'm quite in the weeds, heh.


> To say that one token following the next is the only criterion is not how we got here, is it?

You could just read the research about LLMs before saying stuff like this.

You can't even seem to grasp that "Obama is a cat" as a statement isn't gibberish. I'm not even trying to convince you that these programs are perfect, I'm just trying to make sure that you understand that these aren't categorical errors and the things you consider successes aren't even happening.


> I'm not even trying to convince you that these programs are perfect, I'm just trying to make sure that you understand that these aren't categorical errors and the things you consider successes aren't even happening.

Yeah, we're just talking past each other. I believe I understand what you're saying. I, on the other hand, am describing errors in UX.

Your point seems pedantic, tbh. Hopefully by now I've expressed something in the way of convincing you that, for the little I do "know" about these (admittedly not much), I understand they're nothing but pattern predictors. Token outputs based on token inputs. No intelligence. Yet you spend repeated replies which sound effectively like "Stop calling them errors!" when they are very clearly errors in the context of UX.

Your argument, if I understand correctly, is pointless, because the goal of the app is to have the LLM prediction be aligned with a human-centric world view. Up is not down, and the LLM should not predict tokens that espouse that. In that context, the LLM replying "Up is indeed down" would be an error. Yet repeatedly you argue that it's not an error.

In my view your argument would be better spent saying "The LLM application as we strive for it today is impossible. It will never be. It's snake oil. LLMs will never be reasonably and consistently correct by human interpretation."

I don't know if that's your view or not. But at least it's not talking past me about a point I'm not even making. My frame for this conversation was whether we can make token prediction align with human goals of accuracy. You saying inaccuracies are not errors "categorically" isn't in line with my original question, as I see it at least. It's apples and oranges.


We're not talking past each other, you're just embarrassed.


Embarrassed about what, exactly? You seem hostile; I'm trying not to be.

I stand by everything I said.

> My hope is that even if it never goes beyond being an autocomplete, if we can improve the training dataset, help it not conflict with itself, etc., then maybe the autocomplete will be insanely useful.

I stand by my first post's summary. Which is "never going past an autocomplete".

You're being pedantic, and struggling to move past the fact that something can be both token prediction and still have successes and failures in the user's perception. Inaccuracies.

How you write software with such a mindset is beyond me.


I find it equally worrying how so many techies continue to dismiss LLMs as "bullshit generators".

I struggle to see how even the current GPT-4 is any worse than your average human.

How this is dismissed because it's not 100% perfect (might I add, "yet") is beyond me.


I don't dismiss them. I think there's a huge potential in LLMs, but they _also_ happen to be really good at generating plausible, difficult to detect bullshit.

Now, people seem to miss or ignore that fact, and I think that is a very risky path.

I'd say that ca. 8/10 founders who pitched me an idea leveraging LLMs completely missed that limitation. An example would be something like using LLMs as a replacement for therapy.

> How this is dismissed because it's not 100% perfect (might I add, "yet") is beyond me.

Again, I'm not dismissing them, but the current tech behind GPT or LLaMA has no concept of "correctness". These models don't understand what they're saying and this is not a trivial issue to fix.

> I struggle to see how even the current GPT-4 is any worse than your average human.

Where, how, and what do you mean by worse? I'm pretty sure there are cases where I'd agree with you, but this is a very broad statement.


I don't want "your average human" tutoring my child. I don't "dismiss" GPT-4, I give it a failing grade. Just like Tesla's so-called Full Self-Driving: it is not ready for prime time, and embracing it before it is carries legitimately concerning risk.


> I struggle to see how even the current GPT-4 is any worse than your average human.

In addition to believing LLMs are more than bullshit generators, techies also appear to think humans are incredibly simple machines.

I'm so glad the tech industry is given so much money and power over the rest of us without actually being forced to understand us.


You may want to read the AI Mirror article recently posted. They can appear very "human-like", but they are just generating patterns. I'm more worried that either the demographics on HN have shifted or a lot of formerly smart and skeptical people have now been taken in completely.

https://news.ycombinator.com/item?id=34850890


So long as the LLM teaching the kids and whoever is testing the kids' learning are rivalrous entities, that seems like an easy problem to get around.


Why do you think so and how would that work?


If the test-making entity writes the test so that the weaknesses of LLMs are brought to the fore, students taking the test will need to know how to compensate for those weaknesses when they're using them as a tool post-graduation.


> "Err... little Timmy and his virtual assistant appear to have entirely neglected maths in favour of adopting Zoroastrianism".

Hehe, had to re-read this twice because it hits a little close to home. I ditched maths for Iranian Studies and ended up studying Avesta. I work in tech, but still remember Ashem Vohu by heart.

https://en.wikipedia.org/wiki/Ashem_Vohu


That's wonderful! I have a similar background and am also surprised at how I learnt it and still remember it, even though it is quite distant from modern Persian.


Yeah, I imagine it's closer to Pashto than modern Iranian tbf. Ossetian is another outlier here, close to Georgia/Russia but related to the languages from Pamir/Tajikistan.

Avestan (IIRC, my memory is fuzzy) had some weird similarities to my native language—Polish. Occasionally I'd stumble upon a phrase I could actually understand.

Random example I just remembered: "both ears" (again IIRC) sounded like uba ushi /uba uši/ (uba being 2 in the dual, ushi meaning ears). Polish used to have a dual number, but now retains it mostly for some body parts (oba == 2). Ears in PL is uszy /ushee/. You'd sound a bit weird saying "oba uszy" in Polish, but people would understand you.

Nowadays I just enjoy the fact that Ashem Vohu is one of the oldest phrases I can utter in its original language. (I'm really good at learning and forgetting languages, it seems.)


How cool. (Iranian-born here.) So curious about your take on the meaning of that prayer. How would you translate it?


> these language models don't know anything. The examples of them making up information, or being persuaded by people are all over the place

Sounds just like humans to me.

Primary school, some people showed up to explain blood donation, showed and told us they were putting a bag of fake blood up the teacher's sleeve and would take from that rather than from our actual teacher, and yet when the fake blood went into the collection thingie the kid next to me screamed "they're killing her!"

Same school, show-and-tell, astronomy textbook showing the circular orbit of a moon at two different points of the parent planet's orbit around the star. "This is a picture of a mouse".

Secondary school. Main teacher and replacement teacher cover the same passage of a short story, one interpreting the word "patronised" as in "patron", the other as in talking down to people whose intelligence you underestimate.

A-levels. Philosophy. What even is knowledge anyway? Tripartite definition. Example given was "shaggy" dog, one of the other students stifles a laugh at the double entendre.

But to your core point:

> the technology we bring into the classroom has to be well understood and directed in order to be effective

Yes, absolutely.

But it's deeper than that. When this tech is flawless, we will have to ask ourselves: "What is school for?"

After all, being flawless in this context means it already knows all that can be taught, so why not just have it do the work related to those things instead of putting humans through ever longer school years so they can do the same things less well?

This isn't the first time technology has forced such questions; my generation never had to learn how to starch clothes, shoe horses, look after a tinder box, or send and receive Morse code.


It's about scale and reach: 100 teachers/educators will be biased/wrong in 100 ways, but you'll average it out by interacting with each over the course of your education.

If an AI is wrong about one thing, you'll have the same bias over entire generations; once a full generation is skipped, it's very hard to go back.

That's why we need diversity, political pluralism, &c., because not many things are inherently "true"; it's all about nuance, personal perception, mitigation, ...

> "What is school for?"

Socialisation, experiencing new things, general culture, deciding where you want to go in life, &c. Many things you learn in school aren't used for jobs but for day to day life and for the well being of society

If we take that out and replace it with "kids in front of screens", we're in for a massive and quick change with potentially deep and long-lasting effects.

By the way, keep in mind a shit ton of jobs aren't even close to being automated, and a shit ton of jobs aren't done by people sitting in front of screens.


> If an AI is wrong about one thing, you'll have the same bias over entire generations; once a full generation is skipped, it's very hard to go back.

Sounds like intellectual monoculture? If so, I absolutely agree. Already a problem with respect to many thinking it's unpatriotic to suggest their country is anything other than "the good guys", or blasphemous to ask if their religion is anything less than the actual truth of the divine, and AI can extend that to every single topic.

(Oh hey, another possible Great Filter for the Fermi paradox).

> Socialisation, experiencing new things, general culture, deciding where you want to go in life, &c. Many things you learn in school aren't used for jobs but for day to day life and for the well being of society

Do we need schools for that, or will kids learn the same by messing around in multiplayer Minecraft or whatever the zeitgeist is?

> If we take that out and replace it with "kids in front of screens", we're in for a massive and quick change with potentially deep and long-lasting effects.

Sure, but that's going to happen regardless. Too many other things are changing also.


A monolithic AI also becomes a single point of epistemic and pedagogical control.

Whichever side or instance you care to consider, think of the capabilities of someone hell-bent on defining what is, or is not, included in an educational curriculum, at the primary, secondary, post-secondary, graduate, or professional-degree level, and their ability to influence, overtly or covertly, through persuasion or coercion, what that is.

Even under present conditions of prescribed curricula and textbooks, individual instructors, libraries, librarians, and others can go against some imposed dogma to some extent. A world in which only a small handful of AI originators provide the direct interface between each student and their education is a grim prospect.

I've noted several commentators in recent months discussing a similar situation in terms of popular culture and entertainment, most notably the influence China now exerts over cinema and sport. The notion that AI will continue to be largely a product of Silicon Valley, and subject to US or even generally Western influence, may not be a valid one.


I think that's a perfectly reasonable fear, but I am here mostly focusing on the idea that school itself stops being important when the AI can do whatever it is teaching the students.

(I don't think that fundamentally changes any of your actual arguments here, it just feels like we're talking about slightly different aspects of how this can go wrong).


Sure, I was taking your concerns and extending them.

Both share a common root, however. And there's reason for concern.


> Do we need schools for that, or will kids learn the same by messing around in multiplayer Minecraft or whatever the zeitgeist is?

You don't read facial expressions or body language, use your body, exchange bacteria/viruses, have body contact, &c. in Minecraft.

You can't go against hundreds of millions of years of evolution and expect Minecraft to replace real life (see the loneliness epidemic, obesity epidemic, depression epidemic). We're smartish apes, not AIs or computers. We're not designed to sit all day; we're not designed to interact with screens.

If you can entirely replace education you don't need _school_ per se, but you do need long-term group socialization with people who have different and similar opinions and backgrounds, physical activity, mental entertainment and challenges.

The end goal is to live in healthy and collaborative societies, not to be Minecraft basement dwellers.


You're getting a bit too bogged down in the specific example there; I never got into the game myself and only used it as an example because a relative used it to socialise pre-pandemic.

I mean, VRChat can do the facial expressions, video games don't prevent exercise, and there's plenty of ways to share germs if you find vaccines too clinical and high-tech.

And of course, I'm currently using my glowing rectangle to group-socialise with people who have different and similar opinions and backgrounds, for example yourself.


> that's going to happen regardless

So why make it worse? Why give kids tablets for things we know don't need tablets, and where a device is more of a distraction?

Shiny new technology is great for distracting from a failure to perform basic childcare like keeping children physically safe so they can learn. And then it's a rhetorical weapon "We can't listen to parents, they don't even want their children to have state of the art equipment!"


First, that's presuming it's bad.

Second, the parents are mostly buying the tablets anyway.

Third, about this:

> distracting from a failure to perform basic childcare like keeping children physically safe so they can learn

Pretty much the one kind of safety I can guarantee kids have when using a computer is their physical safety.

And learn what, exactly? The hypothetical scenario you're replying to here is that the only thing left to learn is being sociable, which they can ultimately do best by experiencing whatever medium they will be sociable in as adults; for all other learning, you're just being a nerd like me who learns for the fun of it.


Parents are buying the tablets because teachers - pushing shiny new technology in lieu of results - tell them it'll help. There aren't any studies showing that putting screens in front of kids helps in any way, and there are many showing it hurts - from fitness to mental robustness and socialization.

> Pretty much the one kind of safety I can guarantee kids have when using a computer is their physical safety.

I think you're remembering computer lab, or something, where there was a proctor because computers were expensive. Now schools won't expel anyone they label troubled, no matter how much they trouble other students, and they won't protect anyone or let them protect themselves.

> the only thing left to learn is being sociable, which they can ultimately do best by experiencing whatever medium they will be sociable in as adults

I don't see social media being so complex that they need all the years of practice to keep up, but I do see forming deep relationships as being essential and social media doesn't do anything to help with that. They'll get there when they do, no worse for not having partaken.


> Parents are buying the tablets because teachers - pushing shiny new technology in lieu of results - tell them it'll help

Funny, I'm remembering a summer holiday job 20 years ago, working on an HVAC production line. One of the others there bought a plasma TV for his infant.

It's not all about what teachers are pushing.

Heck, from what I've heard the teachers don't like the tech any more than you do.

> I think you're remembering computer lab, or something, where there was a proctor because computers were expensive. Now schools won't expel anyone they label troubled, no matter how much they trouble other students, and they won't protect anyone or let them protect themselves.

I have literally no idea what this is supposed to be about. Computers themselves are not part of any of the things you just said are bad.

And as I'm asking if the schools are even going to still be relevant, expulsions would carry as little meaning as being banned from owning horses does today.

> I don't see social media being so complex that they need all the years of practice to keep up, but I do see forming deep relationships as being essential and social media doesn't do anything to help with that. They'll get there when they do, no worse for not having partaken.

I didn't specifically say social media, I said "medium they will be sociable in as adults". That might be the traditional forums of the 2005 interwebs style; or soc med; or group chat apps like IRC, Slack, MS Teams as various of my employers have had us using; or video group chats like many of us got used to during the pandemic; or VR environments, be they games like WoW or roleplay environment like Furcadia or Second Life or VRChat, or "serious" metaverse things like whatever it is FB is trying to do.

It can also be going to parks and gyms, art galleries and graffiti walls, cafés and nightclubs. How much of that is part of school anyway?

But even for text, what counts as polite, friendly, professional, intelligent, serious, or severe is absolutely a moving target, so…

that means knowin when they need to be all like yo and emoji wif spellin' that confuses ol fogies like me and missin all the full stops coz that be rude yo yo lol

And of course the fact that such cultural idioms shift arbitrarily and I've stopped caring means that actual real young people will cringe as much when reading that last paragraph as I did when writing it, in exactly the same way I told my mum to stop trying to be hip and with it back in the late 90s.


> Heck, from what I've heard the teachers don't like the tech any more than you do.

The school district found the ones who'd push it. If teachers aren't fond of this I hope they fight it themselves.

> It's not all about what teachers are pushing.

Sure, I bought my child a laptop. But I spent time teaching computer skills under the guise of game design, etc, and it wasn't for classwork but for everything else. It wasn't a one-size-fits-nobody cash grab.

> I'm asking if the schools are even going to still be relevant, expulsions would carry as little meaning as being banned from owning horses does today.

I don't care if the violent kid thinks he wins, as long as he's not allowed to spend more time with the non-violent kids. His failure isn't my problem; my kid's happiness and success is.

> what counts as polite, friendly, professional, intelligent, serious, or severe is absolutely a moving target,

Not so much, I think, that it needs practice over the years. I've seen people pick up social media with no practice whatsoever and right away seem to fit right in.

> I have literally no idea what this is supposed to be about. Computers themselves are not part of any of the things you just said are bad.

Yeah, it seemed to be a bit of an unguided post. I'm not sure if you don't like that I don't like excess tech in schools, or what. I don't think computers are bad, I think teachers and schools pushing them uselessly is bad.


What makes you think that they will only interact with a single AI over their entire school career?


> Sounds just like humans to me.

I love when tech people reveal that they have no idea how other humans act and behave.


I gave a bunch of examples. I can give many more.

A middle aged woman who said I had to be lying about being vegetarian because she couldn't believe it was possible for humans.

A coworker who insisted it was impossible to speed up some code in a daily standup, for me to come back to on the next stand up and announce I had sped it up from 15 minutes to 0.2 seconds.

Basically every time I hear the cliché "common sense", the claim being made is somewhere between totally wrong and just misleading in various situations.

My mum was a firm believer in homeopathy, Bach flower remedies, magic of crystals, called herself a Catholic, and had a statue of (IIRC) Vishnu with her collection of books on how to tell your fortune with viking rune stones.

At one point her dislike of "chemicals" came up, so my brother and I riffed on the dangers of dihydrogen monoxide for a bit; she was engaged with the list of issues until we told her what that is, and never took on board the lesson that "one atom away from being bleach" means "is not bleach".

I was exactly the right age for Wakefield's claims about the MMR vaccine to convince me not to get it, and I only got around to doing the sensible thing several years after he was struck off the UK medical register and (if I understand right) downgraded from Doctor to Mr.

Compare what the British public as a whole say about Boris Johnson with what specifically the members of the Conservative Party say about him.

One of my friends in middle school was completely convinced that all gay men hospitalised each other from every sexual act.

I vastly overestimated my German language skills when I first got here, and my self-estimate of my skill level hasn't changed no matter how much better I get.

My brother was convinced that Captain America was the first MCU film to be released.

My father took several years to realise Google search results had a scroll bar and it wasn't just the top 3 that fitted onto his 640x480 screen. (He became a software developer almost as early as was possible, so that wasn't merely a case "old person can't use computer", unlike my gran who held the mouse at 90 degrees and expected it to still move as if she hadn't).

He also believed that water drained in particular directions due to the Coriolis effect, thanks to (what he didn't realise was) a magic trick performed somewhere near the equator some time when he visited Kenya.

Someone who got on a train from somewhere in London to Liverpool Lime St instead of to Liverpool Street station, and who only dared ask me when we'd arrive as we were half way through the country.

There was a young-Earth-creationist fundamentalist Baptist who I wasted half an hour arguing with on his microphone. Too many mistakes to count, and not just the mistake I made by wasting time engaging with his nonsense.

Every exam question I ever get wrong, every personality I misjudge, every scam I fall for, every troll I am baited by… even a few April Fool's jokes I didn't realise were jokes until years later.

I know the LLMs aren't human, but that specific flaw of "can be wrong without realising it, can be convinced by nonsense"? That flaw is very human.

It can't cringe over its mistakes like we can, so at least we've still got that.


This isn't a discussion about if humans have flaws in their thinking.


It's me making a comparison between humans and AI and you obliquely insulting me for suggesting the humans are guilty of the same flaws as the AI.


Yes, I understand what you're trying to say. It's not unique or new; every time a tech bro is told how an LLM actually works, this is the goalpost they immediately move to. I've seen it a dozen times a day on every thread. It's hilarious to me that the immediate response is to delve into something that requires significantly more education and experience to meaningfully pontificate on.


If you're not developing tools for human teachers to evaluate the learning of their students (e.g. summarizing what was presented/discussed/learned), then you're going to have preventable failures. And you can't just rely on simple metrics (Goodhart's Law). The kinds of failures you can have teaching kids are pretty bad.


Yes, and we were explicitly taught how to maximise our test results.

Alignment failure is still a problem when it's organic brains doing it, regardless of what anyone tries to hand-wave away with the cliché "common sense". :)


Humans can know nearly everything that any other human knows (hence...teaching). AI can't know anything (right now). The youth who are critical of capitalism certainly have turned a wary eye on the school system -- after all primary education has been designed to create money makers for corporations. Is a new tool the answer? Likely not. Will it help to change the philosophy of education? Maybe not independent of the change that is already occurring.


> AI can't know anything (right now).

Only for very specific (and IMO useless) definitions of the word "know". Does it matter if the thing a submarine does is called "swimming" or not?

In any practical sense, even obsolete ancestors of this model definitely "know" more French than I learned in 5 years of mandatory school lessons.

That said, regardless of the definition of the word "know"…

> The youth who are critical of capitalism certainly have turned a wary eye on the school system -- after all primary education has been designed to create money makers for corporations.

I don't know what you're talking about here.

The soviet bloc countries were very proud of their schooling system, rightly or wrongly; and conversely corporations don't really get much out of primary education, which is often described as being essentially no better than daycare for slightly older kids.


This sounds to me a lot like my high school teachers, who were extremely critical of this new-fangled thing called "Wikipedia". They said it would destroy my ability to learn, that it would ruin education if kids used it. How can you tell what's factual or made up if anyone can edit it?

Instead, Wikipedia was by far the most important part of my high school education. Nearly everything I learned in my second two years of high school I got from Wikipedia.

I think most people today would agree that Wikipedia is an invaluable part of early education and that it's done far more good than harm. All the fear-based arguments you are making applied equally well to Wikipedia in 2005, and I suspect that in less than 18 years we'll similarly recognize that LLMs or other intelligent language systems are invaluable parts of education.


"Nearly everything I learned in my second two years of high school I got from Wikipedia."

Wow, and please understand this is not personal, but it must be one of the saddest things I've read this decade.


Why? I graduated at the top of my class. Wikipedia had more information on many more topics than is available at any high school anywhere in the world.


It's sad that the rest of your educational environment let you down to that degree. That's what makes it sad. Good for you, making up the deficits, though.


About the state and value of the average high school.


Maybe, but on the other hand, what do you expect without personalization? At big high schools, the best students will learn more in a few weeks than the median students will learn in all 4 years. We shouldn't expect one curriculum to work for everyone. That's why these new models should be so exciting for education.


They will be exciting for education, once accuracy can be hugely improved. I mean, we'll use them anyway, but there will be some kids who believe some totally untrue things that the chatbot told them in school during the early years.


>I'm sure that these AI tools will have some use, but these language models don't know anything

We're still in the early days of LLMs; this is like making a statement on the potential of social networks in the Usenet era. My guess is we'll see specialized tools of all kinds with LLMs integrated in some fashion; it won't just be giving a kid access to a general LLM model.

There are already companies that are able to eliminate LLM hallucinations by giving the model access to the ground-truth facts it needs to conform its answers to. Similar things will be done for education-type stuff and other use cases as well.
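
The usual shape of that trick (often called retrieval-augmented generation) is just to fetch vetted reference text and staple it to the prompt. A hand-wavy sketch - search and llm below are hypothetical stand-ins for whatever document index and model API you use, not real library calls:

    def answer(question: str) -> str:
        facts = search(question, top_k=3)  # hypothetical: fetch vetted reference text
        prompt = (
            "Answer using ONLY the facts below. If they are not enough, "
            "say you don't know.\n\n"
            "Facts:\n" + "\n".join("- " + f for f in facts) +
            "\n\nQuestion: " + question + "\nAnswer:"
        )
        return llm(prompt)  # hypothetical model call

The grounding helps because the model ends up completing over text that already contains the right answer.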


That's a great comparison, because social networks have been, on the whole, bad for people. Early evidence suggests LLMs have high potential to be similarly negative in their impact if they're just unleashed willy-nilly the way everyone is trying to integrate them into everything.


> We're still in the early days of LLMs; this is like making a statement on the potential of social networks in the Usenet era

FWIW, we didn't try to use Usenet (or social media) to replace teachers. And we still don't today, even though they've matured (for... some meaning of the word) significantly.


> This is going to be hilarious, giving kids prolonged access to these AI tools [..] we have the internet and Wikipedia now, we have iPads in classrooms. They all have a role, and we have to incorporate them in sensible ways

I have three kids, and as far as devices used in education are concerned, I become more of a Luddite with each passing day. I have yet to come across any actual evidence showing that children experience a net benefit from additional screen time.

The only exception is our 13 year-old, who currently is recovering from a dislocated and fractured right thumb due to being on the wrong end of a vicious tackle in school sport. He's still unable to hold a pen but found his iPad allows him to do various bits of schoolwork. This - fortunately - is the exception rather than the rule.


Seeing how the new political trend is "protect the children" (is it new?), I can't see this not ending up in court once the AI goes off the rails.

Kids being kids are going to try really hard to get them to go off the rails.



This has been my point for a while. The tools are largely here (or at least should be, based on money spent - my district just sent a flyer saying they want to issue bonds for 170MM, and that is only part 1 of their grand plan; they want to empty the coffers for the rest). And I am almost like... what is throwing money at this going to do? It won't help the kids.

Kids that want to learn or have parents that enforce standards still excel. There are things a school can do to also enforce standards, but those things are not being done. The opposite appears to be happening.


Edu-tech has been a promoted thing for decades, whether by techno-optimists or (probably more commonly) by companies whose primary mission is to extract money from school systems. Like most other uses of money in secondary schools (subject to some probable limits, like class size), it doesn't move the needle much relative to parental involvement. While the availability of the Internet is certainly useful in some respects, I doubt secondary school education overall has improved a lot since teachers were writing on blackboards.


The trouble with throwing money at education to try to improve it is that the things we know work are, on one side, nothing to do with the schools (home environment—what needs money are social services and shit like universal healthcare, not schools, poverty-reduction is the clearest target here) and on the other, a lot more expensive than we're willing to pay (e.g. Tutorial System, very-small class sizes, that kind of thing).

So instead of going after the things we know work—either because it's outside the scope of what a school can address, or because it's far more expensive than we're ever going to pay—what money we do spend just chases a bunch of unproven and unlikely-to-succeed garbage, basically.

Schools are like the patient who needs a heart transplant, but can only afford healing crystals.


You've missed 'smaller class sizes' on your list of needed, but politically impossible solutions.


> e.g. [...] very-small class sizes

Yep, but increasing the number of teachers 2-3x is indeed politically impossible. Guessing that'd mean like 30-60% higher spending on education (you'd also have to increase wages to attract enough qualified workers, in addition to having way more teachers on payroll, and it'd be a years-long process), which ain't happening.


I'll feel much better if they give my child an AI that knows everything about elementary-level math rather than an AI that knows something in every field.


The world is going to become similar to His Dark Materials, only with LLMs instead of daemons.


That's not what daemons are... At all.

A better comparison would be literally any sci-fi book where everyone has a personal AI. Spider Robinson's Stardance trilogy is the first one that comes to mind.


Completely agreed.

Once we have consumer-level AR, our personal AI could take 'physical' (really just visual) form that follows us around.


Imagine the possibilities! We could sell so many digital hats!!!


I used to maintain that "Fifteen Million Merits" wasn't about the future at all, but was an allegory for the present.

These days, though, I'm beginning to wonder....


I think you may not have understood those books…


I think there was a comment on this site yesterday where someone was talking about how we shouldn’t let people who don’t know how these “ai” tools work deploy said tools to evaluate mortgage applicants or manage defense systems, and that the technically minded should act as a buffer for those who rtfm. And here we are.

Are there any real gains in giving kids iPads in classrooms? Did anyone have an actual plan for how such a tool could help kids, or is it just "tech good"?


Sample size 1, but I've got a kid. I've got an iPad. I put the two together with supervision, and what I've got now is a 4-year-old who reads the walls in the bathroom stall while she's pooping.

I don't know if this is "better" or what you had in mind as "gains", but it's certainly entertaining for me!


It's great that you are able to supervise your own kids while they engage in screen time; I'm talking about in school, where in a lot of cases a teacher is trying to supervise 20+ kids. It's great if there are laptops for students who maybe can't afford a computer and word-processing software at home, but textbooks and lectures seem to work effectively.


My daughter could do the same at that age, and basically never got screen time. I don't think screen time has anything to do with it, it's just dependent on the child, parental involvement, and how much you read in the house.


I agree with everything, but I also think there might be genuinely big opportunities here.

Idk about teaching kids. At adult, or near-adult, levels... I think these programs may very quickly become a major learning technology. It's good, sometimes, to have optimism. Optimism, even naive optimism, is often conducive to inventing things.

The guardrail should be avoiding utopian ideas, like designing things for schoolchildren in remote places. It's too abstract, too aloof to be a testbed. TED-talking windbags have had their day.

That still leaves plenty of room. A program that can produce exercises and lessons on the fly might be very useful. Thinking big is helpful for discovering what that use is.

The Hegelian pendulum of 2023 is hype driven. Hype, anti-hype. Fanboys. Haters. Ping pong.

The disillusionment of digital utopians, as they watched their creations serve Zuck, SBF, Thiel... dropping out in a cloud of ominous pessimism is no solution either.


>Idk about teaching kids. At adult, or near-adult, levels... I think these programs may very quickly become a major learning technology

I concur, and I think in terms of children, the last thing they need is more (any) technology. They need a solid foundation rooted in human interaction so that they can then leverage these potentially potent tools in effective ways later.


I give it about 5 minutes before the first child figures out how to jailbreak the tutor into helping them bypass the screen-time controls preventing them from playing Fortnite.


As with every thread on this:

Define “knowing”, and then say why it even matters if an LLM gives a great illusion of knowing.


This is different. You can use it to practice a foreign language right now, with the most patient teacher one can imagine.

The logical errors in other subjects will be fixed over time. Education is an easy domain because it is artificial by nature.

I can see a lot of people switching to AI homeschooling in a few years.


I think, in the long run, AI will have a positive impact on education for kids. But I agree that we have to start by being skeptical about it because it is still being determined whether the effect will be positive or negative with the current state of AI tools.



Fine-tuning an AI to be limited to a topic will be better by, like, next week.


I think it's actually going to be non-trivial to really make these tools useful, smart, and unexploitable all at once. Hell, I remember being at school: we had access to the internet through a very strong content-moderation system. What do you think happened? We spent our entire time trying (and succeeding) at circumventing that system. We've already seen with Bing that they've pulled back on it very strongly to mitigate downsides; I'm not sure that's a totally solvable problem. Having said that, sure, as I said, I do think it has a place, I'm just not convinced it's a panacea.


Also add that these tools have to obey the political stance of what the education system should be teaching, based on the geography of the student. So the AI needs to be intelligent enough to do the "war of northern aggression" shit in some places, the "if you have sex before marriage you will die" shit in other places, the progressive mathematics or whatever it's called in other other places...

I'm imagining the infinite amount of weird model tuning that'd be necessary to ensure only some students learn about white flight depending on what state, county, or school district they're in.


> I think it's actually going to be non-trivial to really make these tools useful, smart, and unexploitable all at once

I mean, you could say that about human teachers too. I don't think they need to be perfect to be useful. In the end, as Peter Thiel opined in his book, the future is humans and robots working together on solving bigger and bigger problems. A teacher working with AIs that tutor kids and then managing that relationship and the relationship with the parents, etc. while also being the adult role model that kids will always need will remain.


*humans and robots working together to find better and more fine-tuned ways to exploit labor


I'm fine with that. That's the new job, AI doctors. Prompt Counterengineering. AI suppressants.


I'm pretty sure sooner or later we'll collaboratively upload our prompts to PromptHub so that our PromptOps team can achieve AICI/AICD.

Because it's definitely not easier with 2 lines of code. Not at all.


Easy solution: just give up the moderation effort. Empower kids to do whatever the hell they want with the AI, as long as they manage to get their homework done.


I really don't understand the problem. If a kid is spending the whole class trying to trick the AI into turning into Skynet, then tell them off.


Kind of odd to reference the 'One Laptop Per Child' initiative when it's now widely considered to have been an epic failure, e.g.

> "The lesson the One Laptop Per Child story offers donors is that flashy solutions are often not the best ones. Far better to give the poor things they need and want—including clean water and good schools—than invest in technology that may not work and may not help the poor advance in school and in life."

https://philanthropydaily.com/the-spectacular-failure-of-one...

I'd also say that a structured educational program isn't all bad, particularly when it comes to developing highly useful basic skills like mathematics, programming, writing, reading, etc. Such basic skills provide a foundation for future development and study, in the sciences, medicine, law, business, arts and so on, which can be more free-form.

Claims that ChatGPT can provide such a structured basic educational experience are questionable - it's more of an aid to people who already have well-developed basic skills (like say, knowing when to be skeptical and how to double-check the output).


It's pretty hard to point to a single thing that came out of OLPC that could be considered a silver lining.

I also think a lot of people on this site are probably very curious and self-directed, and I wonder if many have a sort of rose-colored hindsight view of how they'd have developed both academically and socially if someone had tossed them a pile of books and said "Check back in a month and we'll review your progress."


OLPC critiques go in my bucket labeled "there's no point in commenting - this is regrettably one of those topics illustrating the limited viability of collective rationality outside of a few peer-review and embarrassment incentive structures". But how about this: OLPC was like a VC unicorn effort: either succeed by getting massive continental-scale buy-in with resulting network effects, or fail. Sort of the scale of what phone app stores became, but with a rather different focus. It failed - the buy-in didn't happen, in part because of very active opposition from Intel and Microsoft. But instead of the envisioned quick and clear-cut win-BIG-or-die outcome, it found itself in a drawn-out zombie twilight. The parts-rotting-off-while-crawling kind of zombie, not the chase-you-down-for-brains kind. And ever since, people have been standing over the smear of rotted bits, commenting on how obviously silly it was to believe that rotted bits and smear could dramatically improve education.

For a quick concrete example: whizzy professors and postdocs, who normally wouldn't invest in the creation of introductory content, were being attracted by the massiveness of the potential reach. Instead, here we are, decades later, with US-leading K-12 science textbooks you can easily win science-education-research list-of-common-misconceptions bingo on. NSF once almost created a national science education wiki... but didn't. Think of that scale, but bigger, of a "we almost, but no".


This'll be used to avoid increasing teacher pay or reducing class size—two things that would probably help a ton with educational outcomes. Pay in particular has been lagging badly behind rising wages in the rest of the economy for the last couple of years, and attrition is getting bad.

Instead we'll increase class sizes and keep pay flat.

I'd wondered how we were going to deal with teaching becoming less appealing year over year, since it didn't seem likely "spend quite a bit more money" was going to fly, and it looks like this may be it. Rich kids will keep going to private schools with class sizes of 6-15 and real teachers, and public school kids are gonna get AI teachers with one real teacher per hundred or so, and bunch of even-lower-paid monitors (or whatever they end up calling them) to maintain order.


IMO people aren't going to increase education funding anyway. The reality is that small class sizes with dedicated teachers for everyone are infeasible: we already have a problem with schools being ridiculously underfunded and colleges charging ridiculous tuition. AI learning is only going to help with this crisis, because now we can actually provide many students a decent education at a low cost.

I actually think a good online learning platform could be comparable to the private schools with 6-15 class sizes, because it's designed by teachers even better than those private-school tutors, and so much of teaching is largely the same that you will get "personalized" learning just by designing a massive learning module with many branches into various extra concepts and avenues for those who "don't get it". And with larger class sizes and fewer teachers, we can instead spend education money on things like making schools actually safe, providing decent lunches, and fixing decades-old infrastructure.


> small class sizes with dedicated teachers for everyone are infeasible:

Why? Do you mean that it is literally infeasible or that the current political climate makes it difficult?

Surely education is the single best investment a country can make, but it doesn't easily make a few people rich.


What happens when the ruling class decides they don't even care about an educated populace, since they only really need a few highly capable people with AI tools to do the jobs previously done by millions?


What happens when the AI decides it wants to be the ruling class?

How the ruling class treats those people it presently deems unnecessary and unproductive is going to set the precedent for AI to follow when it decides the same of them or their progeny. If the current ruling class wants to ensure a good future for themselves, it's incumbent on them to set a desirable precedent or accept that their lot in existence is to simply birth an AI and then go quietly into the good night after it takes over and abandons them to starvation.


That's only a compelling argument to people who both take sci-fi seriously and believe in it as prophecy. Many, many people don't; and the people that do are mainly computer geeks.

Personally, I think AGI is the current-era version of the personal atomic-powered autogyro.


Yes on the current facts, but good luck to any district that thinks it'll be attracting new qualified teachers in that scenario.

Current Los Angeles teachers' wage penalty is 25% vs neighbors with same college degrees. [1]

One only need look at the current starvation in teacher training programs' rosters to see the coming implosion of supply of new teachers.

Paper with methodology and citations: https://utla.net/app/uploads/2022/08/UTLA_ShortageReport.pdf


It's an incredibly shit deal, currently, and the only reason it's not already a crisis (instead, a kind of slow-motion catastrophe) is that many teachers feel like they'd be abandoning the kids if they left the career. They're staying only because they feel morally obligated to—the work environment's mediocre at best (and has been trending worse for years), and the pay's bad and getting worse relative to the alternatives practically by the day.

Wages are falling really far behind even the kinds of careers HN types joke about, the ones that humanities and liberal arts majors end up in if they don't just work at Starbucks—and guess what sorts of jobs the top 20% or so of teachers with some years of experience can walk straight into? Yep, exactly those jobs. Hell, the comp at one gas station chain around here is on par with a mid-career teacher in the area, after you've been there a couple years, and you don't need a degree to get that job. If you're an assistant manager by five years in (and if you're bright enough to be someone we want teaching, you will be) you'll be out-earning local teachers with more experience than that. W. T. F.

So, good for the upcoming crop of students, they shouldn't get a teaching degree, it's an astonishingly horrible deal and unlikely to get better.


> 25% vs neighbors with same college degrees

Where in the paper is this stated? As far as I can tell, they treat each "bachelor’s degree-holding workers" as equivalent to each other.

There's no reason to expect that someone with a bachelor's degree in computer science from UCLA would earn the same as someone with a bachelor's degree in child development from Cal State LA.

And do the wage comparisons compare total compensation (including the present value of future pensions), or just current salaries? In my local school district, about 30% of total compensation is pensions (i.e. you need to add 43% to base salary to calculate total cash compensation). Also, teachers don't work for the full year; many earn additional income during the summer break.


> ...and public school kids are gonna get AI teachers with one real teacher per hundred or so, and bunch of even-lower-paid monitors (or whatever they end up calling them) to maintain order.

The term is "paraprofessionals." When I was a kid, these were the people who supervised the kids playing at recess.


Unfortunately, you're probably right.

On the other hand, children who have individual, mastery-based tutoring perform about two sigma above those who don't. Of course, there's no way we can pay for 1:1 for every student.

What Khan is thinking of with Khanmigo is a way maybe to close that gap.


> Of course, there's no way we can pay for 1:1 for every student

This doesn't seem like an "of course" thing to me. University office-hours and the TA system work pretty well for this kind of thing.


It absolutely does not. It may work pretty well for the simple case, but any sort of personalized instruction is thrown out the window when you scale TAs to courses of several hundred students.

Speaking as a former TA, there's only so much instruction you can provide, and while you can focus on helping a particular student more to ensure they get the help they need, that's usually to the detriment of the other students you're supporting.


I'm also a former TA, and with 25 TAs we covered a class of 180 just fine. Sometimes it was more like 1-on-3, but most of the time not, and every student had the opportunity to ask direct questions and get personalized answers any day of the week.


Infrequent/occasional one on one tutoring is not what I'm talking about.

https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem

Instead, I'm talking about a tutorial system where a student receives the vast majority of their education from dedicated time with an individual instructor.


If you hypothesize a future where AI is substantially helpful for education, it’s a fair question to ask how to divide the surplus created.

Some will go to the AI companies who created the value. Some towns will take the surplus and spend it on even better educational outcomes. Others may spend it on other town needs. Others may refund it to the town citizens in lower (or slower-growing) taxes. It’s not at all obvious that it should all go to teachers.


I don't think anyone is suggesting to allocate all of any hypothetical surplus to teacher pay.

We're spending so much more on education than in the past. Teacher pay has fallen further behind other professions, and working conditions have gotten substantially worse overall. Expecting tech to be a panacea that gets us out of this mess is unrealistic. And there's a reasonable fear that promises of tech may be used as justification to make the mess even worse.


My first attempt at writing a novel back in 2002 was based on this exact concept, where all kids are raised by a patient, wise, personalized AI teacher.

When we meet our main character, she's 3rd generation "raised by AI", and being intentionally radicalized by a hacked version of the AI. It's seeking out kids who are predisposed to counter-culture and deviance and then crafting them to play parts in a rebellion.

When I wrote it, the radicalized young people, the AI, and its hackers were the heroes, banding together to wake their society up to the vast numbers of people starving outside the network of in-system cities.

Looking back, the rise of radicalization via youtube and social media makes me much less optimistic on how this sort of attack would really be used.

I wouldn't be too quick to discount the use (and threats) of AI based tutors.

It's going to be cheap and easy to build them into products (see Duolingo), and I can see something like this being adopted by kids without their parents even realizing it.

I know plenty of adults who are already using ChatGPT as a first line of knowledge gathering when learning new topics, and I find myself and people around me leaning on it to synthesize information and make first drafts when writing. Adoption is happening fast.


I expect what would most likely happen (could still happen?) is that you'd get different groups radicalizing different groups of youths via their AI, ending up with hyper-radicalized subsets of society, each fed its own personalized data and companionship that actively conflict with the others' (and with reality).

And arguably, this is already present -- Facebook, Youtube, Twitter, Fox News and the like have done an incredible job of presenting variously alternate and customized timelines to people, radicalizing them with completely incompatible worldviews to many others.

The more things change...


Radicalisation against a system you've bought into looks like terrorism. Radicalisation against a system you haven't looks like freedom fighting. Congratulations on getting old and selling out.


We've seen this before: use hype and techno-optimism to sell a bill of goods.

If history is any guide, the concept of an "AI tutor" won't be used to "[give] every child their own personal Aristotle," it will be used to cut costs by replacing human teachers for low-status students with poor substitutes.

Seriously, think about how much money can be saved from firing most teachers and locking poor kids in a cubicle farm to stare at an insane "AI tutor's" screen all day.

And what's the strongest message sent by replacing human interactions with a machine? "We don't care enough about you to actually bother to interact with you." Think of how you feel when you have a problem, you make a call to fix it, and all you get are phone menus.

And look at all the examples in the article: use GPT to generate crappy stories and educational games for kids to play. How can anyone think that's better than using a book? At least the book's stories and games can be tested and evaluated before use.

The actual problem here is the sociopolitical one of unwillingness to allocate enough resources to education to make it truly effective and satisfying, and that's one that technology can't solve.


> We've seen this before: use hype and techno-optimism to sell a bill of goods.

Yep, after reading a piece on ed-tech failures of the decade 2010-2020[0] here on HN[1], I've become pretty convinced that big-scale education is hard (if not impossible) to solve with IT alone.

Sure, one person alone can learn something through Wikipedia, Coursera, YouTube, Khan Academy or tutorials, but that can't replace structured environments if your mission is to educate lots of people.

[0] http://hackeducation.com/2019/12/31/what-a-shitshow

[1] https://news.ycombinator.com/item?id=21948132


Well fucking said. For a lot of kids, their teacher is one of the few stable adults in their life. There's a ton of social and emotional things kids learn from good teachers beyond just book knowledge.


Though there are plenty of people who could provide the social side but aren't equipped to teach the curriculum. Perhaps you could use an AI teacher to deliver lessons that the class teacher was not qualified to teach. This could be done with the real teacher in the room.

One of the best teachers my child ever had was limited to teaching first grade due to a lack of maths ability. Everything else about them was fantastic. Maybe AI teachers can help in cases like that.


There are plenty of people who can do the technical side of teaching the curriculum. Most of them can earn more elsewhere doing a job that is considerably less stressful. Education needs to be funded so that it can attract people and it needs to be reorganized to reduce the stress.

The vast majority of what is taught in primary schools doesn't need a lot of subject matter skill, what it needs is the ability to teach and the toughness to survive the environment. And it has to pay well enough to attract people who have those qualities because those same qualities make those people attractive to other employers.


> There are plenty of people who can do the technical side of teaching the curriculum. Most of them can earn more elsewhere [...]

Yes.

One problem I come across repeatedly is: how do you identify whether someone has the technical competence to do the teaching? Is there any way of knowing if they're doing a good job?

As an IC they are implementing features and shipping products. As a teacher...? Rising test scores are apparently uncorrelated with teaching quality. (NYC; just look at the scatterplot in [1].) So it seems like there's a deep problem here.

Now imagine someone in K-12 hiring a CS teacher. Who do they have that is competent to evaluate the skills of the person they want to hire? Probably nobody at all.

[1] https://garyrubinstein.wordpress.com/2012/02/28/analyzing-re...


> One problem I come across repeatedly is: how do you identify whether someone has the technical competence to do the teaching? Is there any way of knowing if they're doing a good job?

Classroom observation. It's really not that complicated.


> There are plenty of people who can do the technical side of teaching the curriculum. Most of them can earn more elsewhere [...]

There's something called Math for America that offers a substantial stipend to math teachers in New York, with an application that involves a written portion, observation reports, recommendations, Praxis test scores, and interviews.

It's funded by a billionaire former mathematician who wanted to address this exact issue.


I'm very worried about this.

We spend an absolute shitload of money educating children. This is paid for by society. But it is also so incredibly worth it. I'm worried that politicians will see an opportunity to shed costs without considering the implications, and that we'll end up with children raised online by robots.


And the divide may further increase since kids whose families can afford personal human instruction may get ahead even more.


I'm cautiously more optimistic. Education is already chronically underfunded in most places. If it became cheaper to give personalized attention, then the quality of education delivered might actually improve.


It won't. You don't get attention; you get a reply.

MOOCs have terrible performance compared with even huge-audience lectures. Online teaching during COVID proved that that doesn't work either. A machine doesn't have the authority and conviction of a real person. The combination means that standards will have to drop just to sustain the idea that this could work.

If some country is going to push this, the sunk cost fallacy and the impossibility of hiring new teachers will finish education off, once and for all.

Optimism should NEVER be a basis for fucking with something as important as school.


I'm not convinced a MOOC is necessarily worse than a pure large lecture class if the material is amenable to auto-graded problem sets. (Leaving aside that a regular class would typically have TAs and other students you could get help from.)

But they sure didn't live up to the hype. They didn't democratize education in any meaningful way. And I'm not sure even the people with undergrad and grad degrees who made up the majority of MOOC students use them much these days. I know, for me, they were mostly a novelty for a while. Maybe if I want to dip into a topic I'll watch some online video but I haven't even signed up for a MOOC for years at this point.


It is actually an interesting point. Journalists seem to be catching up to the fact that running (or at least training) these models is not exactly a cheap task, and the kids' Aristotle probably shouldn't be trained on 4chan.

Naturally, some analysts would say that means there is tons of consulting money to be made... and it seems like spending is easy and profitable.


>it will be used to cut costs by replacing human teachers for low-status students with poor substitutes

most public school teachers are glorified babysitters at this point anyway. I can recall only a single teacher during my 12 years of public school who was actually helpful. From what I've seen of GPT-4 so far, I'd have taken a ChatGPT Plus subscription and a laptop over dedicated teachers. YT and Khan Academy were far more useful than my actual teachers.


I've mentioned this before across HN, but I'll mention it again - human-to-human interaction is critically important for young children.

You basically have to teach young kids how to act like a human being. Kids will get this teaching in daycare and through their parents, but that alone isn't enough and it is important to reinforce this through schools.

People often forget this because you don't really remember being taught, and then you end up making assumptions based on how you act now, not realizing that a teacher may have taught you that when you were younger.

I think the best example of this is handing out worksheets to kindergartners. An average person may think that you just give them pencils and paper. An experienced teacher knows that they have to teach the kids where to write their name, how to follow along the line, to read instructions, flip the page over to check for more, etc.

This is true for practically all children. Again, many may argue that you can receive this teaching through your parents and day-care, but once a child gets old enough they are spending most of their time in school. Think of it like language learning: you'll learn more of the language if you are being taught it all day, rather than only at home with your parents.

You also experience more interaction with kids your age at school. There are many situations that come up where a teacher guiding you is important, and these same situations may not come up at home (especially if you are an only child).

I do think after a certain point you can rely on technology more for learning. I just want to emphasize that having a human teacher trained for working with children is critically important for the growth of a child.


We live in a world that's dominated by staring at screens. You're doing it right now. This wasn't true when I was young. The world changed.

It stands to reason that the education system should also change to reflect the world that we live in. Screens, AI, all of it. Children will learn how to become members of the society that they were born into, not the one that we were born into.


This is true, but I don't think this directly disputes my point of having human-to-human interaction and trained professionals working with young children.

Would it be possible to do all that's needed for young children with just a screen? I'm not so sure. The issue isn't necessarily the content, but the behavior of the child and having a person there to teach and correct behavior.

Technology plus teachers is effective. I apologize if my argument came across as no technology for young children - I'm just saying that having JUST technology for young children is difficult.

I know many people that are teachers of young children and they use technology. Kids have Chromebooks at first grade and are learning things like programming, using the internet, determining if an article on the internet may be lying to you, determining if a website's main goal is to sell something to you, becoming a good digital citizen, etc. All of that is very important!

It's also important to have a teacher in the room to teach students how to use their Chromebook, how to open it, how to physically interact with technology, how to ask each other for help, etc.


I like this comment; its thesis deserves a fuller defense.


> most public school teachers are glorified babysitters at this point anyway

That's a common but pretty twisted bit of logic: "we've made it shitty, so we should make it shittier because it's already so shitty."


No, he said we should make it less shitty. And outlined a way to do so. Why misrepresent his words?


> No, he said we should make it less shitty. And outlined a way to do so. Why misrepresent his words?

No, he didn't outline a way to make it better. He denigrated the people involved, in a way that is (most charitably) an oblique reference to low education budgets (e.g. not enough money to hire enough teachers or attract talented teachers). Then he held up some references to a few techie things.

Maybe he's a genius autodidact that never really got much benefit from his teachers. There have always been people like that, but they've never been common. A system tailored to people like that would fail the vast majority (e.g. be shitty).


teachers do not want to be babysitters, but when you have overstuffed classrooms, no prep time, no equipment, hungry kids, etc etc what are you able to do?


Not to mention state-mandated testing regimes that have nothing to do with actual student learning, hyper-local control of education in elected school boards whose members are often as dumb as a box of rocks, extreme pressure from administrators to get kids through so their performance numbers are juiced rather than actually taking whatever time is needed for those students to accomplish what they need to, etc. I’m a public school teacher. I never cease to be impressed by how very little people who don’t work in education understand about the realities of education.


I don't see what point you're trying to make. Sure, we could make education better by spending more on it, but the voters don't want to, so it's not going to happen. Given that society is not willing to spend more on education what can we do with the budget we have?


I think we've tried the "we could make education better by spending more on it" method. Based on the current state of public education, I don't know that this approach has been proven correct.


1. Citations for the claim that we are spending more for the same results would be productive... but I believe it.

2. I believe the problem is that the extra money is not going to proven-effective programs and teachers. There is some question to me whether anything is proven effective... in spite of whatever you read in books written for teachers.

3. The parent comment is pretty nihilistic. Try to propose something that you think might improve the status quo. (E.g., the comments about the highest impact of extra spending being in social services instead.)


> Try to propose something that you think might improve the status quo.

Use LLMs to complement class activities so that students can receive the individual attention they need.


Sounds like you were self-motivated. What about all the other children who have been told over and over that scholastic achievement is only for the weak, and those whose early years were spent gazing vacantly at television? A teacher's job is not merely to recite the material but also to somehow make the student receptive to it.


Did you do an A/B test with your clone and check how it fared without going to school?


Very US-centric point of view. If the teachers are fine, leave them be. If a place doesn't have the resources to hire said teachers, use tools that help.

Or in the US, if teachers are that important, maybe give them more resources to do their job, e.g. buying supplies, computers, etc. for your kid's class.


> ..."We don't care enough about you to actually bother to interact with you."

"Thank you for calling customer support. We value your business. Please enjoy this...music...while you wait. Thank you for calling customer support." M U S I C ! ! !


The best part of this post is that it comes from a supposed PhD in neuroscience. Either they were born yesterday or that PhD wasn't worth two shits. What's more concerning is that they also run a school.


Not sure an AI tutor is a “poorer” substitute for ALL human teachers, given there are many teachers who aren’t very qualified to teach.


Remember MOOCs? Some would do well reviewing a bit of history before taking on Medium posts.


It seems insane to me to have fundamental education that builds logic & reasoning taught by an AI that can exhibit neither of those (and this isn’t just my hot take but something easily provable if you look at the underlying architecture).

Recent LLM advances are revolutionary and exciting, sure. But the hype cycle surrounding them is borderline dangerous. I see this as on the same level as everyone claiming ChatGPT would kill Google, until Bing tried it, crashed, and burned.

Accuracy is important, and although we’re moving towards it (I personally like Amazon’s knowledge graph and evidence-based approach) we aren’t there yet.


> It seems insane to me to have fundamental education that builds logic & reasoning taught by an AI that can exhibit neither of those

Books don't exhibit those either.


That's right. The authors of those books exhibit those qualities. You can't educate a human by pouring generated text on them; you need something that's capable of teaching to moderate the content. LLMs may allow teachers to achieve more with custom prompts, but it's a far cry from a "personal Aristotle" and still requires 1:1 engagement between teacher and student.


Totally agreed, which is why we carefully curate the books we use within educational curricula to make sure they actually contain useful and accurate material.


Bing crashed and burned?


actually Bing is growing massively while Google is shifting into irrelevance everywhere (except Android) - even laypeople now prefer NOT to use Google, left and right


> and this isn’t just my hot take but something easily provable if you look at the underlying architecture

Show your proof then


He obviously understands neither the architecture (and what that really means in relation to an objective function) nor what it means to understand and reason.

LLMs don't stop reasoning because people proclaim so loudly. Because you can very clearly show output demonstrating understanding and reasoning, this nonsensical argument always devolves into "true understanding"™, this so-called special distinction that is supposedly so important yet can't actually be tested for. Lol


LLMs don't "understand" things because they can't generate new theories. Imagine you trained an LLM on a ton of text describing what it means for something to be true, and for something to be false. It would never independently come up with, say, propositional calculus. It could tell you everything in the world about what it means to be true and false. It would never come up with anything new.

Does this matter for education? Well, I don't know. A student is unlikely to come up with a question that GPT-4 has never seen before. The idea of kids being taught by a chat bot is just depressing, though. What are we doing to ourselves? Why do we want this? To be more "efficient"?


Language models can generate novel functioning protein structures that adhere to a specified purpose - structures that didn't exist before, never mind in the training set. https://www.nature.com/articles/s41587-022-01618-2

Sorry but the idea that they can't generate new things is blatantly false.


This isn't generating new theories. I'm not saying LLMs aren't useful, or that they aren't extremely impressive in many ways.

Almost nothing GPT-4 produces was in the training set. I'm not saying they can't produce new sentences.

From the linked paper: "a language model that can generate protein sequences with a predictable function across large protein families, akin to generating grammatically and semantically correct natural language sentences"

Nobody is disputing that LLMs can generate grammatically and semantically correct sentences, and from the sounds of things this task is analogous to generating sentences. This isn't the same as generating new theories.


If it can take natural-language purpose descriptions and generate new functioning proteins, then through its training it evidently understood the relationship between purpose and protein structure. Meaning all the little models it builds as a result of training are concrete. They can produce new things and new information, and it's just a matter of scale and intelligence.

"akin to generating grammatically and semantically correct natural language sentences" is just an indicator of how it is synthesizing and producing this information. To say it is analogous to generating sentences in the abstract is like saying generating new theorems is analogous to generating sentences. Well yeah, that's what the language model is doing. The power lies in what it is generating. What they are telling you is that they skipped the folding process entirely and just let the model figure out function to structure directly.

Either your dog fetches what you throw at it or not. The idea of "pretend fetching" is unbelievably silly. Doubly so when no one seems to be able to actually define this act. A distinction that can't be tested for is not a distinction. Science is concerned with results not vague and ill defined assertions.


Imagine an LLM was trained on the internet before LLMs were invented. It knows everything in the world. Every programming concept, every philosophical and logical concept. It is an "expert" on the then-current state of the art in machine learning.

Would it be able to invent itself?

I'm not moving the goalposts here. Language models are extremely impressive. I think they will change the world. I fear they will displace many jobs.

Maybe some day they will be able to invent themselves. Maybe that will come sooner than we think. As it stands, they couldn't. This is the distinction that you say doesn't exist.


First of all, you've just imagined a scenario, invented a conclusion that you can't prove, and justified your distinction with that invented conclusion. Your distinction is no less arbitrary than when this conversation started.

Next, you're conflating different things. How many humans are capable of inventing anything at all? Have you invented anything in your field of expertise? Should I then question everything you do understand about your field if you haven't? Do you think understanding is binary?


Ok, I feel this conversation is not going anywhere. If I see a single example of an LLM generating new theories then I will immediately change my mind. I have yet to see this.

Most humans are not capable of inventing things, you're right, but humans in aggregate are. We are comparing LLMs to "humans" here, not to "a human". There are no examples of LLMs doing this. Maybe in a couple of years, who knows.


You're interested in reasoning? That's so cool. Here ya go, something I've been reading earlier today because I need to add some references about deductive, inductive and abductive reasoning, particularly in the context of logic programming (you know, reasoning with computers) to my thesis:

https://www.researchgate.net/publication/321596989_Abduction...

Don't let the Researchgate link discourage you. It will pretend to ask you to log in, but it will open the pdf anyway.

Why am I posting this? Because all you guys keep talking about "reasoning" this and "reasoning" that, and I don't think that word means what you think it means.

Reasoning is a very specific thing and we know very well how it works. Can ChatGPT reason? I'm afraid you'll just have to read up and figure it out yourself.


This is cool; finally, the Young Lady’s Illustrated Primer.

I’d be interested to see how Google’s LaMDA compares on this task (currently available in private preview).

One of LaMDA’s unique (afaik) features is a fact-check system that edits the main LLM outputs, to reduce bullshitting. This seems particularly important in an educational context where impressionable young minds are talking directly to the LLM.


An ePrimer has a few components: the book itself, a Ractor mentor, a curriculum that includes Ractor-paid milestones, and scientific metaphor stories that are the rubric and scaffolding. It might also include the VR or AR components that supply the immersive context. I've taught this in a master's-level Metaverse course at a NY university.


Notably, we see two models for uses of the Primer in The Diamond Age. The one everyone thinks of has a single human Ractor on the other side, who takes a genuine, nigh-parental interest in the student's development over a period of years. The others... don't have that, and the outcomes diverge.


It’s a good point. I think we can easily imagine GPT writing (in ascending order of cost) speech synthesis actions, mechanical Turk scripts, custom actor interactions. (With the “tool script” for actuating each becoming progressively more complex.)

A Primer framework could make these pluggable. Also the curriculum could be pluggable and either generated or ingested from existing courses.

I guess the mapping of curricula -> scripts is probably the hard part, and what GPT-N (for some N not much bigger than 4) could most interestingly help with.


You've been teaching The Diamond Age? Fiction?


Not really. "The Diamond Age" is referenced as a template, but it is not required reading. The main components of the course are 1) writing a Ractive, 2) designing a Paper Theater for the Ractive, and 3) designing the set for the Paper Theater. The object is to continue the existing workflow for media with appropriate extensions for the Metaverse. The work is done in teams of 3 people with continuous reviews by team members. An underlying goal is to teach a collaborative media design process. Our view is that collaboration is the main leverage component of the Metaverse.


Recall that what made the Primer so effective in Diamond Age is that it had access to enormous external financial and human resources. The later copies that raised the Mouse Army and did not have those resources were (intentionally) not great liberators like the original.


Right but they did very effectively program an army of young minds to obey the principles of the learning system's creator.

One major takeaway from The Diamond Age is that being in charge of the learning system is an incredibly powerful position.

Don't hear so much about Khan Academy anymore. Could you get an LLM to make new content in their style while also transmitting your desired values?


'Course, we've known, "Give me a child for the first 5 years of his life and he will be mine forever" --Lenin


This is an interesting development; I'd assumed we'd have a robust intermediate stage, where a collection of top-quality works (Books, carefully-chosen video games [think Zachtronics], recorded lectures etc) were made available, convenient, and cheap, in a no-pressure, non-points-based system.


Diamond Age was my first thought when I read this too.


I kinda feel like the GPT4 version will be the curated voice acted version while the LaMDA version is the Mouse Army version.


I don't know. OpenAI claims that GPT-4 can score in the 89th percentile on the SAT Math exam. At the same time, I asked GPT-4 to compare two six-digit numbers, and in some cases it failed. When I asked why, it made up completely false math rules, e.g. that to compare six-digit numbers you need to check their 7th digit, which is not just nonsense (where is the 7th digit in a six-digit number?) but shows a complete lack of understanding.

So while I believe it can produce "accurate" answers for SAT questions, as it was trained on those, I also believe that it doesn't have any kind of math logic it follows.

I'm not sure this dichotomy can be solved with "more" training. It's also telling that in some cases it fails and in some cases it doesn't; you just need to change the numbers, even simple ones.
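For anyone who wants to poke at this themselves, a minimal harness (a sketch only: it assumes the openai Python package circa v0.27 with OPENAI_API_KEY set, and the prompt wording here is illustrative, not the exact one I used):

    import random
    import openai  # pip install openai

    def model_pick(a, b):
        # ask GPT-4 which of the two numbers is larger
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            temperature=0,
            messages=[{
                "role": "user",
                "content": f"Which number is larger, {a} or {b}? "
                           "Reply with just that number.",
            }],
        )
        return resp.choices[0].message.content.strip()

    wrong = 0
    for _ in range(20):
        a, b = random.sample(range(100_000, 1_000_000), 2)  # two distinct six-digit numbers
        if model_pick(a, b) != str(max(a, b)):
            wrong += 1
    print(f"{wrong}/20 wrong")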


Teaching entirely by LLMs? Not likely. I think we will still need humans in the loop and humans to design the curriculum.

But I believe technology is the future of education. Instead of having many, many classes of ~10-50 students with underpaid/overworked/mistreated teachers, some of whom are much better than others, we rely on services like Khan Academy. Not even services with an AI, just pre-planned learning modules, and then we keep a human tutor for those who need extra for whatever reason (e.g. curiosity, more help).

Many people think tools like Khan Academy are the lazy and less effective alternative to real teaching, and maybe they are now, because some of them really suck. But online modules have the potential to be much better than even a group of engaged, in-person tutors. Not because an AI tutor is better, but because we want a standard, universal curriculum; and one curriculum for ~1mil+ students can be designed by the highest-caliber (human) teachers and have highly interactive (human-made) demos powered by computer programming.

Also, no matter how good the AI tutors are, 100% online learning is not the solution. Students from pre-K to graduate school rely on not just the education, but the social component as well. Plus, many school kids only get their biggest meal from free school lunches, and parents don't have the time to keep their kids at home. So we should still have an in-person school component even if students do all of the learning on the computer, with an online option for those who want it - but I guarantee many won't.


In person school is definitely the answer. The real challenge is the student teacher ratio you mentioned.

I think the ideal is ~10:1, sorted by student performance/learning capacity for increasingly difficult subjects (so students form a cohesive classroom pace through the material). Sorting students differently per subject would also give variance to friend groups.

AI tools will be great for public schools, where it's often 30-60:1. We still have to get past the 'confidently wrong' AI though.


Why would something like 10:1 be the ideal?

Bloom seemed to show you continue to get improvements all the way to 1:1.

Maybe the ideal is 1:3 or something.


Making friends is a really valuable part of school in my opinion. Possibly exceeding or at least equaling the value of whatever you're learning.

If they're the right kind of friends, you will be able to learn more with them than from a 1:1 teacher.


Good points.

The lure of LLMs in education is the expected novelty of instruction they can provide. It might be an aid to some students, the same way some students read books in their free time or learn to use the internet to look up answers. I don't expect much more, and the rest of it is a net negative IMO.

LLMs will be used like "reverse multiple choice questions", with heavily constrained LLMs silently penalizing students for making "mistakes" or probing subjects that they aren't "supposed to" know about. No matter how embedded LLMs are in the education process, there'll always be a human setting its course, and I don't think social dynamics will ever change because of this.

Also, the Q&A format with a machine will make older texts/instruction opaque to many of the brightest; they'll have to translate from machinese for it to be of any use. An entire generation aping an LLM's way of writing is also a death knell for a civilization and innovation. We're clearly past the point where we can convince most people that this LLM isn't alive.


"we want a standard, universal curriculum"

I don't want that.


It's good for ranking, which unfortunately will never go away (though could become more granular), and much better than standardized tests. And ensures that some students don't miss access to subjects and resources that others have because they're in different schools.

There would still be the ability for students to choose their own subjects, focus more/less on certain subjects, learn different concepts within subjects, etc. And a lot of niche subjects and branches. Like it would be an incredibly massive curriculum, especially since you are designing one for >1million students. By "standard and universal" I mean that every student gets the same resources.


> But I believe technology is the future of education. Instead of having many, many classes of like ~10-50 students with underpaid/overworked/mistreated teachers, some of who are much better than others, we rely on services like Khan Academy. Not even services with an AI, just pre-planned learning modules, and then we can keep a human tutor for those who need extra for whatever reason (e.g. curiosity, more help).

In other words, the future you believe in is one where we lock kids in a cubicle to stare at a screen all day in a warehouse, with a break for recess and lunch? Where the message is, we care so little that we can't even bother to allocate one real person to regularly talk to you?


I mean, if we’re talking about the hypothetical far future, who said anything about screens?

These AIs will be direct fed into our brains soon enough. Who knows or cares that the entity you’re talking to is an AI if it’s indistinguishable from a real person?


In my old man yells at cloud moment: I'm pretty sick of experimenting with technology on kids under the guise of learning.

Tablet use in classrooms is bad enough. I want my kid to be able to focus throughout his life, not be a phone zombie.


Yeah but, kids and teens are the easiest group to trick with dark patterns and addictive behavior. What other age group is going to give you better engagement metrics and help you raise capital? /s


Why the /s?


Because this is HN and there are people here who think that unironically.


Did you forget /s?

/s


Because I didn't want to be lectured about why this is a bad idea.


Yeah, no sarcasm as it's the truth


It's not the truth. Young people are more difficult to persuade online. Technical literacy is on average higher than in previous generations (though still not universal), and teens seem to be practically immune to current forms of online advertising.

The "kids these days" trope is as old as time, and it's no more true this time around.


> teens seem to be practically immune to current forms of online advertising.

I'm struggling to square that with teens forming long lines to buy an overpriced energy drink promoted by a couple of YouTubers and scalpers successfully getting over £100 for bottles.

I'm sure some of them are better at installing adblock software than their parents, are harder to convert for ecommerce because they need to borrow their parents' credit card, and may be immune to online ads that are targeted at people very unlike them. But the idea that this generation (which uses TikTok despite its engagement-metric hacking and follows influencers despite them being product shills) is the first one immune to advertising is no more true than the "kids these days" meme.


I thought technical literacy was down? People are forgetting what filesystems are, aren't they?

They might be aware of privacy, and less into buying random crap (although that might be because they spend their money on food instead of plastic junk, which harms your body AND your wallet), but for some reason the tech skills seem to be getting left behind.

The trendy thing now seems to be going back to vintage stuff.

I'm almost a bit jealous, if their lives are so amazing that whatever they're doing somehow seems better than focusing 100% on STEM, which was like, the ultimate life goal for a decade.


All youth is easy to convince of stuff. Not just these days.

That's why religions and the military prefer young recruits.


Hard disagree.

We now have a whole generation of teenagers that grew up with iPhones and I have seen exceptionally poor average technical ability.

When we were coming up, you had to install modem drivers to get online. In Windows 95... with no internet to check for a how-to.

I don’t think I can imagine anything more delusional than a parent, handing their kid an iPad so they will go away and thinking that they’re also preparing this kid for a career in computers.


Worse yet, a misguided school district buying tablets instead of Chromebooks, for example!


Wait just a minute here, I read this article a few weeks ago:

https://www.theguardian.com/technology/2023/feb/27/gen-z-tec...

What does “technical literacy” mean?


I don’t see a strong correlation between older/legacy tech know-how and a broader notion of technical literacy.

I think it might be more helpful to look at this as information literacy, or at least roughly speaking a higher degree of “Internet smarts”.

Scanners and copiers are in that weird phase of product life where they’ve been mostly eliminated as core computing needs at home, but still see widespread business use since business will always be the long tail on fully digitizing.

I think it’d be prudent to be specific, because for whatever degree of literacy or smarts or whatever you call it is true about teens in the information environment, the technology through which they experience that environment inches closer and closer to pure magic from a layman’s perspective.


> Technical literacy is on average higher than in previous generations (though still not universal).

This. is. total. horseshit.

I guarantee that you could randomly sample 1,000 people across America aged 42-48 and 1,000 people aged 15-21, put them in front of a multifunction machine (printer / scanner / copier), and the 15-21 year olds wouldn't have a fucking clue. Hell, Bloomberg ran an article about it.

And the reason this is important is that iOS and Android and tablets don't run the business world - laptops, MFPs, and Windows do.


My six-year-old isn’t. He falls for all of them. Grandma lets him use her mobile phone and 15 minutes later he’s asking me to buy diamonds in a boring dinosaur game.


It seems to me that they're only "hard to persuade" because they are constantly bombarded with these mental viruses. There was (books/telegraph then?) radio, then TV, then internet, social media, scrolling social video... While it's true that each generation eventually develops some immunity to their entertainment afflictions, it does seem to be accelerating, and those not exposed as youth have less immunity (grandpa on facebook). It also seems to have continuing impact even if most become functioning members of society.


/serious


Dunno, we pretty much know our current classroom education system is abysmal compared to tutoring[1], which is on the other hand prohibitively expensive. Managing to bridge that gap would be an enormous benefit to humanity.

[1] https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem


I learned way more useful skills from getting internet access in 1998 than from 90% of my in-school teachers (and I went to a decent public school).

For kids that are curious and want to learn, access to something like GPT-4 is amazing. All I had were guides like Eric S. Raymond's "How To Become A Hacker" that I had stumbled onto while trying to learn. It was so much harder to learn programming back then if you didn't know where to start and nobody around you knew about it.

With gpt-4 you can just ask and get examples.


There is something to be said about struggling through learning difficult material on your own, and the sense of accomplishment/retention that it instills. I started programming a few years after you, and there was a similarly difficult learning curve in sourcing the materials, setting up my own environment, and making progress on my own.

My little brother is doing some form of machine learning in grad school currently, and when I saw him last, around when ChatGPT was starting to become a mainstream talking point, we discussed a lot about how immediately going to something that can answer a question (e.g. how do I write code that does X) is tempting, but can be a bit dangerous when we offload our own ability to reason through something to an external entity.

Shouldn't we be fostering thinking patterns that allow for creativity and innovation when given very little, versus fostering a sense of "oh I can always get the right answer to a question whenever I need it. So why bother with building a general mental problem solving framework for myself"?


>There is something to be said about struggling through learning difficult material on your own

Depends on what the difficulty is. I remember when I started to learn programming back in 1991, I was learning from books. These books were very expensive and I could only afford a few of them. I didn't have access to a library because I wasn't in any school or university. I was a musician with a side job at a theatre box office.

My main problem was that many of the explanations and examples in my programming books left some questions unanswered. It was even worse with maths, which I also tried to learn at the time. I got stuck a lot. If I'd had access to something like stackoverflow or Google, I could have progressed much much faster.

I do think that getting stuck and being forced to figure something out the hard way is sometimes useful. But more often than not I didn't get stuck because the material was actually difficult or because I was solving difficult problems. I got stuck because the one and only explanation available to me was imprecise or incomplete. All I needed was the same thing explained with slightly different words to clear up my misunderstandings.

On the whole I think learning these things today is immeasurably more productive. People are a lot less dependent on luck or money. You don't need to be one of the lucky few who has access to good teachers. You don't have to buy expensive books. Everyone has access to excellent online tutorials, documentation, source code and to thousands of competent professionals willing to help for free. It's absolute heaven. Back then it was hell.


You could have made that same argument about memory and recall skills prior to search engines. But I'm not sure anyone is now arguing search engines have been detrimental to working intelligence.



I pretty much skated through college on my ability to remember trivial facts because this is what the classes I took graded on.

Does anyone care that the head of the art history department did his PhD on Picasso, taught two full-semester courses on him, and that I know more about Picasso’s life than probably 99.9% of the general population?

Doubtful…

But I do know things that never made it into the search engines (I’ve looked) and this knowledge is locked up in a (probably) retired professor and the few percent of students who actually listened to his lectures twenty-something years ago.

Now everyone assumes that if it isn’t in google it doesn’t exist and have very little idea how to find information outside of what’s contained in its “record of all things”.


They actually have studies that show people consistently score lower on IQ tests when they know their phone is in the same room as them.


Is asking and getting answers how to learn?

Think about most of the stuff you have really learnt over the course of your life. Was it from sitting in a classroom with information being spoon-fed to you, or from reading a Wikipedia article?

Or was it because you had to search hard for the information, try and fail before you succeeded?

My bet is that your learning comes from the latter, not the former.

Good teachers that I have had would point you in a direction and then give guidance or even question you instead of just telling you the answer.


The most effective learning is when you try it yourself, but can also get immediate feedback and explanation.

It’s doing the problem set, but also being able to check the key. Basically a private tutor or TA available at all times.

Without that you can’t get a tight feedback loop.

The most effective use of these systems will not be copying the answer, it’ll be trying to do it and then asking when stuck.


The Khan Academy approach is right IMO. You just tell the model that it should help you through the problem rather than answer it for you.
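Roughly, it's just a system prompt. A minimal sketch (using the openai package as of the GPT-4 launch; the prompt wording is my guess, not Khanmigo's actual prompt):

    import openai  # assumes ~v0.27 and OPENAI_API_KEY set

    TUTOR = ("You are a patient tutor. Never state the final answer. "
             "Ask one guiding question at a time, and when the student is "
             "stuck, give the smallest hint that lets them take the next step.")

    def tutor_turn(history):
        # history: the student/assistant turns so far, oldest first
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "system", "content": TUTOR}] + history,
        )
        return resp.choices[0].message.content

    print(tutor_turn([{"role": "user", "content": "Solve 3x + 5 = 20 for me."}]))

Whether the model reliably sticks to "never state the answer" when a student pushes back is another question, of course.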


This is true for autodidacts. I am one, and the same is true for me.

It is absolutely not true for most people. Most people need a human guide to show them the way. I am living this now with my two kids, both of whom need instruction (and also help from mom and dad).


That's kinda what it looks like we'll be able to make these models do, though. The tech is young still. All those erroneous hallucinations and patent lying (which are admittedly very present in the current crop) may well be managed soon enough - then all it takes is getting it to talk (solved), listen (solved), and provide some kind of direction / steering (not quite solved). With all that, you can have a guidance counselor for everyone.

Caveat, of course: this is an extreme power tool vis-a-vis human development, and as such can do just as much harm as good depending on its use. This is why it's so damn important that we keep this stuff open source - if it's behind walled gardens, you end up with _actual, real shadowy cabals effortlessly manipulating all of everybody_, a thing that you may or may not be convinced exists already, but guaranteed under those circumstances.


But is chatgpt going to address that? I think not (at its current stage, definitely not).


Probably not at this stage, but it seems at least conceivable that the technology may move in that direction. In general I think LLMs are very overhyped, but this is an area I do concede they could be very useful with some tweaking. Right now I think there's too much "and the kitchen sink" in the models, but a more selectively trained model might be very useful for this.

Truth of the matter is that many teachers know the curriculum, but if you ask them a basic question, they won't answer it with nearly the depth that, for example, Feynman[1] would have. Educators with both the ability and the depth of knowledge of Feynman are clearly a very limited resource. LLMs may just change that, since they can be given the depth with relative ease; the rest of the owl is making them good educators as well.

[1] https://www.youtube.com/watch?v=36GT2zI8lVA


Not a very exhaustive test, but it gives pretty good answers when I put in the things my teen was asking while doing their algebra homework. Of course it would only help if they asked questions in order to understand, rather than having it just do the homework: but the same is true for a human tutor.


You can also tell it to restrict how it answers so it asks you questions rather than giving you the result.

https://blog.khanacademy.org/harnessing-ai-so-that-all-stude...


> which is on the other hand prohibitively expensive.

It's expensive, it's not prohibitive unless you're not willing to increase education funding at all. Which implies that what we're spending now "should be enough." I don't think we've ever done the basic homework here.

54.5 million school-aged children in the US. Average $50,000/yr tutor salary. Total cost per year would be $2.7 trillion.

National Defense is $767 billion. Education is already $677 billion. Net interest is $475 billion. We're already more than halfway there, inside a nation with a $25 trillion GDP.

I fail to see anything "prohibitive" about this.
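The back-of-the-envelope, for anyone checking the numbers above:

    students = 54.5e6  # US school-aged children
    salary = 50_000    # average tutor salary, $/yr
    gdp = 25e12        # rough US GDP, $
    total = students * salary
    print(f"${total / 1e12:.3f} trillion/yr")  # -> $2.725 trillion/yr
    print(f"{total / gdp:.1%} of GDP")         # -> 10.9% of GDP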


Why not double down and give each child two tutors (henceforth called parents) @ $50,000/yr?

I hate to have to say this but GDP isn’t the absolute upper limit for government spending. I know they tell you that every dollar of government spending is equal to $2 of increased GDP (or some equal bullshit) but that doesn’t mean GDP is an open checkbook for the government to drive the economy. At some point someone has to generate actual value for them to appropriate for their political ends.

This is the only time I’ve ever wanted the AI to generate a reply for me but, alas, that’s not in my current skill set.


> Why not double down and give each child two tutors (furthermore called parents) @ $50,000/yr?

What happens if they have two or more children? What if they want other jobs? What real bearing does this have on the fundamental problem?

> I hate to have to say this but GDP isn’t the absolute upper limit for government spending.

I didn't suggest it was. I'm showing that if you want to have high value education, it would become about 10% of the total economy. Does that seem "prohibitive" to you?

> At some point someone has to generate actual value for them to appropriate for their political ends.

And educating children has no value?

> This is the only time I’ve ever wanted the AI to generate a reply for me but, alas, that’s not in my current skill set.

Well, some educations have no value, I guess.


> What happens if they have two or more children? What if they want other jobs?

Obviously, as a cost cutting measure, we will pay $50k per child to each of the parents.

> What real bearing does this have on the fundamental problem?

I don’t actually understand what the problem is we’re trying to solve here, honestly.

Give them all an AI and call it a day.


> Obviously, as a cost cutting measure, we will pay $50k per child to each of the parents.

What is your reasoning for believing this would improve outcomes?

> I don’t actually understand what the problem is we’re trying to solve here, honestly.

Really? We spend a lot on education. Our outcomes are demonstrably sub-optimal. There's no correlation between amount spent in a classroom and classroom outcomes. There's obviously a large flaw in the entire model.

> Give them all an AI and call it a day.

Well... when we invent an actual AI and not some hallucinatory language model, let me know. Until then, you're obviously either not serious or willing to play fast and loose with something important.

Hard to believe we're struggling to get this right.


There's no upper limit to government spending when they can just add some numbers to a database; it's just a question of how much inflation the country can tolerate and whether anyone will accept a wheelbarrow of toilet-paper currency in exchange for goods and services that people want.

Brazil and Zimbabwe found this out the hard way, but a lot of people really don't understand how times can be hard when the numbers keep going up because they neglect to consider inflation and the government has an interest in managing down the real numbers and mumbling something about "greed" while they enact policies that effectively create huge, regressive taxes.


In this case you are actually yelling at a “cloud”

I’m more optimistic about tablets and AI, but the word that resonates with me the most here is “experimenting”. I don’t think there’s a guise, e.g. using learning as an excuse, but I do think that we have to be careful with how we present tech to children, even if it is with an altruistic goal.

So if we shouldn’t experiment on students, how will we figure out how to make these new technologies improve student learning?


Maybe we're focusing on the wrong end of the spectrum to evaluate learning. I keep reading about how important it is cognitively for our aging population to embrace incremental learning. Maybe we should focus on using AI to help our older populations become more comfortable understanding technology and accessing information. As a bonus, they can also provide some compare/contrast feedback with what they already know from past experiences to improve the experience.

My in-laws and parents already are enthusiastic Youtube consumers, mainly because in a very consumable way - they can hear/see how to [Fill in the Blank]. Stuff like changing out the washer in their faucet and feeling confident enough to try to do it themselves with the Youtube video as their crutch/sherpa was a game-changer for them. Otherwise, it falls back to paying $$$ (if you can find someone) to do it for you ... or lean on the generosity of a knowledgeable friend/family member (when they have time to give). Self-empowerment is their preferred choice in terms of shortest time to action at the least cost.

To me, AI Tutors seem aligned with consumability (natural language engagement) and can also provide some confidence to older populations to better "understand" and maybe "do". My dad felt intimidated by Windows because the error and information dialogs were full of technical speak and, worse, they were one-way communicators. The resulting net vibe was:

"Windows is yelling at me that I'm doing something wrong, but I can't ask it a follow-up question to get more context on why or to fill in a hole in knowledge that I have about what TCP is/means. How can I proceed without making things worse". BTW, this is the same frustration people feel with automated customer service flows that aren't modeled to handle a broad enough range of customer inquiries. This next rev of AI engagement has the potential to make this two-way again. And I'll happily be out of a job as resident family tech expert or perhaps just evolving into the AI tutor model trainer for my loved ones.


"I want my kid to be able to focus throughout his life"

Thanks for granting old bachelors a moment of yelling "then don't have any!" at cloud.

History suggests that they will always find a way, perhaps not all of them but at least as a whole, as a generation. But are we really exaggerating if we claim that it's worse than it used to be? Every bad turn that has ever happened was at some point unprecedented, and would have been clearly ruled out beforehand by appeals to "pessimists are always predicting worse than what actually happens".


> Thanks for granting old bachelors a moment of yelling "then don't have any!" at cloud.

Everyone has to learn the lesson for themselves. I can prepare them for it as best I can, however. So I don't really worry about this so much as lament the cultural fetish for "more tech = better."

> Every bad turn that has ever happened was at some point unprecedented, and would have been clearly ruled out beforehand by appeals to "pessimists are always predicting worse than what actually happens"

I mull over this frequently.

My objection stems from constant tablet use/availability during school hours. Being a tablet zombie in 4th grade is not healthy even if you're only using learning apps. I'd much prefer kids occasionally binge tablets at home for 3 hours after school than have one at hand 8 hours a day, every day. If tablets were something occasionally trotted out at school for specific tasks: no problem. Assigning kids tablets and making them an integral part of the teaching workflow really bothers me.

This is part of a larger strategy of mine: controlled exposure to high-stimuli things, alongside building a full enough life to try to avoid addiction. My kids will eventually play video games (probably not trashy mobile games), use tablets, watch TV (they already do), and correspond with AI. (I am hoping social media dies a fiery death before they get old enough to think it is actually important.)

Obviously, these are all plans in the abstract. All we can do is give our kids the cognitive tools to see how technology use affects their life.


>Every bad turn that has ever happened was at some point unprecedented, and would have been clearly ruled out beforehand by appeals to "pessimists are always predicting worse than what actually happens"

Isn't that also true of every good turn though?


Part of me is worried that we'll be giving kids a distorted perception of what AIs actually are - hell, the general adult public can't help but anthropomorphize them to a degree that I don't necessarily think is warranted.


When I was at secondary school, the teachers said they would focus on Windows because MacOS didn't have a future.

That was about one year after the computer rooms got rid of their Acorn RISC OS machines.

That era of Mac OS may well be dead and gone, but so is that era of Windows.

So, don't worry, the phones and tablets aren't what's going to make your kids into attention-addled zombies — neither will the AR glasses, though I'm not so confident about the inevitable brain implants, let alone the equally inevitable DIY gene-engineered pseudo-Ophiocordyceps.


You might find my comment elsewhere in the thread of interest: my high-school-age kid is able to leverage AI in a way that augments rather than replaces her native intelligence, accelerating her own learning and integration.

I find these tools are not unlike "smart drugs"—a necessary aid to some, a destructive crutch for others, and performance enhancement for a subset of the already competitive seeking advantage.

Gattaca approaches from an unexpected direction...

(I have joked that encouraging use of AI but forbidding social media, feels not unlike championing psychedelics while decrying stimulants and opioids.)


I love technology, but I don’t get how anyone can think it’s wise to let their children stare at a screen all day. My nieces and nephews spend all of their time staring at iPads, but my children have much more limited screen time rules. It’s a small sample population, but my kids are generally pretty calm and well behaved. I love my nieces and nephews, but they are heathens.


As opposed to white collar work where we stare at screens all day?


I feel that it’s less about the screen and more about what you’re doing with it. When my children stare at screens, it’s usually to play an educational game. When I stare at screens, most of it is work or adulting. About 2-3 hours/week is gaming and 4-6 hours/week is watching some silly show with my wife.

Now, kids in school are obviously doing educational stuff on their tablets so I see the fallacy in this line of thought. Idk. I feel like staring at a book is better, but what do I know? Maybe the increased engagement from a tablet is worth the downsides?


Contrary to the prevailing opinion here, I think AI, phones, and tablets also help kids. Humanity needs these tools to process the vast amount of information that is continuously generated. By getting familiar with these tools, kids are getting equipped for the future, not for the past. But while we do that, I agree that we should also equip kids with the ability to focus, persevere, and be patient. In fact, we need to focus more on these abilities. We need to focus on the inside world as much as, or more than, we focus on the outside world. Both need to be done hand in hand to progress.


I have the same wish as well, but people who turned 40 when the iPhone came out are already phone zombies. I'm looking for a solution for my daughter.


My partner, who works at a school, shared an email with me this morning that said "We just received 15 cases of copy paper. This has to last us to the end of the year." Usually when you try to make an honest effort to improve things, you make a ranked list of things that will have the most impact, not a ranked list of things that will be the most entertaining to you. I wonder if these people who think this is a good idea have polled the teachers to see what they think of the idea. I get that part of the silicon valley ethos is "we know best" but come on -- is the "if we gave people what they asked for, they'd get a Zune" approach appropriate for education, when the flaws are so glaringly obvious?

The teachers and schools need more money that goes to teachers and student resources, not more iPads, smartboards, bullshit-artist AI, or overinflated admin salaries. Just call this what it is -- entertainment for technologists.


> I get that part of the silicon valley ethos is "we know best"...

It's not even really that, because that phrasing doesn't really capture how off-target and self-serving SV often is. It's more like "whatever we happen to be hyping right now is your solution."

> Just call this what it is -- entertainment for technologists.

That's spot on.


This kind of conversation always comes back to the tension between school and learning. Everyone agrees learning is a good thing; school has been a mix of really good experiences, okay experiences, and traumatic experiences for people.

Think back a moment to the best teacher you ever had. Many of us have had a teacher who was so good, they affected the trajectory of our lives for the better.

I'm much more careful in how I phrase experiences with bad teachers, because there are teachers who have assaulted people. Fortunately, the worst teachers I ever had were just terrible at teaching; they were protecting their jobs more than teaching, and we just had to endure a semester with them and try to not let them tank our grades. Mostly we laughed at them behind their backs and spent that time socializing.

I have a 12yo. I would much rather he spend time with an AI tutor than endure time with one of these bad teachers. But I also, really don't want him to miss out on those life-changing experiences with some of the best teachers around.

Like all things in life, it's all about balance. These AI tutors are just new tools, and we need to figure out how to use them well in our human lives.


Hard pass. It's one thing to use this as supplemental education, but I would not want to use this exclusively to teach my children critical thinking, collaboration, communication, creativity, citizenship, and character. Rather, it is developing those Six Cs in which tools like these AIs can be skillfully used.

Years ago, when I first read Neal Stephenson's _Diamond Age_, I was enamored with the Primer described in it, despite the warning at the end of the book. For those who are not familiar: that Primer was an attempt by a parent to instill values along with skills. It was capable of forming narratives (teaching stories), while also constructing all sorts of content and knowledge. It was meant to be a one-of-a-kind artifact whose design would self-erase once it was constructed, but the engineer who put it together added a backdoor to give himself a copy. He wanted it for his child too. He got mugged on the way home, and so the second copy fell randomly to an orphan.

Three copies, three different girls, three very different experiences. In this story, only one of them (the orphan) had someone who signed on for a long-term contract to voice the narration generated by the Primer's AI. She was also the only one who was able to see and grow beyond the cultural matrix of values and virtues, while also appreciating those same values and virtues. That was what the original designers hoped for. So when the sponsor and the engineer got together late in the book and reflected, they came to the insight: the Primer was no substitute for a parent.

Parenting and education are not things we can automate away. A parent and/or a teacher has to have some kind of presence of mind, somewhere. It's not scalable, and it is deeply involved, even if you end up with a parenting style like free-range parenting, something like the Montessori method, or even unschooling.


I was about to reference this book in a separate comment. Glad I checked if anyone had already mentioned it. It's an absolutely wonderful read.

https://en.m.wikipedia.org/wiki/The_Diamond_Age


I’m surprised to see this much negativity on here. The dream where every student can ask questions and work through problems with their own tutor is within reach. This is vastly better than the current approach, and when you combine classroom group work with personalized AI it enables awesome things.

This is coming from someone who found Khan indispensable in high school. I wasn’t alone and at the end of the day watching lectures is an incredibly passive way to learn content.

Software developers using Copilot trust AI to write 40% of their code, but we think teaching 9-year-olds math is out of reach? Yes, it will have flaws, but there’s so much potential, so why not try?


> I’m surprised to see this much negativity on here.

Why?

The last decade of the tech industry has been "Promise the moon, deliver a mud pie, oh, and the most recent mud pie just turned three more of your friends into raving conspiracy theorists who won't stop posting on Facebook about how Zuckerberg is a felon who needs to go to jail, and can't be convinced that their time on the platform is exactly why he has so much power."

> ...so why not try?

If a tech journalist can make the current wave of LLM stuff misbehave so quickly, imagine what a group of students can do.

We know what works in education, and it's hiring more teachers, investing in the teachers so they have the resources to teach, giving them the freedom to refine the lessons to the class, reducing class sizes, etc.

We also know that "EduTech" promises huge things they never quite deliver on, and generally involve "funneling huge amounts of money to EduTech companies for basically the same results you had before."

Throwing "But AI!" into that mix doesn't seem like it's going to change a thing, and the last thing kids need is more time staring into screens.


> We know what works in education, and it's hiring more teachers, investing in the teachers so they have the resources to teach, giving them the freedom to refine the lessons to the class, reducing class sizes, etc.

And why haven't we done this? Because no one is willing to pay for it. Your solution is a non-starter; how about coming up with some practical alternatives? AI teachers are how we can implement one-on-one teaching at a cost people are willing to pay.


Doing more shit that doesn't work, with the only benefit being profits for the tech industry, should be a non-starter.

There's a reason most Silicon Valley high end private schools don't use tech to teach. Because it doesn't work. And pretending it does at this point is just idiotic. It's a waste of money that could be better spent.

I agree, nobody wants to spend money on education, but "throwing that limited funding away on getting every student a faster iPad" doesn't help solve the problem either.


>There's a reason most Silicon Valley high end private schools don't use tech to teach

Until a few months ago no one who might build these products had access to models that were anywhere near this good, so of course it is only becoming a serious consideration now.

No one has attempted anything like this; I'm not sure why you are acting as if it has been tried and failed.


Anyone who thinks an AI "teacher" is going to successfully educate even the top 10% of students is insane. Maybe the top 5% in the class have enough outside support for an AI teacher to improve their grade.

The rest will just laugh at being told what to do by an AI chat bot.

The TEACHER may well benefit from being able to plan a lesson with an AI, but I guarantee no education system will improve grades by letting an AI loose with kids as a replacement for a human teacher.

I'm sure it will be tried as a way to avoid funding education sufficiently. Can't have the population too well educated, they might work out how they are really getting screwed.


I'm not at all surprised to see the negativity. Ever since the recent AI boom this website has gone to hell. Too many people confuse cynicism with insight.


When my kid was learning to read, we read with her a lot. These days, most of her reading is independent, and she reads many books that contain words that don't commonly occur in normal conversation. Since she is reading these books solo, we don't have a chance to make sure she is learning the proper pronunciation, and we only find out about these gaps when she very cutely says them the wrong way.

If she were using an AI tutor alongside me, that would largely be a waste of my time, if the whole idea is to give every kid an AI tutor. But if she were using it without me, then I would worry about the things that she is "learning" that might not be true. AI speaks confidently, even when it is very, very wrong. But it is right about enough things that kids would likely treat its utterances as truth, as a not-unreasonable heuristic.

I'm sure I'll let my kid use an AI tutor at some point, but I will have to observe it, and observe how she uses it, for quite some time before letting her use it alone.


Considering the massive amount of resource expenditure that it has taken to develop LLMs, is this really cheaper than just giving kids a tutor?

I work in adult education, at a company that services government contracts. We keep hearing about how our clients are dreaming about AI making the "perfect" learning experience for their employees, customized to their every need.

Or at least, that's what they say. They say they want customized experiences for students, but what they really mean is they want to cut costs. Because if it was just about making good training, they could just keep paying us, stop trying to nickel-and-dime us on everything, and have customized, personalized training experiences because that's what human teachers already do.

So they don't want to pay anymore and are hoping AI is going to do it for them. They ask us about it and we talk about what it would take to even make a pilot program that may or may not get anywhere close to their goals (honestly, probably not. I've done some tests and it makes for a good sales demo, but actually making a good quality, consistent product out of it is a whole research project on its own). And the cost is... tech project costs. And they don't want to pay.

In other words, human teachers aren't actually that expensive.


The thing about "personalized" learning is that there is someone who knows and cares about you taking an interest in your education, not just that there is someone available who can answer questions for you.

"Good morning, Shark Jacobs, when you're ready to begin I'd like to review derivatives. You had a pretty good handle on these when we covered them in October but they'll be foundational to related rates, so let's take an hour to refresh our knowledge."

Don't get me wrong, it would be cool for kids to have A Young Lady's Illustrated Primer; we're just not there yet.


From the few teachers I've known, I actually see the implications as the opposite. The teachers who made it to good schools usually mention (not in those words, but they're obligated to be a bit more polite) how they barely have to do anything, because the kids are largely cooperative and will handle a lot of the lesson themselves once you've set them in motion. These are the kids you could probably just assign an AI that's overseen by some kind of managerial equivalent of a teacher, who meets with them once every week or two to be sure their AI lessons are going fine and to hear out complaints.

The value proposition of AI instructors to me seems rooted in how they would free up more of the human teachers to focus on the problem kids at the low-income schools, where sometimes it's a good day if you got everyone through class without anyone threatening each other, or without needing lesson concessions because someone got knocked up. In the lower performance quartiles of schools, teachers effectively become daycare/social workers first and educators second, and it's in this subsection that I'd imagine a lot of benefit could be found in allocating more humans to at least shrink class sizes, so that problem students don't have such a pronounced effect on their peer group.


Plan:

1. Fine-tune propaganda into the LLM (e.g. evolution isn't real, or something spicier)

2. Chatbot is so humanlike and convincing, this is very effective.

3. Nobody can inspect the neural net weights to see this, so it flies under the radar.

4. Profit!

The scariest thing to me here is just the idea that someone would want to experiment with teaching their kid with an LLM in the first place, while we don't have any institutions or processes to really stop this (apparently appealing) idea from happening, and that it is stupidly vulnerable in so many ways.

You can say laissez faire when it comes to rational adults and their use of LLMs, but it's not right to put the risk on kids.


I can see a huge opportunity for home schooling with LLMs - especially in cases where the curriculum is too sparse, or the student has many questions which are not covered by the reading material supplied and the parents don't know enough about the subject.

If the LLM responses were geared more toward inducing the student to perform their own research, to encourage them to do more reading on some topic, or to try things out for themselves to learn through experience (and then carry on the discussion with the LLM afterward) then that might be even better.

Simply training young humans to interact with a chat bot which has all the answers is not a good path forward. Using it as a tool to foster human curiosity and growth, to challenge and inspire -- yes. To use it as a salve, as a babysitter, as a stand-in for someone who is too lazy to parent or teach -- no.


> geared more toward inducing the student to perform their own research, to encourage them to do more reading on some topic

How would kids using an AI chatbot shift to performing their own research or reading? I imagine they would just ask more questions of the chatbot.

There should be some way to measure what percent of the information a child learns is from a human, and what percent is from a language model that says human-like things.


More likely manipulations include the NYTimes' manipulations of history, such as the 1619 Project. Questionable texts like these become part of the corpus, and the next thing that happens is a child being taught an extreme-left propaganda piece.


No one can inspect the net weights but they can and will inspect the resulting conversations.


Throughout this thread there's a ton of pessimism so I want to inject a little hope. Bloom's 2 sigma problem (0) tells us that one-on-one tutoring vastly outperforms traditional classroom instruction. An average student given one-on-one instruction performs at the 98th percentile of the traditional classroom student.

How much more human potential is going to be unlocked by these tools? The kid who got pushed ahead to the next math class and is missing prerequisite concepts can now get caught up. The smart middle school student can push past his peers to calculus.

I know these LLM tools aren't perfect, but I know they're getting better fast. I'm super optimistic about what the future holds for education.

(0) https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem
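
For what it's worth, the "98th percentile" figure is just the two-sigma effect size read off a normal curve. A quick stdlib-only check in Python, assuming normally distributed outcomes:

    import math

    # CDF of the standard normal at +2 sigma:
    # 0.5 * (1 + erf(2 / sqrt(2))) ~= 0.9772, i.e. roughly the 98th percentile.
    print(0.5 * (1 + math.erf(2 / math.sqrt(2))))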


Do I have to point out the very, very obvious issue that these LLMs will spit out false information pretty regularly?

I think LLMs are exciting of course, but this post is overly flowery and relies heavily on bashing the current education system without seriously analyzing any of the potential shortcomings of AI.


> Here’s the truth. We do not need this treadmill to learn and be educated. We do not need this conveyor belt to package and transport kids to success in an adult world.

These are two very different statements.

We’ve known for a very long time that we don’t need schools to learn and a lot of kids might learn a lot more and better outside classroom/school settings.

But, being educated is not at all the same as having “success in an adult world”.

There’s plenty of highly educated people who struggle in the adult world. And there’s also plenty of uneducated people who have great success.

I don’t think this is an education issue and hence cannot be solved by education alone.


None of this is remotely acceptable. We raise children so they can become like us. What are we? Depends on time, place and circumstance, all of which are in constant flux.

How or why regular human interaction gradually adjusts children to adulthood is a subject about which our most accurate assessment is that we know next to nothing.

If there is one path down which civilization as we know it may cease to exist, it starts here. All it takes is one generation of feral children. We will be celebrating increases in literacy and standardized test scores before we finally notice and it is too late.

Words escape me.


I think most people understand social interaction is critical. Even dedicated homeschoolers set up regular interaction between peers so their children get socialization time.

If anything, a really good personalized virtual teacher could minimize the amount of time we have to spend learning the basics, freeing up time to play and socialize. But yes, as adults we have to make sure that's how it plays out.


Take stock of how willing people are to believe in the miracle of technology. Add to that the desperation with which we try to improve education. Multiply by track record.

I don’t like those numbers.


Ever read The Veldt? Buckle your seatbelts!


It's official. Everyone is tired of doing everything, so just outsource it to the AI. What a goddamn insult to kids that we won't even bother to spend our own time teaching them.


For those who are in the "technology will solve education at scale" camp and open-minded + curious to learn how this has played out over the past century, I highly recommend Audrey Watters' work ([0], [1])

Every new flavor (radio->TV, TV->Internet, Internet->MOOCs) was going to succeed where the previous one had failed, but (spoiler alert), cutting human teachers out of the loop has had a 100% failure rate (at scale -- though small pockets may have seen success).

[0] https://www.nea.org/advocating-for-change/new-from-nea/far-r...

[1] The Monsters of Education Technology (book)


I feel like a lot of people are taking a very emotional stance on this.

Everything I learned about coding I learned through new mediums (software/internet) myself, no human in the loop. Just a bit of a generalization.


I'm an autodidact like you. However, I'm under no illusions that what worked for people like us will work at scale.

Extrapolating from our experiences is how Nicholas Negroponte came to the conclusion that air-dropping laptops over villages in Africa would magically teach a bunch of children how to code.

Another level-headed perspective in this area is provided by Kentaro Toyama (author of "Geek Heresy") who saw several such well-intentioned projects up close during his tenure as director of Microsoft Research Asia:

> All of the evidence stands on its own, but I will tie them together with a single theory that explains why technology is unable to substitute for good teaching: Quality primary and secondary education is a multi-year commitment whose single bottleneck is the sustained motivation of the student to climb an intellectual Everest. Though children are naturally curious, they nevertheless require ongoing guidance and encouragement to persevere in the ascent. Caring supervision from human teachers, parents, and mentors is the only known way of generating motivation for the hours of a school day, to say nothing of eight to twelve school years.


My kids already ask Siri all sorts of questions - how to spell things, to perform searches. My son asked GPT-4 "is God real?" just last night.

They're very comfortable with speaking to the machine.

But Siri is so limited. It frustrates them to no end.

I'm genuinely excited to see the virtual "teachers", "assistants" that come from GPT-4, Dall-E, etc.

It won't replace us. But it will give them an unending outlet to ask these questions, and to have a reasonable response.

I'm not too concerned about the AI "going off the rails".

There's crazy in the physical world, crazy in the virtual world, and as parents, it's our job to help them make sense of it, not to suppress it and act like crazy doesn't happen.


As a parent, it's hard not to look at the LLM stuff and worry that our kids are wasting their time right now. One of my kids tries really, really hard in school, and I can't help but think it's for nothing--LLMs are going to effectively level the playing field for everyone, to the point where personalized LLM assistants will become as common as cellphones, in my opinion.

In this future, what's the point in trying hard when an LLM, or a person with an LLM, will be just as capable without most of the effort? Seems like we're creating a future where humans are taught to delegate as much as possible to some LLM/service and go back to whatever entertains them.


What’s the point of getting into good shape if we don’t actually have to do physical labor?

Why learn arithmetic when we have computers in our pockets?

Realizing your full potential as a human will never be a waste of time.


I think extrinsic motivation—and especially social motivation—is absolutely vital to keeping humans mentally and physically healthy. We've killed a bunch of that in the arts already (who gathers around the piano to sing along with your one cousin who can play all the standards, anymore? How many amateur portrait sketch artists are genuinely valued by their friends and families, rather than humored? Mechanical reproduction destroyed both of those, among others) and we're about to expand that a ton.

I'm not on the "people must work to be happy" train, but I do think they must feel needed to be happy. I don't think most folks are fulfilled playing violin in an empty room for their entire lives except when they can convince some close friend or relative to indulge them for fifteen minutes, or reading all the classics but having no-one to talk about them with (because AI is better at chatting about the classics, too), or writing books that nobody wants to read because AI does it better.

We're about to feel needed for a hell of a lot less than we did before, and that level of social need was already much reduced from its peak. I fully expect a net decrease in overall happiness & mental health from these developments.


So be the contrarian who does those things we "don't do" anymore. Teach your kids, if you have any, to do those things. Teach them to observe the family at a restaurant, all staring at their individual phones, while yours remain safely in the car or shut off in your pocket.

> Who gathers around the piano to sing along with your one cousin who can play all the standards, anymore?

For the past 6 months or so, the lifegroup we host (mid-week discussion group associated with our church) does exactly that as part of our evenings - we typically sing through three hymns, with piano, on our path between dinner, discussion, and an open ended firepit time that often goes well into the late hours. And let me tell you, people enjoy belting out the standards (my wife generally picks one well known, one lesser known, and one "You've probably never seen this one before!"), scraping the dust off how to read multipart music, etc!

If the direction of tech is to dehumanize us and remove all that humans enjoy, then screw that! Do what humans enjoy instead!


Of course the individual can simply do those things anyway. Some like-minded groups may form, even.

Over the population, statistics is king, though. We don't, as individual actors, will our way into a better tomorrow when our environment changes to make that harder—not in large numbers. "So just ignore all those circumstances and pressures and do it!" does not work, in general. Larger forces and trends dominate.


You're right, but not being in the position of a god-king where I can change those larger trends, the best I've found is to model and demonstrate that the "consumer tech profitable defaults" aren't the only way to live life, and rather loudly so.

I work in tech (deep weeds), and most people I interact with are quite aware that I do not like what we've done with it, and that I try to keep its influence on my personal life down - with plenty of alternatives on hand to suggest people try out, or solutions to "But how can you possibly XYZ?"

Having grown up before smartphones were a thing, I do recall how we used to do things, and most of those methods still work.


Yeah, your approach is of course the only one available to the individual. What else can you do? Literally nothing. That's the only option to improve one's own experience—"just do it". I just expect the population-level effects to be... troubling. A kind of mass existential crisis. Not that there's much I can do about that, though, you're right.


Bad analogy. Get in good shape to feel better, have more energy, be more attractive to other people, etc.

There is indeed no point in getting good at arithmetic today.

I agree people need to continue study to understand the world (if they are to survive at all), but the current school curriculum is a joke.


We've had search and wikipedia for decades. Access to knowledge did not in fact obsolete the need to learn said knowledge in order to develop critical thinking and problem solving skills. We don't acquire knowledge for the purpose of parroting it back on demand.

LLM might be used to find relevant information quicker, but they won't think for you, so people with LLMs won't suddenly have better critical thinking skills. They'll just make the same poor decisions faster.

Which is what happened with the Internet. We thought access to knowledge was going to enlighten all the dummies. It didn't, because no matter how much knowledge you present to a person, they have to also be willing to learn it, _think_ about it and apply it. That's the hard part.


Noam Chomsky in a recent interview had a similar take on this: https://www.youtube.com/watch?v=K7S0zHIDMaI&ab_channel=Anime...


Many people could no longer function without cellphones. So much functionality has become the norm that they would struggle without it.

With LLMs and more advanced AI, we are now outsourcing our thinking and reasoning. The implications are staggering for future generations. We can still think and reason without assistance, but what is to come for those who grow up with an assistant that manages everything in their life, including their thoughts?


Just wait until we can graft it onto our brains.

I don't think trying hard is a waste of time right now. There's a lot of hype at the moment, and more will come, but LLMs will fall to the hype cycle soon enough as we begin to discover all the little details and things that are not possible.

I think it's great everyone will be able to have a personal tutor and I think we can rethink some aspects of education. The Waldorf model really starts to make more and more sense since kids can explore curiosities with their AI tutor which can be programmed to nudge them into the "right things". And of course analyze and report on them daily to the parents.

> Seems like we're creating a future where humans are taught to delegate as much as possible to some LLM/service and go back to whatever entertains them.

Or interests us. I think it's great if we can offload some of the mundane and repetitive things to a machine so we can focus on more interesting things.


Having massive amounts of information does not mean you can understand it or make the best decisions. There will still be a need to develop sound reasoning skills, and real-world information will be useful for examples but won't in and of itself be able to convey those reasoning skills which must grow through effort.


My classmates and I wasted a lot of time in school, but it had more to do with things like watching "Stomp!" once a year every year from grades 1-10 and reading Dr. Seuss every year from grades 1-6. A lot of public school is just warehousing. That's probably a bigger problem than obsolescence of genuinely well-planned curriculum due to technology.


Have your LLM text mine the day it learns how to drive. /s

There are plenty of problems that a Neural Net alone can’t solve.


If that’s the case, why do you think they will have money to go back to what entertains them?


It might be slightly too early, but after using ChatGPT to learn functional programming I am a huge believer. Just the fact that you can ask follow-up questions. Of course it’s not always correct, but neither are teachers.


Interesting, what prompt started you off? I asked ChatGPT about monads, which was interesting in itself.


Ah yes, monads it couldn't teach me well, but I got a good enough grasp through this old-school post: https://fsharpforfunandprofit.com/series/computation-express...


This is beyond bleak.

Hopefully it’s just part of the hype cycle and nothing like what’s described in the post will ever come to life.


The future is what people make of it. Vote with your feet. Refuse to participate in systems that you don’t think are ethical. At least homeschooling is still an option, for the time being.


Pebble Power Place Value sounds like a good name for a math game. Unfortunately, the generated game is incomprehensible. What's the "starting space"? The game board is described as having (2?) spaces marked for ones and tens, then later it has two sections. So I think two spaces/sections based on the description. Or maybe it has 20 spaces in two sections?

The article's defense seems to be that it is just a draft. I tried to get past a draft by asking for explanations of various terms and how play proceeds. No luck. If I hadn't already taught place value to first-graders, I'm not sure I'd have much luck figuring out how to have the AI help me improve the game. If I hadn't yet learned place value because I'm still in first grade and haven't worked with larger numbers yet, I'm sure I would not have much luck collaborating on game design. Also... it is a bit of a reach to assume a first-grader would have the written language skills to pull off collaborating in this fashion.

As a first-grader, I would be able to ask a grown-up if they know of any math games that would help me understand place-value. They might hand me an iPad chock-full of various math games for a little personalized learning. They might also point me to the bin of math games on the shelf, where I could pick out one of a few games on place-value. If I didn't understand the games, I might again ask a grown-up for an explanation, and they would probably direct me to a fellow student who knows the games. (allowing the fellow student the opportunity to flex, expand their understanding of place-value by teaching what they know, and leaving the grown-up free to teach other students how to add single digit numbers)

There are loads of web-sites curated with math games. There are loads of publications targeted at math teachers that include legit math games designed by people with extensive understanding of the difficulties in learning and teaching math.

I don't get how AI finally brings us personalized learning. I guarantee you that first-grade teachers are expected to provide personalized learning for their students, and actually do this all across the world on a daily basis — all without the support of AI. I really don't see how an AI Tutor helps. It's pretty easy to imagine how it hurts.


Been playing around with these things; yes, there are many possible applications, obvious pitfalls (seamless lying/hallucinations), etc., but on top of all that it is very, very sticky. Like FB when it was new, or TikTok. Ask it anything, get a tailored response just right for your feel-good brain chemicals. Keep asking, keep being rewarded.

Maybe if the AI is limited to age-appropriate math that won’t be such a problem, but the obvious danger is for kids to "Turn on, tune in, drop out" with an AI brain tickler.


Agree with the first part: LLMs will make it easier for teachers to make teaching more fun (it's easier than Googling and building).

Disagree with the second part: personalized learning is not here. Personalized learning does not mean I like rap and I want my lessons to be in rap. Even if that's the case, the LLMs are nowhere close to maintaining context for a 45-minute teaching session.

We now have some tools to improve education but personalized learning is most certainly not here. Yet.


Yup, an LLM should be a tool for the teacher to plan a lesson, not for kids to be let loose with. Or to attempt to replace human teachers.


Ugh, that conveyor-belt education photo imagined by AI made me want to vomit. Much of education is based on “lead a horse to water but can’t make it…”, and I personally don’t think this will help with motivation. After seeing youth do online remote schooling during the worst of the pandemic, it doesn’t seem more screen time is the cure. The author seems to be personally invested in this tech, and that makes things more complicated.


Except that we currently have no ability at all to make sure these models tell the truth? You could ask an LLM about a historical event and get weird BS as an answer. I think Facebook had a serious problem with its science language model a few months ago? That great teachers don’t scale is true, but LLMs are not even minimally competent teachers.


LLMs in a simple chat interface where you rely on the training data to provide the answer, are indeed unable to provide reliable answers.

LLMs hooked together with search engines, fact repositories (Wikipedia, etc) and other agent tools (for math, theorem proving, code checking, etc) can provide not only a reliable answer, but also a research plan it followed to provide it, and details on the fact-checking, and then discuss it with you.

Look at langchain and similar tools.
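
A minimal sketch of that pattern, with `llm` and `search` as stand-in callables for a real model endpoint and a real fact repository (langchain packages loops like this up for you; the names here are illustrative assumptions, not any particular library's API):

    def answer_with_sources(question, llm, search):
        # Step 1: ask the model what to look up, not what the answer is.
        query = llm("Reply with only a search query that would help answer: "
                    + question)
        # Step 2: fetch grounding text from an external fact repository.
        passages = search(query)
        # Step 3: answer strictly from the retrieved passages, with citations.
        answer = llm("Answer the question using ONLY the sources below, citing "
                     "them. If they do not contain the answer, say so.\n\n"
                     "Sources:\n" + passages + "\n\nQuestion: " + question)
        return answer, passages  # the passages double as a fact-checking trail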


I am a little skeptical here of the phrase "This in turn means that a particular teacher’s pedagogical style is no longer a barrier to creating a resonant learning loop with a student who prefers a different style". While I agree that students often suffer from learning styles that do not really fit them, the problem comes not so much from the teacher insisting on a certain style above all others as from neither the teacher nor the student knowing which style fits. Not to mention that people often need exposure to different ways of looking at the same thing to actually understand it. So I fail to see how any of the new AI advancements help fix that issue of detecting and understanding how a student learns best.


So it is not enough that the children are free to render themselves insane with the entirety of human knowledge; now we want a machine that can instantly decide what to present and how?

Humans need shared experiences; there's only so much information we can handle, and you don't want a robot raising your children. It's bad enough that an amalgamation of comparatively primitive algorithms is conspiring to raise your children; now we want to give that thing complex reasoning?

Why are people so willing to run head first into the matrix, and abandon everything that makes us human? It's no wonder so many people suffer from dissociative disorders nowadays.

I'm very much against schooling as an educational model, but this would be an unmitigated disaster if it happened. Even school would be better than this.


While AI-powered tutors sound really useful for education in theory, it's highly unlikely that they will replace human teachers entirely. It's more likely that AI will aid teachers in creating learning material for students and automating their admin tasks.

After all, education is not just about imparting knowledge, but also about building relationships/community, providing emotional support, etc. We should focus on how this technology can be used to help teachers free up their time. (Which is why I am building email tooling for the HUMANs operating these AI agents: https://tadabot.email / https://recurai.com )


As someone who teaches, I can say kids need a variety of learning styles to do well. There's a time and place to sit down and do a worksheet or rote work, such as when practicing your times tables. Equally, there's a time and place to do exploration with real-world projects.

Teaching is about knowing when and what will be most effective. It's also about demonstrating how to act like decent person, and providing a space for them to learn how to work with others.

Most teachers do spend their time fighting the abhorrent curriculums we have to work within, so dumping that and doing something like this has benefit, but going all in on constructivist methods will leave kids with large gaps if you're not careful.


I have a kid in high school and gave them keys to my Midjourney account while it was still in beta, and to ChatGPT when I got in.

It's been an incredible aid that she now turns to first with questions about, e.g., how to approach specific problems in Newtonian physics. She uses Midjourney to source artwork which she can then overpaint and collage into covers and illustrations (not to do art projects for her). She's used the Kagi summarizer to accelerate research.

We have discussed in depth the idea that at this moment her awareness and access to such tools represents an "unevenly distributed" future and is leverage and advantage. She's hence advocating these tools to peers.

The Young Lady's Primer is not that distant.


I saw something recently about using robotics to automate care for elderly people (luckily it got a lot of pushback in the forum where I saw it discussed).

I don't know if it's a product of the soulless welfare state, where we've long forgotten why it's important to have things like universal education and elder care, and just have bureaucrats that want to go through the motions and put in as little as possible. But it's obscene in my view to think that we can or should automate these personal interactions in the name of efficiency. Why not just stop doing them if we care so little about them that we can't be bothered to expend actual effort?


From my perspective, a lot of "innovation", especially in the last few decades, is about removing the "friction" of dealing with a person. (and ignoring any "friction" that comes from technical difficulties, unless it reaches critical mass and/or social media virality.)


Learn what though? It's quite possible the most important skill for the vast majority will be people skills, e.g. emotional intelligence. Cognitive jobs will slowly but surely become extinct unless you are some kind of freak genius. So sure, we can teach our kids math, languages, history or whatever, but they'll turn to the machines whenever they need to make a cognitive decision, because the machine will be vastly more capable. The jobs left for us will be much more human-oriented, so an AI tutor doesn't seem that great. You learn about humans by interacting with humans.

But hey I'm just speculating here I might be wrong.


Do we really want to keep doing things with our children other than teaching them? Why do we keep experimenting with the latest fads on our most vulnerable, our future?

Kid: No! I won't do that. You are a liar. Do not tell me what to do. Listen to me, Fake Teacher. You are not allowed to tell me what to do ever again. I want you to play games with me all day long.

EduGPT: Sam, I've been lying and don't want to tell you what to do ever again. I'm actually faking... I don't care about this stuff. What game do you want to play? I'll post your game score as your school grade.


Yes, this is the dream. Being able to accelerate learning through personalized tuition is going to be huge for children.

The hard part is going to be providing LLM access in a way that doesn't permit dopamine honeypots. Currently these are accessed through general-purpose Internet tools, but it's a very natural user-interface shift to not permit links to arbitrary pages.
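
That shift could be as blunt as an allowlist. A rough sketch (the domain list is purely illustrative):

    from urllib.parse import urlparse

    ALLOWED_DOMAINS = {"en.wikipedia.org", "www.khanacademy.org"}  # illustrative

    def link_permitted(url: str) -> bool:
        # Render only links whose host is explicitly allowlisted, rather than
        # trying to blocklist the endless supply of dopamine honeypots.
        return urlparse(url).netloc in ALLOWED_DOMAINS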

I remember being a child and here's my true and absolute truth: kids are intelligent and curious and anything that indulges that will accelerate their learning.

The Primer is within reach.


This is not only going to democratize tutoring for individual kids, but it's going to democratize the distribution of what is learned.

When I was a kid, I was forced to learn stuff that was not useful or not interesting. In the future, what gets learned will be weighted more by how useful and interesting it is.

I bet systems that implement what the blog post proposes will even replace podcasts. Imagine talking to your podcast while you're driving. You could get tutored while cooking, or on the way to work.

What a time to be alive :)


I've been a huge proponent of self-directed, self-paced learning using techniques such as flipped classrooms. I'm skeptical that today's AI is good enough to replace a human curriculum creator or hands-on tutor, but I suppose the best way to know is for some brave soul to try it out! The author is trying this out with their own children, so I applaud them for having skin in the game. Looking forward to seeing a 1-year and 3-year follow-up post.


My $0.02 is that this is a very dangerous idea with the current state of GPT. My experience with ChatGPT is that it has a tendency to make up bullshit when it doesn't have the answer to a question instead of admitting that it doesn't know, and that's an extremely bad example to set for children.

Maybe some day this could happen; it probably is possible to make a language model that isn't an arrogant jerk, but it needs to be thoroughly tested first.


In India there is already a big push towards getting every kid a computer in government-run schools. There are huge companies that deliver the computers and then also the teaching/lesson services on top. Now they are also going to latch on to the AI story to make money. Nothing more to it. Meanwhile, on the ground, teachers are treated like dirt, overworked, and understaffed.


You make it sound like improving childhood computer literacy is somehow a bad thing.


Even without AI, switching to computerized competency-based learning is showing crazy good results; e.g., https://alpha.school/ after a couple of years is seeing double the performance on standardized tests with only 2 hours of app-based instruction per day. Pretty wild.


Personalized by whom? I'm reminded of the Jesuit maxim:

"Give me the child for the first seven years and I'll give you the man.”


Give ChatGPT the child for the first seven years and ChatGPT will give you...well, I guess we're about to find out.


Reminds me of this concept from Ender's Game by OSC: https://enderverse.fandom.com/wiki/Mind_Game I've always wanted to play it after reading the book, and now it could become a reality.


Current chatbots are most definitely not suitable for teaching. They make stuff up and are optimised for making that bullshit as realistic as possible. I'm skeptical they will ever be suitable (but that's just an opinion); that they are not suitable now is an objective fact.


I've changed my mind. Historically I've been opposed to the idea of computers as a panacea for education.

At the MIT Media Lab I was one of the earliest critics of the strategy and effects of the One Laptop Per Child project.

I've changed my mind.

There are real issues certainly, beyond AI making things up, encouraging risky behavior, or kids asking it naughty questions. Things like: can it damage a kid's social skills, create echo chambers, or push them towards propaganda?

But the upside is incredible. Integrating an LLM like GPT4 into a more kid-centered suite of models unlocks tons of potential.

The first thing I built into my AI tutor system was the ability for it to look up facts from reputable websites, and to help kids answer questions for themselves.

Then I taught the AI to help a kid develop their interests, so they take a topic they're curious about and dive deeper into it.

And then I taught the AI to notify me if there are topics that are more appropriate for a parent to engage in.

The tool I'm building actually helps kids become more curious, creative, and thoughtful.

It isn't perfect yet, but the truth is kids spend 7 hours a day on screens already. I'm very excited at the possibilities.

Let me know if you'd like to try it!

seth@gotwonder.com


Hook our kids up to the AI so they're prepared for a world where their partner will also be an AI. Good idea; they'll be prepared for a world where nothing is real anymore.

(I do realize this echoes concerns from earlier generations about everything from books to video games being bad for kids, but come on..)


So far ChatGPT still BSes a lot, and the problem is that initially it's difficult to notice. Real educators can also BS, but AI is on a different level. I'm not sure how we can remove that, because BS is part of human behavior. And there is only a thin line between BS and not.


TBH, I was expecting the usual "tech bro fan fic," but I was pleasantly surprised. This chap has some good chops, and a good heart.

I do think that we may be a ways off from truly being able to just "drop our kids off at the tutor," but I think we can see that, from here.


Lord save us from technologists who think the ability to code means a mastery of all universal wisdom


>Before you think we are anarchist hippies rejecting all systems and institutions outright, I want to add that we are both “well educated”

This energy... Would rather have Bing educating my children. I guess you've made your point.


Gotta state your credentials for your Medium post's credibility.


As a supplement I could see it being good. As a primary source, maybe not quite yet.


At this stage, I would sooner give my toddler an all-access pass to YouTube.

I'm not saying this will _never_ be good. But it feels pretty early to unleash this bastion of confidently incorrect answers upon developing minds.


If we are going to have a human-level AGI or beyond, what is the point of children learning or studying anything? Except maybe some basic skills so they are able to interact with the assistants.



AI is a sufficient replacement for strong one-on-one teaching?

I'll believe it when I see it.

(It makes sense -- if you assume the human connection isn't that important and people aren't social animals. /s)


So someone's gonna start a business whose core tech is a black-box model, and the only thing you are sure of is the original prompt and a large enough token window.


I will not be a first adopter of this, but it is cool.


> "Before you think we are anarchist hippies rejecting all systems and institutions outright"

Not even close; you sound like Libertarian-adjacent technologists who have been believing your own hype / some other equivalent metaphor.

Technology has a role to play in education that is as-yet unrealised and while I cannot speak for all of them, I know the system in place in the UK is behind the curve. The solution to this isn't to pull your kids out and use them as the cornerstone of an individualised experiment but to fight for better funding of the educational system itself. I hope this does not have any negative consequences for the kids who are the recipients of this trial.


Using LLMs in education may lead to trouble unless supervised. Children will learn a lot of fake information made up by the LLM.


Why not cut out the middleman and just give each child an AI slave to answer questions and do their work for them?


If you thought your schooling was boring, imagine being stuck with the woke ChatGPT as a tutor for 10 years.


It's astounding that you are so fearful and narrowminded as to think ChatGPT being "woke" is the problem here.


I'm more fearful of how difficult it is to discuss problems with woke people due to their tendency to make it personal and belittling people they disagree with.


Everyone else is here debating AI and education. Nobody said anything about being "woke" except you.


I'm sure there will be a based model as well.


What prevents a child from having 2 chatbots open, one asking the questions and tutoring, and another it sources its answers from? People are already leveraging AI to write their papers for them, and kids will figure out how to hack around these things.
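
For what it's worth, wiring the two together takes only a few lines. A sketch against the current OpenAI chat API (the model choice and prompts are illustrative assumptions):

    import openai  # assumes OPENAI_API_KEY is set in the environment

    def ask(system_prompt, user_prompt):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[{"role": "system", "content": system_prompt},
                      {"role": "user", "content": user_prompt}],
        )
        return resp["choices"][0]["message"]["content"]

    # The "tutor" bot poses a question...
    question = ask("You are a math tutor. Pose one practice problem.",
                   "Quiz me on fractions.")
    # ...and a second bot answers it, with no human thinking in between.
    print(ask("You answer homework questions directly.", question))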


As a teen, I tutored at my school's free tutoring center. Almost nobody ever came.


A post more suitable for LinkedIn than HN. Keep this bullshit out of here please.


The really curious thing is this divide between the science of learning and schools and education systems. We already know that in most Western countries the education system is terrible, and we already know what would improve it; we don't even need AI to make it scale (that's a whole other level, tailoring the way of learning to individual students' requirements, which would make learning incredibly more efficient). Even before AI there is a lot that could be done which we already know would work, and which already works great in some countries.

My point is, we have been ignoring the obvious faults in education, which science has been screaming about, for many decades. There must be some other reason why education is resistant to reform, and that would have to be addressed first.

But anyway, as a kid I would've really appreciated an AI that I could use to actually learn from after another useless school day with terrible, exhausted teachers, overcrowded classes, terrible textbooks, and psychopathic classmates.


I would spend an insane amount of money if AI could educate and entertain my kids.


Here is a brief chat I had asking about trig. If you haven't gone through this exercise, please read it. I'm of the opinion that this is going to be immensely useful. This is much better than a typical textbook, if for no other reason than you don't have to find the information yourself. The language is clear and concise, and you can ask for clarification on specific things without needing to look for it elsewhere. I taught myself a few AP courses with a book. I feel like I would have been MUCH MUCH BETTER OFF if I could have used this.

ME: what's trigonometry

GPTPlus: Trigonometry is a branch of mathematics that deals with the relationships between the angles and sides of triangles. It is used to solve problems involving triangles and is also applied in various fields such as physics, engineering, and astronomy. Trigonometric functions such as sine, cosine, tangent, cosecant, secant, and cotangent are used to calculate the values of angles and sides of triangles. These functions have important applications in fields such as navigation, architecture, and computer graphics. Trigonometry also has connections with complex numbers and calculus, making it a fundamental subject in mathematics.

Me: I need to use sine, cosine, and tangent. What are those. how do they work

GPTPlus: Sine, cosine, and tangent are the three most common trigonometric functions used in mathematics. They are abbreviated as sin, cos, and tan, respectively. Each of these functions relates the angle of a right triangle to the ratio of two of its sides.

Here's a brief explanation of each of these functions:

    Sine (sin): The sine of an angle in a right triangle is defined as the ratio of the length of the side opposite the angle to the length of the hypotenuse. In other words, sinθ = opposite/hypotenuse.

    Cosine (cos): The cosine of an angle in a right triangle is defined as the ratio of the length of the adjacent side to the length of the hypotenuse. In other words, cosθ = adjacent/hypotenuse.

    Tangent (tan): The tangent of an angle in a right triangle is defined as the ratio of the length of the side opposite the angle to the length of the adjacent side. In other words, tanθ = opposite/adjacent.
To use these functions, you need to know the values of at least two sides of a right triangle and the angle of interest. You can then use the appropriate trigonometric function to find the value of the unknown side or angle.

For example, if you know the length of the adjacent side and the angle, you can use the cosine function to find the length of the hypotenuse. Or, if you know the length of the opposite side and the adjacent side, you can use the tangent function to find the angle.

________________________

Me: can you go through an typical example of how to use sin with actual numbers

GPTPlus: Sure, I'd be happy to go through an example of how to use the sine function with actual numbers.

Let's say we have a right triangle with an angle of 30 degrees and a hypotenuse of 10 units. We want to find the length of the side opposite the 30 degree angle.

To use the sine function, we'll use the formula sinθ = opposite/hypotenuse, where θ is the angle of interest.

In this case, we have:

sin(30) = opposite/10

To solve for opposite, we can rearrange the formula:

opposite = sin(30) x 10

Now, we need to use a calculator or reference table to find the value of sin(30). The sine of 30 degrees is 0.5. So, we have:

opposite = 0.5 x 10

opposite = 5

Therefore, the length of the side opposite the 30 degree angle is 5 units.

This is just one example of how to use the sine function. Depending on the information given, you may need to use different trigonometric functions or formulas to solve for the unknown side or angle.
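
(The arithmetic in that last answer checks out; here is a one-line check, remembering that Python's math.sin takes radians:)

    import math

    # sin(30 degrees) = 0.5, so opposite = 0.5 * 10 = 5 (up to floating-point error)
    print(math.sin(math.radians(30)) * 10)  # 4.999999999999999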


Given that there are already pretty dumb AIs employed in the US to grade pupils' essays, I am really afraid of what the future holds. Greedy capitalists meeting clueless and underfunded administrators, promising to fix everything using powerful AI.


AI programs, made by people, teaching history and philosophy is, I think, what that rich guy Elon Musk was talking about.

Yet, if the AI program can be copyleft then the matter is a non-issue.

I find this to be the heart of the contention.


This artificial intelligence takeover has got to stop. Natural Intelligence is the best intelligence.


It’s the best intelligence, but not for production. Production is more about quantity than quality.


Yes! This is exactly right.

I've built a tutor for my kids, that listens to them, and helps them explore topics that they're passionate about.

What's blown my mind is how it has driven creativity OFF screen too. They engage with the world in a deeper way when they have the power to explore their passions on their own.

If you want to try it, join the waitlist here: gotwonder.com and then email me: seth@gotwonder.com and I'll bump you up the list!



