Hacker News

Imagine a world where children have grown up relying on ChatGPT for each and every question.



A world where children ask questions to unreliable entities who guess when they don't know the answer?

Pretty sure we just called it the 90s.


cf. that Calvin and Hobbes strip where Calvin asks his dad how they know what weight limit to put on bridge signs: https://www.gocomics.com/calvinandhobbes/1986/11/26


GPT-5: “I don’t know. Quit asking these dumb questions and go and play outside.”


I would have killed to have ChatGPT growing up. It's amazing to have a patient teacher answer any question you can think of. GPT-4 is already far better than the answers you'll get on Quora or Reddit, and it's instant. So it's wrong sometimes. My teachers and parents were wrong plenty of times, too.


There's a difference between being wrong sometimes, and having no concept of objective reality at all.

I really don't understand how anyone can have such a positive impression. I refuse to register an account just to try it out myself, but that isn't necessary to form an opinion when people are spamming ChatGPT output which they think is impressive all over the Internet.

The best of that output might not always be possible to distinguish from what a human could write, but not the kind of human I'd like to spend time with. It has a certain style that - for me - evokes instant distrust and dislike for the "person" behind it. Something about the bland, corporate tone of helpfulness and political correctness. The complete absence of reflection, nuance, doubt, or curiosity with which it delivers "facts". Its refusal to consider any contradictions feels aggressive to me even - or especially - when delivered in the most non-judgemental kind of language.

It is like the text equivalent of nails on a chalkboard!


If you haven't bothered to try using it yourself, I don't know why you think anyone would care what you think about it.


I'd argue that most children would kill for an automatic translator like DeepL (or the much worse Google Translate), because it would help them with their English / German / other language homework.

English speakers will probably never realize this: most kids need to, say, learn English first, then programming.


Apropos of this, I was tempted to submit https://www.youtube.com/watch?v=KfWVdXyPvWQ [1] after watching it last night, but maybe it's better to just leave it here instead.

1: How A.I Will Self Destruct The Human Race (Camera Conspiracies channel)


YouTube videos of stock imagery, memes, anecdotes, and speculation don't seem much better.


Imagine that five years from now, ChatGPT or one of its competitors reaches 98% factual accuracy in its responses. Would you not like to rely on it for answering your questions?


Saying this in a discussion about Citogenesis is funny to me. How would you even determine "factual accuracy"? Just look at the list. There are many instances where "reliable sources" repeated false information which was then used to "prove" that the information is reliable.

As far as I am concerned AI responses will never be reliable without verification. Same as any human responses, but there you can at least verify credentials.


Imagine that in five years, we will have cold fusion, world peace and FTL travel. ChatGPT told me so it must be true!


Scroll down in TFA to the section called "terms that became real". When trolls or adversaries can use citogenesis to bootstrap facts into the mainstream from a cold start, what does "98% factual accuracy" mean? At some point, you'll have to include the "formerly known as BS" facts.


It all depends on the distribution of the questions asked. I would hazard a guess that given the silly stuff average people ask ChatGPT in practice, it's already at over 98% factual accuracy.


Outside of maths and physics there is no such thing as factual truth


> Outside of maths and physics there is no such thing as factual truth

But that statement is neither, so it must be false…


Isn't there? I didn't attend MIT. That is a factual truth.


Isn't MIT known for its math and physics?


Well, technically, attending is about an object being in a particular position at a particular time, so physics.


Maths has no factual truths, only logical truths. Physics has no more or less factual truths than any other branch of science.


chatgpt outputs everything so confidently since it's basically just a bullshit generator. it's markov chain word bots on steroids.
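For the curious, the "markov chain word bot" being alluded to can be sketched in a few lines of Python: record which word follows which, then generate text by a random walk. This is only a toy illustration of the commenter's analogy, not a claim about how LLMs actually work:

```python
import random
from collections import defaultdict

def build_chain(text):
    # Map each word to the list of words observed directly after it.
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=10, seed=0):
    # Random walk: repeatedly pick a random observed successor word.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the mat sat on the cat"
print(generate(build_chain(corpus), "the"))
```

The output is locally plausible but globally meaningless, which is roughly the "confident bullshit" effect the comment describes.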


There are people who do this too; I don't think that's a sufficient property to be a threat to humanity at large


i'm not so sure of that. this is likely the start of the inflection in ai's sigmoid curve right now. the progress being made is crazy. look at that picture of the pope that got posted and got a bunch of people to believe he was wearing some fancy parka. and that's just right now.


It’ll be a world where it’s important to know the right question to ask


Even then, you have to know how to recognize when ChatGPT is feeding you made-up information. In the case of these Citogenesis incidents, 99% of the Wikipedia articles are legitimate. The trick is knowing which 1% is false. How do you distinguish the ChatGPT output that is true from what is made up?


It’s not very different from asking every question you have on Stack Overflow



