I would have killed to have ChatGPT growing up. It's amazing to have a patient teacher answer any question you can think of. GPT-4 is already far better than the answers you'll get on Quora or Reddit, and it's instant. Sure, it's wrong sometimes. My teachers and parents were wrong plenty of times, too.
There's a difference between being wrong sometimes, and having no concept of objective reality at all.
I really don't understand how anyone can have such a positive impression. I refuse to register an account just to try it out myself, but that isn't necessary to form an opinion when people are spamming the Internet with ChatGPT output they think is impressive.
The best of that output might not always be distinguishable from what a human could write, but not the kind of human I'd like to spend time with. It has a certain style that - for me - evokes instant distrust and dislike for the "person" behind it. Something about the bland, corporate tone of helpfulness and political correctness. The complete absence of reflection, nuance, doubt, or curiosity with which it delivers "facts". Its refusal to consider any contradictions feels aggressive to me even - or especially - when delivered in the most non-judgemental kind of language.
It is like the text equivalent of nails on a chalkboard!
I'd argue that most children would kill for an automatic translator like DeepL (or the much worse Google Translate), because it would help them with their English/German/other language homework.
English speakers will probably never realize this: most kids need to learn English first, then programming.
Imagine that five years from now, ChatGPT or one of its competitors reaches 98% factual accuracy in its responses. Would you not like to rely on it to answer your questions?
Saying this in a discussion about Citogenesis is funny to me. How would you even determine "factual accuracy"? Just look at the list: there are many instances where "reliable sources" repeated false information, which was then used to "prove" that the information was reliable.
As far as I am concerned, AI responses will never be reliable without verification. Same as any human's responses, but with a human you can at least verify credentials.
Scroll down in TFA to the section called "terms that became real". When trolls or adversaries can use citogenesis to bootstrap facts into the mainstream from a cold start, what does "98% factual accuracy" mean? At some point, you'll have to include the "formerly known as BS" facts.
It all depends on the distribution of the questions asked. I would hazard a guess that, given the silly stuff average people ask ChatGPT in practice, it's already at over 98% factual accuracy.
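As a back-of-the-envelope sketch (all of these numbers are made up for illustration), overall accuracy is just per-category accuracy weighted by how often each category of question gets asked:

  # hypothetical query mix: (share of traffic, accuracy on that category)
  mix = {
      "small talk / trivia": (0.70, 0.995),
      "homework help":       (0.20, 0.97),
      "niche factual":       (0.10, 0.85),
  }
  overall = sum(share * acc for share, acc in mix.values())
  print(round(overall, 4))  # 0.9755 -- close to 98% despite weak niche accuracy

Shift the mix toward niche factual questions and the same model looks much worse.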
I'm not so sure of that. This is likely the start of the sigmoid inflection for AI right now; the progress being made is crazy. Look at that picture of the Pope that got posted and convinced a bunch of people he was wearing some fancy parka. And that's just now.
Even then, you have to know how to recognize when ChatGPT is feeding you made-up information. In the case of these Citogenesis Incidents, 99% of the Wikipedia articles are legitimate; the trick is knowing which 1% is false. How do you distinguish the ChatGPT output that is true from the output that is made up?
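To put rough numbers on that (figures assumed for illustration): even at 99% accuracy, the errors arrive unlabeled, so "verifying" means checking everything:

  # assumed figures: you read 200 claims, 99% of them are accurate
  claims, accuracy = 200, 0.99
  expected_false = claims * (1 - accuracy)
  print(round(expected_false, 2))  # 2.0 -- but nothing marks which 2, so all 200 need checking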