"This stuff" has been around for ages. Forty years ago in my teens, I read a book with a very memorable anecdote about locals insisting canabalism was "a local custom" and defending their right to practice it. The British official in charge replied "It's our custom to shoot cannibals."
When I was homeless, I asked around on the internet for a source for who said that. There is a real incident where a British official said something like "We shoot people who do that" but it wasn't about cannibalism.
But the toxic classist forum where I asked initially replied to me with basically "You're just a stupid homeless person misremembering that." No, I read it in my teens when I had a near photographic memory and was one of the top students in my high school class and everyone respected me as one of the smart people, long before the world decided I was some loser making things up. I'm quite clear the anecdote in the book was about cannibalism.
There is a real historical incident similar to it, but the book got the details wrong.
I have also read some crazy accounts of how Einstein's Theory was proved because the solar eclipse bent light so much, you could see a star that our Sun should have been obscuring.
Humans tend to believe things we read. If it's in writing, it has some kind of authority in our minds.
This is often not the case and we need to get better about recognizing that a lot of "writing" on the internet is just modern chit chat and not reliable.
> I have also read some crazy accounts of how Einstein's Theory was proved
FWIW, this was the observations during the solar eclipse of 1919, by Eddington and a bunch of less famous people. It apparently made headlines at the time.
It wasn't about bending of light though. The orbit of Mercury is slightly different from what Newton's theory would predict, and this was first observed during the eclipse.
The British officer story happened in India IIRC, and the custom at the time wasn't cannibalism, but sati: widows would be burned on their husbands' funeral pyres to join them in the afterlife.
The GP is correct that Eddington's observations during the 1919 eclipse had to do with the deflection of starlight. General relativity predicted that starlight passing by the Sun would be deflected by a certain amount, and Eddington's observations showed that general relativity's prediction was correct.
The discrepancy between Mercury's perihelion precession and Newtonian gravity had been observed long before Einstein developed general relativity. General relativity correctly accounted for Mercury's perihelion advance, but that was a retrodiction rather than a prediction, since the anomaly was already known.
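For what it's worth, both effects are easy to check numerically with the standard textbook formulas. A rough sketch (constants are approximate SI values, so the results are back-of-the-envelope, not precision tests):

```python
import math

# Physical constants (SI units, approximate)
G = 6.674e-11        # gravitational constant
M_sun = 1.989e30     # solar mass, kg
c = 2.998e8          # speed of light, m/s
R_sun = 6.957e8      # solar radius, m (impact parameter for light grazing the Sun)
RAD_TO_ARCSEC = 180 / math.pi * 3600

# Eddington 1919: GR predicts starlight grazing the Sun is deflected by
# 4GM/(c^2 R), twice the naive Newtonian value of 2GM/(c^2 R).
deflection = 4 * G * M_sun / (c**2 * R_sun) * RAD_TO_ARCSEC
print(f"light deflection at solar limb: {deflection:.2f} arcsec")  # ~1.75

# Mercury's anomalous perihelion advance: 6*pi*GM / (c^2 a (1 - e^2)) per orbit.
a = 5.791e10                        # Mercury's semi-major axis, m
e = 0.2056                          # orbital eccentricity
orbits_per_century = 36525 / 87.97  # days per century / orbital period in days
advance = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))
print(f"perihelion advance: {advance * RAD_TO_ARCSEC * orbits_per_century:.1f}"
      " arcsec/century")  # ~43
```

The two numbers that come out (about 1.75 arcseconds of deflection at the solar limb, about 43 arcseconds per century of excess perihelion advance) are exactly the figures the 1919 expedition and the pre-existing Mercury anomaly are famous for.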
>According to the maxim of relation (or relevance), a cooperative speaker should not convey any information that is not relevant in the context of the utterance
It's something that offers a different perspective on the things being discussed. It's no different from someone commenting from their perspective as someone with ADHD, or from a country that isn't the USA, etc.
It'll probably be a little bit like life before the web. You'd hear something and have no immediate way to verify the veracity of the claim. It's one of the reasons teachers pushed students to go to the library and use encyclopedias for citations when writing research papers.
Our species has obviously managed to make it pretty far without facts for the longest time. But we've comfortably lived with easily verified facts for 20-30 years and are now faced with a return to uncertainty.
If I had to guess, we'll see institutions such as Wikipedia adopt stricter controls, relying on credentialism and frequent auditing as a means to counter the new at-volume information-creation capacity. But I don't really have the faintest idea of how this will turn out yet. It's wild to think about how much things are changing.
My teachers were always clear that encyclopedias were not to be used as a primary source either. They're not even a secondary source. Encyclopedias are tertiary sources.
They're better than Wikipedia... but only barely.
In the end you use Wikipedia and an encyclopedia the same way: to get a broad understanding of a topic as a mental framework, then look at the article's citations as a starting point to find actual, citable primary sources. (Plus the rest of the library's catalog/databases.)
exactly. information literacy starts with evaluating the sources. I have had numerous chats over the last few years where it's evident that people do not do due diligence in their information gathering. it seems that either people aren't being taught this anymore or that they have given in to sloppy thinking.
I would have killed to have ChatGPT growing up. It's amazing to have a patient teacher answer any question you can think of. GPT-4 is already far better than the answers you'll get on Quora or Reddit, and it's instant. So it's wrong sometimes. My teachers and parents were wrong plenty of times, too.
There's a difference between being wrong sometimes, and having no concept of objective reality at all.
I really don't understand how anyone can have such a positive impression. I refuse to register an account just to try it out myself, but that isn't necessary to form an opinion when people are spamming ChatGPT output which they think is impressive all over the Internet.
The best of that output might not always be possible to distinguish from what a human could write, but not the kind of human I'd like to spend time with. It has a certain style that - for me - evokes instant distrust and dislike for the "person" behind it. Something about the bland, corporate tone of helpfulness and political correctness. The complete absence of reflection, nuance, doubt, or curiosity with which it delivers "facts". Its refusal to consider any contradictions feels aggressive to me even - or especially - when delivered in the most non-judgemental kind of language.
It is like the text equivalent of nails on a chalkboard!
I'd argue that most children would kill for an automatic translator like DeepL (or the much worse Google Translate), because it would help them with their English / German / other language homework.
English speakers will probably never realize this: most kids need to, say, learn English first, then programming.
Imagine that in five years from now, ChatGPT or one of its competitors will reach 98% factual accuracy in responses. Would you not like to rely on it for answering your questions?
Saying this in a discussion about Citogenesis is funny to me. How would you even determine "factual accuracy"? Just look at the list. There are many instances where "reliable sources" repeated false information which was then used to "prove" that the information is reliable.
As far as I am concerned AI responses will never be reliable without verification. Same as any human responses, but there you can at least verify credentials.
Scroll down TFA to the section called "terms that became real". When trolls or adversaries can use citogenesis to bootstrap facts into the mainstream from a cold start, what does "98% factual accuracy" mean? At some point, you'll have to include the "formerly known as BS" facts.
It all depends on the distribution of the questions asked. I would hazard a guess that given the silly stuff average people ask ChatGPT in practice, it's already at over 98% factual accuracy.
i'm not so sure of that. this is likely the start of the sigmoid inflection curve of ai right now. the progress being made is crazy. look at that picture of the pope that got posted and got a bunch of people to believe that he was wearing some fancy parka. and that's just the now.
Even then, you have to know how to recognize that ChatGPT is feeding you made up information. In the case of these Citogenesis Incidents, 99% of the Wikipedia articles are legitimate. The trick is knowing what is the false 1%. How do you distinguish between the ChatGPT output that is true versus made up?
How? As can be seen from these Citogenesis Incidents, humans cannot even tell when other humans are making up stuff that sounds like it could be real. How will ChatGPT, et al do it?
Dystopia? Idiocracy? I don't know, but I don't like it.