
Imagine that five years from now, ChatGPT or one of its competitors reaches 98% factual accuracy in its responses. Wouldn't you want to rely on it to answer your questions?



Saying this in a discussion about Citogenesis is funny to me. How would you even determine "factual accuracy"? Just look at the list. There are many instances where "reliable sources" repeated false information which was then used to "prove" that the information is reliable.

As far as I am concerned AI responses will never be reliable without verification. Same as any human responses, but there you can at least verify credentials.


Imagine that in five years, we will have cold fusion, world peace and FTL travel. ChatGPT told me so it must be true!


Scroll down in TFA to the section called "terms that became real". When trolls or adversaries can use citogenesis to bootstrap facts into the mainstream from a cold start, what does "98% factual accuracy" even mean? At some point, you'd have to include the "formerly known as BS" facts.


It all depends on the distribution of the questions asked. I would hazard a guess that given the silly stuff average people ask ChatGPT in practice, it's already at over 98% factual accuracy.


Outside of maths and physics there is no such thing as factual truth.


> Outside of maths and physics there is no such thing as factual truth.

But that statement is neither, so it must be false…


Isn't there? I didn't attend MIT. That is a factual truth.


Isn't MIT known for its math and physics?


Well, technically, attending is about an object being in a particular position at a particular time, so physics.


Maths has no factual truths, only logical truths. Physics has no more or less factual truths than any other branch of science.



