
> This is dangerous because people who have no knowledge of the science would blindly trust whatever it summarizes.

This says more about you than the hypothetical "people" you are talking about.




I don't think so. The average person already thinks (Chat)GPT is an all-knowing AGI homework-solver, and the problem only worsens if you add the airs of "science" to the situation.


I think we're in uncharted waters to some extent and should show some restraint in making predictions.


If you have used GPT's summary function, you know it can be outright wrong while sounding very plausible. With the amount of disinformation out there, people who are interested in science but want it the easy way could make things worse. Imagine the summary states that the results show a certain medication works well, but without the right statistical context: to a trained reader the effect might be only marginal, or not even statistically significant. Then they pass it along and start validating their own biases.
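To make that concrete, here's a minimal sketch with made-up numbers: a hypothetical trial where the drug arm looks better in the headline, but the 95% confidence interval for the difference spans zero, so the result isn't statistically significant.

    import math

    # Hypothetical trial: 54/100 improved on the drug vs 48/100 on placebo.
    p1, n1 = 54 / 100, 100   # drug arm
    p2, n2 = 48 / 100, 100   # placebo arm
    diff = p1 - p2           # 0.06 -- "the drug group did better"

    # Normal-approximation 95% confidence interval for the difference
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    low, high = diff - 1.96 * se, diff + 1.96 * se

    print(f"difference: {diff:.2f}, 95% CI: ({low:.2f}, {high:.2f})")
    # difference: 0.06, 95% CI: (-0.08, 0.20) -- the interval includes 0,
    # so "gives good results" would be a misleading summary here.

A summary that drops that context turns "marginal and inconclusive" into "good results".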


Really? The “you are projecting” replies? That’s cute.


You're assuming, not projecting.



