I don't think so. The average person already thinks (Chat)GPT is an all-knowing AGI homework-solver, and the problem only worsens if you add the airs of "science" to the situation.
If you have used GPT's summarization feature, you know it can be outright wrong while sounding very plausible. With the amount of disinformation out there, people who are interested in science but want it the easy way could make things worse. Imagine the summary states that the results show a certain medication works well, but without the right statistical context the effect could be only marginal, or not even statistically significant to a trained reader. Now they pass it along and start validating their own biases.
This says more about you than the hypothetical "people" you are talking about.