Then maybe the problem is that the public expects published results to actually be true, an expectation exacerbated by the constant "hey, what peer-reviewed literature are you supporting your argument with?" If peer-reviewed literature is just a scratchpad for ad hoc ideas that may turn into something legit later, then the current standard is good enough, and we shouldn't be worrying that most of them are false.
OTOH, it's a problem if people are basing real-world decisions on stuff that hasn't reached textbook level certainty. That's pretty much what happened with dietary advice and sugars. "Two scratchpads say a high-carb low-fat diet is good? Okay, then plaster it all over the public schools."
I think this is the case, and it has been mentioned elsewhere in this thread. When I see a paper published that I am interested in, I have to fit it into the context of what I already know about the field, the standards of the journal it's published in, sometimes the origin of the paper (some labs are much less sloppy than others), and other factors.
For a recent personal example, a company published a paper saying that if you pretreat with two doses of a particular drug, you can avoid some genetic markers of inflammation that show up in the bloodstream and kidneys. Well, I looked at the stimulus they used and ordered some of my own from a different manufacturer that was easier to obtain, then gave it to some mice with and without pretreatment by their compound. Instead of looking at the genes they looked at, I looked for an uptick in a protein expected to be one step removed from the genes they showed a change in.

I haven't exactly replicated their study, but I've replicated the core points: stimulus with the same cytokine gives a response in a particular pathway, and it either is or isn't mitigated by the drug or class of drugs they showed. Now, my study took two days less than theirs, but it worked well enough that I don't need to fret over the particular details I did differently from them. If my study hadn't worked, I could either decide that the study isn't important to me if it doesn't work my way, or go back a step and try to match their exact reagents and methods.
So yes, I do think the news industry picks up stuff too quickly sometimes, but depending on the outlet, they tend to couch things in appropriate wiggle words (may show, might prove, could lead to, add evidence, etc).
Yes, the problem seems to be that the general public expects journals/conferences - which are essentially implemented by a research community as a tool for their ongoing research workflow - to also serve the goal of informing the general public. But there reasonably would/should/must be a gap of something like a year (or many years) between when a finding is initially published so that others can build on that research, and when the finding has been reasonably verified by other teams (which necessarily happens a significant time after publication) and thus is ready to be used for informing public policy.
It's like the stages of clinical research - there we have general standards on when it is considered acceptable to use findings for actually treating people (e.g. phase 3 studies), but it's clear that we need to publish and discuss the initial findings, since that's required to actually get to the phase 3 studies. However, the effects seen in phase 1 studies often won't generalize into something that actually works in clinical practice, so if the general public reads them, they'll often assume predictions that won't come true.