As a counterpoint, I'm a scientist whose work occasionally gets media attention.
I was definitely worried about my name being featured in the middle of absolute nonsense, but the experience has usually been good: very few of the journalists completely missed the mark. The "tone" didn't always match what I was trying to get across, but it was usually close to the attitude of someone in the field.
There were some minor errors, and a lot of them came from a surprising source: many publications believe it's unethical to show "copy" (the complete article) to a source, or sometimes to anyone outside the newsroom. Some scientific terms have nuances that aren't immediately obvious to people outside the field. For example, mine distinguishes between "inhibition" of neural activity, which involves specific molecular mechanisms (mostly GABA), and "suppression," which could be anything. This distinction probably isn't obvious even to an attentive "general" fact-checker.
many publications believe it's unethical to show "copy" (the complete article) to a source, or sometimes to anyone outside the newsroom
It seems like you didn't quite complete this thought. It sounds like your point is that inaccuracies creep in because journalists won't give the people with the relevant expertise a chance to vet an article's accuracy?
I'm not the same commenter, but I completely agree based on previous experience in journalism. Many journalists who write about research findings work at the same publications that publish general news (e.g. national politics). So many science journalists are held to newsroom policies where they can't share drafts with sources before publication, to avoid bias. This is highly relevant when sharing drafts with a politician, but much less relevant when sharing articles with a scientist.
Some newsrooms do have exceptions for scientific expertise, or allow wiggle room so that experts can verify whether quotes or sections of the article are accurate, rather than reviewing the whole draft. This is a decent compromise if a publication allows it, though I'm personally in favor of a more trusting relationship between journalists and scientists for typical articles on research findings (unless the article is investigative).
Well-funded magazines (e.g. The New Yorker) also get around this by employing fact-checkers with strong scientific backgrounds. This is probably the best solution that preserves editorial independence while avoiding shared drafts, but there's not a lot of money in media and writing as it is, so it's not a realistic option for the vast majority of publications (especially when even big magazines have been cutting funding for their fact-checking teams, shifting more responsibility for accuracy onto editors and journalists).
My experience when my work was covered was that the articles were generally decent, but the titles or initial claims were overblown ("clickbait"). I was often contacted directly by more reputable organizations to comment and explain, and they didn't simply regurgitate the press release. There were also many, many websites that were just carbon copies of press releases, most of which I had never heard of before and whose purpose I didn't really understand.