I don't know if you've ever been subject to scientific peer review. One of the most frustrating things is reviewers who misunderstand the paper but are convinced they've found some sort of problem with it. Admittedly, these issues usually point to places where the paper could be clearer. Still, the point stands: it is far easier to misunderstand something than it is to understand it and find a flaw.
I would be very, very suspicious of any scientist claiming to have found a flaw in a work they've only been given a superficial description of. It may be accurate to say they have a hunch about where a flaw might be, but without really spending time with a paper it would be foolish to claim any degree of certainty. Only very low-quality papers can be rejected so quickly.
There was an interesting study recently showing that prediction markets actually did a better job than journals of identifying 'fake' papers in the social sciences. [1][2] A group of researchers took 21 papers published in Nature and Science from 2010 to 2015; 13 of them replicated. The prediction market correctly identified all 13 replicable papers and 5 of the 8 that failed to replicate, and gave the remaining 3 roughly 50/50 odds. One caveat: even among the studies that did replicate, the observed effect size was, on average, about 50% of the originally stated effect size.
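To put a rough number on the market's track record, here's a back-of-the-envelope tally in Python. The counts come from the study above; scoring the three 50/50 papers as half-credit is my own simplifying assumption:

    # Rough tally of the prediction market's track record on the 21 papers.
    # Counts are from the study cited above; half-credit for the three
    # toss-up papers is an assumption for illustration.
    replicated_called = 13   # all 13 replicable papers identified correctly
    failed_called = 5        # 5 of the 8 non-replicating papers identified
    toss_ups = 3             # the remaining 3 were given ~50/50 odds
    total = 21

    score = replicated_called + failed_called + 0.5 * toss_ups
    print(f"approximate hit rate: {score / total:.0%}")  # -> 93%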
I think there's long been a perception that the social sciences are heavily influenced by people who take whatever their biases are, design an experiment specifically to confirm them, and then tweak the numbers or the experiment's parameters until it does. This goes all the way back to (and certainly before) Zimbardo's Stanford Prison Experiment. There was nothing organic there. Participants, both prisoner and guard, were heavily coached on how to act and, in their own words, saw the experiment more as an acting role than as emergent behavior. The replication results suggest this perception is accurate.
In a society where people are increasingly radicalizing on social views, we ought to expect the social sciences to become even more dysfunctional in the years to come. This, in turn, casts a dangerous cloud over the rest of science, since people tend to extrapolate from behavior in the social sciences to science as a whole. In my opinion we need to draw a strong distinction between science that is falsifiable, makes meaningful predictions, and is driven exclusively by direct experimental results, and not-quite-science that is based on models and abstract experimentation, is not falsifiable, and does not provide meaningful predictions. By meaningful I mean this: the point of a prediction is not to have something to encourage political action with, as is often the case in social science, but to serve as a litmus test for the accuracy of a hypothesis. If the prediction holds, the hypothesis gains support; if it fails, the hypothesis is falsified. Without falsifiability, predictions are worthless.
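That asymmetry is the whole point, and a toy sketch makes it explicit. The numbers, the tolerance check, and the function here are purely hypothetical, just to show the one-directional logic:

    # Toy sketch of falsifiability as a litmus test (hypothetical values).
    # A falsifiable hypothesis commits to a concrete prediction in advance:
    # a failed prediction refutes it; a passed one only strengthens it,
    # never proves it.
    def test_prediction(predicted: float, observed: float, tolerance: float) -> str:
        if abs(predicted - observed) <= tolerance:
            return "prediction held: hypothesis survives (not proven)"
        return "prediction failed: hypothesis is falsified"

    print(test_prediction(predicted=9.8, observed=9.81, tolerance=0.1))
    print(test_prediction(predicted=9.8, observed=12.4, tolerance=0.1))

A hypothesis that can never land in the second branch, no matter what is observed, isn't telling you anything.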
The replication crisis is larger than the social "sciences". Microbiology is also affected, and I've even seen evidence that computer science is occasionally affected, though to nowhere near the same extent and for different reasons.