
This is a problem with journalism and politics, not really with science. No scientist would trust a result that depends on a single small-sample paper. Such papers are just stepping stones that may justify further research toward more robust evidence. This is quite clear to scientists, and it's why most would discourage the general public (including smart engineers) from reading academic articles.

But in general, I agree with you. It's ridiculous when someone tries to shut down a complex issue by citing a random paper. However, an expert can still survey the whole academic literature on a topic and determine what the scientific consensus is and how confident we should be in it.




> No scientist would trust a result that depends on a single small sample paper.

Unfortunately, they would. There are papers with thousands of citations that don't even have data samples, just models based on assumptions.


Citation counts alone don't indicate how a paper is used.

Maybe a lot of scientists really liked the proposed 'model', or some discussion of its assumptions, and cited it in their own papers proposing additions or follow-on work.

That is also fine.


When I checked how people were citing these useless papers, almost invariably it would be in a sentence like this:

"Computational modelling is a useful technique for predicting the course of epidemics [1][2][3][4][5]"

The cited papers wouldn't actually support the statement, because they'd all be unvalidated models. But citing documents that don't support the claim is super common and doesn't seem to bother anyone :( Having demonstrated a "consensus" that publishing unvalidated simulations is "useful", they would then go ahead and do another one, which would in turn be cited in the same way, ad infinitum.


I disagree. A scientist could read a single paper and find out n is small, or identify a flaw.

But there are loads of papers like this. Then you have literature reviews and meta-analyses which look at all these papers together and aggregate their results.

Then you get some “proper” studies which cite these aggregates, plus several small studies, and you're going to read these “proper” studies because they're quoted often and deemed decent or good quality.

And at no point will you realise it's all built on shoddy foundations.

This is, for example, what recently happened in social psychology with the replication crisis.





