
> No scientist would trust a result that depends on a single small sample paper.

Unfortunately, they would. There are papers with thousands of citations that don't even have data samples, just models based on assumptions.




Citations alone don't indicate how a paper is actually used.

Maybe a lot of scientists really liked the proposed 'model', or some discussion of its assumptions, and cited it in their own papers proposing additions or follow-on work.

That is also fine.


When I checked how people were citing these useless papers, almost invariably it would be in a sentence like this:

"Computational modelling is a useful technique for predicting the course of epidemics [1][2][3][4][5]"

The cited papers wouldn't actually support the statement because they'd all be unvalidated models, but citing documents that don't support the claim is super common and doesn't seem to bother anyone :( Having demonstrated a "consensus" that publishing unvalidated simulations is "useful", they would then go ahead and do another one, which would then be cited in the same way ad infinitum.



