I think one of the points the article tries to make is that proving something in a soft science using models often rests on many assumptions, whether implicit or explicit, and that this is problematic. I agree with that to some degree.
In the hard sciences, all inputs to a proof are either theorems known to be true, conjectures/hypotheses (in which case the proof itself becomes conjectural), or, more rarely, axioms. In the soft sciences, on the other hand, it is common to construct models quite arbitrarily in order to match empirical results. If we would like these models to carry any indication of "absolute" truth, similar to the hard sciences, we currently can't, or at least don't.
To achieve this, I believe we could do an input analysis of ALL assumptions and try to quantify the aggregated certainty of the model's correctness, even before matching it against empirical data. That way we could say, for example: we have used a model with a predicted input accuracy of 0.82, which matches our empirical results at 0.97, p < 0.05. This would further strengthen, and quantify, the "standing on the shoulders of giants" principle.
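As a minimal sketch of what I mean, naïvely assuming the assumptions are independent and that each comes with its own confidence score, the aggregated input accuracy is just their product. All the names and numbers below are hypothetical:

```python
# Hypothetical sketch: fold the confidence of a model's input
# assumptions into a single "input accuracy" score. Assumes the
# assumptions are independent, which real assumptions rarely are.

def input_accuracy(confidences):
    """Product of independent assumption confidences."""
    acc = 1.0
    for c in confidences:
        acc *= c
    return acc

# Three assumptions, each individually quite plausible:
assumptions = {
    "linear_response": 0.95,
    "no_selection_bias": 0.92,
    "stable_population": 0.94,
}

print(input_accuracy(assumptions.values()))  # ~0.82
```

Even this toy version illustrates the issue: a handful of individually plausible assumptions already drags the aggregate down to around 0.82.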
Of course this is easier said than done, and I know it's a bit naïve. As far as I know, no techniques currently exist to do this. There is also a discussion to be had about how to interpret model outputs (we now have three variables: how do we relate them? how do we calculate the model's output accuracy?) and how to calculate a subsequent model's accuracy from its different input accuracies and their inter-relations, as in the sketch below. To be useful, this would also require rebuilding the soft sciences from the bottom up (starting from the most easily verifiable facts), plus a new science of hypothesized model accuracy calculation.
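For the chaining part specifically, a crude sketch (again assuming independence, and arbitrarily treating one model's output accuracy as just another input confidence to the next) might look like this; the way the three variables are combined here is entirely made up:

```python
# Hypothetical sketch of quantified "shoulders of giants":
# a downstream model inherits the output accuracy of each model it
# builds on, folded in with its own fresh assumptions. Independence
# is assumed throughout, which is the weakest link of the idea.

def output_accuracy(input_accuracy, empirical_fit):
    """One (arbitrary) way to relate the two: their product."""
    return input_accuracy * empirical_fit

def chained_input_accuracy(upstream_outputs, own_assumptions):
    """Treat upstream models' output accuracies as extra assumptions."""
    acc = 1.0
    for a in list(upstream_outputs) + list(own_assumptions):
        acc *= a
    return acc

# Model A: input accuracy 0.82, empirical fit 0.97
model_a_out = output_accuracy(0.82, 0.97)            # ~0.80

# Model B builds on A and adds two assumptions of its own:
model_b_in = chained_input_accuracy([model_a_out], [0.90, 0.95])
print(model_b_in)                                    # ~0.68
```

The numbers degrade quickly down the chain, which is arguably the point: the accumulated uncertainty in a stack of soft-science results would become explicit instead of implicit.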
Anyway, enough thought experiments for the day. Any thoughts?