We can't predict how one person will react to a situation. But sometimes we can estimate, on average, how a large group of people will react.
Even a lot of engineering deals with similar types of uncertainty. We build huge structures out of metal, concrete, and wood on top of soils and other geological materials, often with the assumption that they are all homogeneous, with constant material properties throughout the structure (they aren't). We can get away with this because the variations in properties tend to average out as the material sample gets larger. If we try to make the same assumption at very small scales, we find a lot more uncertainty, and our predictions aren't going to be as accurate. We see this in physics and many other scientific fields as well.
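To make the averaging point concrete, here's a minimal sketch in plain Python (the strength numbers are made up purely for illustration) of how the spread of the sample average shrinks as you sample more of the material:

    import random

    def average_strength(n_samples, mean=30.0, spread=5.0):
        # Hypothetical concrete strength in MPa, varying from point to point.
        samples = [random.gauss(mean, spread) for _ in range(n_samples)]
        return sum(samples) / n_samples

    random.seed(0)
    for n in (1, 10, 100, 10000):
        estimates = [average_strength(n) for _ in range(200)]
        print(f"n={n:>5}: averages span {min(estimates):.2f} to {max(estimates):.2f} MPa")
    # The spread shrinks roughly like 1/sqrt(n): a large structure "sees" the mean,
    # while a tiny sample sees the full local variation.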
I think the real problem is that the humanities -- the social sciences in particular -- have proven themselves to consistently build bad models with unproven assumptions. These assumptions have then been leveraged to directly influence policy (instead of, say, trying a lot of different things and seeing what works).
Here's a concrete example of a paper [1] that just came out, attempting to impact COVID-19 policy, from a very "respectable" set of academics at Yale -- that is based on a flat-out fabricated economic model:
> We focus primarily on the moderate scenario. That is, our baseline assumption is that diminishing returns play a larger role than accelerating returns (so that α ≤ 1) but not so large that they lead to α < 0. We stress that U depends both on the variation in economic value attached to different activities and on the model governing the disease transmission.
Translation: we made some equations that make the BAD thing BAD and the GOOD thing GOOD.
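To spell that out, here's a hypothetical toy version (not the paper's actual equations -- the value function x**alpha and all the numbers are mine) of how the chosen range of alpha alone decides whether lockdown looks cheap or costly:

    def value_of_activity(x, alpha):
        # Hypothetical value function: economic value of keeping a fraction x
        # of normal activity. The "moderate scenario" amounts to choosing
        # alpha <= 1 up front.
        return x ** alpha

    for alpha in (0.5, 1.0, 1.5):
        kept = value_of_activity(0.5, alpha)  # lockdown keeps 50% of activity
        print(f"alpha={alpha}: 50% of activity retains {kept:.0%} of the value")
    # alpha=0.5 -> ~71% of the value retained (lockdown looks cheap),
    # alpha=1.5 -> ~35% retained (lockdown looks costly).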
> I think the real problem is that the humanities -- the social sciences in particular
This construction makes it seem like you view social sciences as a subset of humanities. I was taught that neither is a subset of the other.
Where did you get your impression from? I assume it’s not the university in your username, since I’m certain they don’t share this view. (Source: I’m married to a humanities professor at that university. Also: https://exploredegrees.stanford.edu/schoolofhumanitiesandsci...)
They make assumptions and then foreground them so the reader can understand them. What's wrong with that? Have you discovered a way of drawing conclusions without assumptions? Or do you think you can "just observe" without being guided by theory? What would a non-"fabricated" model look like?
The problem is that the assumption is not rooted in any empirical phenomenon. They have neither historical data to fit to (which would itself be tenuous) nor any sort of first-order justification for it.
It's like doing a car-crash safety simulation and stating that the airbag provides a protective force proportional to the force of impact raised to some alpha > 1... without ever measuring it or doing any calculations on why this would be the case. Would you drive in a car that was tested this way?
I'm too damn lazy to read the paper, so you may be right. But normally, we practice a division of labour. Theoreticians build models in the expectation that empiricists will (a) test them and (b) estimate key parameters. If these guys have said "we have the answer, this is what you should do" then maybe you are right.
I'm not saying that this phenomenon never happens, either. I just think it's rather rare, especially among sensible and skilled economists. See also the great Bob Sugden's paper "Credible Worlds", which is available here at the moment.
It's a paper on epidemiology that utilizes econometric thinking. I understand the theoretician / empiricist divide very well. But I chose this paper to illustrate because it is very clearly aiming to impact immediate policy (it states this in the abstract). Furthermore, it is not trying to create fundamentally new theoretical models... but rather to provide a justification for why lockdown is economically justified.
I hate to be the one to say this -- but most of modern economics is crap, given how reflexive it is. People know how the Fed acts -- they know how policymakers act. The models don't take this reflexivity into account.
Someone like George Soros is a thousand times more savvy on how the economy works than post-docs publishing econometrics papers for $40k a year.
Which fields are you referring to? I think of psychology as perhaps more rigorous, but none of the others (sociology, anthropology, political science).
Sociologists often complain that economists don't bother reading established peer-reviewed sociology and instead opt for their own guesswork, based on their own intuitive observations, whenever they attempt to incorporate sociology-lite into their writing.
This is a problem I find tiring about most studies, even in the sciences (excluding most physics and all mathematics). It seems as if most researchers are looking for a novel or interesting correlation, or to apply an interesting model to a new domain. Generally, I've found this puts insufficient effort into any kind of disconfirmation.
It makes sense, though, given the incentives most academic journals create for study authors. Nothing's sexy about disproving some random model's fit.
Replication and publishing negative results do not get the respect they deserve. They should be given the same attention and funding as novel results. That would fix the replication crisis.
Are they not published in the major journals, or are they not published at all? The latter would seem insane. "We've tried this, it didn't work, but nobody will know and somebody will try it again next week to find out that it doesn't work."
They get published in lower impact journals because it's not as sexy. They get less funding for similar reasons.
I think a change in bureaucratic structure might be needed, e.g. funding for every study should include funding for at least one independent replication.
Not only would the replication itself cull some of the false results, but knowing a replication was coming might make researchers more open and honest, i.e. less p-hacking, documenting more of their methods in greater detail, etc.
>We can't predict how one person will react to a situation. But sometimes we can estimate, on average, how a large group of people will react.
I think the issue here is very similar to the one in predicting the weather. The uncertainty of our models grows so quickly that we can only predict a relatively short time ahead (in the case of weather, it's about one week [0]). We can see patterns in human behavior, but the uncertainty just grows too quickly for long-term predictions. Small things that are hard to account for can make a big difference in where even large groups of people end up.
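As a rough illustration of how fast that uncertainty grows, here's a tiny sketch using the logistic map (a standard chaotic toy system, obviously not a real weather or behavioral model): two runs started a hair apart become unrelated within a few dozen steps.

    def logistic(x, r=3.9):
        # Standard chaotic toy map, chosen only to show sensitivity to initial conditions.
        return r * x * (1.0 - x)

    a, b = 0.500000, 0.500001   # two "forecasts" starting 1e-6 apart
    for step in range(1, 31):
        a, b = logistic(a), logistic(b)
        if step % 5 == 0:
            print(f"step {step:>2}: gap = {abs(a - b):.6f}")
    # The gap grows roughly exponentially until the two runs are unrelated,
    # no matter how accurate the model equations themselves are.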