Let's say you are measuring IQ (whatever it indicates) in some giant between-subjects design or meta-analysis which also includes some number of "independent variables" outside your experimental control. Then you do some kind of multiple regression.
What you are doing is building a very high-level, black-box model: finding maximum-likelihood parameters that fit some observed data.
Each underlying data point is a slice of a snapshot of the behavior of a very complex system.
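To make that concrete, here's a minimal sketch with entirely synthetic data; the variable names (parent_iq, ses) are hypothetical stand-ins, not anyone's real design. Under Gaussian noise, the ordinary least squares solution is exactly the maximum-likelihood fit for a linear model:

```python
# Minimal sketch: all data synthetic, all names (parent_iq, ses) hypothetical.
# For a linear model with Gaussian noise, ordinary least squares IS the
# maximum-likelihood fit -- this is the whole black box.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
parent_iq = rng.normal(100, 15, n)                         # one "independent variable"
ses = 0.5 * (parent_iq - 100) / 15 + rng.normal(0, 1, n)  # correlated with it
child_iq = 100 + 0.4 * (parent_iq - 100) + 3.0 * ses + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), parent_iq, ses])          # design matrix + intercept
weights, *_ = np.linalg.lstsq(X, child_iq, rcond=None)     # ML estimate of the weights
print(weights)
```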
This is worthwhile insofar as it accounts for variance we might see in future samples, so we can predict and thereby confirm that we have some understanding of the system. It doesn't actually give us visibility into the complex system, because regression terms rarely correspond to anything concrete. It offers only the vaguest possible constraint on attempts to decompose the overall effect into causal factors.
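Concretely, "accounting for variance in future samples" just means the fit holds up on data the model never saw. A sketch, under the same synthetic setup as above:

```python
# Sketch of the "future samples" test: fit on one synthetic sample, score on
# a fresh one. Names and numbers are hypothetical, as above.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    parent_iq = rng.normal(100, 15, n)
    ses = 0.5 * (parent_iq - 100) / 15 + rng.normal(0, 1, n)
    child_iq = 100 + 0.4 * (parent_iq - 100) + 3.0 * ses + rng.normal(0, 10, n)
    return np.column_stack([np.ones(n), parent_iq, ses]), child_iq

X_train, y_train = sample(1000)
X_new, y_new = sample(1000)                        # the "future sample"
w = np.linalg.lstsq(X_train, y_train, rcond=None)[0]
resid = y_new - X_new @ w
r2 = 1 - np.mean(resid**2) / np.mean((y_new - y_new.mean())**2)
print(r2)                                          # share of variance accounted for
```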
The weights in the regression do not, individually, have a meaningful interpretation outside of the model. (The implication and meaning of a term in a theory depend on the meanings of the other terms in the theory.)
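You can watch this happen: fit the same synthetic data with and without one correlated predictor, and the weight on the other predictor changes, because each weight only means "association given everything else in this particular model". A sketch:

```python
# Sketch: the weight on parent_iq depends on whether ses is in the model,
# because the two are correlated. Same hypothetical synthetic data as above.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
parent_iq = rng.normal(100, 15, n)
ses = 0.5 * (parent_iq - 100) / 15 + rng.normal(0, 1, n)
child_iq = 100 + 0.4 * (parent_iq - 100) + 3.0 * ses + rng.normal(0, 10, n)

def fit(*cols):
    X = np.column_stack([np.ones(n), *cols])
    return np.linalg.lstsq(X, child_iq, rcond=None)[0]

print(fit(parent_iq))       # parent_iq weight ~0.5: it absorbs the ses effect
print(fit(parent_iq, ses))  # parent_iq weight ~0.4: "controlling for" ses
```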
But for some reason we decide to disregard this and become primarily interested in which weight is bigger. And the reason we are interested is to show which ideological school is right or wrong, where the ideological schools are some version of "DNA is so important" and "environment is so important". If one school is vindicated, then what? Is everything they say true?
We are testing hypotheses, not people. To badly paraphrase Popper: we send our theories to die in our stead...
Are you getting an idea of what I am saying about nature vs. nurture here or are we just talking past each other?
The value of such a simple model is not that it directly tells you anything; it's that when it stops working, you have a strong hint that something interesting is going on. Suppose you find a town where IQ is 7 points above what you would expect after accounting for the children's socioeconomic situation AND their parents' IQ scores. Sounds interesting, right? Except that without the proper weights you can't do that analysis in the first place.
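For what it's worth, here's what that residual check looks like on synthetic data where I've planted the anomalous town myself, so every number here is hypothetical:

```python
# Sketch of the residual check: fit the baseline model, then look for towns
# whose mean residual is far from zero. Data and the +7 town are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_towns, per_town = 50, 40
town = np.repeat(np.arange(n_towns), per_town)
n = n_towns * per_town
parent_iq = rng.normal(100, 15, n)
ses = 0.5 * (parent_iq - 100) / 15 + rng.normal(0, 1, n)
child_iq = 100 + 0.4 * (parent_iq - 100) + 3.0 * ses + rng.normal(0, 10, n)
child_iq[town == 7] += 7.0                         # the planted "interesting" town

X = np.column_stack([np.ones(n), parent_iq, ses])
w = np.linalg.lstsq(X, child_iq, rcond=None)[0]    # the "proper weights"
resid = child_iq - X @ w
means = np.array([resid[town == t].mean() for t in range(n_towns)])
print(means.argmax(), means.max())                 # town 7, roughly +7 points
```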