
I've taken PhD Econometrics where we touched zero data and the course was 100% theoretical over the nine months, so I don't think we are talking about the same thing. I don't like spewing economically technical nonsense on HN, so I wasn't going to go into the method of moments part of that comment as it applies to the current state of econometric theory.

In econometric theory, method of moments comes up most prominently in the form of the generalized method of moments (GMM). GMM is a competitor to maximum likelihood, the point being that GMM doesn't force you to make arbitrary assumptions about the true probability distribution of the data purely to make the model estimable. This is obviously attractive, because then the results of our model won't be jeopardized just because one of our assumptions was false. In other words, GMM provides a way to estimate the parameters of a model without making assumptions about the population.
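
To make that concrete with a toy sketch of my own (hypothetical simulated data, nothing rigorous): matching sample moments pins down parameters like a mean and variance no matter what the true distribution is, whereas an MLE has to commit to a specific distribution before it can even be written down.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical data from a skewed, decidedly non-normal distribution
    y = rng.exponential(scale=2.0, size=10_000)

    # Method of moments: match the first two sample moments.
    # No distributional assumption is needed to justify this step.
    mm_mean = y.mean()
    mm_var = ((y - mm_mean) ** 2).mean()

    # A Gaussian MLE happens to produce the same two numbers, but its
    # justification rests on assuming normality, which these data violate.
    print(mm_mean, mm_var)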

Oh, and the topic of rational agents is not relevant here. This is a purely statistical/philosophical argument.

But there goes the economist in me again... I'm sorry.




I've taken just as many stats classes that don't touch data; I don't think either of us wants to argue pedagogy on HN (fuck, I didn't even want to spell it and it's likely I didn't), but I don't think there's a huge difference in how Econ and stats departments teach the same material or use terminology (there are big differences in the material selected, obviously).

This statement can't be true: "GMM provides a way to estimate the parameters of a model without making assumptions about the population." You need assumptions, just different assumptions. Frequently those assumptions involve agent rationality, but not always (after all, MLE is a special case of GMM).
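
For what it's worth, a sketch of the sense in which that parenthetical holds, using toy simulated data and the normal model: the MLE first-order conditions (the score equations) are themselves moment conditions, and an exactly identified GMM built on them simply solves them, reproducing the MLE.

    import numpy as np
    from scipy.optimize import fsolve

    rng = np.random.default_rng(1)
    y = rng.normal(loc=3.0, scale=2.0, size=5_000)

    # Score equations of the normal log-likelihood, treated as moment conditions.
    # With exactly as many conditions as parameters, GMM just solves them.
    def score_moments(theta):
        mu, sigma2 = theta
        g1 = np.mean(y - mu)                  # d logL / d mu       = 0
        g2 = np.mean((y - mu) ** 2 - sigma2)  # d logL / d sigma^2  = 0
        return [g1, g2]

    gmm_est = fsolve(score_moments, x0=[0.0, 1.0])
    mle_est = [y.mean(), y.var()]             # closed-form normal MLE (ddof=0)
    print(gmm_est, mle_est)                   # same estimates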


"MLE is a special case of GMM"

You have it backwards. MLE is actually a special case of GMM.

I'm done here.


You seem to have just repeated the quote? Presumably you mean "GMM is a special case of MLE"?


The mistake is on my side, I apologize. I read what the poster said backwards myself. :) The repeated quote is a true statement: MLE is a special case of GMM.


GMM requires a weights matrix, which implicitly does the same thing that distributional assumptions do in MLE (and you can replicate most MLE models in GMM by an appropriate choice of weights matrix).
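
A rough sketch of what that weighting does, with made-up toy data (identity weights here; efficient two-step GMM would estimate W instead): stack the sample moment conditions into a vector g(theta) and minimize the quadratic form g(theta)' W g(theta), where W decides which violations are penalized most, which is the role distributional assumptions play in MLE.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    n = 2_000
    x = rng.normal(size=n)
    z1 = x + rng.normal(size=n)          # two toy instruments, so the
    z2 = x + rng.normal(size=n)          # system is over-identified
    y = 1.0 + 2.0 * x + rng.normal(size=n)

    # Three moment conditions, two parameters (a, b):
    # E[e] = 0, E[z1*e] = 0, E[z2*e] = 0 at the true values.
    def gbar(theta):
        a, b = theta
        e = y - a - b * x
        return np.array([e.mean(), (z1 * e).mean(), (z2 * e).mean()])

    def gmm_objective(theta, W):
        g = gbar(theta)
        return g @ W @ g                 # quadratic form: W sets the "closeness" metric

    W = np.eye(3)                        # identity weights for the sketch
    est = minimize(gmm_objective, x0=[0.0, 0.0], args=(W,)).x
    print(est)                           # roughly (1, 2)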


As a Bayesian, this sounds very intriguing. Could you expand upon GMM/provide a good reference?


Good reference: For most things in econometrics, the publicly available lecture notes by Jeff Wooldridge and Guido Imbens are excellent. I don't recall their GMM notes specifically, but that'd be a good place to start (just google for it).

Super-quick explanation: Write down a model, and a bunch of conditions that should be true at the correct parameter values.

As a trivial example, the conditions might be that the errors/residuals are orthogonal to the explanatory variables. Put mathematically,

    E[X' * epsilon] = 0

where X is the matrix of explanatory data and epsilon is your vector of errors. Since the errors are a function of the parameters (beta), you can find the optimal parameter values by solving your moment conditions.

You frequently have more conditions than parameters, so the conditions can't all be true at once. Then the computer tries to get the conditions as close to true as possible... where closeness is defined by how you weigh the importance of each individual condition.

Ideally the conditions arise transparently from the model. Executed poorly, it can seem ad-hoc.
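
To make this concrete, here's a toy example of my own (simulated data, exactly identified case so no weighting choices arise): solving the sample version of the orthogonality conditions above reproduces ordinary least squares.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 1_000
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 regressors
    beta_true = np.array([1.0, 2.0, -0.5])
    y = X @ beta_true + rng.normal(size=n)

    # Sample analogue of E[X' * epsilon] = 0:
    #   X'(y - X beta) / n = 0   =>   beta = (X'X)^{-1} X'y
    beta_mom = np.linalg.solve(X.T @ X, X.T @ y)

    # Same answer as ordinary least squares: exactly identified GMM
    # just solves the moment conditions, whatever weights you pick.
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta_mom, beta_ols)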

A quick google search should give better explanations than my response in the comments :)



