Sunday, January 24, 2016

Economists should start calling them approximations

Dani Rodrik has put out a set of commandments (one list for economists, one for non-economists); I won't discuss them all, but one from each category made me think. I couldn't find a text copy, but a graphic was posted on Twitter.

Here are the two (econ and non-econ, respectively):
4. Unrealistic assumptions are OK; unrealistic critical assumptions are not OK.

2. Do not criticize an economist's model because of its assumptions; ask how the results would change if certain problematic assumptions were more realistic.
In a sense, these are two sides of the same coin: number 2 is how you'd go about determining whether an assumption is critical in the sense of number 4. Though they have their own internal logic, I have a problem with how they are phrased. However, if I rephrase them with approximation in place of assumption, I'd have only a small problem left (one I'll discuss more below). We'd have:
4. Unrealistic approximations are OK; unrealistic critical approximations are not OK.

2. Do not criticize an economist's model because of its approximations; ask how the results would change if certain problematic approximations were more realistic.

What is the difference between assumption and approximation?
assumption: something taken for granted; a supposition
approximation: a result that is not necessarily exact, but is within the limits of accuracy required for a given purpose.

The latter definition is really what economists like Dani Rodrik are going for. It's what's behind chemists' zero-volume atoms and physicists' frictionless planes. To describe an ideal gas to a first approximation, you don't need to know the volume of an atom or molecule.

But here is where I come to the small problem. The reason we know we don't need the volume of atoms in an ideal gas is that the ideal gas equation is empirically valid when the thermal de Broglie wavelength of the atoms is small compared to the typical distance between them (i.e. at low density). We can make the unrealistic "assumption" that atoms are zero-dimensional points because we tested it and it worked.
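For concreteness, here is a rough back-of-the-envelope check of that validity condition. The numbers (helium at room temperature and atmospheric pressure) are my own illustrative choices, not anything from the post:

```python
import math

# Physical constants (SI units)
h = 6.626e-34   # Planck constant, J s
k = 1.381e-23   # Boltzmann constant, J/K

# Illustrative example: helium gas at room temperature and pressure
m = 4.0 * 1.661e-27  # mass of a helium atom, kg
T = 300.0            # temperature, K
P = 101325.0         # pressure, Pa

# Thermal de Broglie wavelength: lambda = h / sqrt(2 pi m k T)
lam = h / math.sqrt(2 * math.pi * m * k * T)

# Mean interparticle spacing from the ideal gas law: n = P/(kT), d = n^(-1/3)
n = P / (k * T)
d = n ** (-1.0 / 3.0)

print(f"thermal wavelength:    {lam:.2e} m")
print(f"interparticle spacing: {d:.2e} m")
print(f"ratio:                 {lam / d:.3f}")
```

The ratio comes out around a few percent, so the classical point-particle picture is a good approximation here, and we know that because the resulting equation of state checks out empirically.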

In economics, we don't have a lot of empirically accurate theories [1] -- regardless of their "assumptions". Since macroeconomic data is uninformative, only the simplest macroeconomic models have any chance of being empirically accurate without over-fitting. This could be the reason economists use the word assumption instead of approximation: there are no positive empirical tests of how good the approximations are, so they really are just assumptions.
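As a toy illustration of the over-fitting point (entirely my own construction, not any actual economic model or data): a flexible model can match a short noisy series in-sample and still fail badly out of sample, which is exactly the trap when the data is too sparse to discipline model complexity.

```python
import numpy as np

rng = np.random.default_rng(42)

# A short, noisy toy "time series": linear trend plus noise
t = np.arange(20, dtype=float)
y = 0.5 * t + rng.normal(scale=1.0, size=t.size)

train, test = slice(0, 12), slice(12, 20)

def out_of_sample_error(degree):
    """Fit a polynomial on the first 12 points, score it on the last 8."""
    coeffs = np.polyfit(t[train], y[train], degree)
    pred = np.polyval(coeffs, t[test])
    return np.mean((pred - y[test]) ** 2)

simple = out_of_sample_error(1)    # two parameters: tracks the trend
complex_ = out_of_sample_error(9)  # ten parameters: fits the noise, then explodes

print(f"degree 1 test error: {simple:.2f}")
print(f"degree 9 test error: {complex_:.2f}")
```

The ten-parameter fit looks better in-sample but is far worse out of sample; with only a dozen observations, the simple model is the only one you can actually test.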

This means that although economists may know how the theoretical results would change (number 2, above), they have no strong empirical case for whether a given approximation is warranted or not. I would probably rephrase number 2 as: do not criticize an economist's model because of its approximations; criticize it for a lack of even order-of-magnitude empirical accuracy.

This is not to say everything economists do is wrong. Far from it. For example, on this blog I've tried to show how the unrealistic assumption of H. economicus can arise as a good approximation from irrational individual agents (as long as they don't coordinate, e.g. in a panic). And a lot of crossing diagrams and simple macroeconomic models follow from the approximation that random draws from the distribution of supply and random draws from the distribution of demand reveal the same amount of information (in my preprint).



Noah Smith has a new review of Rodrik's book up on his blog that is relevant. I added a comment:
There seems to be a tension between:
In doing so, [Rodrik] basically says "The evidence shows that norms often matter, and economists pay attention to the evidence." This demonstrates Rodrik's deep respect for data and evidence.
And this: 
[Rodrik] says that economics, unlike science, doesn't replace bad models with better ones - it just makes new models, expanding the menu of models that policy advisors have to choose from. That seems very true in practice. You rarely hear economists talk about models being "disproven", "falsified", or "rejected".
Paying attention to evidence to add to models (or make new ones) is important, but so is using evidence to reject models. It's my opinion, but you need both to truly respect the data. 
This is related to Noah's refrain about uninformative data: there isn't enough data to reject macro models. But that is partly because the models are too complex to be rejected given the paucity of data. Using data to increase the complexity of models (without rejecting simpler models) goes against George E. P. Box's advice that "all models are wrong":
Since all models are wrong the scientist cannot obtain a "correct" one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so **overelaboration and overparameterization is often the mark of mediocrity**. (1976)
Emphasis mine. I discuss this more in this earlier post.



[1] There are some fairly accurate "theory free" (though not really) econometric models over given periods of time.
