Tuesday, February 14, 2017

Qualitative economics done right, part 2a

When did insisting on comparing theory to data become anything other than incontrovertible? On my post Qualitative economics done right, part 2, I received some pushback against this idea in comments. These comments are similar to comments I've seen elsewhere, and they represent a major problem in macroeconomics, embodied by the refrain that the data rejects "too many good models":
But after about five years of doing likelihood ratio tests on rational expectations models, I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models.
The "I" in that case was Tom Sargent. Now my series (here's Part 1) goes into detail about why comparison with data is necessary even for qualitative models. But let me address a list of arguments I've seen used against this fundamental tenet of science.

"It's just curve fitting."

I talked about a different aspect of this here. But the "curve fitting" critique seems to go much deeper than a critique of setting up a linear combination of a family of functions and fitting the coefficients (which does have some usefulness per the link).

Somehow any comparison of a theoretical model to data is dismissed as "curve fitting" under this broader critique. However, this conflates two distinct processes, and I think it reflects a misunderstanding of function spaces. Let's say our data is some function of time d(t). Because some sets of functions fₐ(t) form complete bases, any function d(t) (with reasonable caveats) can be represented as a vector in that function space:

d(t) = Σₐ cₐ fₐ(t)

An example is a Fourier series, but given some acceptable level of error, a finite set of terms {1, t, t², t³, t⁴, ...} can suffice (like a Taylor series, or a linear, quadratic, etc. regression). In this sense, and only in this sense, is this a valid critique. If you can reproduce any set of data, then you really haven't learned anything. However, as I talk about here, you can constrain the model complexity in an information-theoretic sense.
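As a concrete sketch of this kind of "curve fitting" (with entirely made-up data, not anything from an actual model), keep adding polynomial basis terms and the residual always shrinks, no matter what the data is, so a good fit by itself teaches you nothing:

```python
import numpy as np

# Made-up "data" d(t): a wiggly function plus noise (purely illustrative)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
d = np.sin(5 * t) + 0.1 * rng.standard_normal(t.size)

# Fit with an increasingly large polynomial basis {1, t, t^2, ...}
residual = {}
for degree in (1, 5, 15):
    coeffs = np.polynomial.polynomial.polyfit(t, d, degree)
    fit = np.polynomial.polynomial.polyval(t, coeffs)
    residual[degree] = float(np.sum((d - fit) ** 2))

# Nested bases guarantee the residual only goes down as terms are added --
# the fit improves for *any* data, so nothing is learned about the theory.
print(residual)
```

Only an external constraint on model complexity (information criteria, as discussed above) rescues this from being vacuous.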

However, this is not the same situation as saying the data is given by a specific functional form f(t) with parameters a, b, c, ...:

d(t) = f(t|a, b, c, ... )

where the theoretical function f is not a complete basis and where the parameters are fit to the data. This is the case of e.g. Planck's blackbody radiation model or Newton's universal gravity law, and in this case we do learn something. We learn that the theory that results in the function f is right, wrong or approximate.
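A minimal sketch of the distinction, using synthetic data standing in for measurements: the theory fixes the functional form (here an inverse-square power law, loosely in the spirit of Newton's law), and only the parameters are estimated. If the estimated exponent came out far from 2, the theory would be wrong; that falsifiability is what makes this estimation rather than curve fitting:

```python
import numpy as np

# Synthetic "measurements" of a power law F = A / r**n with a little noise.
# The true values A = 3, n = 2 are assumptions of this sketch.
rng = np.random.default_rng(1)
r = np.linspace(1.0, 10.0, 50)
F = 3.0 / r**2 * np.exp(0.01 * rng.standard_normal(r.size))

# The theory dictates the form; taking logs turns it into a line:
# log F = log A - n log r, so ordinary least squares estimates A and n.
slope, intercept = np.polyfit(np.log(r), np.log(F), 1)
n_hat = -slope
A_hat = float(np.exp(intercept))

# A two-parameter family cannot match arbitrary data, so agreement
# (or disagreement) with the data tells us something about the theory.
print(n_hat, A_hat)
```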

In the example with Keen's model in Part 2 above, we learn that the model (as constructed) is wrong. This doesn't mean debt does not contribute to macroeconomic fluctuations, but it does mean that Keen's model is not the explanation if it does.

A simple way to put this is that there's a difference between parameter estimation (science) and fitting to a set of functions that comprise a function space (not science).

"It shows how to include X."

In the case of Keen, X = debt. There are two things you need to demonstrate in order to show that a theoretical model tells you how to include X: the model needs to fail to describe the data when X isn't included, and it needs to describe the data better when X is included.

A really easy way to do this with a single model is to take X → α X and fit α to the data (estimate the parameter per the above section on "curve fitting"). If α ≠ 0 (and the result looks qualitatively like the data overall), then you've shown a way to include X.
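The α-test above can be sketched in a few lines. Everything here is a hypothetical stand-in (synthetic series, not Keen's actual model or data): the model output with the new term scaled is y₀ + α·x, and α is estimated by least squares:

```python
import numpy as np

# Hypothetical series standing in for the model and data (all made up):
# x is the candidate term X (e.g. dD/dt), y0 the model output without X.
rng = np.random.default_rng(2)
t = np.arange(40)
x = np.sin(0.3 * t)
y0 = 0.02 * t
data = y0 + 0.5 * x + 0.05 * rng.standard_normal(t.size)

# Least-squares estimate of alpha in: data - y0 = alpha * x
alpha_hat = float(np.dot(x, data - y0) / np.dot(x, x))

# alpha_hat far from zero (with a fit that qualitatively tracks the data)
# supports including X; alpha_hat near zero says X adds nothing.
print(alpha_hat)
```

In this synthetic setup the estimate recovers a nonzero α because the "data" was built to contain the X term; with real data, α ≈ 0 would be the verdict against including X.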

Keen does not do this. His ansatz for including debt D is Y + dD/dt. It should be Y + α dD/dt.

"It's just a toy model."

Sure, that's fine. But toy models nearly always a) perform qualitatively well themselves when compared to data, or b) are simplified versions of much more complex models, where the more complex model has been compared to data and performed reasonably well. It's fine if Keen's debt model is a toy model that doesn't perform well against the data, but then where is the model that performs really well that it's based on?

"It just demonstrates a principle."

This is similar to the defense that "it's just a toy model", but somewhat more specific. It is only useful for a model to demonstrate a principle if that principle has been shown to be important in explaining empirical data. Therefore the principle should have performed well when compared to data. I used the example of demonstrating renormalization using a scalar field theory (as it's done in some physics textbooks). This is only useful because a) renormalization was shown to be important in understanding empirical data with quantum electrodynamics (QED), and b) the basic story isn't ruined by going from a theory with spinors and a gauge field to a scalar field.

The key point to understand here is that the empirically inaccurate qualitative model is being used to teach something that has already demonstrated itself empirically. Let's put it this way:

After the churn of theory and data comes up with something that explains empirical reality, you can then produce models that capture the essence of the theory that captures reality. Or more simply: you can only use teaching tools after you have learned something.

In the above example, QED was the empirical success that led to using scalar field theory to teach renormalization. You can't use Keen's models to teach principles because we haven't learned anything yet. As such, Keen's models are actually evidence against the principle (per the argument in the curve fitting section above). If you try to construct a theory using some principle and that theory looks nothing like the data, then that is an indication that either a) the principle is wrong or b) the way you constructed the model with the principle is wrong.

2. Well, in the last post you said private debt doesn't matter.

In your own analogy, it's not whether phi-fourth is just scalar and can't explain spin etc. It's that you are rejecting it for relativistic invariance, principles of quantum mechanics and so on.

1. Did not say that.

Said it has not been proven to matter and in fact said several times that it may end up mattering but we don't know.

No. Not rejecting phi-4 theory; it is toy model to explain wildly successful principles of QED.

Keen's model is not pedagogical/toy version of any wildly successful macro theory (because there aren't any!)

3. That looks kind of like what Steve Keen did.

Fisher has theory of debt deflation; Keen looks at the data and shows correlation between credit growth and house prices; Biggs/Meyer/Pick develop theory of Credit Impulse; Keen then refines theory to instead show correlation between credit acceleration and change in prices; builds toy model in his Minsky software.

1. Keen does not think it is a toy model in the paper I deal with at the Part 2 link above. He says it's the beginnings of the correct theory.

But even if it were the case, the toy model is coming before the correct macro theory. The correlations in credit/house prices may be a useful finance model, but the link to GDP, unemployment, etc. has not been made empirically.

As such Keen's model should still have to qualitatively look like the data. It does not.

4. Jason,

As far as I understand string theory has not been tested against data (is this correct?) but has been on the books for something like 50 years. Why is it that physicists have written mountains of papers on the subject and had untold conferences at which it was discussed yet it has not been validated empirically?

It seems physicists (well at least you) have one rule for physics and another for economics.

1. Also: used string theory to calculate black hole entropy.

There are only indirect tests because the string scale is well above what we can reach.

If economists wanted to build esoteric mathematical models about things that we could not collect data about, that would be fine. But we can see things like inflation and interest rates.