Tuesday, May 2, 2017

The reason for the proliferation of macro models?

Noah Smith wrote something that caught my eye:
One thing I still notice about macro, including the papers Reis cites, is the continued proliferation of models. Almost every macro paper has a theory section. Because it takes more than one empirical paper to properly test a theory, this means that theories are being created in macro at a far greater rate than they can be tested.
This is fascinating, as it's completely unheard of in physics. Nearly every theory or model in a physics paper falls into one of four categories:

  1. It is compared to some kind of data
  2. It predicts a new effect that could be measured with new data
  3. It is included for pedagogical reasons
  4. It reduces to existing theories that have already been tested

I'll use some of my own papers to demonstrate this:

https://arxiv.org/abs/nucl-th/0202016
The paper above is compared to data. The model fails, but that was the point: we wanted to show that a particular approach would fail.
https://arxiv.org/abs/nucl-th/0505048
The two papers above predict new effects that would be measured at Jefferson Lab.
https://arxiv.org/abs/nucl-th/0509033
The two papers above contain pedagogical examples and math. The first has five different models, but only one is compared to data. The second is more about the math.
Finally, in my thesis linked above, I show how the "new" theory I was using connects to existing chiral perturbation theory and lattice QCD.
Of course, the immediate cry will be: What about string theory! But string theory is about new physics at scales that can't currently be measured, so most string theory papers fall under 2, 3, or 4. If all these macroeconomic models were supposed to be about quantities we can't measure yet, then the string theory comparison might have a point.

Even Einstein's paper on general relativity showed how the theory could be tested, how it explained existing data, and how it reduced to existing theories:

  1. Reducing to Newton's law of gravity
  2. A new effect: the bending of light rays by massive objects
  3. Explaining the precession of Mercury's perihelion
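For concreteness, here are the standard textbook expressions for those three results (a sketch in modern notation, not Einstein's original presentation):

  1. Newtonian limit: in the weak-field, slow-motion limit, $g_{00} \simeq -(1 + 2\phi/c^2)$ and the field equations reduce to Newton's $\nabla^2 \phi = 4\pi G \rho$.
  2. Light bending: a ray passing a mass $M$ at impact parameter $b$ is deflected by $\delta\varphi = 4GM/(c^2 b)$, about 1.75 arcseconds for a ray grazing the sun.
  3. Perihelion precession: an orbit with semi-major axis $a$ and eccentricity $e$ advances by $\Delta\varphi = 6\pi GM/(c^2 a(1-e^2))$ per orbit, which works out to about 43 arcseconds per century for Mercury.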

I'm sure there are exceptions out there, but the rule is this: if you come up with a theory, you have to show how it connects (or could connect) to data or to other existing theories, or you say up front that you're just working out some math.

In any case, if you have a new model that can or should be tested with empirical data, the original paper should contain the first test. Additionally, the model should pass that first test; otherwise, why publish? "Here's a model that's wrong" is not exactly something that warrants publication in a peer-reviewed journal except under particular circumstances [1]. And those circumstances are basically the ones that occur in my first paper listed above [2]: you are trying to show that a particular model approach will not work. In that paper, I was showing that a relativistic mean-field effective theory approach in terms of hadrons cannot produce the type of effect that was being observed (motivating the quark-level picture I would later work on).
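For what it's worth, the comparison step itself is usually trivial. Here is a minimal sketch in Python, with a made-up exponential model and hypothetical data arrays standing in for a real theory and real measurements:

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical model: suppose the paper's theory predicts y = a * exp(-b * x).
    def model(x, a, b):
        return a * np.exp(-b * x)

    # Hypothetical measurements with uncertainties (stand-ins for real data).
    x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
    y = np.array([2.1, 1.6, 1.2, 0.95, 0.71, 0.56, 0.43])
    sigma = np.full_like(y, 0.05)

    # Fit the model's free parameters to the data.
    params, _ = curve_fit(model, x, y, sigma=sigma, absolute_sigma=True)

    # Goodness of fit: reduced chi-squared is ~1 if the model is consistent with the data.
    resid = (y - model(x, *params)) / sigma
    chi2_red = np.sum(resid ** 2) / (len(x) - len(params))
    print(f"a = {params[0]:.2f}, b = {params[1]:.2f}, reduced chi^2 = {chi2_red:.2f}")

A reduced chi-squared near one means the model passes its first test; a huge value means it fails. The point is that this step costs almost nothing relative to building the model in the first place.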

The situation Noah describes is just baffling to me. You supposedly had some data you were looking at that gave you the idea for the model, right? Or do people just posit "what-if" models in macroeconomics ... and then continue to consider them as ... um, plausible descriptions of how the world works ... um, without testing them???

...

Footnotes:

[1] This is not the same thing as saying don't publish negative results. Negative empirical results are useful. We are talking about papers with theory in them. Ostensibly, the point of theory is to explain data. If it fails at its one job, then why are we publishing it?

[2] When I looked it up for this blog post, I found another paper that I didn't know about, which demonstrates a similar result (about the Hugenholtz-van Hove theorem [pdf]) and was published three months later in the same journal:

https://arxiv.org/abs/nucl-th/0204008

4 comments:

  1. "people just posit "what-if" models in macroeconomics ... and then continue to consider them as .... um, plausible descriptions of how the world works ... um, without testing them?"

    Well, I guess that sums up what mainstream economics is all about. (Maybe I could say economics in general?)

    It's all about a bunch of untested models that heavily influence public opinion and political decisions...

    1. I think econ 101 influences public opinion much more than the latest DSGE model, but yeah.

  2. This is from: Werner, Richard (2005), New Paradigm in Macroeconomics, Basingstoke: Palgrave Macmillan

    “The return of inductivism

    We found that the main macroeconomic theories have two features in common, apart from their insufficient empirical track record. Firstly, they share the deductive research methodology which does not primarily base the development of theories on empirical observation, but instead emphasizes axioms and theoretical postulates that may be far removed from reality. The predominance of this methodology is virtually unique among the academic disciplines. Secondly, they are based on the traditional quantity equation linking money to the economy. There are good reasons why the natural sciences are based on the inductive research method. Of course, the inductive research method does not exclude deductive processes. In fact, inductivism uses deductive logic, but it places priority on empirical data and has sequenced research tasks such that empirical work is allowed to lay the foundation for the development of theories, which are then also tested, suitably modified and applied to reality. Such an approach meets the criteria for gaining knowledge and wisdom far better than deductivism, which has dominated economics in the English language. Nevertheless, if deductive mainstream economics had been empirically successful, one might have wished to tolerate its unusual methodology. However, the fact that major challenges exist to the fundamental tenets of macroeconomics means that the deductive approach cannot be sustained. This is not to say that deductive, mainstream economics has not served any purpose. As we saw in the Prologue, it has taught us many things, including how highly unrealistic the theoretical environment must be in order to obtain market clearing and a situation where government intervention in markets will always be inefficient. Furthermore, mainstream economics has developed a rich tool-kit for the economic sciences, which will prove useful also for the new paradigm. Also, it has proven far more fruitful in microeconomics and applied disciplines, including finance. Thus there can be no doubt that mainstream economics has advanced knowledge. However, economists should aspire to acquiring a degree of wisdom. The empirical approach of a new paradigm requires data. By the time data becomes available, it is about the past. Thus history provides the data set upon which theories should be built. The solutions offered in this part of the book are the result of years of conducting empirical research according to the inductive methodology, and the desire to explain the true cause of things, as best as possible, without being beholden to any preconceived idea or ideological blinkers.”
    https://goo.gl/M9prsg

    1. Yeah, deductivism is a pretty good way to put it.

