Multiverse, by Leo Villareal (National Gallery, DC) [my photo, 2011].
Update: TL;DR version
Theoretical frameworks organize well established empirical and theoretical results. Using that framework allows your model to be consistent with all of that prior art. Proceeding without a framework means you inevitably pick and choose models and empirical data based on implicit ad hoc theorizing ... basically feelings. Different models and empirical data = different universes.
Update, the second
I.
A response from John Handley, who seems to favor an ex ante judgment call approach, and who also comments below.
Regarding one of the comments, I am using "RBC" as a generic term for DSGE without sticky prices (which I think is what Wren-Lewis means). The first RBC model (Kydland-Prescott) was the first DSGE model.
II.
And a referral to Eric Lonergan on eclecticism (multiple models with different scopes). This is a great point:
I think the ex post/ex ante problem [Noah Smith] describes is a much more general problem, particularly in areas like economics where you cannot assume the “uniformity of nature”, to borrow a phrase from Hume. ... Now [uniformity of nature] is definitively not true in much of economics, because the structure of the economy is changing. It is highly likely that a model which did explain wage behaviour in the 1970s – and had predictive power in the 1970s – is no longer valid.

I would agree that this may truly be the case, which would render my favored "framework" approach moot. But a) if macro models are only valid a decade at a time, you could never gather enough data to demonstrate any model's validity, b) that claim is itself an implicit theory of how economies operate, and c) I think it represents a kind of resignation. We don't have any specific evidence that you can never have a big theory/framework, or one that operates across decades. We haven't really exhausted the set of possible frameworks, so it is premature to stop trying for a big theory. A lack of imagination (or a framework being difficult to find) is not proof that such frameworks don't exist. I make a similar point in this post (just substitute "state-dependent" with "eclectic") using the example of quantum mechanics. If physicists had decided to just say physics was state dependent, we never would have had the new framework of quantum mechanics. But as Eric says, lack of state dependence is a more intuitive assumption in physics than in economics, which is a great counterpoint.
Eric does say something else I think is "great, if true":
In fact, that is precisely why an eclectic approach is more rigorous – it requires us to define the regime in which the theory is applicable (perhaps requiring valid micro-foundations) – we are making no claim to universal validity.

That would be great! But economic models never seem to state these regimes (scope conditions). Models are presented as if they apply in any decade, for any country, for upwards of 20 years at a time (on that last point, check out Woodford's presentation [pdf], which has a graph that goes out 20 years and talks about expectations at infinity).
...
Original post
Simon Wren-Lewis reviews Dani Rodrik's Economics Rules and is pretty positive about it. It seems there is no change from the early interpretation, and I've written about some of the issues involved before (which I will reference alongside quotes from Wren-Lewis's latest below). The great thing is the specific example Wren-Lewis gives of the general idea he put forward, which I talk about here. Here's Wren-Lewis:
Let me give you a simple example from macro. How do we know if most economic cycles are described by Real Business Cycles (RBC) or Keynesian dynamics. One big clue is layoffs: if employment is falling because workers are choosing not to work we could have an RBC mechanism, but if workers are being laid off (and are deeply unhappy about it) this is more characteristic of a Keynesian downturn. This simple test beats any amount of formal econometric comparison.
Noah Smith's point about ex ante versus ex post is relevant -- almost damning. This is literally figuring out which model to use ex post. The only reason the ex post reasoning is not completely beyond the pale is the quote discussed below: Wren-Lewis is a doctor, not a physicist. However, there is more to this; ask yourself some questions:
Q: What is the typical difference between RBC DSGE models and NK DSGE models?
A: Sticky prices/wages.

Q: What are the reasons to use sticky prices/wages?
A: Claimed to be observed experimentally and necessary to make monetary policy non-neutral in DSGE models.

Q: How are sticky prices implemented in NK DSGE models?
A: Usually ad hoc Calvo pricing (aka the Calvo fairy), but also other models.

Q: What's wrong with the Calvo fairy (and other models)?
A: They don't look anything like real price changes.
In fact, Martin Eichenbaum, Nir Jaimovich, and Sergio Rebelo (2008) "argue that our evidence is inconsistent with the three most widely used pricing models in macroeconomics: flexible price models, standard menu cost models, and Calvo-style pricing models." This isn't just some study that will be overturned by another study some day ... it's referenced in David Romer's graduate textbook Advanced Macroeconomics. It is well known that these models are empirically unrealistic.
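To see why, here is a minimal sketch of the Calvo mechanism (the parameter values are illustrative picks for the example, not a calibration): every period each firm gets to reset its price with a fixed probability, independent of how misaligned its current price is. Against a rising trend, every simulated price change is a small step up, with no price decreases and no temporary sale prices, which is exactly the mismatch with the micro data noted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def calvo_paths(n_firms=10_000, n_periods=120, theta=0.75, pi=0.005):
    """Simulate log-price paths under Calvo pricing: each period a firm
    resets to the current (trend) optimal price with probability
    1 - theta, otherwise its price stays frozen. theta = 0.75 implies
    an average price spell of 1 / (1 - theta) = 4 periods."""
    optimal = pi * np.arange(n_periods)          # trend log optimal price
    prices = np.zeros((n_firms, n_periods))
    for t in range(1, n_periods):
        reset = rng.random(n_firms) < (1 - theta)
        prices[:, t] = np.where(reset, optimal[t], prices[:, t - 1])
    return prices

changes = np.diff(calvo_paths(), axis=1)
nonzero = changes[changes != 0]

# Changes are infrequent (~25% of prices per period), always upward
# along the trend, and small -- no 50% sale prices anywhere.
print("fraction of prices changing each period:", round((changes != 0).mean(), 3))
print("share of price decreases:", (nonzero < 0).mean())
print("median absolute change:", round(float(np.median(np.abs(nonzero))), 4))
```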
So a mechanism (Calvo pricing) that is inconsistent with empirical price data is either applied (NK) or not applied (RBC) based on which version of the model would be consistent with some other empirical data on unemployment. In this case, the empirical data on unemployment (whether unemployment is voluntary) is considered more important in setting the scope of the theory than the empirical data on prices because ... because why?
There is no good ex ante reason in this case for weighting some empirical results more than others, so we have not only an ex post reason to choose the model, but an ex post weighting of different empirical results. We haven't just decided the NK model works better for policy prescriptions; we've decided we live in a different theoretical and empirical universe. We can pick and choose different theories and different empirical results ex post depending on subjective judgments ("craft").
PS Now I'm sure someone is thinking of the graphs I reproduce here that show a big spike at zero nominal wage change. If you look at those graphs, however, only about 12% of the labor force is in the zero-change bin, rising to about 16% during a recession. The remaining 84-88% of the work force has wages changing by as much as 20%. Much like in the case of prices above -- where sale prices can change the price of a good by 50% instantly -- this really can't be considered micro stickiness. Some workers are getting wage cuts of 20%. Why doesn't that head off involuntary unemployment? It's not part of any economic theory, but I have an answer (in the link at the beginning of this paragraph). It's called macro stickiness, and it stems from the fact that there are still wage increases of 20% and there is no coordination that allows more wage cuts than raises (the distribution of wages doesn't change, so its average doesn't fall). There's an entropic force (a pseudo-force) that arises from the resistance of the distribution to change away from the most likely distribution. I go into this more in my paper. And it can even rehabilitate the Calvo fairy -- the Calvo fairy becomes an effective description of macro stickiness.
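Here is a minimal simulation of that macro-stickiness point (the zero spike and the ±20% range echo the numbers above; the rest of the distribution is a stand-in, not a fit to data): individual wages move a lot, including deep cuts, but because raises and cuts are uncoordinated and offsetting, the average wage barely budges.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # workers

# Stand-in wage-change distribution: ~12% exactly at zero (the spike),
# the rest spread symmetrically out to roughly +/- 20%.
at_zero = rng.random(n) < 0.12
changes = np.where(at_zero, 0.0,
                   rng.normal(0.0, 0.08, n).clip(-0.20, 0.20))

wages = np.full(n, 50_000.0)
new_wages = wages * (1.0 + changes)

# Lots of micro flexibility (deep individual cuts) ...
print("share with wage cuts:       ", (changes < 0).mean())
print("largest individual cut:     ", changes.min())
# ... but the aggregate is stuck: without coordinating more cuts than
# raises, the distribution (and hence its mean) doesn't move.
print("change in the average wage: ", new_wages.mean() / wages.mean() - 1.0)
```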
Here are some more posts from me on nominal rigidity:
Micro stickiness versus macro stickiness
Macro prices are sticky, not micro prices
Nominal rigidity is an entropic force
...
PPS Below are some additional quotes from Wren-Lewis alongside some additional commentary from me.
Rodrik spends a good part of the book describing how you ‘navigate among models’. He warns that these methods are as much a craft as a science. Many have picked up on that, presuming that this is something that a proper science would not do. But as I have often said, the best analogies for economics are with medicine rather than physics. When a doctor diagnoses an illness based on symptoms, they could also be said to be using craft rather than science.
There is much more to this than it just being something "a proper science would not do". Without specific conditions for when a model applies, an empirical result that invalidates the model invalidates it under any conditions. Not having the different rooms in your building (theory) well-separated by fire walls and fire doors (scope conditions) means that a fire in one part burns the entire building down. A case of a minimum wage hike in New Jersey not impacting employment (along with several other cases) kills the Econ 101 theory of supply and demand in an open market everywhere, because the Econ 101 theory of supply and demand has no scope conditions.
Even doctors abide by this. If there is some theory of disease X in general and clinical trials or experiments show it doesn't work for disease X, doctors won't use it for disease X. Well, they might, but people tend to get angry and sue if they do ...
The entire theory of disease X is burned down. But since there are other diseases, limiting a theory to a specific disease is an example of a scope condition -- those empirical results about disease X don't necessarily impact theories of disease Y or disease Z.
And doctors nearly always have scope conditions (because they are scientists). Your theory of how neurons work is limited to neurons; it isn't presented as a theory of how all cells work. Macro models, like those DSGE models above, are nearly always presented as if they always apply -- or at least as if they always apply in the short run ... which I talk about next.
We built this theory on scope conditions...
It is routine, for example, to split issues up by time: the famous short, medium and long run. A New Keynesian model is not going to tell you much about long run growth, but a Solow growth model does not tell you much about involuntary unemployment.
Simon gives us the only scope conditions economics appears to have. These appear to be so embedded that "growth economics" (i.e. the long run) is basically a separate field from business cycle macro. And this is great! A negative empirical result for a NK DSGE model doesn't burn down the Solow growth model. Ladies and gentlemen: We have a scale!
Except we don't really know what these scales are. They are ad hoc separations created by theoretical fiat. For example, the Solow growth model contains no statement that it only applies in the long run. Actually, it makes certain assumptions about the short run -- effectively saying short-run macro fluctuations (the business cycle) either don't happen or average to zero over a very short time scale (less than a couple of years). There is probably some way to make sense of this, but economists don't seem very concerned with treating scales properly in order to take the long-run or short-run limits.
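To make that concrete, here is one way the implicit scope condition could be written down (my notation and my gloss; nothing like this appears in the model itself). The Solow accumulation equation feeds actual output into investment, i.e. it assumes full employment at every instant; allowing a business-cycle fluctuation ε(t) around full employment, the model is only recovered if ε averages away on time scales short compared to the model's own convergence time:

```latex
% Standard Solow accumulation per effective worker:
%   \dot{k} = s f(k) - (n + g + \delta) k
% With a fluctuation \epsilon(t) around full-employment output, a
% candidate scope condition making the "short run averages out"
% assumption explicit:
\[
\dot{k} = s\,f(k)\bigl(1 + \epsilon(t)\bigr) - (n + g + \delta)\,k,
\qquad
\frac{1}{T}\int_{0}^{T} \epsilon(t)\,dt \approx 0
\quad \text{for} \quad T \ll \frac{1}{n + g + \delta}.
\]
```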
Some day my long run will come
On limits
...
Lots of people get hung up on the assumptions behind models: are they true or false, etc. An analogy I had not seen before but which I think is very illuminating is with experiments. Models are like experiments. Experiments are designed to abstract from all kinds of features of the real world, to focus on a particular process or mechanism (or set of the same). The assumptions of models are designed to do the same thing.
Models are not like experiments. Economists remove external effects and confounding factors in order to isolate the system by theoretical fiat. This does not work without a big, successful theoretical framework. For example, physicists can set up thermodynamic thought experiments in textbooks, isolating the system by theoretical fiat, because thermodynamics had already been shown to be a successful theory. And the reason physicists were able to set up experiments that isolated the system in the first place was that the way to isolate the system (with physical distance and barriers) luckily corresponded to our intuition. Macroeconomics is not intuitive, so any attempt to isolate the system using assumptions is problematic. Essentially, models should be considered not "experiments" but "thought experiments in alternate universes" ... where we don't really know if the universe under consideration is our universe.
Some more on this subject from these two posts:
What's wrong with Dani Rodrik's view of economics
Thought experiments in alternate universes
Jason,
Regarding Wren-Lewis on RBC vs. NK, the relative roles of RBC (which I will henceforth refer to as neoclassical, more on that in a moment) and NK models are pretty obvious to me simply based on their assumptions. Wren-Lewis is wrong when he suggests neoclassical and NK models should be used based on ex-post realizations from the data. The correct way to select between NK and neoclassical models is to determine what is being analyzed.
If we are trying to explain business cycles, then neoclassical models are useless. This is because they omit what most economists agree is the most important part of business cycles: nominal rigidity.
If we are trying to explain economic growth, then sticky prices become a complicated distraction, however. We know that NK models exhibit long-run neutrality of money (sort of), so there is no need to incorporate nominal rigidity into models of economic growth over decade-long horizons.
This selection has nothing to do with the data and everything to do with the object of analysis. It is superfluous to include sticky prices as a friction in a model that attempts to explain cross-country income differences; it would make modelling more difficult and obfuscate the true cause of differences in income.
Unfortunately, I don't seem to represent economics as a field, so there are apparently a lot of people (Smith and Wren-Lewis included) who seem to think that the models we choose to use depend on the situation (e.g., whether unemployment happened because of layoffs or quits) as opposed to the context (e.g., whether we are studying business cycles or working with the distributional effects of fiscal policy).
On a slightly different note, I'm not sure your point about models in economics being "thought experiments in alternate universes" rather than just "thought experiments" really matters. Everyone should know that the assumptions in any commonly used macro model are not correct in our universe, but why does this matter? So what if the assumptions are obviously wrong; all we're doing is asking "what would happen if x, y, and z were true about the universe?" Any policy implications we draw from that depend on 1) how robust the result of our thought experiment is to changes in assumptions and 2) whether or not those assumptions are at least approximately accurate. Are prices sticky? Yes? All right then, changes in the money supply can cause changes in real variables. Do people generally like to consume stuff and not like to work? Yes? All right then, there's probably a labor-leisure trade-off. What's wrong with this kind of reasoning? After all, no one is actually trying to come up with an economic model that explains everything in economics (except perhaps you).
To be more specific about why 'neoclassical' is better than 'real business cycle': empirically there appears to be no such thing as a 'real' business cycle in the neoclassical sense of the word, so using neoclassical models for business cycle analysis is like trying to explain business cycles by abstracting from business cycles. RBC is completely defunct; neoclassical economics is not.
John, interesting comments. I figured you had a formal education in economics (at least), since you always seem to have knowledgeable-sounding comments here and other places. But the title of your blog suggests otherwise. And then there's this tagline:
Delete"A fifteen year old teaches himself macroeconomics"
???
I'll turn sixteen in 3 months, if that answers your question. Otherwise, economics is just a hobby I picked up about a year ago that I'm considering making a career out of. Being really nerdy has its benefits, I guess.
Sorry John... I would never have imagined you were really that age.
Well, in any case, I enjoy reading your comments.
No offense was taken, sorry if my comment implied that.
Thanks.
John, one quick point:
There is no problem using obviously wrong microfoundations if, in aggregation, the 'wrongness' doesn't matter. I think that tends to be true in general (in the MaxEnt approach, the details of the specific microfoundations -- random behavior -- integrate out, so they don't matter).
But this has to be shown. And if it isn't, then you really can't trust that bad microfoundations aggregate to good theory.
John, no troubles. I'm surprised is all!
Jason,
I get the impression you and I have a completely different view of what constitutes a "good theory."
You, as I would expect from a physicist (or any non-social scientist, for that matter), seem to value empirical success much more than I do. Personally, I think empirical validity is only consequential in models that try to be quantitatively useful (e.g., Smets-Wouters 2007). Otherwise, empirical validity is too much to expect from the simple thought experiments that economists use to qualitatively assess and suggest policy. I don't expect much empirical validity from a basic NK model, but I do expect it to tell me what monetary policy should be like, given a set of assumptions. In other words, NK models can and should inform me that inflation stabilization is a good target for monetary policy and that the ideal rate of inflation is relatively low (negative, in fact, so long as downward nominal wage rigidity isn't an issue and there is some form of money demand friction). I don't expect NK models to give me any quantitative analysis.
Or, to use another example, neoclassical models clearly won't fit an economic time series very well because, according to these models, there should never be unemployment, all economic fluctuations are caused by either the government or nebulous shocks to productivity, and any correlation between inflation and output is completely spurious. Despite all this, I will happily use a neoclassical DSGE to analyze fiscal policy in a situation in which monetary policy is not relevant, e.g., the annoying conservative Ricardian Equivalence argument against deficit-financed stimulus: http://ramblingsofanamateureconomist.blogspot.com/2015/12/shut-up-about-ricardian-equivalence.html
All this is not to say that empirical validity is completely unimportant -- obviously, using Market Monetarism as an example, the complete failure of QE to do just about anything anywhere serves to validate the liquidity trap over the Market Monetarist 'the BOJ, Fed, BOE, ECB, etc. really want low inflation and prolonged slumps' nonsense. Conversely, though, the last 7 years do nothing, as some would suggest, to invalidate New Keynesian economics in general.
Hi John,
I am as interested in a toy model as the next person (and there are lots of toy models in physics). By toy model, I mean something that isn't meant to be a precise description of real data, but rather something that gives insight into the magnitudes and directions of effects. A prime example is the Einstein solid model. I think that is how you view these econ models. I think there are different levels of empirical validity in the context of toy models:
1. Matching the data (i.e. various goodness of fit measures)
2. Being consistent with features of the data
3. Not being consistent with features of the data but it doesn't matter
4. Being broadly inconsistent with the data
When I say that Calvo pricing is not empirically valid, I am saying it is #4, not just failing #1. It could be #3, but that would have to be shown theoretically (straightforward to do with a model -- use Calvo pricing alongside some actual data as input and see the differences). If your model is #4, and not shown to be #3, then you can't really trust what you're getting out of it.
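Here is a sketch of that check (entirely hypothetical scaffolding; the "real-like" micro changes below are a stand-in for actual resampled price data): run the same aggregation over Calvo-generated micro price changes and over sale-heavy micro changes, and compare the aggregate paths. Agreement would put Calvo pricing in bin #3; divergence would leave it in #4.

```python
import numpy as np

rng = np.random.default_rng(7)
n_firms, n_periods = 5_000, 48

def aggregate_inflation(micro_changes):
    """Aggregate inflation as the cross-firm mean of micro log-price changes."""
    return micro_changes.mean(axis=0)

# Calvo-style micro changes: a firm adjusts with probability 0.25,
# taking a small step toward the trend; otherwise its price is frozen.
adjust = rng.random((n_firms, n_periods)) < 0.25
calvo = np.where(adjust, 0.02, 0.0)

# Stand-in for real micro data: the same trend adjustments plus
# frequent large temporary 'sale' moves of +/- 50%.
sale = rng.random((n_firms, n_periods)) < 0.10
real_like = (np.where(adjust, 0.02, 0.0)
             + np.where(sale, rng.choice([-0.5, 0.5], (n_firms, n_periods)), 0.0))

gap = np.abs(aggregate_inflation(calvo) - aggregate_inflation(real_like))
# Small gap => the micro 'wrongness' integrates out (case #3);
# large gap => it contaminates the aggregate (case #4).
print("max gap between aggregate inflation paths:", gap.max())
```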
As an aside, one way to interpret a neoclassical model is as an approximation where unrate << 1. That kind of stuff is fine ... it's when they leave off the unrate << 1 scope condition that I have a problem.
PS Examples of each level ...
1. Quantum mechanics
2. Ising model
3. Einstein solid
4. The aether
... even the ITM takes (u - u*) << 1 sometimes!
I can't be bothered to defend Calvo pricing too much, since I refuse to use it myself, but, based on the typical stated reason for its adoption, it is equivalent, to a first-order approximation, to menu cost models, just arguably more tractable (I would beg to differ; menu cost models are much easier to work with, IMO).
Also, I'm not sure it's even possible to come up with a sticky wage model that 1) relies on monopolistically competitive labor markets and 2) uses menu costs. This would explain why every sticky wage DSGE I've ever seen uses Calvo wage setting. Of course, monopolistically competitive labor markets are strange in and of themselves, so I'd probably put them under #4 on your list. It becomes a bit of a problem when economists set out to make a sticky wage model and use whatever assumptions they need to get there. I suppose you could make a similar criticism of sticky price models, but at least monopolistically competitive firms actually exist as the predominant kind of firm in the world.
Another area in which neoclassical models are obviously wrong: perfect competition. Will switching to monopolistic competition change a neoclassical model's implications for the efficacy of fiscal stimulus? No.
These big picture posts are always fun to read. Have you read Dani Rodrik's book? I'm guessing "no."
Simon says he didn't find anything to disagree with in it.
Not yet. But as the different reviews haven't said anything that contradicts the others, I think it is safe to assume I'm not mischaracterizing the book. The major point about different models for different scenarios comes from Dani himself [youtube].