Mark Thoma says economics is changing
But it's hard to deny that the questions [economists] are asking have gone through a considerable evolution since the onset of the recession, and when questions change, new models and new tools are developed to answer them.
I'm sure many people do not share this view -- they think change isn't coming fast enough or isn't radical enough. Me? I'm fine with this degree of change; I just worry about its direction. Thoma's post is all about adding complexity and relaxing assumptions of rationality. Is this a good thing?
Before I continue, let me define some terms I'll use.
- Zero-order theory: This kind of theory at best tells you orders of magnitude. Mostly it provides answers to questions like: Does inflation exist? Is growth positive or negative? What is money? The quantity theory of money is a zero order theory (see the example after this list). These orders of magnitude and answers aren't necessarily empirically correct. You can have an empirically correct (i.e. right) zero-order theory (within scope) or a wrong zero-order theory. You also can't tell the difference between measurement error and real fluctuations at this order.
- First order theory: This theory tells you orders of magnitude and directions of effects. Again, this can be right or wrong (within scope). The IS-LM and AD-AS models are (in some respects) first order theories. At this point you can usually tell the difference between what is a real fluctuation and what is a measurement error (at least given the first order theory).
- Higher order theory: This basically means you've gotten things right -- you have an empirically accurate first order theory -- and now you are trying to expand scope.
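To make the zero order idea concrete (a sketch using the quantity theory mentioned above): the equation of exchange MV = PY, with the growth of velocity V and real output Y taken as roughly constant, gives in growth rates

π ≈ μ

i.e. inflation is the same order of magnitude as money growth. That is enough to pick out hyperinflations and answer "Does inflation exist?", but not to distinguish a real quarter-to-quarter fluctuation from measurement error.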
My mental picture is a "Taylor expansion" of a theory T with a scale x₀ (that sets scope) -- the scope of the zero order theory is e.g. δx << x₀ while the scope of the first order theory is e.g. δx² << x₀². A higher order theory can handle most values of δx. It looks something like this:
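T(x) ≈ c₀ x₀ + c₁ (δx/x₀) + c₂ (δx/x₀)² + ...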
Each term added is a new effective theory with increased scope. Also note that a framework in a sense tells you how to go from zero order theory to first order to higher order (e.g. the scales and how the theories relate to one another). It's not absolutely necessary, but it is extremely useful because it organizes all of the empirical successes of zero order (and higher) theories.
And finally, a theory that gets better (by some metric) as you proceed from zero order to higher order is a progressive research program. One that doesn't get better (n.b. it doesn't actually have to get worse) is a degenerative research program.
There was a dominant macro paradigm that wasn't all that empirically accurate, seemed to predict that something like the global recession would not occur, and treated inequality as just a matter of differences in effort. Economics as a profession seems to believe this paradigm was a first order theory. That is what I gather from Mark Thoma's post.
The financial crisis of 2008 was a striking failure of that paradigm. Note that we don't need to know if a financial crisis causes or is the effect of macro fluctuations to assert this. Thoma tells us what that pre-2008 paradigm said:
Our modern financial system couldn't crash like those antiquated systems that were around during and before the Great Depression. There was no need to include it in our macro models, at least not in any detail, or even ask questions about what might happen if there was a financial crisis. ... In the past, the Taylor rule was praised as responsible for the Great Moderation. We had discovered the key to a stable economy.
So 2008 was not the kind of event that happens in that paradigm, and now monetary policy is under scrutiny. But, Thoma says, economics is changing -- by adding complexity: new monetary policy goals, financial sectors, and asymmetric bargaining power.
But if your first order theory fails at its first order tasks (within scope), you don't go constructing a second order theory from it. You really only have three choices: 1) reduce the scope of the existing theory, 2) try other first order theories, or 3) go back to zero order. We can consider the first two options as represented by Dani Rodrik (using models only for their original purpose) and Paul Krugman (a "return to depression economics" like the IS-LM model), respectively.
But the existing theory was never very empirically accurate, so reducing its scope makes it all the more useless. And the general lack of scope conditions in economic models means we can't really choose which other first order theories apply ex ante, only ex post. Macroeconomists become "economic doctors" in Simon Wren-Lewis's vision.
That is the existential crisis macroeconomics is finding itself in. The only choice left is the one that is hardest for an established field to take: go back to the drawing board and come up with some new zero order theories [1].
Footnotes
[1] That's where the information equilibrium framework comes in. It represents a new framework for constructing zero order theories (and expanding them to first order).
"T(x) ≈ c₀ x₀ + c₁ (δx/x₀) + c₂ (δx/x₀)² + ..."
I think you referred to the constants c₀, c₁, c₂ etc. as "natural constants" once before. Or cautioned that they should be. What does that mean?
That the cᵢ are order 1. But that isn't particularly relevant to this discussion because you only know that if you have an empirically accurate theory.
What it implies if c₂ ~ 0.00001 or c₂ ~ 100,000 is that the scale x₀ is probably wrong and you have some other scale setting c₂ (e.g. some kind of agent behavior).
I wouldn't preclude unnatural coefficients because there might be agent details subsumed into the coefficients.
The crisis in economics has arisen largely because the values that have driven New Classical economics are akin to those of the jungle. Economic theory has been rationalized from a foundation in neoliberal values for 40 years. Economic policy has co-opted supply side economics exclusively and failed dismally. Neoliberal values find easy identity with supply side economics. Economic discourse is so stilted these days that Genghis Khan looks like a socialist.
Forget about your orders of theory, rather reorder and prioritize your foundational values. Economics, despite what many pretend, cannot be and is not divorced from normative considerations. It's no wonder that it was once known as Political Economy. Attempts to sanitise economics of its normative aspects and have it be respected as a mathematical science were the beginning of the end and used merely to disguise the existence of the underlying value system at its base.
Henry
Moreover, because economies have been based on neoliberal values for 40 years, the economic data coming from them approximate to the theory.
Essentially you only have data of a man in a straitjacket, and have no idea what the movements would be if the jacket was removed.
Therefore any 'new' theory is going to be difficult to test in the real world, since there is only a restricted movement data set to work with.
Henry,
That is why I am trying to develop an economic theory of random agents that has zero normative implications. There is nothing an ideal gas should be doing, nothing it ought to do.
If random behavior leads to supply and demand, there can't be any morality behind it.
Neil, you said:
"economic data coming from them approximate to the theory"
That's pretty sad if these empirically inaccurate theories still can't describe data that's been cajoled into behaving like those theories ...
Jason,
I don't entirely understand the way you formulate your economic model. However, I would say your endeavour in economic atomization is fraught. Not all economic behaviour, if any at all, is random. I can't see how it can be, given human behaviour and the foibles of human beings, even in the large.
I actually agree with this. Human behavior is critical to market failures. In a sense (in this formulation), markets are working if (and only if) human behavior doesn't matter. For example, herding behavior leads to falling output and prices.
Atoms don't spontaneously panic and move to one side of the container, causing a massive fall in entropy. Because this doesn't happen, we have the "second law of thermodynamics" with entropy always increasing.
Humans however can spontaneously panic and try to enter the same economic state (like selling assets). This does happen with regularity ... And so the model I formulate here is basically "thermodynamics without a second law". It allows the atoms to 'panic'.
Whether this is a useful formulation (versus empirical data) remains to be seen ...
Jason,
On the one hand you say you have eliminated normative issues because you assume atomized random behaviour, and on the other you say you allow for mass behaviour. I'm a little confused.
In any event, I think you are not very clear on what I mean by normative aspects. You seem to be saying that normative aspects apply to individual economic agents, such as consumers or investors. This is not what I mean by normative aspects. Normative aspects refer to issues such as the value of fiscal policy vis-à-vis monetary policy, the nature and ramifications of inequality, or perhaps the nature and value of market power intervention, etc.
To say, “There is nothing an ideal gas should be doing, nothing it ought to do”, I still find problematic. That might be fine if you want to model an ideal gas. I can hardly see the application to the real world. Economists pretend to believe (or fool themselves into believing) that positive economics is all that matters. It doesn’t take too much scratching to discover that behind every policy prescription always lies a value judgement.
Henry
Hi Henry,
Allowing for mass behavior doesn't preclude random behavior being a good approximation most of the time. And the non-normative aspects don't flow exclusively from the random behavior. I think this might clarify a bit ...
In the model we have I(A) = I(B). Sometimes mass behavior makes I(A) > I(B). Maximizing I(B) is not good or bad. It's just information. Random behavior leads to I(A) ≈ I(B).
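Here is a minimal numerical sketch of that distinction (illustrative only -- it assumes we can stand in for I(·) with the Shannon entropy of how agents occupy a set of economic states; the sizes and probabilities are made up for the demonstration):

```python
import numpy as np

def entropy_nats(counts):
    """Shannon entropy (in nats) of an occupation histogram."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(42)
n_agents, n_states = 10_000, 50  # illustrative sizes

# Random behavior: each agent picks an economic state uniformly at random.
random_states = rng.integers(0, n_states, size=n_agents)
I_random = entropy_nats(np.bincount(random_states, minlength=n_states))

# Mass behavior ('panic'): 90% of agents pile into the same state.
panic = rng.random(n_agents) < 0.9
panic_states = np.where(panic, 0, rng.integers(0, n_states, size=n_agents))
I_panic = entropy_nats(np.bincount(panic_states, minlength=n_states))

print(f"maximum possible: log K = {np.log(n_states):.2f} nats")
print(f"random behavior:  {I_random:.2f} nats (near the maximum)")
print(f"mass behavior:    {I_panic:.2f} nats (information falls)")
```

Random behavior keeps the entropy pinned near its maximum (the I(A) ≈ I(B) case); correlated behavior collapses it (the I(A) > I(B) case).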
In mainstream economics, we have utility functions U(B) -- and maximizing U(B) is good. And U(A) > U(B) means A is preferred to B, and utility maximizing behavior leads to more A than B.
I do understand what you mean by normative. My point was that the theory isn't set up with normative definitions like most of economics is. Since mainstream econ is based on utility, utilitarian moral theory is readily imported into economics.
Many mainstream macro theories have been built with the sole purpose of saying fiscal policy is useless -- and that is a separate issue on top of the utilitarianism.
This isn't to say various conclusions from the information equilibrium theory couldn't bolster a case for normative policy prescriptions. For example: if inflation is high, monetary policy will offset fiscal policy and is therefore the only tool of macro stabilization; if inflation is low, monetary policy is useless, and fiscal policy is the only solution.
On inequality, the information equilibrium model says that states with less inequality would have more "entropy" and therefore would be higher growth. It is a normative ethical theory that then takes that result and says wealth equality is a desirable state, given the cost-benefit analysis of individual freedoms versus collective growth.
It's much like global warming science: it says the earth is getting warmer, but it doesn't say whether we should stop it or let it go on. That is a value judgment involving an ethical theory. Science never says X is bad or good. It just says what X is, not what it ought to be. Now most ethical theories say the consequences of global warming are bad, so we should do something about it.
"Allowing for mass behavior doesn't preclude random behavior being a good approximation most of the time."
How can this be expressed formally within the context of your model?
"On inequality, the information equilibrium model says that states with less inequality would have more "entropy" and therefore would be higher growth."
Ditto?
For the first question, in the notation above:
I(A) ≈ I(B)
or I(A) = I(B) + dI where dI is an error term with ⟨dI⟩ ≈ 0.
For the second question, the maximum entropy distribution given a constraint on the maximum amount is a uniform distribution.
The maximum entropy distribution with a constraint on the variance is a normal distribution, and one with a constraint on ⟨log x⟩ is a Pareto distribution.
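For reference, the standard maximum entropy results behind those statements (a sketch; the constraints are as stated above):

- constraint 0 ≤ x ≤ x_max (a maximum amount): p(x) = 1/x_max, the uniform distribution
- constraints on the mean and variance: p(x) ∝ exp(−(x − μ)²/2σ²), the normal distribution
- constraint on ⟨log x⟩ with x ≥ x_min: p(x) = α x_min^α/x^(α+1), the Pareto distribution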
Jason,
Thanks.
I figure if I ask a question I might get closer to understanding what you are on about - only to find myself further down the rabbit hole. :-)
Henry.
I actually think that the relaxation of the so-called rationality assumption for agents and complexification of agents' behaviors is a step forward. Why? Because, as one of my psych profs pointed out, a multitude of small, independent effects approaches randomness. And effectively random behavior of complex non-rational agents leads to a loss of faith in microfoundations. Assume randomness at the micro level. :)
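A quick sketch of that "multitude of small, independent effects" point (illustrative only -- it models each agent's behavior as a sum of many small independent shocks with a made-up, deliberately non-normal distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_effects = 50_000, 200  # illustrative sizes

# Each agent's behavior is the sum of many small, independent effects.
# The individual effects are skewed (exponential, shifted to mean zero).
effects = rng.exponential(scale=1.0, size=(n_agents, n_effects)) - 1.0
behavior = effects.sum(axis=1)

# The aggregate washes out the details: it is approximately normal noise
# (central limit theorem), whatever the individual effects look like.
skew = ((behavior - behavior.mean()) ** 3).mean() / behavior.std() ** 3
print(f"mean ≈ {behavior.mean():.2f}, std ≈ {behavior.std():.2f}")
print(f"skewness ≈ {skew:.3f} (each individual effect has skewness 2.0)")
```

The individual effects are strongly skewed, but their sum is effectively featureless noise -- which is what makes "assume randomness at the micro level" tenable.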
That would be great, but that assumes an agent-based model -- the financial sectors and relaxed assumptions above are in the already aggregated macro models. In that case, they won't appear random ...
The death of homo economicus has yet to come, but I think that he will breathe his last in this century, at least in macroeconomics, once he is no longer needed for microfoundations. One problem with homo economicus is that certain behaviors are assumed to be universal, or nearly so, because they are rational, utility maximizing behaviors. Thus, representative agents will act in concert, frequently producing suboptimal results. As you say, you beat game theory again and again. ;)
As for various aggregates of agents, such as different sectors or different classes, whether they account for certain effects is an empirical matter. Certainly they are few enough that we should not expect randomness. Your diffusion model assumes an undifferentiated economy, but there is no reason to think that the economy lacks structure. Still, the right place to start is with minimal structure and let structural hypotheses prove themselves.