Tuesday, January 30, 2018

2.4% growth forever?

As part of the work going into my next book that will attempt to re-write the narrative of the past 50 years of economic growth [1], I've been working on the dynamic information equilibrium model of nominal and real GDP (NGDP and RGDP). The key element connecting those two measures is the GDP deflator (DEF), so let's look at it first:

The deflator is best modeled as having a dynamic equilibrium of 1.4%/y with several shocks from 1960-1990 where inflation falls after recessions (indicated with R's). I am calling this the "Phillips curve era" where shocks to employment result in shocks to inflation and the era is associated with the demographic shift of women entering the workforce. I have speculated that these factors are all connected.
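Concretely, the dynamic information equilibrium description treats log DEF as a constant-growth path plus logistic shocks. Here is a hedged sketch of that idea (my own illustrative code, fit to synthetic data with made-up shock parameters, not the actual BEA series) showing the equilibrium rate is recoverable from a noisy series:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_def_model(t, c, alpha, a, t0, w):
    # Dynamic equilibrium: log-linear growth at rate alpha plus one
    # logistic shock of amplitude a, centered at t0, with width w (years)
    return c + alpha * t + a / (1.0 + np.exp(-(t - t0) / w))

# Synthetic series standing in for log DEF: a 1.4%/y equilibrium rate
# plus a single 1970s-style shock (t = years since 1960)
t = np.linspace(0.0, 58.0, 233)
rng = np.random.default_rng(0)
data = log_def_model(t, 0.0, 0.014, 0.5, 18.0, 4.0)
data += rng.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(log_def_model, t, data, p0=[0.0, 0.01, 0.3, 15.0, 3.0])
print(f"recovered equilibrium rate: {popt[1]:.4f}/y")  # close to 0.014
```

The actual model fits multiple shocks, but the single-shock version captures the mechanics: the growth rate away from shocks is the dynamic equilibrium.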

There is another shock in the 2000s that lines up with an NGDP shock (below) and sits much further from both the preceding and following recessions. It is because of these unique properties (and because of the description of NGDP below) that I consider this a different "era" that I'll call the "asset bubble era" [2]. Take a look at NGDP:

Most of the NGDP data is dominated by a long-duration shock that lines up with the demographic transition of women entering the workforce. After that shock subsides, we have two smaller, shorter positive shocks followed by crashes. Looking at their timing, we can pretty unambiguously associate these with the so-called "dot-com" boom and the housing boom. The latter boom coincides with the aforementioned inflation shock. It is possible the inflation signal is stronger for the housing boom than for the dot-com boom because the housing boom involved building actual housing (and therefore employing a lot of labor) as well as an asset that a larger fraction of the population owns (housing). It is also possible that the reason is far simpler: housing is included in measures of inflation while stocks aren't. But regardless of the reason, the second boom-bust cycle was accompanied by moderate inflation.

The combination of the previous two models gives us RGDP:

The DEF shocks are the purple vertical lines, while the NGDP shocks are the green vertical lines. The DEF inflation rate is shown as negative since it is subtracted from NGDP growth to obtain RGDP growth (RGDP = NGDP/DEF). The aperiodic oscillations of mean RGDP are what give rise to the sensation (illusion?) of a "business cycle"; however, the "Phillips curve era" oscillations are more strongly connected to DEF shocks, while the "asset bubble era" oscillations are more strongly connected to NGDP shocks (collapses of asset prices). We can see that there hasn't been a strong signal of a new asset price bubble (the S&P 500, while seemingly on a recent winning streak, is consistent with fluctuations around its historical growth rate). However, this doesn't mean there won't be another recession shock; I'm using the "asset price bubble" label as just a label [2].

In addition to trying to set up a framework to understand these phenomena for the future book, I also saw a tweet from Stephanie Kelton talking about the downward revisions of potential RGDP and potential NGDP both in level and growth rate [3]. She thinks that 2% growth going forward is too pessimistic -- saying we can get 3% growth. Now the model above says that the dynamic equilibrium is 2.4% (so I'd agree that 2% growth is a shade pessimistic, see [3]).

But in no period for which we have decent data has the US achieved sustainable RGDP growth above the 2.4% dynamic equilibrium rate. The 60s and 70s involved a major change in the civilian labor force (increasing the relative fraction of women in the labor force) that gave us 20 or more years of RGDP growth periodically above 2.4%, coupled with bouts of sub-2.4% growth. The only other times of above-2.4% growth were during the dot-com and housing bubbles.

I'm not saying it isn't possible, but it would require something that hasn't been tried in post-war US history [4]. I would also add that the RGDP growth rate does not strongly impact the rate of decline of the unemployment rate, which has been roughly constant over the same post-war period, so higher or lower RGDP growth is more about the level of income than employment rates. Overall, unless there is another asset bubble of some kind (or a negative shock due to reduced immigration [4]), I think it's 2.4% growth for the foreseeable future.


Update 19 February 2018

Here is an "economic seismograph" view of this data (and more including unemployment, the S&P 500, and the Case-Shiller housing price index). It uses PCE inflation instead of the deflator, but the general form is the same.


Update 31 January 2018

The same general story appears to be true for the UK, except that there is no separate "dot-com" boom but rather a general financial boom beginning in the late 90s and continuing until the 2008 financial crisis:

There is also an interesting mini-boom in the late 1980s (the center is 1988.7) known as the "Lawson boom" in reference to Nigel Lawson, Chancellor of the Exchequer at the time. This "boom" has the properties of the housing boom in the US (an inflation shock at the same time as a nominal output shock, with the associated recession coming a bit later than in the "Phillips curve" cycles).



[1] This is best read as a kind of self-deprecating hubris.

[2] I'm not really making any kind of normative or even theoretical claim here, just that in the popular culture the housing bubble and dot-com bubble are useful labels for a collection of events. I wanted to tentatively call it the Minsky era because the investment boom/bust cycle is very close to that described by Minsky. However, the period following the 2008 financial crisis appears to be relatively calm, suggesting it might be some time before another "Minsky cycle" gets started. The dot-com "cycle" was followed immediately by the housing "cycle", but this may not be the typical case going forward. We might observe long stretches of flat growth (like the present period) before another cycle booms and busts.

[3] The most recent potential GDP seems to show a 3.8% NGDP growth rate going forward (matching the dynamic equilibrium above), but only shows 1.8% for RGDP growth which is probably due to the assumption of 2% DEF inflation (the dynamic equilibrium is 1.4%).

Monday, January 29, 2018

On prediction parables

Josh Hendrickson at The Everyday Economist has a post (H/T Mark Thoma) "On Prediction" that illustrates the underlying problem I have with macroeconomic "parables". In his post, he tries to write a parable about a child and spilled milk that is supposed to be a metaphor for prediction and mitigation of business cycles.
Suppose that you are a parent of a young child. Every night you give your child a glass of milk with their dinner. When your child is very young, they have a lid on their cup to prevent it from spilling. However, there comes a time when you let them drink without the lid. The absence of a lid presents a possible problem: spilled milk. Initially there is not much you can do to prevent milk from being spilled. However, over time, you begin to notice things that predict when the milk is going to be spilled. For example, certain placements of the cup on the table might make it more likely that the milk is spilled. Similarly, when your child reaches across the table, this also increases the likelihood of spilled milk. The fact that you are able to notice these “risk factors” means that, over time, you will be able to limit the number of times milk is spilled. You begin to move the cup away from troublesome spots before the spill. You institute a rule that the child is not allowed to reach across the table to get something they want. By doing so, the spills become less frequent.
What is wrong with this parable? We are being set up.

  • First, macroeconomists don't actually know what a recession is or what causes economic growth. In this parable, you'd have to replace the specific image of a glass of milk being spilled with something more like "something happens with this unknown substance in a container".
  • Second, we don't know if there is a "child" — we do not know if recessions are caused by factors endogenous or exogenous to the "substance". Saying the milk knocks itself over is silly in this parable, but may well be what happens in an economy under the analogy where milk is growth and spilled milk is a recession. The container is effectively inside another room, and we only get imperfect acoustic or electromagnetic readings.
  • Third, since they don't know what recessions are, macroeconomists can't possibly know what prevents recessions. Adding a lid and moving the cup (things that obviously prevent spilled milk) become "attach a component to the container" and "manipulate the container". Re-written, these actions no longer contain bias of an implicit model.

Let's rewrite this parable:
Suppose that you are a scientist studying an unknown substance in a container kept in quarantine brought back from Alpha Centauri by a probe. Every day, you weigh the container to ensure the substance is still in the container. You occasionally register strange periods of electromagnetic radiation and acoustic bursts. You label these periods "emissions". You try various experiments: irradiating the container with high energy gamma rays, exposing it to acoustic shocks, or heating it. You flip the container over a couple of times. You notice that the "emissions" become less frequent. Since you are a scientist, you draw no conclusions whatsoever because the active component or components of the substance could potentially decay on their own or depend on factors you haven't controlled for.
People start asking why the scientists can't predict the emissions. The scientists answer: because we don't understand the substance. It might be an inherently unpredictable quantum process, but that's just one theory. Nobody has produced a theory that predicts the emissions, but that does not mean one doesn't exist. Scientists admit that a theory that was able to predict emissions would go a long way toward demonstrating an understanding of the substance, but also note that prediction is not the only way to demonstrate such understanding.
Hendrickson claims that recessions are unpredictable, but mitigated by policy. In our parable, he is a theorist promoting a specific theory that the substance is inherently random but affected by manipulating the container (let's say his theory says flipping it over mixes the substance inside, causing the two immiscible components that generate the emissions to mix, reducing the average distance between regions and thereby reducing the magnitude of the emissions that depend on the substances being separated). That is to say, Hendrickson's version of the parable assumes an implicit specific theory of what he is writing the parable about.

I personally think the "scientific" approach of saying "we don't know" when in fact we don't know is a far superior way of dealing with public criticism of macroeconomics than the "we do know, but it's unpredictable" approach — the former approach having the benefit of being both true and an excellent way to counter people who claim to know with meager evidence.

Macroeconomists taking the latter approach may well be behind the proliferation of "heterodox" approaches, because a lot of "mainstream" economists can't seem to bring themselves to say no one has sufficient data to make strong claims about economic growth or recessions. The public has a strong desire to understand the world, and e.g. politicians can offer that understanding with precisely the same level of empirical confidence macroeconomists muster in the face of being unable to predict outcomes (i.e. close to zero).

The other problem is that claiming knowledge where none exists undermines macroeconomists' claim to knowledge where it does exist: minimum wages have very limited negative effects on employment, if any; fiscal austerity is bad during a recession; and immigration doesn't hurt and may help economic growth. I'm not saying there aren't studies out there that show at least some of these empirical regularities.

I am completely on board with the idea that recessions could well be unpredictable, and that being unable to predict something isn't in itself a valid criticism. The current state of knowledge in macroeconomics is such that recessions are "unpredictable" because we don't know if recessions are predictable, not "unpredictable" because we know they cannot be predicted. This is a subtle point, and one prone to being elided by pretending knowledge where it does not yet exist.

Losing my vestigial monetarism

With the latest data on core PCE inflation, I get to close out a forecast of 4 years of inflation data. The monetary information equilibrium model turned out to be biased low to almost the same degree the FRB NY DSGE model was biased high in the last year of the forecast period (-40 basis points versus +46). Nearly all of the error in the DSGE model can be attributed to the assumed return to 2% core PCE inflation. If it had instead predicted a return to 1.7% core PCE inflation, it would have performed almost perfectly.

In fact, the constant 1.7% inflation model did perform almost perfectly, which is excellent news for the dynamic information equilibrium model that predicts a constant 1.7% core PCE inflation in the absence of shocks. I'll define "good" performance relative to the constant model; by that metric the information equilibrium and DSGE models performed poorly.

The poor performance of the monetary information equilibrium model is in part behind my recent post on money as aether. Like a lot of people getting into economic theory, I was susceptible to the "money" view and one of the first models I produced was a "quantity theory of money" where

(1) log P ~ (k(t) − 1) log M

with k(t) falling over time. The M in this case is M0 (i.e. notes and coins), since every other measure (M1, M2, MZM, etc.) worked terribly even after allowing k = k(t). It is now apparent that the k(t) in model (1) above was accounting for the dissipation of the major demographic shock in the 1970s; instead of k(t) continuing to fall, the shock faded, resulting in an underestimate of inflation.
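The mechanism of the bias is easy to see numerically. Here is a toy sketch (entirely made-up parameter values, not the fitted model) contrasting a linearly extrapolated k(t) with one where the decline fades, in model (1) where log P = (k(t) − 1) log M:

```python
import numpy as np

# Toy illustration: if the true k(t) is a fading shock but the fitted
# model extrapolates its early linear decline, predicted inflation
# (the time derivative of log P) comes out biased low.
t = np.arange(0.0, 28.0)                           # years since 1990
M = 100.0 * np.exp(0.06 * t)                       # assumed 6%/y M0 growth
k_linear = 1.6 - 0.01 * t                          # linear extrapolation
k_fading = 1.6 - 0.25 * (1.0 - np.exp(-t / 25.0))  # same early slope, fades

def inflation(k):
    log_p = (k - 1.0) * np.log(M)
    return np.diff(log_p)                          # annual inflation rate

print(f"extrapolated model: {inflation(k_linear)[-1]:+.3%}")
print(f"fading shock:       {inflation(k_fading)[-1]:+.3%}")
```

Both paths agree near the start of the sample, which is why the extrapolation looked accurate with 1990s-era data; they diverge only as the shock fades.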

So I consider this model rejected, and it will get a frown-y face at the prediction aggregation link. This allows me to shed the last vestiges of my adherence to the paradigm of money. This model would have outperformed the Fed's so-called P* model using data available at the time (mostly because the linear extrapolation of k(t) from 1990, when the demographic shock was still fading, was still an accurate approximation), as well as the various (monetary) models tested by Rochelle Edge and Refet Gurkaynak in 2010. It will nevertheless be laid to rest.

I will say in my defense that it only took me 4 years of looking at the data to reject monetary explanations of macro observables like inflation.

PS Here are the other graphs:


Update 30 January 2018

Here are the comparable dynamic information equilibrium core PCE inflation model graphs (both with and without the 2014 shock implied from NGDP data):

Saturday, January 27, 2018

Looking at horses

Chris Dillow has a great post about "looking at horses" in economics. The metaphor is from Ely Devons:
"If economists wished to study the horse, they wouldn’t go and look at horses. They’d sit in their studies and say to themselves, ‘What would I do if I were a horse?’"
In it, he references a tweet of mine:
I will defend Cobb-Douglas for a second: it arises from general information theory considerations when matching the information entropy of one distribution to the information entropy of two or more distributions
 And says:
Nobody saw fit to point out that if you want to know how useful they are, you should look at how actual firms produce actual stuff: do Cobb-Douglas functions describe the real world or not? Again, nobody’s looking at the horses.
It seems in both cases (the tweet thread and in Dillow's post), I was misunderstood: I was defending the general ansatz

$$Z = X^{\alpha} Y^{\beta}$$

not any specific application of the ansatz that exists in economics — with the exception of the matching function in search and matching theory. And I was defending it because I was looking at horses. It turns out that if you look at nominal output instead of "real" output, the Cobb-Douglas aggregate production function used in the Solow model is actually remarkably accurate:

I used this to form the basis of the "quantity theory of labor and capital" that is also empirically accurate.

Now it is true that I defended Cobb-Douglas functions from a theoretical standpoint, but the only reason I was comfortable doing so was because I had already shown it was empirically useful. And it's not only as a production function — but as a matching function as I discuss in my recent paper (with empirically accurate models of unemployment). This is to say that it is not prima facie a bad starting point whenever you have two things combining to create a new thing (labor + capital = output, or job seeker + vacancy = hire). I think a lot of people hear Cobb-Douglas and immediately think of the Cambridge Capital Controversy (which I have solved — kidding). This is unfortunate because the general ansatz is perfectly sound.
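For what it's worth, "looking at the horses" for the ansatz itself is straightforward: data generated by Z = X^α Y^β has recoverable exponents via least squares in logs. A minimal sketch with made-up data (not the actual NGDP fit):

```python
import numpy as np

# Recover Cobb-Douglas exponents from noisy synthetic data by OLS in logs
rng = np.random.default_rng(42)
X = rng.uniform(1.0, 10.0, 500)
Y = rng.uniform(1.0, 10.0, 500)
Z = X**0.7 * Y**0.3 * np.exp(rng.normal(0.0, 0.02, 500))  # small log-noise

# log Z = alpha log X + beta log Y + const
A = np.column_stack([np.log(X), np.log(Y), np.ones(500)])
coef, *_ = np.linalg.lstsq(A, np.log(Z), rcond=None)
alpha, beta = coef[0], coef[1]
print(f"alpha ≈ {alpha:.3f}, beta ≈ {beta:.3f}")  # close to 0.7 and 0.3
```

The empirical question is then whether real output/matching data is consistent with this log-linear form, which is what the fits above address.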

Friday, January 26, 2018

Another successful forecast (NGDP)

I get to close out another forecast collected at the aggregated prediction link with a smiley face: NGDP (last updated here). The latest NGDP data has been released. In the graphs below, orange indicates the data available at the time of the forecast, the vertical line indicates the beginning of the forecast (January 2015), and the yellow indicates the data as well as a linear fit to the growth rate data. The gray dashed lines indicate the "old normal" which used the growth rate average from before the Great Recession, while the solid gray line indicates "the new normal" which used the growth rate average from 2010-2015 (about 3.8%/y).

The "old normal" (gray dashed lines) is pretty decisively rejected at this point, and the data remains consistent with the information transfer model (ITM) and the "new normal". But what is also great news for the information equilibrium approach in general is that this data also validates the dynamic equilibrium model (see e.g. here) because it also says growth should average 3.8%/y (i.e. the "new normal"). The dynamic equilibrium/"new normal" version (gray solid line in the first graph) almost perfectly matches the linear fit to the NGDP growth data (yellow dashed line in the first graph).


Update 30 January 2018

Here is the dynamic information equilibrium model version of this graph (bands are 1-sigma and 2-sigma error bands):

Wednesday, January 24, 2018

Varying $\langle k \rangle$

As I mention in my (newly revised) recent paper, we can use the ensemble approach to arrive at an almost identical information equilibrium condition for an ensemble of markets:

$$\frac{d \langle A \rangle}{dB} = \langle k \rangle \; \frac{\langle A \rangle}{B}$$

If $\langle k \rangle$ is slowly varying enough to treat as a constant $\bar{k}$, then we obtain the same solutions we have for a single market. But what if $\langle k \rangle$ has a small dependence on $B$

$$\langle k \rangle \approx \bar{k} + \beta \frac{B}{B_{0}}$$

where $\beta \ll 1$? The result is actually pretty straightforward (the differential equation is still exactly solvable [1]):

$$\frac{\langle A \rangle }{A_{0}} \approx \left( \frac{B}{B_{0}} \right)^{\bar{k}} \left( 1 + \beta \frac{B}{B_{0}}\right)$$



[1] The exact solution is

$$A(B) = A_{0} \exp \left( \beta \frac{B}{B_{0}} + \bar{k} \log \frac{B}{B_{0}} \right)$$
but since $\beta$ is small, we can expand the exponential and rewrite it in the more familiar form showing the $\beta$ term as a perturbation.
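A quick numerical check of that expansion (illustrative parameter values) confirms the first-order form tracks the exact solution for small $\beta$:

```python
import numpy as np

# Compare the exact solution A0*exp(beta*B/B0 + kbar*log(B/B0)) with the
# first-order form (B/B0)^kbar * (1 + beta*B/B0) for small beta
A0, B0, kbar, beta = 1.0, 1.0, 1.5, 0.01
B = np.linspace(1.0, 5.0, 100)

exact = A0 * np.exp(beta * B / B0 + kbar * np.log(B / B0))
approx = A0 * (B / B0)**kbar * (1.0 + beta * B / B0)

print(f"max relative error: {np.max(np.abs(approx / exact - 1.0)):.2e}")
```

The error is second order in $\beta B/B_{0}$, as expected from expanding the exponential.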

Tuesday, January 23, 2018

(A bad) dynamic equilibrium: homelessness in Seattle

I saw this data in a Tweet from Erica Barnett (above) about the relationship between Seattle (King County) homelessness and rental prices. This data fits pretty well with a dynamic information equilibrium model [1] (i.e. a "matching" model per my recent paper):

There was a shock that reduced rental prices during the Great Recession (centered at 2008.9, lasting about 6 months). This temporarily halted the growth in rental prices, which was followed by a temporary halt to homelessness growth (a shock centered at 2011.2, lasting about 2 years). Since those shocks, rental prices and homelessness have returned to their previous trajectories. Absent some kind of intervention, it is forecast to continue.



[1] Although I call this an equilibrium, it doesn't mean this equilibrium is "good" for social welfare. The fact that rental prices and homelessness are directly connected is more a failure of the market provision of housing than an endorsement.

Sunday, January 21, 2018

Money is the aether of macroeconomics

So I've never really understood Modern Monetary Theory (MMT). In some sense, I can understand it as a counter to the damaging "household budget" and "hard money" views of government finances. But to me, it still cedes the equally damaging "money is all-important" message of monetarism and the so-called Austrian school, a message that manifests even today when a "very serious person" tells you it's really the Fed, not Congress or the President, that controls the path of the economy and inflation, even though neither inflation nor recessions are well-understood in academic macroeconomics. People have a hard time giving up talking about money.

Austrian school? Yes. Austrian school. This dawned on me some time ago when I read Noah Smith's steps for combating the monetary "hive mind" he says is pervasive in finance:
So how does one extract an individual human mind from this hive mind? That is always a tricky undertaking. But I've found two things that seem to have an effect: 
Method 1: Introduce them to MMT. MMT is a great halfway house for recovering Austrians.
It does make sense to think of MMT as a way for an Austrian school devotee to wrap their head around quantitative easing not causing inflation without abandoning too many priors. They just have to nudge their target for the "right" amount of inflation a bit higher (or even just to the Fed's ostensible target of 2%).

I came across a link in several places in my Twitter feed the other day (which is why I decided to write this post) that's actually a really good explainer of MMT. It also helps explain this connection to Austrian school economics. Read these two quotes; first: 
Money is created effortlessly every day on computers in large numbers. It’s our access to real resources that is limited.
and second:
As the issuer of the currency, governments have the ability to out-bid any private sector business or even control sectors of the economy, such as education, public infrastructure or health care (nations choose varying approaches). Governments should be held accountable to act responsibly when competing for certain scarce resources in the economy to avoid undesired levels of price escalation. 
At the same time, governments have too often been guilty of the opposite problem – not managing the currency in a way that maintains domestic full employment and acceptable base living standards.
Now read Ludwig von Mises:
In theoretical investigation there is only one meaning that can rationally be attached to the expression Inflation: an increase in the quantity of money (in the broader sense of the term, so as to include fiduciary media as well), that is not offset by a corresponding increase in the need for money (again in the broader sense of the term), so that a fall in the objective exchange-value of money must occur.
In both cases, money is simply a tool to move real resources (i.e. the real goods and services money is needed for). The question of inflation then becomes a question of whether there are too many or not enough real resources to be moved with money, as well as what level of inflation we define as "right" (with traditional Austrians usually going for 0% and MMT-ers going for something like 4% or more). The other conclusions generally follow from this (e.g. in MMT, a sovereign government can never run out of its own currency, only produce excessive inflation). And if inflation hit 10% or more, both MMT and Austrian economics could find themselves on the same page. To put it in physics terms, the theories converge as inflation becomes large compared to the (inverse) length of business cycles.

I'm not saying these movements are politically aligned — Austrians tend to be more conservative and MMT-ers more liberal. The issue here is that where these theories converge (at high inflation) is also the only place where they're supported by empirical data. Inflation really seems to be proportional to whatever you might think of as money when inflation is high. As we'd say in physics, it's a great effective theory. But at moderate levels of inflation, the theory breaks down. As I wrote in my post on what to do when your theory is rejected, we should set a scale (inflation ~ 10%/y or 10 years, remarkably comparable to the observed period between recessions) and let our imaginations run wild with whatever model fits the data, not blindly apply a high inflation theory to low inflation. Constraining us to thinking about "money" is tying our hands.

So instead of saying "money is just a tool for moving real resources, therefore money is all-important to the economy", what if we say "money is just a tool for moving real resources, therefore (except in extreme circumstances) money doesn't matter"?

Usefully, these views turn out to be transparently expressed in the information equilibrium framework. This framework allows me to more precisely write down what it means for something to move distributions of real resources around — which is equivalent to moving the information specifying those distributions around. Claude Shannon invented the field that studies this specific subject (information theory), and I think "money" (whatever you mean by it) is most fruitfully thought of as a medium of information flow. Let's consider aggregate demand and aggregate supply, assuming they match in equilibrium. We can then say:

(1) P ≡ ∂AD/∂AS = k AD/AS

We can introduce "money" M by using the chain rule [0] in calculus plus M/M = 1:

(2) (∂AD/∂M) (∂M/∂AS) = k (AD/M) (M/AS)

Here, "money" is simply functioning as a tool moving "real resources" AS. You could insert anything in that equation: B for bonds or bitcoin. Or G for government debt. If aggregate demand and aggregate supply are in information equilibrium, and money is in information equilibrium with demand, then money is in information equilibrium with supply, i.e.

(3a) ∂AD/∂M = k₁ AD/M


(3b) ∂M/∂AS = k₂ M/AS

Eq (3b) follows from dividing Eq (2) by Eq (3a) (with k₂ = k/k₁). Note that the left hand side of (1) is the exchange rate for a piece of the aggregate economy — i.e. the price level P. Now Eq (1) tells us

log AD ~ k log AS 

as well as 

log P ~ (k −1) log AS

If we define AD ≡ P Y, using a common symbol in economics for real output (Y), then we find from the right hand side of Eq (1):

AD ≡ P Y = k (AD/AS) Y

so AS = k Y

That is to say "real output" is directly related to "real resources" in equilibrium. But Eq (3a) also tells us that log AD ~ k₁ log M which means that if "money" grows rapidly compared to real resources AS (i.e. Y is approximately constant), we also find

(4) log P ~ k₁ log M

This latter relationship requires a disequilibrium between money and real resources (AS) because the equilibrium allowing us to write (3a,b) also implies log Y ~ log AS ~ (1/k₂) log M making

(5) log P ~ (k₁ − 1/k₂) log M

reducing the inflation rate in (4) — and in fact requiring some strong restrictions on the form of M (and on the relationship between k₁ and k₂) if AD and AS are in equilibrium. Basically, the quantity theory of money, as well as money being the source of inflation, requires the disequilibrium between money and real resources that both Austrian and MMT devotees claim. That's the nugget of truth. But empirically (4) is only roughly true for economies where inflation is well above 10%, where M is identified with base money.
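The scaling in Eq (5) is easy to verify numerically. A sketch with arbitrary coefficient values: impose the equilibrium relations AD ~ M^{k₁} and AS ~ M^{1/k₂}, and measure how log P scales with log M:

```python
import numpy as np

# Numeric check of Eq (5): with AD ~ M^k1 and AS ~ M^(1/k2) in
# equilibrium, the price level P = k*AD/AS grows like M^(k1 - 1/k2),
# not M^k1 as the naive quantity theory in Eq (4) would suggest.
k, k1, k2 = 1.0, 1.3, 2.0
M = np.linspace(1.0, 100.0, 1000)
AD = M**k1
AS = M**(1.0 / k2)
P = k * AD / AS

# slope of log P vs log M should recover k1 - 1/k2 = 0.8
slope = np.polyfit(np.log(M), np.log(P), 1)[0]
print(f"d log P / d log M ≈ {slope:.3f}")
```

So unless M is in disequilibrium with AS, the 1/k₂ term always shows up to damp the inflation response.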

But since M was arbitrary (it is introduced via two mathematical identities), the typical case of an economy near equilibrium —  i.e. not in recession or experiencing hyperinflation per Eq (4) —  should be independent of an arbitrary redefinition of the medium of information flow. Whether we say we exchange work for money and then money for goods, or we just exchange work for goods doesn't matter unless you're in hyperinflation or recession. You might say the latter is extremely important, but it turns out economies aren't in recession most of the time (a few quarters every ~ 8 years for the US [1]) so most of the inflation that happens isn't monetary [2]. 

Whatever you think money is, it doesn't really matter.

At least if you're not in hyperinflation or possibly right in the moment of a financial system seizing up as some theories of the financial crisis shock of 2008 propose.

That's the conclusion we should be drawing from the idea that money is "just a tool" to move information about real resources around. Much like how air doesn't really matter to the transmission of sound waves under typical conditions in a room (all the physics of molecules and thermodynamics is subsumed into a constant speed) —  it's just a tool to move vibration information from one point to another —  money appears to have little to do with the bulk of inflation from what empirical data is available. In fact, the inflation rate went right through the financial crisis with nary a blip right when commercial paper —  one of the major mechanisms by which large payrolls are funded —  lost its moneyness.

So what is inflation if it's not monetary? As Noah says in his post linked above, inflation is "one of the biggest mysteries of macroeconomics". My intuition is telling me that inflation is demographic — the shocks to inflation are contemporary with or follow shocks to the labor force size (coupled with the structure of the fading Phillips curve), and shocks to the monetary base follow the shocks to inflation. You can also read Steve Randy Waldman's account. Social factors leading to the baby boom and women entering the workforce were the likely real drivers of inflation (i.e. "real resources" like labor that money was just a tool to help move around), and money was just along for the ride.

Regardless of whether you like the information equilibrium take or whether you find it useful, the key fact is that inflation —  except in cases of hyperinflation —  empirically isn't related to "money" regardless of what you think money is [3]. The "money is all important" view —  regardless of your pet theory of money —  is based on a facile extrapolation from a completely different regime [4]. That view might be what's behind all these various measures of "money": money has to be important to unemployment and inflation, therefore some measure must exist that makes the correlation manifest. M1? No, M2! Interest rates! No, it's government debt! No! It's NGDP expectations!

It's all reminiscent of the aether in physics. Something must be the medium in which light waves oscillate! Aether dragging! No, partial aether dragging! Really, the aether is just a tool to move electromagnetic energy around.

Money is the aether of macroeconomics [5].


[0] The chain rule is dy/dx = (dy/dz) (dz/dx).

[1] I don't want to be flippant about the actual human suffering in recessions, but I think it is better for that suffering in the long run to have empirically accurate theory that can yield real solutions than dubious monetary maxims that claim to help.

[2] "Most" of the inflation in post-war US economic history was caused by a large shock centered in the late 70s. Aside from that period, inflation has been roughly constant at approximately 2.5% (CPI all items) or 1.7% (core PCE).

[3] Unless you think money is people, which is a slogan I could get behind —  at least in terms of empirical data.

[4] Some theories say that hyperinflation is actually a political phenomenon, meaning even there the correlation between money and inflation may be subordinate to the actual process.

[5] I am probably trolling here more than I should be, but it really doesn't put me that far from Paul Romer who wrote up a menagerie of terms for economic concepts including aether and phlogiston.

Wednesday, January 17, 2018

What to theorize when your theory's rejected

Sommerfeld and Bohr: ad hoc model builders rejecting Newtonian physics ... for action p dx ~ h (ca. 1919)
I was part of an epic Twitter thread yesterday, initially drawn in to a conversation about whether the word "mainstream" (vs "heterodox") was used in natural sciences (to which I said: not really, but the concept exists). There was one sub-thread that asked a question that is really more a history of science question (I am not a historian of science, so this is my own distillation of others' work as well as a couple of my undergrad research papers). It began with Robert Waldmann tweeting to Simon Wren-Lewis:
... In natural sciences hypotheses don't survive statistically significant rejection as they do in economics.
Simon's response was:
They do if there is no alternative theory to explain them. The relevant question is what is an admissible theory.
To which both Robert and I said we couldn't think of any examples where this was the case. Simon Wren-Lewis then asks an interesting question about what happens when your theory starts meeting the headwind of empirical rejection:
How can that logically work[?] Do all empirical deviations from the (at the time) believed theory always come along at the same time as the theory that can explain those observations? Or in between do people stop doing anything that depends on the old theory?
The answer to the second question is generally "no". Some examples followed, but Twitter can't really do them justice. So I thought I'd write a blog post discussing some case studies in physics of what happens when your theory's rejected.

The Aether

The one case I thought might be an example where natural science didn't reject a theory (therefore making me qualify that there were no examples in post-war science) was the aether: the substance posited to be the medium in which light waves were oscillating. The truth was that this theory wasn't invented to make sense of any particular observations (Newton thought it explained diffraction), but rather to soothe the intuition of physicists (specifically Fresnel's, who invented the wave theory of light in the early 1800s). If light is a wave, it must be a wave in something, right? The aether was terribly stubborn for a physical theory in the Newtonian era. Some of the earliest issues arose with Fizeau's experiments in the 1850s. The "final straw" in the traditional story was the Michelson and Morley experiment, but experiments continued to test for the existence of "aether wind" for years afterward (you could even call this 2009 precision test of Lorentz invariance a test of the aether).

So here we have a case where a hypothesis was rejected and it was over 50 years between the first rejection and when the new theory "came along". What happened in the interim? Aether dragging. The various experiments were actually taken as answers to particular questions about how the aether interacts with matter (even Michelson and Morley's).

But Fresnel's wave theory of light didn't really need the aether, and there was nothing the aether did in Fresnel's theory besides exist as a medium for transverse waves. Funnily enough, this is actually a problem because apparently the aether didn't support longitudinal waves, which makes it very different from any typical elastic medium. Looking back on it, it really doesn't make much sense to posit the aether. To me, that implies its role was solely to soothe the intuition; since we as physicists have long given up that intuition, we can't really reconstruct how we would have thought about it at the time, in much the same way we can't really imagine what writing looked like to us before we learned how to read.

So in this case study, we have a theory that was rejected before the "correct" theory came along, and physicists continued to use the "old theory". However, the problem with this as an example of Simon's contention is that the existence of the aether didn't have particular consequences for the descriptions of diffraction and polarization (the "old theory") for which it was invented. It was the connection between aether and matter that had consequences — in a sense, you could say this connection was assumed in order to be able to try and measure it. I can't remember the reference, but someone once wrote that the aether experiments seemed to imply that nature was conspiring in such a way as to make the aether undetectable!

The Precession of Mercury

This case study brought up by Simon Wren-Lewis better represents what happens in natural sciences when data casts doubt on a theory. Precision analysis of astronomical data in the mid-1800s by Le Verrier led to one of the most high profile empirical errors of Newton's gravitational theory: it got the precession of Mercury wrong by several arc seconds per century. As Simon says: physicists continued to use Newton's "old" theory (and actually do so to this day) for nearly 50 years until the "correct" general theory of relativity came along.

But Newton's old theory was wildly successful (even the observed error was only about 40 arc seconds per century). In one century, Mercury sweeps through roughly 540 million seconds of arc, meaning this error is on the order of one part in ten million. No economic theory is that accurate, so we could say that this case study is actually a massive case of false equivalence.
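As a quick sanity check of that order of magnitude (using my own rough values for Mercury's period and the unexplained precession, not numbers from any particular source):

```python
# Back-of-the-envelope check of the size of Mercury's precession anomaly.
# All numbers are rough textbook values.

mercury_period_days = 87.97          # Mercury's orbital period
century_days = 36525.0               # 100 Julian years
arcsec_per_orbit = 360 * 3600        # one full orbit in arc seconds

orbits_per_century = century_days / mercury_period_days
total_arcsec = orbits_per_century * arcsec_per_orbit   # ~5.4e8 arc seconds

anomaly_arcsec = 43.0                # unexplained precession per century
relative_error = anomaly_arcsec / total_arcsec

print(f"Orbits per century: {orbits_per_century:.0f}")
print(f"Arc seconds traversed: {total_arcsec:.2e}")
print(f"Relative size of anomaly: {relative_error:.1e}")  # ~8e-8
```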

However, I think it is still useful to understand what happened in this case study. In our modern language, we would say that physicists set a scope condition (region of validity) based on a relevant scale in the problem: the radius of the sun (R). Basically, when the perihelion distance r of the orbit is not much larger than R, other effects potentially enter. And at R/r ~ 2%, this ratio is much larger for Mercury than for any other planet (Mercury, the innermost planet, is also in a 3:2 spin-orbit resonance with the sun). Several ad hoc models of the sun's mass distribution (as well as other effects) were invented to try to account for the difference from Newton's theory (as mentioned by Robert). Eventually general relativity came along (setting a scale — the Schwarzschild radius 2GM/c² — in terms of the strength of the gravitational field based on the sun's mass M and the speed of light, not its radius). Despite how weird it was to think of the possibility of e.g. black holes or gravitational waves as fluctuations of space-time, the theory was quickly adopted because it fit the data.
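For concreteness, here is a rough comparison of the two scales (the constants are standard approximate values; the perihelion distance is my ballpark figure):

```python
# Compare the two scales mentioned above: the sun's radius vs. Mercury's
# perihelion distance, and the sun's Schwarzschild radius 2GM/c^2 vs. the
# same perihelion distance. Rough standard values throughout.

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30        # solar mass, kg
c = 2.998e8             # speed of light, m/s
R_sun = 6.96e8          # solar radius, m
r_perihelion = 4.60e10  # Mercury's perihelion distance, m

r_s = 2 * G * M_sun / c**2   # Schwarzschild radius of the sun, ~3 km

print(f"R_sun / r = {R_sun / r_perihelion:.3f}")   # ~0.015: the ~2% scale
print(f"r_s = {r_s / 1e3:.1f} km")
print(f"r_s / r = {r_s / r_perihelion:.1e}")       # ~6e-8: why GR corrections are tiny
```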

The scale R set up a firewall preventing Mercury's precession from burning down the whole of Newtonian mechanics (which was otherwise fairly successful), and ad hoc theories were allowed to flourish on the other side of that firewall. This does not appear to happen in economics. As Noah Smith says:
I have not seen economists spend much time thinking about domains of applicability (what physicists usually call "scope conditions"). But it's an important topic to think about.
And as Simon says in his tweet, economists just go on using rejected theory elements and models without limiting their scope or opening the field to ad hoc models. This is also my own experience reading the economics literature.

Old Quantum Theory

Probably my favorite case study is so-called old quantum theory: the collection of ad hoc models that briefly flourished between Planck's quantum of light in 1900 and Heisenberg's quantum mechanics in 1925. In the decades prior, lots of problems had started to arise with Newtonian physics (though with the caveat that it was mostly wildly successful, as mentioned above). There was the ultraviolet catastrophe (a predicted divergence as wavelength goes to zero) related to blackbody radiation. Something was happening when the wavelength of light started to get close to the atomic scale. Until Planck posited the quantum of light, several ad hoc models (including ones based on atomic motion) were invented to give different functional forms for blackbody radiation, in much the same way different models of the sun allowed for possible explanations of Mercury's precession.

In much the same way the radius of the sun set the scale for the firewall for gravity, Planck set the scale for what would become quantum effects by specifying a fundamental unit of action (energy × time or momentum × distance) now named after him: h. Old quantum theory set this up as a general principle by saying phase space integrals could only result in integer multiples of h (Bohr–Sommerfeld quantization). Now h = 6.626 × 10⁻³⁴ J·s is tiny on our human scale, which is related to Newtonian physics being so accurate (and still used today); again, using this as a case study for economics is another false equivalence as no economic theory is that accurate. But in this case, Newtonian physics was basically considered rejected within the scope of old quantum theory and stopped being used. That rejection was probably a reason why quantum mechanics was so quickly adopted (notwithstanding its issues with intuition that famously flustered Einstein and continue to this day). Quantum mechanics was invented in 1925 [0], and by the 1940s physicists were working out renormalization of quantum field theories, putting the last touches on a theory that is the most precise ever developed. Again, it didn't really matter how weird the theory seemed (especially at the time) because the only important criterion was fitting the empirical data.

There's another way this case study shows a difference between the natural sciences and economics. Old quantum theory was almost immediately dropped when quantum mechanics was developed, and ceased to be of interest except historically. Its one major success lives on in name only as the Bohr energy levels of Hydrogen. However, Paul Romer wrote about economic models using the Bohr model as an analogy for models like the Solow model that I've discussed before. Romer said:
Learning about models in physics–e.g. the Bohr model of the atom–exposes you to time-tested models that found a good balance between simplicity and insight about observables.
Where Romer sees a "balance between simplicity and insight" that might well be used if it were an economic model, this physicist sees a rejected model that's part of the history of thought in physics. Physicists do not learn the Bohr model (you learn of its existence, but not the theory). The Bohr energy level formula turned out to be correct, but today's undergraduate physics students derive it from quantum mechanics, not from "old quantum theory" via Bohr–Sommerfeld quantization.
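For reference, the formula that "lives on" can be evaluated directly from standard constants (a minimal numerical sketch, not a derivation from either theory):

```python
# The Bohr energy-level formula E_n = -m e^4 / (8 eps0^2 h^2 n^2), which
# survives intact in full quantum mechanics. Standard constant values.

m_e = 9.109e-31      # electron mass, kg
e = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
h = 6.626e-34        # Planck's constant, J s

def bohr_level_eV(n):
    """Hydrogen energy level n, in electron volts."""
    E_joules = -m_e * e**4 / (8 * eps0**2 * h**2 * n**2)
    return E_joules / e   # convert J to eV

for n in (1, 2, 3):
    print(f"E_{n} = {bohr_level_eV(n):.2f} eV")   # E_1 is about -13.6 eV
```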

A Summary

There is a general pattern where some empirical detail is at odds with a theory in physics:

  • A scale is set to firewall the empirically accurate pieces of the theory
  • A variety of ad hoc models are developed at that new scale where the only criterion is fitting the empirical data, no matter how weird they may seem

I submit that this is not how things work in economics, especially macroeconomics. Simon says we should keep using theories even after empirical rejection, but without the scope-condition firewall, which Noah says doesn't seem to be thought about at all. New theories in macro- or micro-economics, no matter how weird, aren't judged based on their empirical accuracy alone.

But a bigger issue here, I think, is that there aren't any wildly successful [1] economic models. There really aren't any macroeconomic models accurate enough to warrant building a firewall. This should leave the field open to a great deal of ad hoc theorizing [2]. But in macro, you get DSGE models despite their poor track record. Unless you want to consider DSGE models to be ad hoc models that may go the way of old quantum theory! That's really my view: it's fine if you want to try DSGE macro, and it may well eventually lead to insight. But it really is an ad hoc framework operating in a field that hasn't set any scales because it hasn't had enough empirical success to require them.


Update 19 January 2018

Both Robert Waldmann and Simon Wren-Lewis responded to the tweet about this blog post (thread here) saying that physics is not the optimal natural science for comparison with economics. However, I disagree. Physics (and chemistry) are the only fields with a comparable level of mathematical formalism to economics. Other natural sciences use lots of math, too, but there is no over-arching formal mathematical way to solve a problem in e.g. biology (and some of the formalisms that do exist are based either on dynamical systems, the same kind used in economics, or even on economic models). There's even less in medicine (Wren-Lewis's example).

Now you may argue that (macro)economics shouldn't have the level of mathematical formalism it does (I would definitely agree that the mathematical macro models used are far too complex to be supported by the limited data, and that it's funny to write stuff like this). If you want to argue that macroeconomics shouldn't be using DSGE models, or that social science isn't amenable to math, go ahead [3]. But that wasn't the argument we were having, which was about what to do when your mathematical framework (e.g. standard DSGE models with Euler equations and Phillips curves) is rejected. Additionally, these models are rejected by comparing the mathematical formalism with data — not their non-mathematical aspects. To that end, physics provides a best practice: set a scale and firewall off the empirically accurate parts of your theory.

Aside from the question of how one "uses" a non-mathematical model, one of the issues with the discussion of rejection of non-mathematical models is that there's no firm metric for rejection. When were Aristotle's crystal spheres rejected? Heliocentric models didn't really require rejection of the principle that planets were fixed to spheres made of aether. Kepler even mentions them in the same breath as the elliptical orbits that would reject the Aristotelian/Ptolemaic model completely, so comets and novae didn't reject the concept in Kepler's mind (you could make the case that the aether survives all the way to special relativity above). The "bad air" theory of disease around malaria (the disease was associated with swampy areas, hence the name) was moderately successful right up until germ theory came along, in the sense that staying away from swamps or closing your windows is a good way to avoid mosquitoes.

Actually, it's possible the mathematical formalism is part of the reason macro doesn't just reject the models, because of the sunk costs (or "regulatory capture") involved in learning the formalism. I don't know if non-mathematical models are more easily rejected in this sense (lower sunk costs), but as I mentioned in my tweet in the thread linked above, I couldn't think of any non-mathematical models that were rejected but that economics still uses — rendering the entire discussion moot if we're not talking about mathematical models.

PS I also added footnotes [2] and [3].



[0] Added 16 November 2019. You could consider the brief period between Heisenberg's September 1925 paper and Schrodinger's December 1926 paper as a period in which a rejected theory (old quantum theory) continued to be used because people were uncomfortable with (or didn't understand) Heisenberg's matrix mechanics. Fifteen months! Physicists jumped at Schrodinger's work more readily since it was a differential equation — something they were comfortable with. Dirac's Principles of Quantum Mechanics (1930) unified the two approaches. In a later edition of that book, it's made even clearer via bra-ket notation.

[1] Noah likes to tell a story about the prediction of the BART ridership using random utility discrete choice models (I mentioned here). One of the authors of that study has said that result was a bit of a fluke ("However, to some extent, we were right for the wrong reasons.").

[2] Added in update. This is part of my answer to Chris House's question (that I also address in my book): Why Are Physicists Drawn to Economics? Because it is a field that uses mathematical models and there are no real scope conditions known opening up the possibilities of any ad hoc model by physicists' standards.

[3] But you do have to contend with the fact that some of this non-mathematical social science is pretty empirically accurately described by mathematical models.

Monday, January 15, 2018

Is low inflation ending?

I'm continuing to compare the CPI forecasts to data (new data came out last Friday, shown on the forecast graph for YoY CPI all items [1]). I think the data is starting to coalesce around a coherent story of the Great Recession in the US. As you can see in the graph above, the shock centered at 2015.1 (2015.1 + 0.5 = 2015.6 based on how I displayed the YoY data)  is ending. This implies that (absent another shock to CPI), we should see "headline" CPI (i.e. all items) average 2.5% [2].

The CPI shock is associated with the shocks to the civilian labor force (CLF, at 2011.3), nominal output per worker (NGDP/L, at 2014.6), and the prime-age CLF participation rate (in 2011) — all occurring after the Great Recession shock to unemployment (2008.8, see also my latest paper). What we have is a large recession shock that pushed people out of the labor force (as well as reduced uptake of people trying to enter the labor force). This shock is what then caused the low inflation [3] (in terms of CPI or PCE [2]). This process is largely ending and we are finally returning to a "normal" economy [4] nearly 10 years later.
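For readers curious about the functional form behind these graphs, here is a minimal sketch of the dynamic equilibrium growth rate: a constant equilibrium rate plus the derivative of a logistic shock. The parameter values below (shock size, center, width) are illustrative placeholders, not the fitted values:

```python
import numpy as np

# Minimal sketch of the dynamic-equilibrium form: the (log) growth rate is
# a constant alpha plus contributions from logistic shocks. Shock
# parameters are illustrative, not fitted.

def shock_rate(t, a, t0, tau):
    """Contribution of one logistic shock (size a, center t0, width tau)
    to the continuously compounded growth rate."""
    s = 1.0 / (1.0 + np.exp(-(t - t0) / tau))
    return (a / tau) * s * (1.0 - s)   # time derivative of a * logistic

def inflation_rate(t, alpha=0.025, shocks=((-0.08, 2015.1, 1.5),)):
    """Equilibrium rate alpha plus all shock contributions."""
    rate = np.full_like(t, alpha, dtype=float)
    for a, t0, tau in shocks:
        rate += shock_rate(t, a, t0, tau)
    return rate

t = np.linspace(2008, 2022, 200)
r = inflation_rate(t)
# As the shock fades, the rate returns to the 2.5% dynamic equilibrium:
print(f"{r[-1]:.4f}")   # ~0.0245
```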


Update + 2 hrs

I thought I'd add the graph of the full model over the post-war period (including the guides mentioned in [1]), but also note that two of the three periods David Andolfatto mentions as "lowflation" periods line up with the two negative shocks to CPI (~ 1960-1970, and ~ 2008-2018):

The period 1996-2003 does not correspond to low headline CPI inflation, even though core PCE inflation was below 2% then. However, 1996-2003 roughly corresponds to the "dynamic equilibrium" period of CPI inflation as well as of PCE inflation (~ 1995-2008) — which in the case of PCE inflation is ~ 1.7% (i.e. below 2%). Therefore the 2% metric for lowflation measured with PCE inflation would actually include the dynamic equilibrium, and not just shocks. Another way to say it is that the constant-threshold (2%) detector gives a false alarm for 1996-2003, whereas a "dynamic equilibrium detector" does not.
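Here's a toy version of that detector comparison; the inflation values are illustrative stand-ins, not actual PCE data:

```python
import numpy as np

# Contrast the two "lowflation detectors": a constant 2% threshold vs. a
# detector that flags deviations below the dynamic equilibrium itself
# (taken here as 1.7%, the core PCE value). Data are illustrative.

equilibrium = 0.017
years = np.array([1998.0, 2000.0, 2012.0])
inflation = np.array([0.017, 0.017, 0.012])   # equilibrium, equilibrium, shock

constant_alarm = inflation < 0.02               # fires even at equilibrium
dynamic_alarm = inflation < equilibrium - 0.002 # fires only on a real shock

print(constant_alarm)   # false alarms in 1998 and 2000
print(dynamic_alarm)    # only the 2012 shock
```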



[1] Here is the log derivative (i.e. continuously compounded annual rate of change) and the level (with new dynamic equilibrium guides as diagonal lines at 2.5% inflation rate):

[2] Note that the dynamic equilibrium for core PCE inflation that economists like to use is 1.7%, and so the end of the associated shock will not bring inflation all the way back up to the Fed's stated target of 2%.

[3] Interestingly, this negative shock to inflation happens at the same time as a negative shock to unemployment: i.e. inflation went down at the same time unemployment went down, giving further evidence that the Phillips curve has disappeared.

[4] This is a "normal" economy in the sense of dynamic equilibrium, but it might not seem normal to a large portion of the labor force as there has been only a limited amount of time between the end of the demographic shock of the 1970s and the Great Recession shock of the 2000s. As I've said before, there is a limited amount of "equilibrium" data in this sense (the models above would say ca. 1995 to 2008).

Friday, January 12, 2018

Immigration is a major source of growth

Partially because of the recent news — and most certainly because nearly half this country can be classified as a racist zero-sum mouth-breather — I wanted to show how dimwitted policies to limit immigration can be. One of the findings of the dynamic information equilibrium approach (see also my latest paper) is that nominal output ("GDP") has essentially the same structure as the size of the labor force:

The major shocks to the path of NGDP roughly correspond to the major shocks to the Civilian Labor Force (CLF). Both are shown as vertical lines. The first is the demographic shock of women entering the workforce. This caused an increase in NGDP (the shock to CLF precedes the shock to NGDP). The second major shock is the Great Recession. In that case a shock to NGDP caused people to exit the labor force driving down the labor force participation rate (the shock to NGDP came first). The growth rates look like this (NGDP is green, CLF is purple):

The gray horizontal lines represent the dynamic equilibrium growth rates of CLF (~ 1%) and NGDP (~ 3.8%). The dashed green line represents the effects of two asset bubbles (dot-com and housing, described here). Including them or not does not have any major effects on the results (they're too small to result in statistically significant changes to CLF). You may have noticed that there's an additional shock centered in 2019; I will call that the Asinine Immigration Shock (AIS). 

I estimated the relationship between shocks to CLF and to NGDP. Depending on how you look at it (measuring the relative scale factor, or comparing the integrals relative to the dynamic equilibrium growth rate), you can come up with a factor α between about 4 and 6. That is to say a shock to the labor force results in a shock that is 4 to 6 times larger to NGDP.

Using this estimate of the contribution of immigration to population growth, I estimated that the AIS over the next four years (through 2022) could result in about 2 million fewer people in the labor force (including people deported, people denied entry, and people who decide to move to e.g. Canada instead of the US). Using the low-end estimate α = 4, the resulting shock to NGDP [1] would leave NGDP about 1 trillion dollars lower in 2022 [2]. This is what the paths of the labor force and nominal output look like:

As you can see, the AIS is going to be a massive self-inflicted wound on this country. What is eerie is that this shock corresponds to the estimated recession timing (assuming unemployment "stabilizes") — as well as the JOLTS leading indicators — implying this process may already be underway. With the positive shock of women entering the labor force ending, immigration is a major (and perhaps only) source of growth in the US aside from asset bubbles [3].
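The arithmetic behind that trillion-dollar estimate can be sketched in a few lines; the 2022 labor force and NGDP levels are my own ballpark assumptions, not outputs of the model:

```python
# Rough version of the AIS arithmetic: a relative shock to the labor force,
# scaled up by the factor alpha, becomes a relative shock to NGDP.
# The 2022 levels below are ballpark assumptions.

clf_2022 = 162e6        # assumed civilian labor force, people
ngdp_2022 = 20e12       # assumed nominal GDP, dollars
alpha = 4               # low-end estimate of the CLF -> NGDP scale factor

missing_workers = 2e6   # estimated AIS shortfall through 2022
relative_clf_shock = missing_workers / clf_2022          # ~1.2%
ngdp_loss = alpha * relative_clf_shock * ngdp_2022       # ~$1 trillion

print(f"NGDP loss: ${ngdp_loss / 1e12:.2f} trillion")
print(f"Per million people: ${ngdp_loss / missing_workers * 1e6 / 1e9:.0f} billion")
```

This also reproduces the roughly 500 billion dollars per million people figure in footnote [2].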



[1] Since I am evaluating the results in 2022, sufficiently long after the shock, it doesn't matter which shock comes first (so I show them as simultaneous, centered in January 2019). However, I think the most plausible story is that the shock to CLF would come first, followed by a sharper shock to NGDP as the country goes into a recession about 1/2 to 1/3 the size of the Great Recession.

[2] It's roughly a factor of 500 billion dollars per million people (evaluated in 2022) since both NGDP and CLF are approximately linear over time periods of less than 10 years (i.e. 1 million fewer immigrants due to the AIS results in an NGDP that is 500 billion dollars lower in 2022).

[3] I also tried to assess the contribution of unauthorized immigration to nominal output. However, the data is limited, leaving the effects uncertain. One interesting thing I found, however, is that the data is consistent with a large unauthorized immigration shock centered in the 1990s that almost perfectly picks up as the demographic shock of women entering the workforce wanes (also in the 1990s). As that shock wanes we get the dot-com bubble, the housing bubble, and the financial crisis. It is possible that the estimate of the NGDP growth dynamic equilibrium may be too high because it is boosted by unauthorized immigration that doesn't show up in the estimates of the Civilian Labor Force.

Update 23 January 2018

Here are the graphs of two scenarios: one is dynamic equilibrium estimated from unauthorized immigration data alone, the second is one based on an assumption that the underlying dynamic equilibrium is the same. The latter model shows an interesting "surge" that compensates for the lower growth due to the fading shock of women entering the workforce.