Thursday, April 20, 2017

Growth regimes, lowflation, and dynamic equilibrium

David Andolfatto points out how different models frame the data:
What does Bullard have in mind when he speaks of a low-growth "regime?" The usual way of interpreting development dynamics is that long-run growth is more or less stable and that deviations from this stable trend represent "cyclical" mean-reverting departures from trend. And if it's "cyclical," then it's temporary--we should be forecasting reversion to the mean in the near future--like the red forecasting lines in the picture below. ... This view of the world can lead to a series of embarrassing forecast errors. Since the end of the Great Recession, for example, you would have forecast several recoveries, none of which have materialized.  ... But what if that's not the way growth happens? Suppose instead that growth occurs in decade-long spurts? Something like this [picture]. ...
The two accompanying pictures are here:

As you can see, interpreting data depends on the underlying model. I've talked about this before, e.g. here or here. Let's try another!

What about dynamic equilibrium (see also here)? In that framework, we have a shock centered in the late 70s that hits both NGDP per capita (prime age) and the GDP deflator:

At this resolution, there is another shock to NGDP alone (it may also be visible in the deflator data ‒ see here ‒ but it's not relevant to the discussion in this post). Note: I am talking about quantities per capita (prime age), so it should be understood if I leave off a "p.c." in the following discussion. The figure shows the transition locations as well as their widths (red). The NGDP p.c. transition is much wider than the deflator transition. Combining these (dividing the NGDP p.c. model by the deflator model), you get RGDP per capita:

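Concretely, the dynamic equilibrium model writes the log of each quantity as a constant-growth trend plus logistic "shock" steps, and dividing the NGDP p.c. model by the deflator model just subtracts their logs. Here is a minimal Python sketch of that construction; the amplitudes, centers, and widths below are illustrative stand-ins, not the fitted values:

```python
import numpy as np

def dyn_eq_log_level(t, growth, shocks):
    """Log level in the dynamic equilibrium model: a constant-growth
    trend plus logistic step "shocks" given as (amplitude, center, width)."""
    out = growth * t
    for a, t0, b in shocks:
        out = out + a / (1.0 + np.exp(-(t - t0) / b))
    return out

t = np.linspace(1950.0, 2020.0, 701)

# Illustrative parameters: one late-70s shock, wider in NGDP than in the deflator
log_ngdp = dyn_eq_log_level(t, 0.036, [(0.8, 1978.0, 12.0)])
log_defl = dyn_eq_log_level(t, 0.020, [(0.6, 1978.0, 6.0)])

# Dividing NGDP p.c. by the deflator = subtracting logs: the RGDP p.c. model
log_rgdp = log_ngdp - log_defl
```

With no shock terms the log level is an exact straight line; the shocks only bend it locally around the late-70s center.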
The lines represent the "dynamic equilibrium" for RGDP p.c. made from the dynamic equilibrium for NGDP p.c. minus the dynamic equilibrium for the GDP deflator. I translated it up and down to the maximum and minimum during the full period, as well as to recent values. You can see how the interaction between two Gaussian shocks of different widths gives you an apparent fluctuating growth rate, which is what Bullard/Andolfatto see in the data:

It's actually just the mis-match between the NGDP shock and the GDP deflator shock (likely due to women entering the workforce) that makes it look like different growth regimes when in fact there is just one. If the shocks to each measure were exactly equal, there'd be no change in RGDP growth at all. It is therefore entirely possible that these "growth regimes" are just artefacts of mis-measuring the price level (deflator/inflation) data ‒ a proper measurement of the price level would show no regime changes (since NGDP and the deflator would be subject to the same shocks).
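To see the effect numerically, here is a toy Python version: subtract two Gaussian shock terms of different widths sitting on top of otherwise-constant NGDP growth and deflator inflation. The amplitudes, widths, and the 1978 center are made-up illustrative values, not the model's fitted parameters:

```python
import numpy as np

t = np.linspace(1950.0, 2020.0, 701)
t0 = 1978.0  # illustrative shock center (late 70s)

def shock(t, amp, width):
    """Gaussian bump added to the equilibrium growth rate."""
    return amp * np.exp(-(((t - t0) / width) ** 2))

# One shock hits both measures, but with different widths and sizes
ngdp_growth = 0.036 + shock(t, amp=0.04, width=12.0)  # wide NGDP shock
infl        = 0.020 + shock(t, amp=0.05, width=6.0)   # narrow deflator shock

# "Real" growth is the difference; the width mismatch makes it fluctuate
rgdp_growth = ngdp_growth - infl

i0 = np.argmin(np.abs(t - t0))
i1990 = np.argmin(np.abs(t - 1990.0))
print(round(rgdp_growth[0], 3))      # 0.016: equilibrium far from the shock
print(round(rgdp_growth[i0], 3))     # 0.006: apparent "low growth" at the center
print(round(rgdp_growth[i1990], 3))  # 0.03: apparent "high growth" in the wings
```

A single shock thus produces what looks like a low-growth episode flanked by two high-growth episodes, with no regime switching anywhere in the inputs.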

In fact, a LOESS smoothing (gray solid) of the RGDP growth data (blue) almost exactly matches the dynamic equilibrium result during the 70s and 80s:

In this graph, the gray horizontal lines are at zero growth and at the dynamic equilibrium growth rate (1.6%, equal to the dynamic equilibrium NGDP growth rate of 3.6% minus the dynamic equilibrium deflator inflation rate of 2%). We can see that we were at the dynamic equilibrium in the 1950s, in the early 2000s, and again today. At other times, we were still experiencing deviations due to the shock.

I also show Andolfatto's 10-year annualized average growth rate (gray dotted), which basically matches up with a 10-year shifted version of the LOESS smoothing.

I'd previously talked about Bullard's regime-switching approach here. In that post, I showed how the information equilibrium approach reverses the flow of the regime selection diagram. But I also talked about how the information equilibrium monetary models can be divided into "high k" and "low k" regimes (k is the information transfer index). High k is essentially an effective quantity theory of money for high inflation, whereas low k means the IS-LM model is a good effective theory for low inflation (or we just have something more complex, as I discuss in the quantity theory link). This means that monetary policy would be more effective in a high inflation environment than in a low inflation environment. I've also discussed "lowflation" regimes before here.

This brings up another topic. On Twitter, Srinivas pointed me to a new SF Fed paper [pdf] on monetary policy effectiveness that comes to similar conclusions based on the data: there are low inflation regimes where monetary policy is less effective than in high inflation regimes.

Actually, as indicated by one of the graphs in my reply, I've been discussing this since the first few months of this blog (almost 4 years ago).

One difference between the inflation (i.e. k) regimes and Bullard's regimes is that there isn't "switching" so much as a continuous drift. You don't go from high k to low k in a short period, but rather pass continuously through moderate k values over a few decades.

Is there a way to connect lowflation to dynamic equilibrium? Well, one possibility is that we only have "high k" during shocks but we lack enough macroeconomic data to be able to see this clearly – the shock from the first half of the post-war period has only faded out recently.

This view would, however, help make sense of the fact that not all countries have reached low k in the partition function/ensemble/statistical equilibrium picture. It's a question that has floated around in the back of my mind for a while ‒ ever since I put up this picture (from e.g. here):

The problem is evident in the light green US data, which comes from the Depression. That means the US was once at "low k", then went to "high k" in the WWII and post-WWII era, and has since steadily fallen back to low k. The trouble is that while the ensemble approach can handle the drift toward lower k values (i.e. the expected value of k falls in an ensemble of markets as the factors of production increase), the mechanism for increasing k involves ad hoc modeling (e.g. exit/reset through wartime hyperinflation).

However, what if shocks (in the dynamic equilibrium sense) reset k to higher values (in the ensemble sense)? If we take this view, then there might be different growth "regimes", but they split into "normal" and "shock" periods (the red bands in the graphs above). The shock periods can have different dynamics depending on the shocks (e.g. the fluctuating RGDP due to the mis-match between the shock to the price level and the shock to NGDP). Outside of these periods, we have "normal" times characterized by e.g. a constant RGDP growth (the gray line described in the graph above).

Which view is correct?

Given the quality of the description of the data using the dynamic equilibrium model, I don't think Bullard's regimes capture it properly. We have a shock that includes both high and low growth, but the low growth regimes on either side of the shock (today and the 1950s) represent the "normal" dynamic equilibrium (the low RGDP growth period of the 1970s wasn't the dynamic equilibrium, but rather just a result of our measure of the GDP deflator and definition of "real" quantities). This is evident from the good match between the RGDP data and the theoretical curve that is just NGDP/GDPDEF (NGDP divided by the GDP deflator). NGDP and the deflator have one major shock in the 1970s that turns into a fluctuating growth rate simply because the difference of two Gaussians [1] with different widths fluctuates:

The two high growth regimes and the intervening low growth regime are simply due to this. Occam's razor would say that there is really just one shock [2], with different widths for the different observables, centered in the late 70s ‒ instead of three different manifestations of two growth regimes (per Bullard).

Footnotes:

[1] The derivative of the step function in the dynamic equilibrium is approximately a Gaussian function (i.e. a normal distribution PDF), and when you divide NGDP by DEF and look at the log growth rate you end up with the difference of the two Gaussians.
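A quick numerical check of this approximation (my own, using unit amplitude and width): the derivative of the logistic step is the logistic density, which closely tracks a variance-matched Gaussian:

```python
import numpy as np

x = np.linspace(-12.0, 12.0, 2401)
dx = x[1] - x[0]

# Derivative of the logistic step 1/(1 + exp(-x)) is the logistic density
logistic_pdf = np.exp(-x) / (1.0 + np.exp(-x)) ** 2

# Gaussian with the same variance (the unit-scale logistic variance is pi^2/3)
sigma = np.pi / np.sqrt(3.0)
gauss_pdf = np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

print(round(logistic_pdf.sum() * dx, 3))  # 1.0: both are normalized
print(round(gauss_pdf.sum() * dx, 3))     # 1.0
# Largest pointwise difference is a few percent of the 0.25 peak height
print(round(np.max(np.abs(logistic_pdf - gauss_pdf)), 2))  # 0.03
```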

[2] This is the same shock involved in interest rates, inflation, the employment-population ratio, etc., so we should probably attribute it to a single source instead of building more complex models (at least without other information).

1. Jason: “It's actually just the mis-match between the NGDP shock and the GDP deflator shock ... that makes it look like different growth regimes when in fact there is just one”

I am behind in my reading currently, and am not reading your posts in chronological order, so I have just read this one.

This point is a very good one. I am very suspicious of ratios in macro. My reason is different from yours but yours makes complete sense. Economists think they are seeing interesting behaviour but they are not.

My reason is that it’s not clear what the ratios are supposed to represent. Two key examples.

Productivity at a macro level does not have the same units of measure as productivity at a micro level. However, economists ignore this and confuse both themselves and everyone else. I read a post recently about productivity in the UK. The post talked about productivity by sector and noted that the productivity in the energy extraction sector is volatile. This is obviously not because oil extraction companies invent new ways of extracting oil from the North Sea and then immediately forget them. Rather, it is because the price of oil is volatile. I can’t see any value in this measure. Micro productivity comparisons can be invaluable when they compare like for like e.g. a car plant in the UK which produces 100 cars per period versus a car plant in Japan which produces 120 cars per period with similar staff levels. However, that is useful because discrepancies lead to immediate action e.g. find out why the other car plant is more efficient and copy best practice.

Real GDP is not real, so again, I can’t see what it is supposed to be measuring. For example, I have just replaced a 2011 iPad with a 2017 model. The 2017 model has a much faster processor, a better screen and is lighter. It also cost only about 75-80% of the price of the 2011 model. Everything here suggests an improvement – a better product for less money even in nominal terms. However, unless the cheaper price leads to higher sales to compensate for the reduced price, GDP and real-GDP measures will see this as a backward step. The core problem is that economists have no measures of product quality, so they see progress only in terms of total amount of spending.

2. Here's an example of a completely useless discussion on productivity problems. It's all the fault of Facebook apparently!

1. Yes, measuring productivity is problematic as there isn't really a good understanding of what productivity is.

However, I think I might be on to something with thinking of productivity as a collection of states: some high, some low. Every worker or firm flits from one state to another. It's the distribution of these that matters, not individual states. There's no sensible "micro productivity" because it changes all the time. But "macro productivity" can be given meaning if this distribution is fairly stable:

http://informationtransfereconomics.blogspot.com/2016/07/an-ensemble-of-labor-markets.html

This overall picture is formally the same as the picture of stock market performance and you could make the analogy that high productivity workers are like high performance stocks. There are probably some fundamental reasons (earnings), but also some things that are impossible to measure (hype). Also companies go from high performance to low and back all the time. There is an average growth in the stock market that basically goes as the size of the economy.
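As a toy illustration of that point (my construction here, not the model in the linked post): draw each worker's productivity "state" fresh from a fixed distribution every period. Any individual path is volatile, but the distribution ‒ and hence the macro aggregate ‒ is stable:

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 workers over 20 periods; each period every worker re-draws a
# productivity state from the same fixed lognormal distribution
n_workers, n_periods = 10_000, 20
states = rng.lognormal(mean=0.0, sigma=0.5, size=(n_periods, n_workers))

one_worker = states[:, 0]      # a single worker's path: flits around a lot
macro = states.mean(axis=1)    # macro productivity: nearly constant

print(one_worker.std() / one_worker.mean() > 0.1)  # True: big relative swings
print(macro.std() / macro.mean() < 0.02)           # True: tiny relative swings
```

So "micro productivity" is churning constantly, while the macro measure only has meaning because the underlying distribution is stable.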

2. There are some fun animations of the changing productivity states (called k-states in the stock version) in the second link.

3. Jason: “I think I might be on to something with thinking of productivity as a collection of states: some high, some low”

Yes, I agree with you regarding your UNDERSTANDING of macro-level productivity. However, I’d say that the most important reason for understanding productivity is so that we can IMPROVE it.

The typical discussion of macro-productivity by mainstream economists goes something like this:

We measure macro-productivity
The measure is not increasing as we would hope
We don’t understand why
We don’t know what to do about it
Here are some generic suggestions for improvement e.g. more entrepreneurs, more R&D, better education, better collaboration, better infrastructure.

Assuming your understanding is correct, economists want to improve a measure they don’t understand. I could make the same generic “hit and hope” improvement suggestions without measuring macro-productivity.

I’m not clear how your understanding of macro-productivity helps us devise improvement ideas or whether you think that matters. Assuming you are correct, what do you think are the implications of your understanding for our ability to devise macro-productivity improvement initiatives?

Jason: “There's no sensible "micro productivity" because it changes all the time. But "macro productivity" can be given meaning if this distribution is fairly stable”

I don’t understand what you are saying here. Businesses measure their own productivity levels and use them to set improvement targets and to compare themselves to other businesses on a like-for-like basis. That is a perfectly sensible use of “micro productivity”.

What we are discussing here is whether a measure called productivity has any value at a macro level when economists don't understand what it is measuring and no-one has suggested a practical social use for it. What difference would it make to the world if we just stopped measuring macro-productivity?