Tuesday, March 21, 2017

India's demonetization and model scope

Srinivas [1] has been requesting for a while now that I look into India's demonetization using the information equilibrium (IE) framework. One of the reasons I haven't done so is that I can't seem to find decent NGDP data from before 1996 or so. I'm going to proceed using this limited data set because there are several results that I think a) illustrate how to deal with models that have different scope, and b) show that monetary models are not very useful here.

Previously, the only "experiment" with currency in circulation I had encountered was the Fed's stockpiling of currency ahead of the year 2000 in preparation for any Y2K issues:


This temporary spike had no impact on inflation or interest rates. Economists would say that the spike was expected to be taken away, and therefore there would be no impact. Scientifically, all we can say is that rapid changes in M0 do not necessarily cause rapid changes in other variables. This makes sense if entropic forces are maintaining these relationships between macroeconomic aggregates. Another example is the Fed's recent increases in short-term interest rates: the adjustment of the monetary base to the new equilibrium appears to be a process with a time scale on the order of years.

If either interest rates or monetary aggregates are changed, it takes time for agents to find or explore the corresponding change in state space.

India recently removed a large amount of currency in circulation (M0). If the historical relationship between nominal output (NGDP) and M0 were to hold, we'd expect a massive fall in output and the price level. However, the change in M0 appears to be short-lived:


So, what do the various models have to say about this?

Interest rates

The interest rate model says that a drop in M0 should raise interest rates ceteris paribus. However, this IE relationship only holds on average over several years. Were the drop in M0 to persist, we should expect higher long-term interest rates in India:


However, if M0 continues to rise as quickly as it has in Dec 2016, Jan 2017, and Feb 2017, then we probably won't see any effect at all (much like the year 2000 effect described above). M0 needs to remain at a lower level for an extended period for rates to rise appreciably.

This is to say that the model scope is long time periods (on the order of years to decades), and therefore sharp changes are out of scope.
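
For concreteness, the long-run relationship here is roughly log r ≈ a·log(NGDP/M0) + b. Below is a minimal Python sketch of the kind of fit involved; the CSV file and column names are hypothetical placeholders, and this is an illustration of the averaging involved, not the actual model code:

    # Minimal sketch of the long-run IE interest rate relation,
    # log(r) ~ a*log(NGDP/M0) + b, fit by ordinary least squares.
    # The CSV file and column names are hypothetical placeholders.
    import numpy as np
    import pandas as pd

    df = pd.read_csv("india_macro.csv")   # hypothetical: columns NGDP, M0, rate
    x = np.log(df["NGDP"] / df["M0"])
    y = np.log(df["rate"])

    a, b = np.polyfit(x, y, 1)            # least-squares slope and intercept
    print(f"fit: log r = {a:.2f} log(NGDP/M0) + {b:.2f}")

A sustained fall in M0 raises the fitted long-run rate, while a transient dip (like the late-2016 demonetization) barely moves the multi-year averages this fit captures.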

Monetary model

Previously, like many other countries, India has shown an information equilibrium relationship (described at the end of these slides) between M0 and NGDP with an information transfer index (k) on the order of 1.5. A value of k = 2 corresponds to a quantity theory of money economy, while a lower value means that prices and output respond much less to changes in M0.
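
To spell out what the index means, information equilibrium between NGDP and M0 is the condition (with the price level P as the "detector"):

$$P \equiv \frac{d\,\mathrm{NGDP}}{d\,\mathrm{M0}} = k \, \frac{\mathrm{NGDP}}{\mathrm{M0}} \quad \Rightarrow \quad \mathrm{NGDP} \sim \mathrm{M0}^{k}, \quad P \sim k \, \mathrm{M0}^{k-1}$$

so k = 2 gives a price level that grows with the base (the quantity theory), while k closer to 1 gives a price level that barely responds to changes in M0.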


In fact, as I mentioned in a post from yesterday, monetary models only appear to be good effective theories when inflation is above 10%, and in that case we should find k ≈ 2. That k < 2 implies the monetary theory is out of scope and something more complex is happening.

The quantity theory of labor

The monetary models don't appear to be very useful in this situation. However, one model that does do well for countries with k < 2 is the quantity theory of labor (and capital). This is basically the information equilibrium version of the Solow model (but it deals with nominal values, doesn't have varying factor productivity, and doesn't impose constant returns to scale). Unfortunately, the time series data doesn't go back very far and there aren't a lot of major fluctuations. Even so, the model provides a decent description of output and inflation:


The exponents are 1.6 for capital and 0.9 for labor, meaning India is a great place to get a return on capital investment (the corresponding exponents are 0.7 and 0.8 for the US, and 1.0 and 0.5 for the UK).
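
For those curious how exponents like these are estimated: the model is NGDP ~ K^a · L^b, which becomes a linear regression in logs. A minimal Python sketch, assuming a hypothetical data file and column names:

    # Minimal sketch: estimate capital and labor exponents in
    # NGDP ~ K^a * L^b via linear regression in log space.
    # The CSV file and its columns (NGDP, K, L) are hypothetical.
    import numpy as np
    import pandas as pd

    df = pd.read_csv("india_solow.csv")
    X = np.column_stack([np.log(df["K"]), np.log(df["L"]), np.ones(len(df))])
    y = np.log(df["NGDP"])

    (a, b, const), *rest = np.linalg.lstsq(X, y, rcond=None)
    print(f"capital exponent a = {a:.2f}, labor exponent b = {b:.2f}")

Note that constant returns to scale would force a + b = 1; this model doesn't impose that, which is how India can come out at a + b = 2.5.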

This model tells us that inflation is primarily due to an expanding labor force, and therefore demonetization should have little to no effect on it.

Dynamic equilibrium

The dynamic equilibrium approach to prices (price indices) and ratios of quantities has shown remarkable descriptive power, as I've shown in several recent posts (e.g. here). India is no different: inflation over the past 15 years can be pretty well described by a single shock centered in late 2010 and lasting on the order of one and a half years:


This model doesn't tell us the source of the shock, but unless another shock hits, we should expect inflation to continue at the same rate as it has over the past two years (averaging 4.7%). This also means that the demonetization should have little to no effect.
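
Roughly speaking, the fit here has the form log CPI(t) = α·t + (logistic shock) + c, i.e. a constant equilibrium inflation rate plus one transition. A minimal single-shock Python sketch with placeholder data (not the actual fit code):

    # Minimal single-shock dynamic equilibrium sketch:
    # log CPI(t) = alpha*t + a/(1 + exp(-(t - t0)/b)) + c,
    # i.e. constant growth alpha plus one logistic shock of size a,
    # centered at t0, with width b.
    import numpy as np
    from scipy.optimize import curve_fit

    def log_cpi(t, alpha, a, t0, b, c):
        return alpha * t + a / (1.0 + np.exp(-(t - t0) / b)) + c

    # hypothetical placeholder data: t in years since 2002
    t = np.linspace(0.0, 15.0, 181)
    cpi = np.exp(0.047 * t + 0.3 / (1 + np.exp(-(t - 8.8) / 0.7)))

    p0 = [0.05, 0.3, 8.0, 1.0, 0.0]       # initial parameter guesses
    params, cov = curve_fit(log_cpi, t, np.log(cpi), p0=p0)
    print(f"equilibrium inflation ~ {params[0] * 100:.1f}% per year, "
          f"shock centered at 2002 + {params[2]:.1f}")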

Summary

The preponderance of model evidence tells us that the demonetization should have little to no effect on inflation or output. The speed at which it was enacted means that the monetary models are out of scope and tell us nothing; we can only rely on the other models that are in scope, and those have no dependence on M0.

...

Footnotes

[1] Srinivas also sent me much of the data used in this post.

Monday, March 20, 2017

Using PCA to remove cyclical effects

One potential use of the principal component analysis I did a couple of days ago is to subtract the cyclical component from the various sectors. I thought I'd take it a step further and use a dynamic equilibrium model to describe the cyclical principal component, then subtract the estimated model. What should be left over are the non-cyclical pieces.
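
Schematically: normalize the panel of sector series, pull out the leading ("cyclical") principal component, and subtract its rank-1 contribution from each sector. Here's a minimal Python sketch with placeholder data (in the actual procedure, a dynamic equilibrium model is first fit to the component and the fitted model is what gets subtracted):

    # Minimal sketch: remove the "cyclical" principal component from a
    # panel of sector hires series. X is placeholder data with shape
    # (n_months, n_sectors), standing in for the JOLTS hires panel.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))

    mu, sigma = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sigma                       # normalize each sector series

    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    cyclical = s[0] * np.outer(U[:, 0], Vt[0]) # rank-1 "cyclical" piece

    residual = (Z - cyclical) * sigma + mu     # non-cyclical remainder,
                                               # back in original units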

First, here's the model and the principal component data; the description is pretty good:


I won't bore you with the results for every sector (you can ask for an update with your favorite sector in the comments and I will oblige). Let me just focus on the sectors that are interesting with regard to the economic "boom" of 2013-2014. There are three different behaviors. The first is a temporary bump that seems to be concentrated in retail trade:


The bump begins in mid-2013 (vertical line) and ends in mid-2016.

The second behavior is job growth. For example, here is health care and social assistance; the rise begins around the date the ACA goes into effect (vertical line):


The third behavior is unique to government hiring, specifically at the state and local level. It drops precipitously at the 2016 election (vertical line):


Note that this doesn't mean hiring dropped to zero; it just means state and local government hiring dropped back to its cyclical level after being above it (e.g. because of the ACA).

Belarus and effective theories


Scott Sumner makes a good point, using Belarus as an example, that inflation is not driven by demographics alone. However, I think this example is a great teachable moment about effective theories. The data on inflation versus monetary base growth shows two distinct regimes; the graph above depicting this is based on a diagram in David Romer's Advanced Macroeconomics. One regime is high inflation; because it is pretty well described by the quantity theory of money (the blue line), I'll call it the quantity theory of money regime. The second regime is low inflation. It is much more complex and is probably related to multiple factors, at least partially including demographics (or e.g. price controls).

The scale that separates the two regimes (and that defines the scope of the quantity theory of money) is on the order of 10% inflation (gray horizontal line). For inflation rates of ~10% or greater, the quantity theory is a really good effective theory. What's also interesting is that the theory of inflation seems to simplify greatly in that regime (becoming a single-factor model). It is also important to point out that there is no accepted theory that covers the entire data set; that is to say, there is no theory with global scope.

In physics, we'd say that the quantity theory of money has a scale of τ₀ ~ 10 years (i.e. 10% per annum). For base growth on time scales shorter than this (say β₀ ~ 5 years, i.e. 20% per annum), we can use the quantity theory.
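
To make the scale conversion explicit, a growth rate corresponds to an inverse time scale:

$$i \sim 10\%\,\mathrm{yr}^{-1} \;\leftrightarrow\; \tau_0 = 1/i \sim 10\;\mathrm{yr}, \qquad \mu \sim 20\%\,\mathrm{yr}^{-1} \;\leftrightarrow\; \beta_0 = 1/\mu \sim 5\;\mathrm{yr}$$

so "shorter than τ₀" just means base growth faster than about 10% per year.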

At 10% annual inflation, Belarus should be decently described by the quantity theory of money with other factors; indeed base growth has been on the order of 10%.

The problem is that then Scott says:
So why do demographics cause deflation in Japan but not Belarus?  Simple, demographics don’t cause deflation in Japan, or anywhere else.
Let me translate this into a statement about physics:
So why does quantum mechanics make paths probabilistic for electrons but not for baseballs? Simple, quantum mechanics doesn’t make paths probabilistic for electrons, or anything else.
As you can see, this framing of the question completely ignores the fact that there are different regimes where different effective theories operate (quantum mechanics on scales set by the de Broglie wavelength; when the de Broglie wavelength is small, you have a Newtonian effective theory).

Wednesday, March 15, 2017

Washington's unemployment rate, Seattle's minimum wage, and dynamic equilibrium

I live in Seattle, and the big thing in the national news about us is that we raised our minimum wage to $15, which just went into effect for large businesses in January of this year. According to many people who oppose the minimum wage, this should have led to disaster. Did it have an effect? Let's see what the dynamic equilibrium model says. I added a couple of extra potential shocks after the big one in 2009:


People who believe the minimum wage had a negative impact could probably point to the negative shock centered at 2015.8 as evidence in their favor. However, that could also be the end of whatever positive shock was centered at 2013.0 (which I think was hiring associated with the ACA/Obamacare). I showed what the path would look like if those shocks (positive and negative) were left out using a dotted line. If it was the minimum wage, the effect would have to be based entirely on expectations because the increase is being phased in (not reaching $15/hour for all businesses until 2020):


However, those expectations did not kick in when the original vote happened in June of 2014, so it would have to be some very complex expectations model. In this second graph I show what the path looks like in the absence of both the 2013.0 and 2015.8 shocks (shorter dashes) as well as in the absence of just the 2015.8 shock (longer dashes). Various theories welcome!

The Fed raised interest rates today, oh boy

The Fed raised its interest rate target to a band of 0.75 to 1.0 percent at today's meeting, so I have to update this graph with a new equilibrium level C'':


We might be able to see whether the interest rate indicator of a potential recession has any use:


This indicator is directly related to yield curve inversion (the green curve needs to be above the gray curve in order for yield curve inversion to become probable). Here are the 3-month and 10-year rates over the past 20 years showing these inversions preceding recessions (both in linear and log scales):



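As an aside, the inversion check is easy to reproduce. Here's a minimal Python sketch pulling the two rates from FRED (this assumes the pandas_datareader package; DGS10 and TB3MS are the FRED codes for the 10-year and 3-month Treasury series):

    # Minimal sketch: flag yield curve inversions (3-month rate above
    # the 10-year rate) using FRED data via pandas_datareader.
    import pandas_datareader.data as pdr

    rates = pdr.DataReader(["DGS10", "TB3MS"], "fred", start="1997-01-01")
    rates = rates.dropna()                     # align the two series

    inverted = rates[rates["TB3MS"] > rates["DGS10"]]
    print(inverted.head())                     # dates where the curve inverted
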
Principal component analysis of jobs data

Narayana Kocherlakota tweeted about employment, making a claim about "steady state" employment growth that seems to come from nowhere. This inspired me to do something I've been meaning to do for a while: a principal component analysis of the Job Openings and Labor Turnover Survey (JOLTS) time series data:


I used a pretty basic Karhunen–Loève decomposition (Mathematica function here) on several seasonally adjusted hires time series from FRED (e.g. here). For those interested (apparently no one, but I'll do it anyway), the source code can be found in the Dynamic Equilibrium GitHub repository I set up. Here's the result (after normalizing the data):


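For reference, a Karhunen–Loève decomposition is essentially a principal component analysis: center the series, then take a singular value decomposition. A minimal Python equivalent, with placeholder data standing in for the normalized hires series:

    # Minimal sketch: Karhunen-Loeve decomposition = centering + SVD.
    # X is placeholder data with shape (n_months, n_series).
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 18))

    Z = X - X.mean(axis=0)                     # center each series
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)

    scores = U * s                             # component time series
    variance_share = s**2 / np.sum(s**2)       # in the actual data, the
    print(variance_share[:3])                  # "cyclical" and "growth/decline"
                                               # components dominate this
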
The major components are the blue and yellow ones (the rest are mostly noise, roughly constant over time). I called these two components the "cyclical" (blue) and the "growth/decline" (yellow) for fairly obvious reasons (the growth/decline piece is strongest after 2011). It's growth or decline because the component can enter with a positive (growth) or negative (decline) coefficient. Here is how those two components match up with the original basis:



The story these components tell is consistent with the common narrative and some conventional wisdom:

  • Health care and education are not very cyclical
  • Health care and education are growing 
  • Manufacturing and construction are declining  

Here's health care on its own (which looks pretty much like the growth/decline component):


And here's manufacturing (durable goods):


To get back to Kocherlakota's claim: the "steady state" of jobs growth would then seem to depend on the exact mix of industries (because some are growing and some are declining) and where you are in the business cycle. However, as I showed back in January, total hires can be described by constant relative growth compared to the unemployment level and the number of vacancies, except during a recession. This is all to say: it's complicated [1].

PS Here are all of the components (here's a link to my Google Drive which shows a higher quality picture):


...

Update 16 March 2017

I added government hiring, normalized data to the mean, and standardized the output of the algorithm. The results don't change, but it looks a bit better:


Here are the two main vectors (standardizing flipped the sign of the growth/decline vector):



And here's the original data, updated with the new data points released today (I subtracted the census peak by interpolating between the two adjacent entries):


...

Footnotes:

[1] Although I'm still not sure where the 1.2 million jobs per year figure comes from; here's the employment change year over year:
The bottom line is Kocherlakota's 1.2 million figure. The second line from the bottom is the 1.48 million rate that comes from averaging the growth rate including recessions (it's almost 1.6 million for just post-1960 data). The upper line is my "guesstimate" for the average excluding recessions (2.5 million). Maybe it was a typo and he meant 2.2 million.

Tuesday, March 14, 2017

Physicists, by XKCD

I keep a printout of this by my desk.


XKCD.

Update 15 March 2017

I also keep a picture of Fourier's grave I took at Père Lachaise:


How do you know if you're researching in bad faith? A handy checklist.

Over the course of a couple of hours I encountered so much bad pseudo-academic work that it inspired me to write down the little checklist of items that goes through my head whenever I encounter research in economics or the other social sciences. Sean Carroll has a good one that is more appropriate to the hard sciences:
  1. Acquire basic competency in whatever field of science your discovery belongs to.
  2. Understand, and make a good-faith effort to confront, the fundamental objections to your claims within established science.
  3. Present your discovery in a way that is complete, transparent, and unambiguous.
Actually the first two items below are basically Carroll's first two, and the following ones are more specific versions of Carroll's third.

Here are my ten criteria ...
⬜   1) You fail to connect your work to prior art in the field you are working in  
⬜   2) You fail to cite references or consider arguments contrary to your own results
⬜   3) You cite non-peer-reviewed, non-canonical, or discredited references without arguing why they should be considered 
⬜   4) You fail to compare your model to other models 
⬜   5) If it is theoretical work, you fail to compare your theory to data 
⬜   6) If it is empirical work, you fail to point out the implicit models involved in the data collection
⬜   7) You fail to see or point out how your data representation may be misleading 
⬜   8) You fail to address that your conclusion is politically or academically convenient for you 
⬜   9) You fail to make your data sources and/or software accessible 
⬜   10) You fail to bend over backwards to come up with reasons you might have missed something 
One could get a couple of check marks and still be all right (I'm sure several entries on this blog don't meet every one of these [1]), but three to five is a sign you're dealing with someone who isn't presenting research in good faith (and is instead motivated to sell you something ... an idea, a product).

...

Footnotes:

[1] Every blog entry here passes 1, 4, 5, and 9 automatically via the links in the sidebar (on the desktop version site).

Monday, March 13, 2017

Models: predictions and prediction failures

I think I first became aware of Beatrice Cherrier's work on economic history in January of last year when I read Roger Farmer's post on competing terminology. I recently barged into a Twitter conversation started by Cherrier about critiques of economics [1], and ended up reading her blog, which I recommend. I learned a lot about the history of economics with regard to prediction, especially from this post.

I agree with Cherrier that the "economists failed to predict the crisis" trope (or as it is sometimes put: "mainstream economists ...") is problematic from a scientific standpoint. I listed failures of prediction under the heading of Complaints that depend on framing in my list of valid and invalid critiques of economics. What do we mean by prediction? Conditional forecasts? Unconditional forecasts? Finding a new effect based on theory?

I like to use the example of earthquakes to illustrate this. We cannot predict when earthquakes will happen; however, we can predict where they will generally happen to some degree of accuracy (along fault lines, or increasingly near areas where fracking is used). The plate tectonics model that predicts earthquakes will occur mostly at plate boundaries also explains some observations about the fossil record (fossils are similar in South America and Africa up until the end of the Jurassic). Does this mean earthquakes are predictable or unpredictable? And if unpredictable, do we consider this a failure of the model, or possibly of the field of geophysics?

We do not yet know whether recessions are predictable in any sense. For example, if recessions are triggered by information cascades among agents, they could arise so quickly that no data could conceivably be collected fast enough to predict them. They'd be inherently unpredictable by the normal processes of science. So you can see that an insistence on prediction declares certain kinds of theories (even theories that are accurate in hindsight) to be invalid by fiat.

This possibility sets up a real branding problem for macroeconomics. As I have heard from some commenters on my blog (and generally in the econoblogosphere), a model that is accurate only in hindsight is not useful to most people. This does not mean such a model is unscientific (quantum field theory isn't useful to most people, either), just that a large segment of the population will think the model is failing. As Cherrier points out, this expectation was baked in at the beginning:
Macroeconomics is born out of finance fortune-tellers’ early efforts to predict changes in stock prices and economists’ efforts to explain and tame agricultural and business cycles.
I don't think we've moved beyond this expectation coming from society and politicians. I also think it will be difficult to undo (e.g. by moving towards a view of "economist as doctor" per Simon Wren-Lewis) because macroeconomics deals with issues of importance to society (employment and prices).

Prediction case studies: information equilibrium models

Prediction is an incredibly useful tool in science and can be for economics, but only if the system is supposed to be predictable. Let me show some ways prediction can be used to help understand a system using some examples with the information equilibrium (IE) model I work with on this blog.

In the first example, the dynamic equilibrium model forecast of unemployment depends on whether a recession is coming or not, and produces two different forecasts (shown in gray and red, respectively):


We can see what look like the beginnings of a negative shock (see also here about predicting the size of the global financial crisis shock). This kind of model (if correct) would give leading indicators before a recession starts.

I've used a different model to make an unconditional prediction about the path of the 10-year interest rate:


We can clearly see the shock after the 2016 election. If this model is correct, the evidence of that shock should evaporate in a year or so. This gives us an interesting use case: if the data fails to follow the unconditional forecast, that forecast acts as a counterfactual for the scenario where "nothing happens," allowing one to extract the impacts of policy or other events.

However, there's another example that's more like earthquake prediction: regions where interest rate data sits above the "theoretical ideal" curve (gray) for extended periods (highlighted in green) culminate in a recession (red), much like snow building up on a mountainside usually culminates in an avalanche. The latest data says that we've started to build up snow:


This indicator (if it turns out to be correct) doesn't tell us when a recession happens, only if one will possibly happen. According to this model, the chance of recession went above zero in December of 2015.

In yet another example, I put together a prediction where I actually have no information. The information equilibrium model says that the monetary base will generally fall (or output will increase) with interest rate increases, but doesn't say how fast. Essentially, the model tells us where equilibrium is, but not the non-equilibrium process that arrives there. This use of forecasting is primarily as a learning tool:


In the model above, the base should fall towards C' (C is the Dec 2015 rate hike, C' is the Dec 2016 hike, and the likely March 2017 hike will require a C''), but when it should reach it is an unknown parameter since the monetary base has never been this large before. The prediction in a sense already has one success: the model predicted the base would deviate from the path labeled 0 in the graph.

And in this case, I used prediction performance to reject a modification (adding lags) to a model of inflation. The key point to understand here is that this model wasn't conditional, so a large spike in CPI inflation at the beginning of 2016 was sufficient to reject it:


But a good question to ask is how well can e.g. inflation be predicted? A 2011 study shows that many economic models fail to be as predictive as some simple stochastic processes (or the IE model):


Using the same methodology as the 2011 study, I tested the performance of LOESS smoothing of the data and found the best-case model would probably only score ~0.6 at 6 quarters out.
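
The smoothing step itself is a one-liner. Here's a minimal Python sketch with placeholder data (the 2011 study's forecast-scoring procedure is not reproduced here):

    # Minimal LOESS smoothing sketch using statsmodels' lowess.
    # The series is a placeholder standing in for quarterly inflation.
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    t = np.arange(100, dtype=float)            # quarters
    y = 2.0 + np.sin(t / 8.0) + np.random.default_rng(2).normal(0.0, 0.3, 100)

    y_smooth = lowess(y, t, frac=0.2, return_sorted=False)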

As we can see, prediction is a useful tool to tame the proliferation of macroeconomic models. However, I would stress that prediction is not necessarily the best metric for judging the usefulness of models in understanding economic systems. For example, the unemployment model above is very uncertain about predicting the size and onset of recession shocks due to some basic issues with estimating the parameters of the exponential functions involved. However, if the model is just as accurate post-shock as it was pre-shock, that is an argument that the model simply fails to forecast during recessions (understandable for an equilibrium model). This is useful for science (and potentially policy design); it's just not useful for people who want forecasts.

...

PS All of the information equilibrium model predictions are collected on this page. I've started uploading the Mathematica codes to GitHub repositories linked here.
 
...

Footnotes:

[1] The initial thread was about defenses of the "Econ 101" (or "economism" by James Kwak, or "101ism" by Noah Smith). I wrote what could be considered a defense of "Econ 101" here. I called it Saving the scissors (in reference to supply and demand diagrams, and also a pun on Dan Aykroyd's portrayal of Julia Child on Saturday Night Live where he says "Save the liver"). I proposed that Econ 101 could be defended, but only if one pays close attention to model scope (one entry on my list of "valid" complaints against econ, another of which is prediction per the post above).

S&P 500 forecast versus data: March update

Here is an update of how the S&P 500 forecast is going (the second graph is a zoomed-in view of the recent post-forecast data [black]):