Wednesday, September 26, 2018

The September FOMC meeting

Everyone is talking about the FOMC meeting today (well, actually very few people are) as they've raised interest rates and released a statement that has dropped the word "accommodative". For me, the interesting part is in the forecasts [pdf], which show a slowdown in real GDP (RGDP) over the next few years. Let's compare their forecasts with the dynamic information equilibrium model (DIEM) described in my paper.

First, let's get the dull core PCE inflation forecast out of the way (as always, click to enlarge):


Note: updated with new core PCE data for August 2018 on 1 October 2018.

The dynamic equilibrium over the post-war period is around 1.7% core PCE inflation; the FOMC sees its own target of 2% as the forecast. The purple points represent the FOMC median and the range. The data in black is the post-forecast data for the DIEM forecast from 2017 (not just here, but in all the graphs below). The white point with a black outline represents the annual average for 2017.

RGDP is similar, but the FOMC sees an observable slowdown (given the range of forecasts — i.e. the median drops by more than the range). The DIEM sees most of the fluctuations as noise. Here's the graph (this time the DIEM is in red-orange because reasons):


Note: updated with revised RGDP data 27 September 2018. The change is barely visible [3].

The unemployment forecasts are more complex: the median FOMC forecast sees the unemployment rate continuing to fall just like the DIEM [1] through 2020, when the former starts a bit of an up-tick (natural rate kicking in). The range of FOMC forecasts covers everything from no up-tick to a larger one. The FOMC forecasts are in light blue and the DIEM annual averages are shown as gray bands:


But then again, the FOMC has been forecasting this slight fall followed by a flattening out or up-tick in a couple years for some time [2]:


Overall, this is a status quo forecast from my perspective. But the continued fall in the unemployment rate is probably puzzling the FOMC — especially given the lack of inflation. I personally couldn't imagine reducing my unemployment rate forecast at almost every meeting since September 2015 and not thinking there was a problem with the model I was using.

...

Footnotes:

[1] Note that the FOMC forecast of comparable vintage to the DIEM forecast (early 2017) was wrong:


[2] So has the FRBSF:


[3] Revised RGDP data became available 27 September 2018; the change is barely visible (RGDP growth was reduced a bit with the revision). The original graph and the updated graph are here for completeness:


Thursday, September 13, 2018

What do equations mean?


Arjun Jayadev and J.W. Mason have an article out on INET on what MMT purportedly is, along with the basic fact that MMT policy prescriptions do not differ too much from what the average macroeconomist advises in surveys. My post on this had been sitting around in my drafts since the day I read their article, but it was boring (to me) and seemed likely to set off the MMT hive-mind. However, Jo Michell put up a thread about it and there was a subsequent discussion between him and J. W. Mason that brought up an interesting general question: What do equations mean?

Michell says that it looks like you can get the same results Jayadev and Mason give from a standard DSGE treatment — the resulting equations in both cases are formally similar. I agree with that entirely. After bristling at being told the results are formally equivalent to a DSGE model (the three equation New Keynesian DSGE model), Mason says:
“A can be derived from B (with various ancillary assumptions)” is not the same as “B is just a form of A”, any more than fact that a text can be translated into Latin means it was written in Latin all along.
In a sense, I agree with that as well! I'll get into it later — first I want to talk about what Mason said next (I added some brackets to add context back in because it's lost when quoting a single tweet out of a thread):
People like Cochrane, who [r]eally do believe the sacred texts [i.e. microfoundations], are appropriately scathing on this [i.e. that it's not just the final form that matters]
with a link to a blog post by John Cochrane. Cochrane claims that Old Keynesian (OK) and New Keynesian (NK) models may have the same policy prescriptions, but have entirely different mechanisms leading to them. In the OK model, government spending increases income, and each household's marginal propensity to consume means that increased income yields more output than the original government outlay (a "multiplier"). In the NK model, the additional output arises because government spending increases inflation and households spend now because their income in later periods is going to be worth less because of that inflation. It's essentially due to the equilibrating effect of the "Permanent Income Hypothesis" (PIH). The best blog post on this kind of legerdemain was at Mean Squared Errors, calling it "Houdini's Straitjacket":
Consider the macroeconomist.  She constructs a rigorously micro-founded model, grounded purely in representative agents solving intertemporal dynamic optimization problems in a context of strict rational expectations.  Then, in a dazzling display of mathematical sophistication, theoretical acuity, and showmanship (some things never change), she derives results and policy implications that are exactly what the IS-LM model has been telling us all along.  Crowd — such as it is — goes wild.
The NK DSGE model comes up with the same policy prescriptions as the OK model by essentially escaping the straitjacket of microfoundations.

Of course, John Cochrane is being disingenuous — the requirement for microfoundations was created explicitly to try and prevent fiscal policy from having any effect in macro models. The NK DSGE model comes along and says it can get the same results even if it plays his game. Cochrane complaining that the model doesn't get the same result for the same reasons is effectively admitting that the microfoundations weren't some hard-nosed theoretical approach but rather just a means to stop fiscal policy from having an effect.

When Mason says that it's the "various ancillary assumptions" or Cochrane says it's the mechanisms that make these theories different, they're right. That the OK model, the NK DSGE model, the IS-AS-MP model (per Michell), the Information Equilibrium (IE) NK DSGE model, and MMT say fiscal expansion can raise GDP doesn't mean they are the same theory.

It does mean they are the same effective theory, though.

And that means that the "various ancillary assumptions" don't matter except for the scope conditions (the limits of validity of the model) they imply, the empirical tests they suggest, and the hints they give us about what might happen when a theory fails empirically. How you arrive at a result only matters when you fail to fit the data, or when you succeed and want to make further progress. How you arrive at a result tells you whether contrary empirical observations are out of scope, what the possible failure mechanisms are, and what assumptions you can relax to get more general results or address other questions. For example, DSGE approaches tend to make assumptions like the PIH, so failures of DSGE models will potentially show up as deviations from the PIH. The IE NK DSGE model I link to above contains assumptions about information equilibrium relationships and maximum entropy; the relationships will fail to hold if agents decide to correlate in the state space (opportunity set) — e.g. panic in a financial crisis.

This is all to say that the "various ancillary assumptions" don't matter unless you make empirical tests of your theory. And in this particular case, I want to show that those empirical tests have to be tests of the underlying assumptions, not the resulting policy conclusions. All of those models contain an equation that looks something like Jayadev and Mason's equation (2) relating interest rates, fiscal balance, and output — motivating the policy prescriptions. I'm going to show that equation (2) arises with really no assumptions about output or interest rates, or even what the variables $Y$, $i$, or $b$ mean at all. They could stand for yakisoba, iridium, and bourbon. The only test of that theory would come from trying (and failing) to make yakisoba from iridium and bourbon in the real world.

This is where we get into my original draft post. Let's say I have an arbitrary function $Y = Y(i, b)$ where $i$ and $b$ are independent. It will turn out that the conclusion and policy prescriptions will depend entirely on choosing these variables (and not others) to correspond to the interest rate and fiscal balance. Varying $Y$ with respect to $i$ and $b$ gives me (to leading order in small deviations, i.e. the total differential):

$$
\delta Y = \frac{\delta Y}{\delta i} \delta i + \frac{\delta Y}{\delta b} \delta b
$$

These infinitesimals can be rewritten in terms of deviation from some arbitrary point $(i_{0}, b_{0} ,Y_{0})$, so let's rewrite the previous equation:

$$
Y- Y_{0} = \frac{\delta Y}{\delta i} (i - i_{0}) + \frac{\delta Y}{\delta b} (b - b_{0})
$$

Let's re-arrange:

$$
Y = Y_{0} - \frac{\delta Y}{\delta i}i_{0} - \frac{\delta Y}{\delta b}b_{0} +  \frac{\delta Y}{\delta i} i + \frac{\delta Y}{\delta b} b
$$

The first three terms are what they define to be $A$ (i.e. the value of $Y$ when $b$ and $i$ are zero). Let's multiply the last two terms by $Y/Y \equiv 1$ (i.e. an identity):

$$
Y = A + \frac{1}{Y}\frac{\delta Y}{\delta i} i Y + \frac{1}{Y}\frac{\delta Y}{\delta b} b Y
$$

As stated in their article, "$\eta$ is the percentage increase in output resulting from a point reduction in the interest rate", which means:

$$
\eta \equiv - \frac{100}{100} \frac{1}{Y}\frac{\delta Y}{\delta i} =  - \frac{1}{Y}\frac{\delta Y}{\delta i}
$$

Likewise, the multiplier $\gamma$ is (based on the sign convention for $b$ with deficits being negative):

$$
\gamma \equiv - \frac{1}{Y}\frac{\delta Y}{\delta b}
$$

Therefore:

$$
Y = A - \eta i Y - \gamma b Y
$$

based entirely on the definition of the total differential and some relabeling. This is to say that this equation is entirely content-less [1] in terms of the real world aside from the assertion that the variables correspond to real world observables. Its usefulness (and ability to lead to MMT policy prescriptions) would come not just from estimating the values of $\gamma$ and $\eta$ empirically, but from finding that they are constant and positive. Of course this is going to describe a relationship between $Y$, $i$, and $b$ for changing $\gamma$ and $\eta$ because it essentially reiterates what we mean by functions that change (i.e. calculus). That's why all those models I listed above — such as the NK DSGE — come to a roughly isomorphic result. If you're trying to show fiscal expansion increases GDP, you're going to arrive at something like this equation to leading order.
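If it helps to see that the relabeling really is just bookkeeping, here's a quick symbolic check (a minimal sketch in Python with sympy; the symbols Yi and Yb are my stand-ins for the leading-order partial derivatives $\delta Y / \delta i$ and $\delta Y / \delta b$, which don't appear by name in the derivation above):

```python
from sympy import symbols, simplify

# Stand-ins for the (leading-order, constant) partial derivatives dY/di and
# dY/db, plus the arbitrary expansion point (i0, b0, Y0)
Y0, Yi, Yb, i, b, i0, b0 = symbols('Y0 Yi Yb i b i0 b0')

# Total-differential (leading order) expansion of Y(i, b) around (i0, b0)
Y = Y0 + Yi * (i - i0) + Yb * (b - b0)

# Definitions from the text
A = Y0 - Yi * i0 - Yb * b0    # value of the linearized Y at i = b = 0
eta = -Yi / Y                 # eta   = -(1/Y) dY/di
gamma = -Yb / Y               # gamma = -(1/Y) dY/db

# Y = A - eta*i*Y - gamma*b*Y should hold identically
print(simplify(A - eta * i * Y - gamma * b * Y - Y))   # prints 0
```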

It's the assumption that output depends on fiscal balance and interest rates that leads us here, and so it's only useful if we find empirically that the coefficients are constant — otherwise we can always find some $\eta$ and some $\gamma$ that works. It's the same way $PY = MV$, the monetarist equation of exchange, would be useful if $V$ is constant. Otherwise, it is just a definition of $V$. This fiscal equation is actually a bit worse because it doesn't unambiguously identify $\eta$ and $\gamma$ (different values work for different values of $A$).
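To spell out that comparison, defining velocity as the ratio makes the equation of exchange true by construction:

$$
V \equiv \frac{PY}{M} \;\; \Longleftrightarrow \;\; PY = MV
$$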

Defining terms is often a useful start of a scientific endeavor, but what we have here is a mathematical codification of the assumption that fiscal policy affects GDP, of almost exactly the same form as the old school monetarist assumption that "printing money" affects GDP. The problem is that this is exactly the question at issue: How does a macroeconomy respond to various interventions? MMT prescribes deficit spending in the face of recession "because the output responds positively to fiscal expansion". It's question-begging in the same way that people seem to conduct research into recessions by assuming what a recession is.

In MMT, there appears to be a lot of representing assumptions about how an economy works as math, and then using that math to justify policy prescriptions, which is effectively equivalent to justifying the policy prescriptions by assuming they are correct. I've discussed this before (e.g. here, here, or here) — whether the "various ancillary assumptions" are that government expenditures are private income, that there is no money multiplier, or that the desired wealth to income ratio is reasonably constant, these assumptions are translated into math and then that math is said to justify the assumptions.

The point of Jayadev and Mason's article is that these assumptions are completely in line with mainstream assumptions and models in macroeconomics — and they are. That's the problem. Macro models like DSGE models also make a bunch of assumptions about how an economy works — some of the same assumptions — but then don't end up describing the data very well. And the equations are formally similar, which implies the MMT model won't describe the data very well either. There's more parametric freedom in the MMT approach, so maybe it will do better. What we really need to see is some empirical validation, not arguments that "it actually has things in common with mainstream macro". Mainstream macro isn't very good empirically, so that's not very encouraging.

I have no problems with the policy prescriptions of MMT proponents (austerity is bad and deficits don't matter unless inflation gets crazy), but I do have a problem with the idea that these policy prescriptions arise from something called "Modern Monetary Theory" purported to be a well-defined theory instead of a collection of assumptions. It would go a long, long way towards being useful if it empirically validated some of those equations. Without that empirical validation, all the equations really mean is that you were able to find a way out of your own Houdini's straitjacket constructed from your own assumptions in order to arrive at something you probably already believed.

...

Footnotes:

[1] There's nothing wrong with content-less theory on the surface. Quantum field theory, one of the most successful frameworks in human history for explaining observations, is essentially content-less aside from analyticity and unitarity [pdf]. It's the particle content (electrons, photons, etc) you put in it that gives it its empirical power. A similar situation arises with Kirchhoff's laws: content-less accounting until you add real world circuit elements.

Wednesday, September 12, 2018

Forecasting the Great Recession: the unemployment rate

There's a new white paper from Brookings [pdf] by Donald Kohn and Brian Sack about monetary policy in the Great Recession. In it, they compile the Fed Greenbook forecasts of various vintages and say:
The forecast errors on output and employment that were made by the Fed staff, by FOMC participants, and by nearly all economic forecasters in the profession were massive by historical standards.
I asked: What would the dynamic information equilibrium model (DIEM) have said during this time? I went back and made forecasts of the same vintages (with data available at the time) as the Greenbook forecasts to compare them. There were six forecasts in their graph. I tried to use the exact same formatting as the original Brookings graph, but it turns out that I needed to zoom out a bit as you'll see below. The vertical lines represent the forecast starting points, the blue line is the Greenbook forecast, the gray line (with 90% confidence bands) is the DIEM, and the red line is the actual data as of today. Click to enlarge any of the graphs.

August 2007


The housing bubble was collapsing, but both the Greenbook and the DIEM were forecasting business as usual. The Greenbook forecast said that we were a bit below the natural rate of about 5% and unemployment would rise a bit in the long term. The DIEM business as usual is a continuation of the 8-9% relative decline in the unemployment rate per year (i.e. a 4% unemployment rate would decline a little less than 0.4 percentage points over a year). At this point the JOLTS data and even conceptions (if known at the time) wouldn't have shown anything.
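As a quick worked number using the 9% end of that range:

$$
\frac{d}{dt} \log u \approx -0.09 \; \text{yr}^{-1} \;\; \Rightarrow \;\; \Delta u \approx -0.09 \times 4\% \approx -0.36 \; \text{percentage points over a year}
$$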

March 2008


As I've noted before, sometime between December 2007 and March 2008, the DIEM would have noticed a shock, but would have underestimated its magnitude resulting in a more optimistic scenario than the Greenbook. However, both underestimate the magnitude of the coming recession.

August 2008


By August, the DIEM begins to hint at the possibility of a strong recession. The central forecast is roughly similar to the Greenbook's (both underestimate the recession), but the confidence intervals on the DIEM grow.

October 2008


By October 2008, we've seen the failure of Lehman and the financial crisis is underway. The presidential candidates even have an unprecedented joint meeting in DC. The DIEM is now saying the recession is likely to be bigger than the Greenbook, and the uncertainty now encompasses the actual path of the data. It shows that a recession with higher unemployment than any recession in the post-war era is possible. 

March 2009


By March of 2009, the ARRA had passed and been signed by President Obama. In the UK, the Bank of England began its quantitative easing program, while what would become known as "QE1" was already underway in the US. The Greenbook still underestimates near-term unemployment, but is generally close. However, the DIEM is now over-estimating the size of the recession; as the DIEM doesn't account for policy, it is possible that this represents an estimate of the size of the recession without the stimulus (discussed here).

June 2010


By June 2010, we began to see unemployment at least stop getting worse. By this time the Greenbook and DIEM are both roughly correct. The DIEM would go on to be correct until about mid-2014 when a positive shock would hit the unemployment rate.

Summary

While the center of the DIEM forecast isn't all that much better than the Greenbook forecast, I think the lesson here is mostly about uncertainty and bias. The Greenbook forecast is always biased towards a less severe recession during this period, and I don't think it even estimates confidence bands (I checked and couldn't find them). The DIEM, on the other hand, both underestimates and overestimates the severity, but provides a lot of useful information through wide confidence bands in uncertain times.

Tuesday, September 11, 2018

JOLTS data: no real change

It being September, the JOLTS data for July is now available. Aaaaaand ... it's inconclusive — like most of these individual data point updates. Whatever your prior, you can hold onto it. Openings continues to show a correlated deviation skirting the 90% confidence interval (hinting at a possible recession). The other measures show little deviation (separations somewhat more, hires and quits less). Bring on the graphs (click to enlarge):





Here is the latest interest rate spread tracker (original analysis here):


Note that this last graph is not related to the information equilibrium approach, but is simply tracking a common indicator — yield curve inversion — that I use to motivate the interpretation of the JOLTS data deviation from the models above as possibly indicating a recession. It's basically a linear fit to the path of interest rate spreads during the previous recessions (blue band) with an AR process (red band). The estimated "date" of the counterfactual recession (2019.7) is used as the counterfactual date of the recession in the JOLTS graphs (second vertical line, gray band).
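For the curious, here's a minimal sketch of that kind of tracker in Python (synthetic data and made-up parameters, not the actual spread series, window, or fit behind the graph):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for an interest rate spread (percentage points);
# the real series and parameters in the graph above are different.
rng = np.random.default_rng(0)
t = np.linspace(2015.0, 2018.7, 45)
spread = 2.5 - 0.5 * (t - 2015.0) + rng.normal(0.0, 0.1, t.size)

# Linear fit to the declining path of the spread (the "blue band" idea)
X = sm.add_constant(t)
trend_fit = sm.OLS(spread, X).fit()

# AR process for the residuals around that trend (the "red band" idea)
resid = spread - trend_fit.predict(X)
ar_fit = ARIMA(resid, order=(1, 0, 0)).fit()

# Extrapolating the linear trend to zero gives a rough counterfactual
# "inversion date" analogous to the estimate mentioned above
const, slope = trend_fit.params
print(f"trend crosses zero near t = {-const / slope:.1f}")
```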

Friday, September 7, 2018

Unemployment rate holds steady at 3.9%

The unemployment rate holds steady at 3.9%, which is consistent with the dynamic information equilibrium forecast from January of 2017:


There was also average hourly wages data released that shows something intriguing — if speculative. Average hourly wage growth appears to follow a dynamic information equilibrium with a shock during the Great Recession (much like wage growth), but there's a catch:


It looks like there's a bump during the recession — average hourly wages grew faster for a short period while unemployment was rising. Could this be due to lower wage workers being laid off, making the average wages of those remaining appear higher? Since this data series is short, we can't really tell (which is why I labeled this speculative).
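A stylized example of that composition effect with made-up numbers: suppose 90 workers earn 20 dollars an hour and 10 earn 10 dollars an hour, and half of the low-wage workers are laid off. The average hourly wage of those remaining rises about 2.5% even though no individual got a raise:

$$
\frac{90 \times 20 + 10 \times 10}{100} = 19.00 \;\; \rightarrow \;\; \frac{90 \times 20 + 5 \times 10}{95} \approx 19.47
$$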

Monday, September 3, 2018

Labor day! ... and declining union membership

Not everyone has today off, but I do — probably at least in part due to the fact that I'm among the declining fraction of the US population represented by a private sector union (this one). Inspired by John Handley's tweet proposing a possible mechanism behind declining union membership in the US (a transition from manufacturing to low union density sectors like services), I thought I'd have a look at the data using the dynamic information equilibrium model. I used data via EPI appearing in the graph in this blog. Here is the result:


We have two major shocks centered in 1938.2 ("beginning" in 1934.4) and 1987.7 ("beginning" in 1979.9) with widths computed using the measure discussed in the footnote here. There's an overall "equilibrium" decay rate of 1% per year (the "dynamic equilibrium").
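For readers who want the gist of the functional form, here's a minimal sketch in Python (synthetic data generated from the same form, not the EPI series, and a plain least-squares fit rather than the procedure actually used for the graphs):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_diem(t, c, alpha, a1, t1, w1, a2, t2, w2):
    """Log of the membership rate: a constant 'dynamic equilibrium' slope
    alpha plus two logistic shocks with amplitudes a1, a2, centers t1, t2,
    and widths w1, w2."""
    return (c + alpha * t
            + a1 / (1.0 + np.exp(-(t - t1) / w1))
            + a2 / (1.0 + np.exp(-(t - t2) / w2)))

# Synthetic data using roughly the shock centers quoted above
t = np.arange(1930.0, 2016.0)
true = log_diem(t, 2.0, -0.01, 1.2, 1938.2, 2.0, -1.0, 1987.7, 5.0)
rng = np.random.default_rng(0)
y = true + rng.normal(0.0, 0.02, t.size)

# Fit, seeding the optimizer near the expected shock centers
p0 = [2.0, -0.01, 1.0, 1938.0, 2.0, -1.0, 1988.0, 5.0]
params, _ = curve_fit(log_diem, t, y, p0=p0)
print(dict(zip(['c', 'alpha', 'a1', 't1', 'w1', 'a2', 't2', 'w2'], params)))
```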

So how does this compare to the shock structure of manufacturing employment? Luckily, I already looked into this a few months ago in my post on "robots versus shipping containers" — here's the model and data:


Manufacturing employment shows a large shock from roughly 1970 to the early 90s (with a second, smaller "shipping container" shock in the early 2000s). So John's story holds up against the data (interpreted with this model): a decline in manufacturing causing a decline in unions is consistent with the data. The cause of this decline in manufacturing isn't nailed down by this data — it could be e.g. a shift towards the service sector or moving manufacturing overseas (or both).

However, besides the shocks there is a general decline in the union membership rate. This could be an ambiguity in the data, because there's a second local entropy minimum near 0.0%/year. If we force a 0.0% per year dynamic equilibrium, we get a comparable model fit:


I was able to improve it a bit more by dividing the initial positive shock into two with a pause during WWII:


This gets us very close to the decline in the rate of union employment being essentially coincident with the two shocks to manufacturing.

So we have two possible stories: 
  1. An "equilibrium" decline in union employment (1%/year) with a shock that "begins" in the late 70s lagging behind the shock to manufacturing employment
  2. An "equilibrium" constant rate of union employment (0%/year) that essentially tracks manufacturing employment, with a shock to both beginning in roughly 1970
Story (1) has some consistency with the "deregulation"/"anti-union" narrative of the late 70s and the Reagan era, but still leaves a question of why union employment generally declines.

Story (2) has a better fit with the data, and is consistent with an Occam's razor "single explanation" approach to manufacturing and union employment. It makes sense of the data in Noah Smith's original tweet that John was responding to: union decline happens in many European countries as well. It also kind of obviates the "anti-union" narrative [1]. However, story (2) effectively shifts the question to why manufacturing employment declined in the first place (and also why service sector unions didn't organize in their place [2]). [Per the update below, maybe the manufacturing decline is due to employers being anti-union.]

I like story (2) from a scientific perspective, but story (1) isn't completely ruled out by the data (as interpreted by this model). Oh, uninformative macro data ...

Happy labor day everyone!

...

Update 4 September 2018

John Handley saw that unionization within manufacturing fell to 10-15%, meaning that a sector shift out of manufacturing doesn't explain much of the decline. It could still be a shift within manufacturing. That is to say, a greater loss of unionized manufacturing jobs than non-union ones — i.e. employers facing unionized employees disproportionately moved manufacturing overseas or to anti-union states. This would make sense in the light of the "union derangement syndrome" that causes employers to try and move jobs to so-called "right to work" (i.e. anti-union) states regardless of whether it makes sense — surely highly educated researchers would just love to live in Oklahoma City.

...

Footnotes:

[1] Maybe causality actually goes the other way? Did declining manufacturing jobs cause declining union membership — weakening unions politically — so that politicians could enact anti-union policy?

[2] Maybe the story in Footnote [1] is really 1) decline in manufacturing → 2) decline in unions → 3) decline in political power of unions → 4) politicians enacting anti-union policy → 5) service sector unions prevented from forming.

Saturday, September 1, 2018

Successfully forecast over 1½ years of S&P 500 data

I haven't compared my S&P 500 forecast to data in a few months (last time in early June). The original forecast was made back in January 2017, and the two years will be complete come January 2019. The post-forecast data is shown in black:


The pink band represents the model error, while the blue band (overlaid with pink making it purple) represents an AR process — specifically an ARMA(2,1) process — forecast from the last data point (i.e. taking into account the random component, as discussed in a footnote here). The increase in volatility since the corporate tax cut is definitely visible in the data. There was a poster about volatility regimes presented by Tobias Sichert at ASSA 2018 which is relevant (discussed here), and possibly foreshadowing a future recession ...
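For what it's worth, here's a rough sketch of how an ARMA(2,1) forecast band like that can be generated in Python with statsmodels (synthetic data and a generic trend; the actual analysis applies the ARMA process to the random component around the dynamic equilibrium model rather than to raw prices):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for daily log S&P 500 values; not the actual series
# or the January 2017 model fit.
rng = np.random.default_rng(1)
log_sp500 = 7.5 + np.cumsum(0.0003 + 0.008 * rng.standard_normal(500))

# ARMA(2,1) around a deterministic trend, i.e. ARIMA(2, 0, 1) with 'ct'
model = ARIMA(log_sp500, order=(2, 0, 1), trend='ct').fit()

# Forecast from the last data point with a 90% band (the band level in
# the graph above may differ)
fc = model.get_forecast(steps=60)
mean_path = fc.predicted_mean
band = fc.conf_int(alpha=0.1)
```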

Here's the data over the longer term, putting the forecast in context: