Thursday, November 29, 2018

Ambiguous histories: productivity

Productivity came up on Twitter yesterday, and I put together a quick dynamic information equilibrium model (DIEM) of the utilization-adjusted Total Factor Productivity (TFP) data curated by John Fernald at the FRBSF. First, let me note that I generally think of TFP as phlogiston. However, this case is a good example of potential ambiguity in finding the dynamic equilibrium.

The TFP data is actually pretty well described by the DIEM, but it's possible to effectively exchange the regions of the data that are "shocks" and the regions that are "equilibrium". In this first graph, equilibrium runs from the start of the data series until the 70s and 80s (a negative shock), then holds again until the 2000s, followed by another negative shock after the Great Recession.


This data actually fits Verdoorn's law pretty well, which says (d/dt) log P ~ 0.5 (d/dt) log RGDP; the shocks are deviations away from Verdoorn that just change the level. Additionally, this version has most of the data as equilibrium data (an underlying assumption of the DIEM approach). The fit is only slightly worse (in terms of AIC, BIC, errors, etc.) than this other fit, which sees the 80s and the present as equilibrium with positive shocks in the 50s & 60s as well as during the 2000s (i.e. where the previous fit was "in equilibrium"). That second version says Verdoorn's law was just a coincidence during the 1940s & 50s (when it was hypothesized).
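For readers who want to see the mechanics, here is a minimal sketch of how a fit like this could be set up: log TFP is modeled as a linear trend (the dynamic equilibrium) plus logistic shock terms, and the ambiguity above amounts to whether the best-fit shocks come out negative (70s-80s and post-2008) or positive (50s-60s and 2000s). The synthetic data, the two-shock count, and the starting guesses below are illustrative assumptions, not the actual fit specification used for the graphs.

```python
import numpy as np
from scipy.optimize import curve_fit

def diem_log_tfp(t, a, b, A1, t1, w1, A2, t2, w2):
    """Dynamic equilibrium: log TFP = linear trend + two logistic shocks."""
    shock1 = A1 / (1 + np.exp(-(t - t1) / w1))
    shock2 = A2 / (1 + np.exp(-(t - t2) / w2))
    return a * t + b + shock1 + shock2

# Synthetic quarterly stand-in for the utilization-adjusted TFP level
# (t is measured in years since 1947)
rng = np.random.default_rng(0)
t = np.arange(0.0, 72.0, 0.25)
true_log = diem_log_tfp(t, 0.008, 0.0, -0.06, 30.0, 4.0, -0.04, 62.0, 2.0)
tfp = np.exp(true_log + 0.003 * rng.standard_normal(t.size))

# Starting guesses put negative shocks in the 1970s-80s and after 2008; a
# different set of guesses can converge to the "positive shock" story instead.
p0 = [0.01, 0.0, -0.1, 28.0, 5.0, -0.05, 63.0, 3.0]
params, cov = curve_fit(diem_log_tfp, t, np.log(tfp), p0=p0, maxfev=20000)
```

The point is that two qualitatively different parameter sets can describe the data almost equally well, and which one the optimizer lands on depends on where you start it.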


Of course, there are other reasons to prefer the second fit — e.g. it matches better with the UK data, and it has recognizable events (post-war growth, the financial bubbles). But the best way to show which one is right will be more data. The first fit predicts a return to increased productivity growth soon; if higher growth doesn't materialize, each new data point will require re-estimating the fit parameters for events in the past — a sign your model is wrong. The second predicts continued productivity growth at the lower rate, with any major deviation implying a new shock (not re-estimating the parameters for old shocks).

But still, the math on its own is ambiguous. The difference in AIC isn't enough to definitively select one model over the other. Circumstantial evidence can help, but what's really needed is more time for data to accrue.
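To put a number on "isn't enough", one standard way to translate an AIC difference into relative support is through Akaike weights. The values below are purely hypothetical placeholders (the actual AICs aren't reproduced here); they just show why a difference of a couple of units is suggestive rather than decisive:

```python
import numpy as np

# Hypothetical AIC values for the two fits (placeholders, not the actual results)
aic = np.array([-412.0, -414.5])   # [negative-shock fit, positive-shock fit]

delta = aic - aic.min()            # AIC differences relative to the better fit
weights = np.exp(-0.5 * delta)
weights /= weights.sum()           # Akaike weights: relative support for each model

print(weights)                     # roughly [0.22, 0.78] for a 2.5-unit difference
```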

...

Update 16 May 2019

Not quite enough data has accrued yet ... but the data revisions are more consistent with the fit with shocks in the 60s and 2000s:



...

Update 15 November 2019

Still looking more like the "positive 2000s shock" than the "negative 2010s shock":




Update 24 February 2021

I'm declaring the 2000s positive shock the winner:



Wednesday, November 28, 2018

Third quarter GDP numbers

No lunch break today, so I'm late with these updates. The Q3 GDP numbers and related metrics are out. No surprises, but here are the various forecasts and head-to-heads I'm tracking.

First, here's RGDP growth and inflation from the FRBNY DSGE model and the dynamic information equilibrium model (DIEM) (click to enlarge):


The post-FRBNY forecast data is in black there. Here's RGDP growth over the entire DIEM forecast period (black is post-DIEM forecast data) alongside the FOMC forecast (annual averages):



Tuesday, November 27, 2018

I don't trust Granger causality

In my travels through the econoblogosphere and econ Twitter, I've come across mentions of Granger causality from time to time. I do not trust it as a method for determining causality between time series.

If you need to know what it is, the Wikipedia article on it is terrible; you should instead refer to either Toda and Yamamoto (1995) or Dave Giles' excellent description of what the process actually involves whenever you have cointegrated series, which is pretty much all the time in macro.

However, Granger causality was developed before the idea of cointegration. From Granger's Nobel lecture [pdf]:
When the idea of cointegration was developed, over a decade later, it became clear immediately that if a pair of series was cointegrated then at least one of them must cause the other. 
Or as Dave Giles put it:
Both of these ... time-series have a unit root, and are cointegrated ..., we know that there must be Granger causality in one direction or the other (or both) between these two variables.
Since almost every macro time series is cointegrated, you can always find Granger causality one way or another (or both), and you really have to work at it (basically follow the entire process Giles describes). There are lots of interim results needed to judge whether or not you can trust the conclusion, from determining cointegration to choosing the number of lags to the results of the test in both directions.
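To give a sense of that legwork, here is a simplified sketch of the kind of workflow Giles describes, written in Python with statsmodels. It is not the full Toda-Yamamoto recipe: the proper version fits a VAR augmented by the maximum order of integration and Wald-tests only the original lags, which the convenience function at the end does not do, so treat this as an outline of the interim steps rather than a finished test.

```python
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

def rough_granger_workflow(x, y, max_lags=8):
    """Simplified outline of the interim steps (not the full Toda-Yamamoto test).

    x, y : 1-d arrays for the two (possibly cointegrated) series.
    """
    # 1. Check the order of integration of each series (ADF test on levels).
    for name, series in [("x", x), ("y", y)]:
        stat, pval, *_ = adfuller(series)
        print(f"ADF p-value for {name}: {pval:.3f}")

    # 2. Pick a lag length with an information criterion on the VAR in levels.
    data = pd.DataFrame({"x": x, "y": y})
    p = VAR(data).select_order(maxlags=max_lags).aic
    print(f"selected lag order (AIC): {p}")

    # 3. Run the test in BOTH directions and judge each set of interim results.
    #    (Toda-Yamamoto would add extra lags here and test only the first p.)
    print("does x Granger-cause y?")
    grangercausalitytests(data[["y", "x"]], maxlag=max(p, 1))
    print("does y Granger-cause x?")
    grangercausalitytests(data[["x", "y"]], maxlag=max(p, 1))
```

Even this stripped-down version produces a pile of interim output (unit root p-values, a lag order, two sets of test statistics) that all has to be checked before the headline "X Granger-causes Y" claim means anything.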

Even then, there are things that will pass Granger causality tests but represent logical fallacies or bend our notion of what we mean by causality. Below, I give some examples using the dynamic information equilibrium approach — which turns out to provide a much better metric for causality.

Let's say we have two ideal cointegrated series where the noise is much, much smaller than the signal. The only thing adding noise does is make the p-values worse.


The way these two series are set up, only the first (blue) could potentially Granger-cause the second (yellow) because the second is effectively constant (after first differences or subtracting out the linear trend to remove the cointegration) for all times t < 0. Therefore, by construction, we'd only have to test that yellow depends on a possible linear combination of its own lagged values and the lagged values of the blue series since blue cannot depend on lagged values of yellow (they're all approximately constants after first differences or zero after subtracting the linear trend). And depending on the temporal resolution, the yellow curve does not strongly depend on lagged values of itself. This sets up a scenario where Granger causality is effectively satisfied if we can represent the yellow curve in terms of lagged values of the blue curve.
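Here is a minimal sketch of that construction: two series sharing a linear trend (so they are cointegrated), each with a logistic shock, and the blue shock centered earlier than the yellow one. The specific centers, widths, noise level, and lag count are illustrative choices, not values taken from the figures.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
t = np.arange(-50, 51)                     # integer "temporal resolution"

def logistic_shock(t, center, width, size=1.0):
    return size / (1 + np.exp(-(t - center) / width))

trend = 0.02 * t                           # shared trend, so the levels are cointegrated
blue = trend + logistic_shock(t, 0, 3) + 1e-3 * rng.standard_normal(t.size)
yellow = trend + logistic_shock(t, 7, 2) + 1e-3 * rng.standard_normal(t.size)

# First differences remove the trend; the second column is the candidate cause,
# so this asks whether blue Granger-causes yellow.
data = pd.DataFrame({"yellow": np.diff(yellow), "blue": np.diff(blue)})
grangercausalitytests(data, maxlag=7)
```

By construction the blue series can't depend on lagged yellow values (the yellow series is effectively flat until well after the blue shock), so the only question the test can answer is whether lagged blue values help predict yellow.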

Here are the derivatives after subtracting the linear trend; the green curve is the blue curve shifted to the center of the yellow (it's still a bit wider). A representation in terms of an "economic seismogram" appears above the curves.


Except in cases where there is too much noise, too little temporal resolution, or the blue shock is much wider than the yellow one, the yellow shock can nearly always be reconstructed as a linear combination of the lagged values of the blue curve (the logistic shocks have approximately Gaussian derivatives, which can be used in linear combination as in e.g. smooth kernel estimation). For example, with integer lags the fit achieves p < 0.01 at lag 7 (the green curve is the linear combination of the lagged blue curve):



This is great, because it means that — except in extreme cases — an economic seismogram where one shock precedes another is sufficient to satisfy Granger causality. But it is also problematic for Granger causality for exactly the same reason I wouldn't use a single shock preceding another as the sole basis for causality: the post hoc ergo propter hoc fallacy. Granger causality is effectively a test of whether changes in one series precede changes in another, and calling it "causality" is problematic for me. A better wording for a successful test, in my opinion, would be "Granger comes before" rather than "Granger causes". However, a failure of the test (i.e. the yellow curve does not "Granger cause" the blue one) is a more robust causality result because it is based on physical causality — it is literally impossible for an event outside another event's past light cone to have caused that event. As Dave Giles puts it, it's really a test of Granger non-causality: it does more to affirm that the yellow curve did not cause the blue one than to affirm that the blue one caused the yellow one.
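As an aside, the reconstruction itself is nothing exotic: it is a least-squares fit of the yellow curve's derivative to lagged copies of the blue curve's derivative, which are approximately Gaussian bumps acting like a kernel basis. Here is a sketch with illustrative centers and widths (not the values behind the figures):

```python
import numpy as np

t = np.arange(-50, 51)

def gaussian_bump(t, center, width):
    # The derivative of a logistic shock is approximately Gaussian
    return np.exp(-((t - center) ** 2) / (2 * width ** 2))

d_blue = gaussian_bump(t, center=0, width=3)    # earlier, slightly wider shock
d_yellow = gaussian_bump(t, center=7, width=2)  # later, narrower shock

max_lag = 10
# Design matrix: lagged copies of the blue derivative
X = np.column_stack([np.roll(d_blue, k) for k in range(1, max_lag + 1)])
X, y = X[max_lag:], d_yellow[max_lag:]          # drop rows affected by the roll wrap-around

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
green = X @ coef                                # the "green" reconstruction of the yellow shock
print(np.linalg.norm(green - y) / np.linalg.norm(y))   # relative reconstruction error
```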

But it gets weirder if we give the earlier shock a different shape than the later one (the darker bands are negative in the economic seismogram, green is simply the shifted version of the blue function again [update: replaced with correct figure]):


It is actually possible to fit the lagged blue function to the yellow one well enough to achieve p < 0.01 for the coefficients:


You can do even better at higher temporal resolution (which also allows more lags):


This different-shaped shock also satisfies Granger causality (the blue series Granger-causes the yellow series), but I would say that we should definitely have less confidence in the causality here — it really is more a case of the blue shock "Granger coming before" the yellow one. I would have more confidence if there were, e.g., two shocks as in this case:


What is also strange is that a single shock can Granger-cause a pair of later shocks:


Again, I'd really just say that the blue shock "Granger comes before" the yellow ones (despite the green fit being almost perfect).

Anyway, those are some of the reasons why I don't really trust Granger causality as a method, especially when there are limited numbers of events (common in macro) — unless it's Granger non-causality, which is fine!

If I were up on my real analysis, I'd probably try to prove a theorem that says economic seismograms satisfy Granger causality, and under which conditions. The temporal resolution needs to be high enough, the noise needs to be low enough, and the earlier shocks need to be sufficiently narrow, but I don't have specific relationships. The last condition actually depends on the temporal resolution (i.e. increasing the number of samples and the number of lags eventually allows a wide shock to Granger-cause a narrow one). But I think a good takeaway here is that reasoning with these diagrams using multiple shocks is actually better than Granger causality.

Wednesday, November 14, 2018

Data dump: JOLTS, CPI

Checking in on the dynamic information equilibrium model forecasts, everything is pretty much status quo. The JOLTS hires data [1] is showing even fewer signs of a recession than before, but job openings are still on a biased deviation. Based on this model, which puts hires as a leading indicator, we should continue to see the unemployment rate fall through February of 2019 (5 months from September 2018), at which point it will be 3.8 ± 0.2% (90% CL) [2]. Additionally, CPI inflation is well within expected values. And finally, the S&P 500 forecast is still on a negative deviation, but within the norms of market fluctuations. As always, click to enlarge.

JOLTS




CPI inflation (all items)


S&P 500


Footnotes:

[1] The old hires model without the 2014 mini-boom is here:


[2] October's 3.7% was on the low end of the CL — it was expected to be 3.9 ± 0.2 % (90% CL), so there might be a bit of mean reversion between now and March (when the February numbers come out).

Unions, inequality, and labor share

I've started writing the first draft of my next book, so I've been trying to gather up all the dynamic information equilibrium model results into economic seismograms [1] to provide a complete picture. In the process, there have been some unexpected insights — this time about unions and their effect on inequality. Here's the seismogram in the new style that can be displayed on a Kindle [2] (click to enlarge):


This shows the civilian labor force (women), wages, manufacturing employment (as a fraction of total employment), the labor share of output (nominal wages/NGDP), unionization, and income inequality (using Emmanuel Saez's data).

One of the interesting things I noticed was that unionization and inequality show almost exactly the same pattern: each bump up in unionization sees a bump down in inequality a few years later, and the decline of unionization in the 80s is followed by rising inequality in the 90s.

What's also interesting is that the decline in the labor share of output starts before unionization declines — i.e. a decline in unions wasn't the predominant way labor lost its share of output. I've talked about my hypothesis for a more likely causal factor before: labor share declined as women entered the workforce because the US pays women less than men. A rough order-of-magnitude calculation, where capital just pockets the extra 30 cents on the dollar it saves by hiring a woman, gets the expected decline in labor share about right.
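Here is what that back-of-the-envelope arithmetic looks like with round numbers; the shares and pay ratio below are stylized values assumed for the sketch, not fitted results from the models above.

```python
# Back-of-the-envelope: capital pockets the pay gap as women enter the workforce.
# All numbers are rough, illustrative values rather than fitted model results.

women_share_1950 = 0.30     # women as a fraction of the labor force, mid-century (approx.)
women_share_2000 = 0.47     # a few decades later (approx.)
pay_ratio = 0.70            # women paid roughly 70 cents on the dollar
initial_labor_share = 0.62  # rough starting labor share of output

delta_women = women_share_2000 - women_share_1950      # ~0.17
wage_bill_cut = (1 - pay_ratio) * delta_women          # ~5% of the wage bill
implied_drop = initial_labor_share * wage_bill_cut     # ~3 percentage points of NGDP

print(f"implied decline in labor share: {implied_drop:.3f}")
```

A few percentage points is the kind of order-of-magnitude agreement the paragraph above refers to.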

...

Update:

The unionization model is discussed here:


And here are the models of inequality and labor share (also here for the latter):

...

Footnotes

[1] One of the other things I realized in the process was that it's not seismograph, which is the machine, but seismogram. I went back on the blog and corrected all the references in the posts.

[2] Comments are welcome, but be sure to click to see the higher resolution version. Dark bands are negative shocks (or "bad" shocks), while the lighter bands are positive (or "good" shocks).

Tuesday, November 6, 2018

A workers' history of the United States 1948-2020

On my book blog, I'm starting up the next book tentatively titled A workers' history of the United States 1948-2020 based on some of the dynamic information equilibrium model results and macroeconomic seismograms. Take a look ...


I'll say similar things for half the salary

Jan Hatzius made some macro projections about wages, unemployment, and inflation:
Goldman’s Jan Hatzius wrote Sunday that unemployment should continue to decline to 3% by early 2020, noting the labor market also has room to accommodate more wage growth. Hatzius predicted that average hourly earnings would likely grow in the 3.25% to 3.50% range over the next year. ... For now, Goldman has a baseline forecast of 2.3% for core PCE ...
Well, these are all roughly consistent with Dynamic Information Equilibrium Model (DIEM) forecasts from almost two years ago (early 2017, except for the wage growth forecast, which is from the beginning of this year). Hatzius' unemployment forecast is a bit lower (I'm currently guessing there will be a recession that will begin to raise unemployment in the 2020 time frame based on JOLTS data, making both of these forecasts effectively "counterfactuals"). His wage forecast is consistent but biased low compared to the DIEM, while his inflation forecast is consistent but biased high compared to the DIEM.

Of course, there's a hedge:
Hatzius said that the economic outlook is still subject to change from a number of geopolitical factors, such as the U.S. midterm elections on Tuesday [today] ...
The DIEM forecasts will generally only change if there is a recession, but as we haven't seen any real impact on JOLTS hires (see here), we should continue to see the unemployment rate fall through January of 2019 (5 months from August 2018) and wage growth increase through July 2019 (11 months from August 2018).

Here are the graphs — click to enlarge:




Sunday, November 4, 2018

Construction hiring, the Great Recession, and the ARRA

In the previous post, I talked about a drop in construction hiring (a JOLTS subcategory) as a leading factor in the Great Recession. Compared to the rest of the macroeconomic observables, construction hires is first to fall. Here's the model for the full data set, which includes an additional bump that seems very likely due to the fiscal stimulus of the American Recovery and Reinvestment Act (ARRA, aka the "Obama Stimulus"):


Interestingly, the NBER recession (light orange bands) cuts off right when the boost in construction hiring begins. None of the other JOLTS series show this bump at this time. Some, like health care job openings (and unemployment), show a positive bump starting in 2014 along with the ACA, aka "Obamacare". Overall JOLTS hires (i.e. across industries) shows a bump in 2014 as well (the 2014 "mini-boom").

Saturday, November 3, 2018

An information equilibrium history of the Great Recession

I mentioned at the beginning of this year on my book website that I was thinking about writing another book about the macro history of the US as told through dynamic information equilibrium and the resulting economic seismograms. I've been collecting the various models on this blog to put them together into graphics that tell at least one version of history. Previously, I've given evidence that women entering the workforce leads nearly every other measure of growth and inflation in the 70s and 80s. Lately, I've been working on the Great Recession. Here's the seismogram (click to enlarge):


Red-orange indicates negative (i.e. bad) shocks, while blue indicates positive (i.e. good) shocks (rising unemployment is "bad", but rising income is "good"). The labels are identified in footnote [1].

While much of the focus of commentary about the recession was on the Lehman collapse and the Fed meetings immediately preceding it (along with the fall in the stock markets as measured by the S&P 500), these actually come in the middle of the recession process. The first thing that happens, by far, is the drop in hires in construction (labeled "HIR 2300" based on the JOLTS code) in mid-2006. Around that time, Paul Krugman (e.g.) was talking about a housing bubble deflating (he had been forecasting it earlier, in mid-2005) [2]. The shock to housing starts (HS) doesn't come until later (though the shock to starts occurs over a longer period, you can see that hires begin to decline just before housing starts do). The drop in construction hires also comes right before the halt in the Fed rate increases that had started in 2004.

Before the NBER-defined recession gets underway, there's a drop in conceptions (per this NBER working paper) that's roughly coincident with (but genuinely followed by) two Fed conference calls in 2007 about the financial markets reeling from the collapsing housing bubble (the negative shock to the Case Shiller index), as well as the first Fed rate cut. The rest of the things associated with a recession in the media (stock market drops, GDP declining, the unemployment rate rising) all come much later, during the NBER-defined recession.

Personal income (PI) continues to climb ahead of its typical pace through most of 2007, and wage growth continues to increase (i.e. accelerate) almost until the end of the NBER recession.

While I've heard many stories about excessive debt being a cause behind the Great Recession, most of the negative shocks to debt measures come later (i.e. debt became a problem because of the recession). Although not shown in this graph, consumer credit takes a hit only as the NBER recession is ending. This is not to say that debt levels didn't contribute to the size of the recession (i.e. making it worse), but rather that they didn't contribute to its timing (i.e. causality).

Any causality analysis would put construction hires at the beginning of the story, but oddly the shock to construction job openings comes along with the rest of the recession — barely leading the shock to job openings of all kinds. In fact, there's a surge in openings around the same time. It's the largest difference in timing for all the JOLTS sectors. That is to say, jobs were still being advertised in 2006 (until 2008), just fewer people were being hired. This doesn't indicate pessimism about the housing market (which seems like it would show up as a fall in openings), but rather a labor shortage of some kind. Were employers unwilling to raise wages? Unemployment had reached its lowest level since before the 2001 recession, so maybe there was a genuine shortage of workers.

Was it xenophobia?


I am going to offer a speculative answer that I do not think I have ever seen offered as a possible reason for the Great Recession: xenophobia. There was a series of protests beginning in March of 2006 against anti-immigrant legislation being introduced (some of which passed, and in various jurisdictions E-Verify was mandated in 2006 to prevent employers from hiring undocumented workers). The shock to construction hires begins right around the same time as those March protests, and every year since 2004 saw a decrease in immigration from Mexico:


The linked article doesn't get this causality right:
Immigration from Mexico dropped after the U.S. housing market (and construction employment) collapsed in 2006. By 2007, gross inflows from Mexico dipped to 280,000; they continued to fall to 150,000 in 2009 and were even lower in 2010.
According to their data, immigration started dropping before 2006 (the peak is in 2004), but given the noise in the data and the annual temporal resolution, the best we can say is that construction employment and immigration from Mexico dropped approximately concurrently.

I have written before about how much of an effect a drop of 2 million people in the labor force due to immigration restrictions would have — about 1 trillion dollars in NGDP. Assuming a linear trend past 2007 in the increase in just undocumented immigrants (using Pew data), by 2009 there were 1.8 million fewer undocumented immigrants (11.3 million) than would be expected from the trend (13.1 million). While more detail would need to be added (accounting for the decline in documented immigration as well as the fraction of those two populations in the labor force), this gives us an order of magnitude that is not trivial compared to the size of the Great Recession.
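The extrapolation behind those numbers is simple; here is a sketch using approximate Pew-style anchor points (the 2000 and 2007 values are my rough approximations used only to set the pre-2007 trend, while the 13.1 and 11.3 million figures are the ones quoted above):

```python
import numpy as np

# Approximate Pew-style estimates of the US undocumented population (millions)
years = np.array([2000.0, 2007.0])
population = np.array([8.6, 12.2])

slope = np.diff(population)[0] / np.diff(years)[0]    # ~0.5 million per year
trend_2009 = population[-1] + slope * (2009 - 2007)   # ~13.2 million (vs. ~13.1 quoted)

actual_2009 = 11.3                                    # value quoted above
shortfall = trend_2009 - actual_2009                  # ~1.9 million (vs. ~1.8 quoted)
print(f"trend for 2009: {trend_2009:.1f}M, shortfall: {shortfall:.1f}M")
```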

Again, this is speculative. However, it is not implausible that the anti-immigrant sentiment of the mid-2000s ended the "housing bubble". Employers continued to look for workers in construction, but suddenly were unable to hire as many starting in 2006 due to declining immigration. The worst-hit states in the housing crash were California, Arizona, Nevada, and Florida — the first three being major destinations for documented and undocumented immigrants from Mexico. Since even undocumented immigrants spend money at the same grocery stores you do, sales decline. Declining construction hires are followed by fewer housing starts, and when a new family can't find a bigger house with more rooms, they'll not only delay having children but also hold off on that house. Housing prices decline from their peak, but by now the general economic outlook is mediocre enough that the Fed starts to lower interest rates in 2007. Pessimism sets in, along with the rest of the recession and a financial crisis that goes global.

...

Update 6 November 2018

A correspondent sent me a link to some work by Kevin Erdmann about how there was actually an under-supply of housing going into the 2008 recession. Now, Erdmann is writing for Mercatus, which generally means there is a possibility of an ideological slant, or at least a particular view of how economies work. Here, that reasoning is an attempt to say there was no housing bubble because there was a "fundamental" reason (short supply). But then, there was a limited supply of tulip bulbs as well. If there was no housing bubble, then it's arguable that the Fed had unnecessarily tight monetary policy (i.e. the desired conclusion in this case). Seeing as monetary policy tends to lag other measures, it's probably not the cause (but may e.g. contribute to the broader conditions and the depth of the recession).

I also want to emphasize that it is very unlikely the shock to construction hires was the only causal factor. I see it more as a trigger, or the straw that broke the camel's back — in an environment of higher interest rates and general pressure from policymakers to cool the housing market, a sudden shock to labor supply makes that "cooling" suddenly look worse in a way that could change one's outlook. In the information equilibrium approach, it's sudden coordinated action (e.g. panicking) causing agents to cluster in the state space that causes recessions. Sometimes that coordinating signal is the Fed, but it could easily be a shock to labor supply due to an unwarranted immigration freak-out.

...

Footnotes:

[1] The labels are:

HIR 2300: JOLTS hires, construction (JTS2300HIR)
HS: Housing Starts (HOUST)
C/F: Conceptions/fertility
Case Shiller: Case Shiller housing price index (also here)
HIR: JOLTS hires
HIR-ext: Extended JOLTS hires data
JOR 2300: JOLTS job openings, construction (JTS2300JOR)
Debt growth: Growth of debt (All Sectors; Debt Securities and Loans; Liability, Level)
JOR: JOLTS job openings rate
JOR Barnichon: Job openings in data from Barnichon (2010) [pdf]
QUR: JOLTS quits
SP500: S&P 500
U: U3 unemployment rate
PI: Personal income
PCE: Personal consumption expenditures
NGDP: Nominal Gross Domestic Product
W: Wage growth (Atlanta Fed)
Debt to GDP: Ratio of previous debt measure to NGDP
CLF: Civilian labor force (CLF16OV)



The arrows on the top of the diagram indicate the two Fed meetings (black arrows) prior to the Lehman collapse (red arrow). The arrows on the bottom of the diagram show the first Fed rate increase since the 2001 recession, the beginning of the period of steady rates (mid-2006 to mid-2007), and the first rate cut going into the 2008 recession.



[2] I'm not trying to make any point here about "who saw the crisis coming" — only citing some news that I remembered from the time for context.

Friday, November 2, 2018

Unemployment: forecasts and reality

The unemployment rate came in unchanged for October at 3.7%, which is still about 0.1 percentage points below the forecast — but it's a forecast from January 2017, so not bad for almost two years.


It looks even better compared to the competition. The FOMC and FRBSF forecasts of the same vintage definitely didn't capture the decline, with the latter being off by about a full percentage point (click to enlarge):



New fair forecast comparisons


I also put together some new (fair) comparisons with projections from the CBO, FRBSF, and FOMC starting in 2018. Note that I actually expect the path of unemployment to follow something like one of the "recession" curves in the CBO forecast graph because of what look like leading signs of a recession in the JOLTS data (which lead unemployment by several months). In the meantime, all we can do is project what the model shows and wait for the signal to appear in unemployment data [1].





...

Footnotes:

[1] See this post — here's an example for the previous recession and 2014 positive shock: