Thursday, September 13, 2018

What do equations mean?


Arjun Jayadev and J.W. Mason have an article out on INET about what MMT purportedly is, along with the basic fact that MMT policy prescriptions do not differ much from what the average macroeconomist advises in surveys. My post on this had been sitting in my drafts since the day I read their article, but it was boring (to me) and seemed likely to set off the MMT hive-mind. However, Jo Michell put up a thread about it, and there was a subsequent discussion between him and J. W. Mason that brought up an interesting general question: What do equations mean?

Michell says that it looks like you can get the same results Jayadev and Mason give from a standard DSGE treatment — the resulting equations in both cases are formally similar. I agree with that entirely. After bristling at being told the results are formally equivalent to a DSGE model (the three-equation New Keynesian DSGE model), Mason says:
“A can be derived from B (with various ancillary assumptions)” is not the same as “B is just a form of A”, any more than fact that a text can be translated into Latin means it was written in Latin all along.
In a sense, I agree with that as well! I'll get into it later — first I want to talk about what Mason said next (I added some brackets to add context back in because it's lost when quoting a single tweet out of a thread):
People like Cochrane, who [r]eally do believe the sacred texts [i.e. microfoundations], are appropriately scathing on this [i.e. that it's not just the final form that matters]
with a link to a blog post by John Cochrane. Cochrane claims that Old Keynesian (OK) and New Keynesian (NK) models may have the same policy prescriptions, but have entirely different mechanisms leading to them. In the OK model, government spending increases income and each household's marginal propensity to consume means that increased income yields more output than the original government outlay (a "multiplier"). In the NK model, the additional output arises because government spending increases inflation, and households spend now because their income in later periods is going to be worth less because of that inflation. It's essentially due to the equilibrating effect of the "Permanent Income Hypothesis" (PIH). The best blog post on this kind of legerdemain was at Mean Squared Errors, calling it "Houdini's Straitjacket":
Consider the macroeconomist.  She constructs a rigorously micro-founded model, grounded purely in representative agents solving intertemporal dynamic optimization problems in a context of strict rational expectations.  Then, in a dazzling display of mathematical sophistication, theoretical acuity, and showmanship (some things never change), she derives results and policy implications that are exactly what the IS-LM model has been telling us all along.  Crowd — such as it is — goes wild.
The NK DSGE model comes up with the same policy prescriptions as the OK model by essentially escaping the straitjacket of microfoundations.

Of course, John Cochrane is being disingenuous — the requirement for microfoundations was created explicitly to try and prevent fiscal policy from having any effect in macro models. The NK DSGE model comes along and says it can get the same results even if it plays his game. Cochrane complaining that the model doesn't get the same result for the same reasons is effectively admitting that the microfoundations weren't some hard-nosed theoretical approach but rather just a means to stop fiscal policy from having an effect.

When Mason says that it's the "various ancillary assumptions" or Cochrane says it's the mechanisms that make these theories different, they're right. That the OK model, the NK DSGE model, the IS-AS-MP model (per Michell), the Information Equilibrium (IE) NK DSGE model, and MMT say fiscal expansion can raise GDP doesn't mean they are the same theory.

It does mean they are the same effective theory, though.

And that means that the "various ancillary assumptions" don't matter except for the scope conditions (the limits of validity of the model) they imply, the empirical tests they suggest, and the hints they give us as to what might happen when a theory fails empirically. How you arrive at a result only matters if you fail to fit the data, or if you are trying to make further progress after succeeding in fitting it. How you arrive at a result tells you whether contrary empirical observations are out of scope, what the possible failure mechanisms are, what assumptions you can relax to get more general results, or how to address other questions. For example, DSGE approaches tend to make assumptions like the PIH, so failures of DSGE models will potentially show up as deviations from the PIH. The IE NK DSGE model I link to above contains assumptions about information equilibrium relationships and maximum entropy; the relationships will fail to hold if agents decide to correlate in the state space (opportunity set) — e.g. panic in a financial crisis.

This is all to say that the "various ancillary assumptions" don't matter unless you make empirical tests of your theory. And in this particular case, I want to show that those empirical tests have to be tests of the underlying assumptions, not the resulting policy conclusions. All of those models contain an equation that looks something like Jayadev and Mason's equation (2) relating interest rates, fiscal balance, and output — motivating the policy prescriptions. I'm going to show that equation (2) arises with really no assumptions about output or interest rates, or even what the variables $Y$, $i$, or $b$ mean at all. They could stand for yakisoba, iridium, and bourbon.

This is where we get into my original draft post. Let's say I have an arbitrary function $Y = Y(i, b)$ where $i$ and $b$ are independent. It will turn out that the conclusion and policy prescriptions will depend entirely on choosing these variables (and not others) to correspond to the interest rate and fiscal balance. Varying $Y$ with respect to $i$ and $b$ gives me (to leading order in small deviations, i.e. the total differential):

$$
\delta Y = \frac{\partial Y}{\partial i} \delta i + \frac{\partial Y}{\partial b} \delta b
$$

These infinitesimals can be rewritten in terms of deviations from some arbitrary point $(i_{0}, b_{0}, Y_{0})$, so let's rewrite the previous equation:

$$
Y - Y_{0} = \frac{\partial Y}{\partial i} (i - i_{0}) + \frac{\partial Y}{\partial b} (b - b_{0})
$$

Let's re-arrange:

$$
Y = Y_{0} - \frac{\partial Y}{\partial i} i_{0} - \frac{\partial Y}{\partial b} b_{0} +  \frac{\partial Y}{\partial i} i + \frac{\partial Y}{\partial b} b
$$

The first three terms are what they define to be $A$ (i.e. the value of $Y$ extrapolated to $b = i = 0$). Let's insert a factor of $Y/Y \equiv 1$ (i.e. an identity):

$$
Y = A + \frac{1}{Y}\frac{\partial Y}{\partial i} i Y + \frac{1}{Y}\frac{\partial Y}{\partial b} b Y
$$

As stated in their article, "$\eta$ is the percentage increase in output resulting from a point reduction in the interest rate", which means:

$$
\eta \equiv - \frac{100}{100} \frac{1}{Y}\frac{\partial Y}{\partial i} =  - \frac{1}{Y}\frac{\partial Y}{\partial i}
$$

Likewise, the multiplier $\gamma$ is (based on the sign convention for $b$ with deficits being negative):

$$
\gamma \equiv - \frac{1}{Y}\frac{\partial Y}{\partial b}
$$

Therefore:

$$
Y = A - \eta i Y - \gamma b Y
$$

based entirely on the definition of the total differential and some relabeling. This is to say that this equation is entirely content-less [1] in terms of the real world aside from the assertion that the variables correspond to real world observables. Its usefulness (and ability to lead to MMT policy prescriptions) would come not just from estimating the values of $\gamma$ and $\eta$ empirically, but from finding that they are constant and positive. Of course this is going to describe a relationship between $Y$, $i$, and $b$ for changing $\gamma$ and $\eta$, because it essentially reiterates what we mean by functions that change (i.e. calculus). That's why all those models I listed above — such as the NK DSGE — come to a roughly isomorphic result. If you're trying to show fiscal expansion increases GDP, you're going to arrive at something like this equation to leading order.
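To make the "content-less" claim concrete, here is a minimal numerical sketch (the function and every number in it are arbitrary placeholders, not an economic model): take any smooth function of two variables, linearize it around a point, and the rearranged equation above holds identically with suitably defined $\eta$ and $\gamma$.

```python
# Minimal check that the rearrangement above is an identity for *any* smooth
# function; f and all numbers here are arbitrary placeholders.
import numpy as np

def f(i, b):
    # stand-in for Y(i, b); it could just as well be yakisoba vs. iridium and bourbon
    return 100.0 * np.exp(-0.2 * i) * (1.0 + 0.1 * np.sin(b))

i0, b0 = 4.0, -2.0            # arbitrary expansion point
i1, b1 = 4.3, -1.8            # nearby point at which to evaluate

eps = 1e-6                    # numerical partial derivatives at (i0, b0)
dYdi = (f(i0 + eps, b0) - f(i0 - eps, b0)) / (2 * eps)
dYdb = (f(i0, b0 + eps) - f(i0, b0 - eps)) / (2 * eps)

Y = f(i0, b0) + dYdi * (i1 - i0) + dYdb * (b1 - b0)   # leading-order (linearized) Y
A = f(i0, b0) - dYdi * i0 - dYdb * b0                 # extrapolation to i = b = 0
eta, gamma = -dYdi / Y, -dYdb / Y                     # note: both depend on Y itself

print(Y, A - eta * i1 * Y - gamma * b1 * Y)           # agree to floating point precision
```

Swap in any other smooth function for f and the two printed numbers still agree, which is the sense in which the equation carries no empirical content on its own.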

It's the assumption that output depends on fiscal balance and interest rates that leads us here, and so it's only useful if we find empirically that the coefficients are constant — otherwise we can always find some $\eta$ and some $\gamma$ that work. It's the same way $PY = MV$, the monetarist equation of exchange, would be useful if $V$ were constant. Otherwise, it is just a definition of $V$. This fiscal equation is actually a bit worse because it doesn't unambiguously identify $\eta$ and $\gamma$ (different values work for different values of $A$).

Defining terms is often a useful start of a scientific endeavor, but what we have here is a mathematical codification of the assumption that fiscal policy affects GDP, of almost exactly the same form as the old school monetarist assumption that "printing money" affects GDP. The trouble is that this is exactly the question at issue: How does a macroeconomy respond to various interventions? MMT prescribes deficit spending in the face of recession "because the output responds positively to fiscal expansion". It's question begging in the same way that people seem to conduct research into recessions by assuming what a recession is.

In MMT, there appears to be a lot of representing assumptions about how an economy works as math, and then using that math to justify policy prescriptions, which is effectively equivalent to justifying the prescriptions by assuming they are correct. I've discussed this before (e.g. here, here, or here) — whether the "various ancillary assumptions" are that government expenditures are private income, that there is no money multiplier, or that the desired wealth to income ratio is reasonably constant, these assumptions are translated into math and then that math is said to justify the assumptions.

The point of Jayadev and Mason's article is that these assumptions are completely in line with mainstream assumptions and models in macroeconomics — and they are. That's the problem. Macro models like DSGE models also make a bunch of assumptions about how an economy works — some of the same assumptions — but then don't end up describing the data very well. And since the equations are formally similar, that implies the MMT model won't describe the data very well either. There's more parametric freedom in the MMT approach, so maybe it will do better. What we really need to see is some empirical validation, not arguments that "it actually has things in common with mainstream macro". Mainstream macro isn't very good empirically, so that's not very encouraging.

I have no problems with the policy prescriptions of MMT proponents (austerity is bad and deficits don't matter unless inflation gets crazy), but I do have a problem with the idea that these policy prescriptions arise from something called "Modern Monetary Theory" purported to be a well-defined theory instead of a collection of assumptions. It would go a long, long way towards being useful if it empirically validated some of those equations. Without that empirical validation, all the equations really mean is that you were able to find a way out of your own Houdini's straitjacket constructed from your own assumptions in order to arrive at something you probably already believed.

...

Footnotes:

[1] There's nothing wrong with content-less theory on the surface. Quantum field theory, one of the most successful frameworks in human history for explaining observations, is essentially content-less aside from analyticity and unitarity [pdf]. It's the particle content (electrons, photons, etc) you put into it that gives it its empirical power. A similar situation arises with Kirchhoff's laws: content-less accounting until you add real world circuit elements.

Wednesday, September 12, 2018

Forecasting the Great Recession: the unemployment rate

There's a new white paper from Brookings [pdf] by Donald Kohn and Brian Sack about monetary policy in the Great Recession. In it, they compile the Fed Greenbook forecasts of various vintages and say:
The forecast errors on output and employment that were made by the Fed staff, by FOMC participants, and by nearly all economic forecasters in the profession were massive by historical standards.
I asked: What would the dynamic information equilibrium model (DIEM) have said during this time? I went back and made forecasts of the same vintages (with data available at the time) as the Greenbook forecasts to compare them. There were six forecasts in their graph. I tried to use the exact same formatting as the original Brookings graph, but it turns out that I needed to zoom out a bit as you'll see below. The vertical lines represent the forecast starting points, the blue line is the Greenbook forecast, the gray line (with 90% confidence bands) is the DIEM, and the red line is the actual data as of today. Click to enlarge any of the graphs.

August 2007


The housing bubble was collapsing, but both the Greenbook and the DIEM were forecasting business as usual. The Greenbook forecast said that we were a bit below the natural rate of about 5% and unemployment would rise a bit in the long term. The DIEM "business as usual" is a continuation of the 8-9% relative decline in the unemployment rate per year (i.e. a 4% rate would decline by a bit less than 0.4 percentage points over a year). At this point the JOLTS data and even conceptions (if known at the time) wouldn't have noticed anything.
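For reference, that "business as usual" path is nothing more than a constant relative decline; here's a minimal sketch with rough placeholder values (not the fitted parameters):

```python
# Sketch of the DIEM "business as usual" path: a constant ~8-9% per year
# relative decline in the unemployment rate. Values are rough placeholders.
import numpy as np

alpha = 0.085                     # relative decline rate per year
u0, t0 = 4.6, 2007.6              # approximate unemployment rate and date, August 2007

t = np.arange(t0, t0 + 2.5, 1 / 12)
u = u0 * np.exp(-alpha * (t - t0))    # solves d/dt log u = -alpha (no shocks)

print(round(u0 - u[12], 2))           # decline after one year, a few tenths of a point
```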

March 2008


As I've noted before, sometime between December 2007 and March 2008, the DIEM would have noticed a shock, but would have underestimated its magnitude, resulting in a more optimistic scenario than the Greenbook. However, both underestimate the size of the coming recession.

August 2008


By August, the DIEM begins to hint at the possibility of a strong recession. The central forecast is roughly similar to the Greenbook's (both underestimate the recession), but the confidence intervals on the DIEM grow.

October 2008


By October 2008, we've seen the failure of Lehman and the financial crisis is underway. The presidential candidates even have an unprecedented joint meeting in DC. The DIEM is now saying the recession is likely to be bigger than the Greenbook, and the uncertainty now encompasses the actual path of the data. It shows that a recession with higher unemployment than any recession in the post-war era is possible. 

March 2009


By March of 2009, the ARRA had passed and been signed by President Obama. In the UK, the Bank of England began its quantitative easing program, while what would become known as "QE1" was already underway in the US. The Greenbook still underestimates near-term unemployment, but is generally close. The DIEM, on the other hand, is now over-estimating the size of the recession; since the DIEM doesn't account for policy, it is possible this represents an estimate of what the recession would have looked like without the stimulus (discussed here).

June 2010


By June 2010, we began to see unemployment at least stop getting worse. By this time the Greenbook and DIEM are both roughly correct. The DIEM would go on to be correct until about mid-2014 when a positive shock would hit the unemployment rate.

Summary

While the center of the DIEM forecast isn't all that much better than the Greenbook forecast, I think the lesson here is mostly about uncertainty and bias. The Greenbook forecast is always biased towards a less severe recession during this period, and it doesn't appear to come with confidence bands at all (I checked and couldn't find them). The DIEM, on the other hand, both underestimates and overestimates the severity, but provides a lot of useful information through wide confidence bands in uncertain times.

Tuesday, September 11, 2018

JOLTS data: no real change

It being September, the JOLTS data for July is now available. Aaaaaand ... it's inconclusive — like most of these individual data point updates. Whatever your prior, you can hold onto it. The openings series continues to show a correlated deviation skirting the 90% confidence interval (hinting at a possible recession). The other measures are showing little deviation (separations showing more, hires and quits showing less). Bring on the graphs (click to enlarge):





Here is the latest interest rate spread tracker (original analysis here):


Note that this last graph is not related to the information equilibrium approach, but is simply tracking a common indicator — yield curve inversion — that I use to motivate the interpretation of the JOLTS data deviation from the models above as possibly indicating a recession. It's basically a linear fit to the path of interest rate spreads during the previous recessions (blue band) with an AR process (red band). The estimated "date" of the counterfactual recession (2019.7) is used as the counterfactual date of the recession in the JOLTS graphs (second vertical line, gray band).
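For the curious, the core of the tracker can be sketched as an extrapolation of the spread to its zero crossing. The series below is made up, and this is a simplification: the actual analysis fits the paths from the previous recessions and adds an AR process for the error bands.

```python
# Simplified sketch of the spread tracker: fit a line to recent interest rate
# spread data and extrapolate to the zero crossing (inversion). Synthetic data.
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(2017.0, 2018.75, 1 / 12)                 # monthly observations
spread = 1.4 - 0.6 * (t - 2017.0) + 0.05 * rng.normal(size=t.size)

slope, intercept = np.polyfit(t, spread, 1)            # linear trend in the spread
t_zero = -intercept / slope                            # extrapolated inversion date
print(round(t_zero, 1))                                # counterfactual "recession" date
```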

Friday, September 7, 2018

Unemployment rate holds steady at 3.9%

The unemployment rate holds steady at 3.9% which is consistent with the dynamic information equilibrium forecast from January of 2017:


There was also average hourly wages data released that shows something intriguing — if speculative. Average hourly wage growth appears to look like a dynamic information equilibrium with a shock during the Great Recession (much like wage growth), but there's a catch:


It looks like there's a bump during the recession — average hourly wages grew faster for a short period while unemployment was rising. Could this be due to lower wage workers being laid off, making the average wages of those remaining appear higher? Since this data series is short, we can't really tell (hence why I labeled this speculative).

Monday, September 3, 2018

Labor day! ... and declining union membership

Not everyone has today off, but I do — probably at least in part due to the fact that I'm among the declining fraction of the US population represented by a private sector union (this one). Inspired by John Handley's tweet proposing a possible mechanism behind declining union membership in the US (a shift from manufacturing toward low union density sectors like services), I thought I'd have a look at the data using the dynamic information equilibrium model. I used the data via EPI appearing in the graph in this blog post. Here is the result:


We have two major shocks centered in 1938.2 ("beginning" in 1934.4) and 1987.7 ("beginning" in 1979.9) with widths computed using the measure discussed in the footnote here. There's an overall "equilibrium" decay rate of 1% per year (the "dynamic equilibrium").

So how does this compare to the shock structure of manufacturing employment? Luckily, I already looked into this a few months ago in my post on "robots versus shipping containers" — here's the model and data:


Manufacturing employment shows a large shock from roughly 1970 to the early 90s (with a second, smaller "shipping container" shock in the early 2000s). So John's story holds up against the data (interpreted with this model): a decline in manufacturing causing a decline in unions is consistent with the data. The cause of this decline in manufacturing isn't nailed down by this data — it could be e.g. a shift towards the service sector or moving manufacturing overseas (or both).

However, besides the shocks there is a general decline in the union membership rate. This could be an ambiguity in the data, because there's a second local entropy minimum near 0.0%/year. If we force a 0.0% per year dynamic equilibrium, we get a comparable model fit:


I was able to improve it a bit more by dividing the initial positive shock into two with a pause during WWII:


This gets us very close to the decline in the rate of union employment being essentially coincident with the two shocks to manufacturing.

So we have two possible stories: 
  1. An "equilibrium" decline in union employment (1%/year) with a shock that "begins" in the late 70s lagging behind the shock to manufacturing employment
  2. An "equilibrium" constant rate of union employment (0%/year) that essentially tracks manufacturing employment, with a shock to both beginning in roughly 1970
Story (1) has some consistency with the "deregulation"/"anti-union" narrative of the late 70s and the Reagan era, but still leaves a question of why union employment generally declines.

Story (2) has a better fit with the data, and is consistent with an Occam's razor "single explanation" approach to manufacturing and union employment. It makes sense of the data in Noah Smith's original tweet that John was responding to: union decline happens in many European countries as well. It also kind of obviates the "anti-union" narrative [1]. However, story (2) effectively shifts the question to why manufacturing employment declined in the first place (and also why service sector unions didn't organize in their place [2]). [Per the update below, maybe the manufacturing decline is due to employers being anti-union.]

I like story (2) from a scientific perspective, but story (1) isn't completely ruled out by the data (as interpreted by this model). Oh, uninformative macro data ...

Happy labor day everyone!

...

Update 4 September 2018

John Handley saw that unionization within manufacturing fell to 10-15%, meaning that the sector shift out of manufacturing doesn't explain much of the decline. It could still be a shift within manufacturing: a greater loss of unionized manufacturing jobs than non-union ones, i.e. employers facing unionized employees disproportionately moved manufacturing overseas or to anti-union states. This would make sense in light of the "union derangement syndrome" that causes employers to try and move jobs to so-called "right to work" (i.e. anti-union) states regardless of whether it makes sense — surely highly educated researchers would just love to live in Oklahoma City.

...

Footnotes:

[1] Maybe causality actually goes the other way? Did declining manufacturing jobs cause declining union membership — weakening unions politically — so that politicians could enact anti-union policy?

[2] Maybe the story in Footnote [1] is really 1) decline in manufacturing → 2) decline in unions → 3) decline in political power of unions → 4) politicians enacting anti-union policy → 5) service sector unions prevented from forming.

Saturday, September 1, 2018

Successfully forecast over 1½ years of S&P 500 data

I haven't compared my S&P 500 forecast to data in a few months (last time in early June). The original forecast was made back in January 2017, and the two years will be complete come January 2019. The post-forecast data is shown in black:


The pink band represents the model error, while the blue band (overlaid with pink making it purple) represents an AR process — specifically an ARMA(2,1) process — forecast from the last data point (i.e. taking into account the random component, as discussed in a footnote here). The increase in volatility since the corporate tax cut is definitely visible in the data. There was a poster about volatility regimes presented by Tobias Sichert at ASSA 2018 which is relevant (discussed here), and possibly foreshadowing a future recession ...
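For anyone who wants to reproduce the flavor of that blue band, here is a minimal sketch using statsmodels; the input series is synthetic and merely stands in for deviations of the (log) S&P 500 data from the model trend.

```python
# Sketch of the AR component of the forecast band: fit an ARMA(2,1) process
# and forecast from the last data point. The series below is synthetic.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
eps = 0.01 * rng.normal(size=500)
x = np.zeros(500)
for k in range(2, 500):                   # generate a stationary AR(2)-like series
    x[k] = 1.2 * x[k - 1] - 0.3 * x[k - 2] + eps[k]

fit = ARIMA(x, order=(2, 0, 1)).fit()     # ARMA(2,1) is ARIMA(2, 0, 1)
pred = fit.get_forecast(steps=250)        # forecast forward from the last point
mean = pred.predicted_mean                # center of the band
band = pred.conf_int(alpha=0.10)          # 90% band, analogous to the plots
```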

Here's the data over the longer term, putting the forecast in context:

Monday, August 27, 2018

UK interest rate model performance

I haven't been showing the performance of the UK 10-year interest rate model comparable to this one for the US (that I originally showed here). It turns out that the UK model is doing quite well (as opposed to the US version, which appears to have been subject to a bit of a shock sometime between 2015, when the forecast was made, and today ...). Click to enlarge:



In fact, just after the Brexit vote the UK rates deviated from the expected model error by about as much as the recent US data has, but by 2017 the "Brexit shock" had faded. Was the Brexit shock to interest rates transitory, while the election shock was more durable? The "shock" view does give a reason for the US data to deviate from the model forecast (however, so does the "recession indicator" view).

I would also like to note that the dynamic equilibrium model of the Moody's AAA data doesn't show any major shock:


However, that model also sees the US model at the top of this post as under-estimating the expected interest rate in recent years (for more information see here).

Monday, August 20, 2018

Dynamic equilibrium: inflation forecast update

The inflation data has continued to be consistent with the forecast confidence limits since 2017, and the latest data out over a week ago is no different. It's true these confidence limits are pretty wide. Year over year inflation should've been between −0.1% and +4.0% last month according to the model (the actual value was 2.9%). Next month, the model says it should be between 0.0% and 4.1%. But this spread is comparable to the NY Fed's DSGE model for PCE inflation nearly two years out (and PCE inflation is a more stable measure than CPI inflation). And the dynamic equilibrium model only has four parameters [1]!
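For reference, those four parameters are the dynamic equilibrium rate plus the amplitude, width, and center of a single (logistic) shock; here is a minimal sketch of that functional form with placeholder values (not the fitted ones):

```python
# Sketch of the four-parameter dynamic equilibrium form for the CPI level:
# an equilibrium rate alpha plus one logistic shock (a, b, t0). Placeholder values.
import numpy as np

def log_cpi(t, alpha, a, b, t0, c=0.0):
    # c is just an overall normalization, not counted among the four parameters
    return c + alpha * (t - 2010.0) + a / (1.0 + np.exp(-(t - t0) / b))

t = np.arange(2010.0, 2021.0, 1 / 12)
level = np.exp(log_cpi(t, alpha=0.025, a=-0.04, b=0.8, t0=2015.0))
yoy = 100 * (np.log(level[12:]) - np.log(level[:-12]))   # year-over-year inflation in %
```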

The solid red line is the original forecast and the dashed line shows an update of the shock parameters I made in March of this year (after the "lowflation" shock ended). As the change was negligible (i.e. well inside the confidence bands), it's shown for informational purposes. Below, I show continuously compounded (i.e. log derivative) and year over year changes in CPI (all items) along with the CPI level. (Click to enlarge.)




...

Update 18 September 2018

I didn't think there was any real need to write an entirely new post for the data that came out in September, so here's the graph with the latest post-forecast data:



...

Footnotes:

[1] Over the period from 2010 to 2020. If we include the data back to the 1960s, there is another shock adding three more parameters — but the contribution of those additional parameters has been exponentially suppressed since the 1990s.

Wednesday, August 15, 2018

Shifts and drifts of the Beveridge curve


As a follow up to my post on long term unemployment, I wanted to discuss the Beveridge curve. Gabriel Mathy discusses changes to it in his paper (ungated version here). He shows how there are differences in shifts in the Beveridge curve for short and long term unemployment (click to enlarge):


In an earlier version of the paper, he includes a graphical explanation:


The dynamic information equilibrium approach also describes the Beveridge curve with a formally similar "matching" framework described in my paper. However, one of the primary mechanisms for shifts of the Beveridge curve is actually just a mis-match in the (absolute value of the) dynamic equilibria, i.e.

\begin{eqnarray}
\frac{d}{dt} \log \frac{U}{L} = - \alpha + \sum_{i} \frac{d}{dt} \sigma_{i}(a_{i}, b_{i}; t-t_{i})\\
\frac{d}{dt} \log \frac{V}{L} = \beta + \sum_{j} \frac{d}{dt} \sigma_{j}(a_{j}, b_{j}; t-t_{j})
\end{eqnarray}

with $\alpha, \beta > 0$ — the difference in sign means you get a hyperbola. I can illustrate this using an idealized model with several shocks $\sigma_{i}(t)$. Let's keep $U$ constant, but change the relative parameters of $V$ (altering the dynamic equilibrium $\Delta \alpha$, the timing of the shocks $\Delta t$, and the amplitude of the shocks $\Delta a$).
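A minimal sketch of how such idealized paths can be generated from the two equations above (the logistic shock form and every number below are illustrative placeholders, not the values behind the actual figures):

```python
# Idealized U(t) and V(t) built from the two dynamic equilibrium equations
# above, with logistic "shocks". All values are illustrative placeholders.
import numpy as np

def shock(t, a, b, t0):
    # a logistic step of amplitude a, width b, centered at t0
    return a / (1.0 + np.exp(-(t - t0) / b))

t = np.arange(0.0, 30.0, 0.05)
alpha, beta = 0.084, 0.098    # dynamic equilibrium rates quoted later in the post

# d/dt log(U/L) = -alpha + shocks ;  d/dt log(V/L) = +beta - shocks
logU = -alpha * t + shock(t, 0.6, 0.4, 10.0) + shock(t, 0.9, 0.4, 20.0)
logV =  beta * t - shock(t, 0.6, 0.4, 10.0) - shock(t, 0.9, 0.4, 20.0)

U, V = np.exp(logU), np.exp(logV)
# Plotting V against U traces Beveridge-curve loops; with alpha == beta the same
# curve is retraced each cycle, while alpha != beta produces the observed drift.
```

Here are $U(t)$ and $V(t)$ (click to enlarge):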


If everything is the same ($\alpha = \beta$, $\Delta t = \Delta a = \Delta b = 0$), then you get the traditional Beveridge curve that doesn't shift:


Changing the dynamic equilibrium ($\alpha \neq \beta$) gives you the drift we see in the data:


This means the drift is due to the fact that (in the regular model) $\alpha$ = 0.084 and $\beta$ = 0.098 (vacancy rate increases at a faster rate than the unemployment rate falls). If we look at changes to the timing $\Delta t$ and amplitude $\Delta a$ of the shocks, we get some deviation but it is not as large as the change in dynamic equilibrium rate:



Combining the changes to the amplitude and timing also isn't as strong as changing the dynamic equilibrium ($\Delta \alpha$):


But if we do all of the changes, we get the mess of spaghetti we're used to:


...

PS I didn't change the widths of the shocks ($\Delta b$) because ... I forgot. I will update this later showing the effects of changing the widths. Or maybe I will remember to do it before this scheduled post auto-publishes (unlikely).

...

Update

The $\Delta b$'s add adorable little curlicues (click to enlarge):



Something has changed in long term unemployment

I read a paper by Gabriel Mathy today (an earlier version appears here) about long term unemployment. We'll use the BLS definition of being unemployed 27 weeks or more, available from FRED here. Mathy notes that in the aftermath of the Great Depression, long term unemployment decreased (relative to unemployment) — and that this hasn't happened in the aftermath of the Great Recession. This is referred to as "hysteresis" in economics, borrowing a term from physics (coined by Ewing in the late 1800s) used to describe e.g. history dependence in magnetization.

I looked into it using the dynamic information equilibrium model (my paper describing the approach is here), and sure enough it seems something has changed in long term unemployment. Here's the basic model description of the data since the mid-1990s:


We find the structure is similar to that of the unemployment rate. If we look at an "economic seismograph" (which represents the shocks) alongside various other labor market measures, including job openings from JOLTS and Barnichon, we can see the overall pattern is similar:


As expected, the shocks to unemployment longer than 27 weeks appear about 27 weeks after the shocks to unemployment (the arrows show the recession as well as 27 weeks later). However, we can see one other difference between the unemployment rate U and the long term measure in the relative size of the shocks (i.e. the relative magnitude of the colors). The shocks to long term unemployment are of comparable size across recessions, while the shocks to the unemployment rate are smaller for the recessions prior to the Great Recession. Eyeballing this diagram isn't the best way to compare them, though — a small narrow shock can appear darker than a wider, larger shock (it's the integrated color density that matters).

So let's look back a bit further in time:


Here we can see an effect that looks like the same "step response" (overshooting + oscillations) we see in the unemployment rate, except a bit stronger. It also decreases over time, just like it does for the unemployment rate. But the big difference is that the size of the earlier shocks to long term unemployment is very different from the size of the shocks to the unemployment rate. A good way to see this is by scaling the unemployment rate to match long term unemployment; to match the data before the 1990s, you need a scale factor of about α₁ = 2.6, but after the 90s you need one more than twice as big — α₂ = 5.3:


A 1 percentage point increase in unemployment in the 1960s and 70s led to a 2.6 percentage point increase in the fraction of long term unemployment 27 weeks later. In the Great Recession, that same increase in unemployment rate led to a 5.3 percentage point increase in the fraction of long term unemployment 27 weeks later.

Something happened between the 70s and the 90s to cause long term unemployment to become cumulatively worse relative to total unemployment.

The "proper frame" in the dynamic equilibrium approach is a log-linear transformation such that the constant rate of relative decline (i.e. the "dynamic equilibrium") is subtracted, leaving a series of steps (which represent the "shocks"). We look at

$$
U(t) \rightarrow \log U(t) + \alpha t + c
$$
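A minimal sketch of applying this transformation, using synthetic data and an illustrative $\alpha$, just to show the slope removal:

```python
# Sketch of the "proper frame" transformation: add back the dynamic equilibrium
# slope so that only the shock "steps" remain. Synthetic data, illustrative alpha.
import numpy as np

def proper_frame(u, t, alpha, c=0.0):
    return np.log(u) + alpha * t + c

t = np.arange(1990.0, 2010.0, 1 / 12)
# synthetic series: a steady 9%/year relative decline plus one step-like shock in 2001
u = 5.0 * np.exp(-0.09 * (t - 1990.0)) * (1.0 + 0.6 / (1.0 + np.exp(-(t - 2001.0) / 0.3)))

steps = proper_frame(u, t, alpha=0.09)   # roughly flat, with a step near 2001
```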

I also lagged unemployment by 27 weeks to match up the shock locations. If we do the transformation for both measures, we can see how long term unemployment declined relative to unemployment before the 1990s (labelled a, b, and c) in the graph.



However, the decline was tiny going into the 1990s recession (labeled 'd'), and non-existent afterwards (the '?'). The 90s recession was also when the level of long term unemployment started to increase relative to unemployment. Looking at the difference between these curves, we can see that starting in the 90s the recession shocks started to accumulate in long term unemployment:



In order to keep them at roughly zero difference (i.e. the same scale) in equilibrium (i.e. after recessions), we have to subtract cumulative shocks (arrows). In the figure, I subtracted 0.37, then 0.37 + 0.25 = 0.62, then 0.37 + 0.25 + 0.35 = 0.97.

What is happening here? It seems that in the past there used to be a point when labor became scarce enough that employers started hiring the long term unemployed at a faster rate: at some point, for example, experience outweighs the negative effect of being unemployed for over 6 months. That's just a story, but it's one way to think about it. One possibility to explain the change is that it now takes longer for that faster decline in long term unemployment to kick in, such that another recession hits before it can happen. Another possibility is that there's just no pick-up in hiring the long term unemployed — employers see no real difference between an experienced worker who has been unemployed for 27 weeks and a worker without experience.

But the data is clear — the past few recessions just add to the fraction of long term unemployed relative to total unemployment and there hasn't been any subsequent recovery.

Monday, August 13, 2018

Wage growth update

The Atlanta Fed has its latest wage growth data up on its website. The post-forecast data is in black while the dynamic information equilibrium model is in green. We could potentially make a case that the "bump" that occurred in 2014 is fading out, but it's within the model error.


...

Update

I didn't think it warranted its own post, so here are a few more labor market measures. Discussion is effectively the same as for this post from a few months ago. Click to embiggen.



Also, Ernie Tedeschi posted a fun graph of changing CBO forecasts for the employment population ratio. Unfortunately, I didn't produce a forecast for the exact measure in the graph, but through the magic of ALFRED, I can show what a forecast I would have made back in January 2017 for this measure would have looked like today:



Update 19 September 2018

Here's the September release data; I didn't think it warranted its own post so I just updated this one.