Friday, May 31, 2019

TCJA and PCE growth

The Personal Consumption Expenditures (PCE) data came out today, and since this measure is informationally equivalent to NGDP but available more frequently, I thought I'd take a look at the dynamic information equilibrium model (DIEM) to see what we should expect for Q2 GDP — I noticed something (click to enlarge and maybe you'll see it better):


The model is really quite accurate, but the latest data appears to fall somewhat above the error band in a correlated and persistent way. Zooming in we can see it does fit the profile of a small non-equilibrium shock in the DIEM:


The big tax cut passed at the end of 2017 may account for it, so I added a counterfactual shock (red). The center is at 2017.98, corresponding to the "Tax Cuts and Jobs Act" (TCJA) of 2017. If the counterfactual shock accounts for it, then it 1) boosted nominal PCE growth from 3.7% to 4.7% at its peak, and 2) added 249 billion dollars (integrated, i.e. total) to the level of PCE over the past year. This is roughly the same size as the 270 billion dollar difference between the CBO's forecast tax revenues for 2018 (3.60 trillion) and the actual revenues for 2018 (3.33 trillion). These are all nominal measures. Here's the growth graph:


The effect over the next ten years (absent a recession) would effectively be that 249 billion compounding at 3.7% per year, so in 2028 PCE will be about 360 billion dollars higher than it would have been without the TCJA (1.8% higher). Cumulatively over the next 10 years, it will increase PCE by about 2.96 trillion dollars. Of course, the TCJA's addition to the budget deficit is estimated to be 2.29 trillion dollars over that period, so reasoning from an accounting identity here basically gets you to the same ballpark (i.e. GDP = C + S + T, with lower T showing up as higher C, other things equal).
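Just to make the back-of-the-envelope arithmetic explicit (the 249 billion, 3.7%, ~360 billion, and ~2.96 trillion figures above), here it is in a few lines of Python; the exact cumulative total depends a little on the compounding convention, and counting years 0 through 9 is what comes closest to the figure quoted.

```python
base = 249e9    # the estimated TCJA boost to PCE over the past year (dollars)
g = 0.037       # the "new normal" nominal PCE growth rate

# level effect in 2028: ten years of compounding at 3.7%
in_2028 = base * (1 + g) ** 10                            # ~358 billion, i.e. ~360 B

# cumulative effect over the next ten years (years 0 through 9)
cumulative = sum(base * (1 + g) ** n for n in range(10))  # ~2.95 trillion

print(f"{in_2028 / 1e9:.0f} billion in 2028, {cumulative / 1e12:.2f} trillion cumulative")
```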

The "active" (i.e. non-equilibrium) effect of the TCJA appears to be over leaving only the "passive" (i.e. equilibrium) effect of compounding growth rates. however the previous two non-equilibrium shocks increasing growth (late 90s, mid-2000s) were followed by negative shocks and recessions (dot-com bust, housing bust/Great Recession) in the "asset bubble era". Even the 2014 mini-boom seems to have been followed by a 2016 mini-recession:


But absent a recession (or a mini-recession), we should expect this quarter's nominal GDP growth to come in roughly around the "new normal" (post 2008, after the fading of the demographic shock of the 60s, 70s, and 80s) of 3.8% — similar to the (nominal) PCE measure's new normal of 3.7%.

Tuesday, May 21, 2019

Prices are sticky except when they aren't. Um, ok.


I was reading over the good discussion of Emi Nakamura's interesting work in the wake of her Clark medal over at A Fine Theorem. One of the areas she's worked on (among many) is prices, but I personally find it strange how the issues are framed ... 
How much do prices actually change – do we want to sweep out short-term sales, for example? ... [Nakamura's] “Five Facts” paper uses BLS microdata to show that sales were roughly half of the “price changes” earlier researchers has found, that prices change more rapidly when inflation is higher ... For example, why are prices both sticky and also involve sales?
Why in any consideration of the question of "nominal rigidities" would you even think to "sweep out" (i.e. ignore) short-term sales? Why would you utter the nonsensical phrase that prices are "both sticky and also involve sales"?

Nakamura does the same thing:
The simultaneous existence of rigid regular prices and frequent sales is an important challenge for the theoretical literature on monetary nonneutrality.
Well sure, if you take out the part where prices change, they're going to change a lot less.

In a sense, my thoughts here are the same as my thoughts on Eichenbaum et al.'s straining to hold on to sticky prices in the face of empirically self-evident flexible prices. Going directly to the Eichenbaum et al. paper:
Instead, nominal rigidities take the form of inertia in reference prices and costs. Weekly prices and costs fluctuate around reference values which tend to remain constant over extended periods of time.
To which I said in response:
I'd say that a sudden drop of a price by 30% is not "sticky" in the colloquial sense of the word. The authors of that paper (Martin Eichenbaum, Nir Jaimovich, and Sergio Rebelo) seem to want to hold onto sticky prices, however. ... A reference price with fluctuating sales on and off actually leads to something that looks exactly like a random walk if you average over a moving window.
The example is reproduced at the top of this post. In fact, this is exactly how a lot of systems in my real job work (e.g. satellite thrusters that are on or off, where turning them on and off for discrete periods of time allows you to get any amount of thrust you want).
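Here's a minimal simulation of that point; it's not the exact process behind the figure at the top of the post, just a constant reference price with sales that switch on and off for random stretches, smoothed with a moving window.

```python
import numpy as np

rng = np.random.default_rng(42)

n_days = 1000
reference_price = 10.0
sale_price = reference_price * 0.70        # a 30%-off sale

# sales switch on and off for discrete stretches of days (a simple two-state chain)
on_sale = np.zeros(n_days, dtype=bool)
for t in range(1, n_days):
    if rng.random() < 0.1:                 # switch roughly every ten days on average
        on_sale[t] = not on_sale[t - 1]
    else:
        on_sale[t] = on_sale[t - 1]

price = np.where(on_sale, sale_price, reference_price)

# a 30-day moving average of this two-valued series wanders around in a way that
# looks a lot like a random walk, even though the micro price only takes two values
window = np.ones(30) / 30
avg_price = np.convolve(price, window, mode="valid")
```

The duty cycle of the on/off switching sets the effective average level, which is the thruster analogy: pulsing a fixed thrust on and off for varying fractions of time gets you any average thrust you want.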

Don't get me wrong: how prices actually change empirically is useful information — it's the sticky + not-sticky decomposition for macro-scale implications I find odd. My interpretation of the data is that these observations show prices are not sticky at the micro scale. However, that does not preclude macro-scale stickiness that manifests as stable distributions of price changes (e.g. here or here, or as discussed in my first paper in Section 4.1) — a hypothesis for which there is stronger evidence** ...


**There is an odd blip in this data in April 2010 that I talk about in the linked post.

Tuesday, May 14, 2019

Accounting identities and conservation laws

David Glasner brought me into a Twitter thread that resulted in a disagreement between Noah Smith and myself, which is unfortunate because I think we're saying the same thing regarding "accounting identities" in economics being arbitrary definitions. Noah just seems to think that conservation laws aren't also (direct consequences of) definitions. First, let me get all relativistic (in both senses).

Newton's laws are basically a series of definitions that say "I am defining a quantity called momentum (1st and 2nd laws) that is conserved by definition (3rd law)" [1].  This turned out to be one of the most useful definitions in science — though it was counterintuitive at the time. Imagine people reading the 1st law about objects tending to stay in motion when everything in their life usually ground to a halt due to friction.

This definition of momentum led to being able to predict the orbits of comets and the paths of projectiles pretty well. Emmy Noether eventually discovered the reason: it's because the universe has an approximate translational symmetry such that the laws of physics at a point x are the same as the laws of physics at a point x + dx. It gets a few things wrong, like the orbit of Mercury — that's because the actual symmetries are Lorentz invariance and general covariance. But by "actual" here, we mean that the dynamics that result from the definition of momentum arising from assuming Lorentz invariance (the 4-momentum) give the results we measure. We also often arbitrarily separate the momentum conserved due to rotational symmetries (angular momentum) from the momentum conserved due to translational ones.
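For the record, the textbook version of that statement (nothing here is specific to the economics argument): if the Lagrangian L doesn't change under x → x + dx, the Euler-Lagrange equation immediately yields a conserved momentum.

```latex
\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{x}}\right) = \frac{\partial L}{\partial x} = 0
\quad\Longrightarrow\quad
p \equiv \frac{\partial L}{\partial \dot{x}} = \text{constant}
```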

But the universe doesn't care about momentum or its conservation — we humans defined it based on a way we decided to decompose the universe that we found useful. And in the end, it comes down to the definition of what a derivative is. That x + dx is directly related to the momentum operator (essentially d/dx, the generator of translations), and the better definitions of momentum that are conserved use covariant derivatives as their momentum operators. So, it's really our human definition of calculus.

As we found issues with the definition of momentum, we've expanded it and made it more nuanced — because the purpose of the definition of momentum is to get empirically accurate theories, not hold on to Newton's definition.

Now, not being an economist, I may get this wrong, but Simon Kuznets originally defined GNP/GDP as the market value of final goods and services produced in a quarter (or year, or other time period) because people thought it would be useful as a measure of production related to the current level of employment. If a Starbucks barista made you a coffee this morning, it'll be in this quarter's GDP. If I sell you my vintage 1980s Dougram collection, it doesn't employ anyone in the current period so it's not in GDP.

Just like how momentum was defined in order to try and produce a useful theory of motion ("physics"), GDP was defined in order to try and produce a useful theory of employment ("macro"). It's just a definition:
GDP = (your cup of Starbucks Coffee) + (my cup of Starbucks Coffee) + ... (other Non-Coffee final goods and services produced in 2019 Q2)
We can also arbitrarily group (partition) them:
GDP = C + NC
That arbitrary grouping is probably not useful. But this one has stuck around for a long time:
GDP = C + I + G + NX
We call this arbitrary grouping a national income accounting identity. Now just because it has stuck around doesn't make it right, but it does capture one useful aspect of modern economies — G tends to move all at once with changes in government fiscal policy and can move in the same or opposite direction relative to e.g. C. Empirical data appears to show that changing G can be used to offset the collapse in C during a recession, for example. In a long-ago blog post, I discussed how it might be useful to think of an additional financial sector (which redefines C, I, and NX a bit) so that:
GDP = C + I + F + G + NX
That's another arbitrary grouping that's purely a definition. But like our evolving definition of momentum that's been found wanting on occasion, we can evolve our definition of the national income accounting identity to pick out a financial as well as a government sector. Why? Because the financial sector is also large and may move in the same direction all at once, as it did in 2008. If we think of the distribution of growth rates of various companies and government entities in terms of their contribution to GDP, the whole collection will have some average growth rate based on the average of that distribution:


GDP growth will be the ensemble average. Like partitioning down to the individual coffee level, this may or may not be useful. However, if we group the financial sector into one big box (gray) and the government sector into another (blue), we instead have maybe something like this:


If there's a financial crisis, then maybe the whole financial sector shrinks:


and the new ensemble average growth rate results in a GDP that declined in that quarter. Shifting the government sector up could potentially offset that a bit. Again, maybe this arbitrary grouping is useful and maybe it isn't — a lot depends on the interactions of the various pieces (I talk about that a bit more here).
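A minimal numerical sketch of the picture in those figures (all numbers and weights below are invented for illustration): draw growth rates for a collection of contributors to GDP, lump some into a "financial" box and some into a "government" box, and see how a shock to the financial box moves the GDP-weighted ensemble average.

```python
import numpy as np

rng = np.random.default_rng(0)

n_firms = 1000
weights = rng.lognormal(mean=0.0, sigma=1.0, size=n_firms)   # contributions to GDP
weights /= weights.sum()
growth = rng.normal(loc=0.04, scale=0.05, size=n_firms)      # nominal growth rates

# arbitrary partition: first 10% "financial", next 20% "government", rest "other"
idx = np.arange(n_firms)
financial = idx < 100
government = (idx >= 100) & (idx < 300)

baseline = np.sum(weights * growth)          # ensemble-average GDP growth

# a financial crisis: the whole financial box shifts down together
shocked = growth.copy()
shocked[financial] -= 0.30
crisis = np.sum(weights * shocked)

# government moves all at once in the other direction, offsetting part of it
offset = shocked.copy()
offset[government] += 0.10
with_policy = np.sum(weights * offset)

print(baseline, crisis, with_policy)
```

Whether this particular grouping is the useful one is exactly the question in the text; the sketch only shows that the ensemble-average bookkeeping works the same way for any partition you choose.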

The main point is that the definition of GDP and the accounting identity partition of it are both completely arbitrary, but like the arbitrary definition of momentum (which is based on our calculus-based approach to physics) they might be useful.

It's true per Noah's original tweet in the thread that we don't want to put too much weight on what is essentially an arbitrary definition that might not be useful. And you definitely want to be careful about reasoning from the identity alone — also because calculus:


This graphic lays out the possibilities for the interaction of those sectors (including the little boxes) in the distribution pictures above (but this graphic is for levels, while the distribution is for growth rates). The picture I showed in the distributions is the upper right where the change in the financial box doesn't do anything to the other boxes.

Macroeconomics is a nascent science compared to physics — Newton's definition of momentum is from 1687 while Kuznets' definition of GDP is from 1934. So yes, by all means recognize that the definition of GDP and its various accounting identities are arbitrary definitions. GDP seems to be a useful macro definition for studying employment and fiscal policy does seem to have an effect on the economy warranting a separate G term. Maybe there are better — more useful — definitions. But don't go too far in the other direction and think that the enormously successful definition of momentum as a conserved quantity makes it not arbitrary. It's still just a label we humans applied to the universe.

...

Footnotes:


[1] Actually Newton's Lex II was a bit vague:
Lex II: Mutationem motus proportionalem esse vi motrici impressae, et fieri secundum lineam rectam qua vis illa imprimitur.
A somewhat direct translation is:
Second Law: The alteration of motion is ever proportional to the motive force impressed; and is made in the direction of the right line in which that force is impressed.
The modern understanding is:
Second Law: The change of momentum of a body is proportional to the impulse impressed on the body, and happens along the straight line on which that impulse is impressed.
Where momentum and impulse now have very specific definitions as opposed to "motive force" and "motion". This is best interpreted mathematically as

I ≡ Δp

where I is the impulse and p is the momentum vector. The instantaneous force is related to it via the fundamental theorem of calculus (therefore with no additional assumptions about relationships in the world):

I = ∫ dt F

F ≡ dp/dt

The alteration of "motion" (i.e. momentum) is Δp (or the infinitesimal dp), and the rest of the definition says that the force vector F (and the impulse vector I) is parallel to that change in momentum. Newton would have written in his own notes something like f = ẋ using his fluxions (i.e. f = dx/dt).
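Spelling out the step that ties these together (it really is just the fundamental theorem of calculus, with no physical content):

```latex
I \;=\; \int_{t_1}^{t_2} F \, dt \;=\; \int_{t_1}^{t_2} \frac{dp}{dt} \, dt \;=\; p(t_2) - p(t_1) \;\equiv\; \Delta p
```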


Monday, May 13, 2019

Wage growth forecast continues to perform (unemployment, too)

The dynamic information equilibrium model (DIEM) wage growth forecast from over a year ago now continues to perform a bit better than Jan Hatzius's forecast from six months ago:



For some reason, I didn't put the latest unemployment rate report (and labor force participation) on the blog — let me correct that:



Tuesday, May 7, 2019

JOLTS: a continuing deviation from the trend

There's not much to say about the JOLTS data except that it's continuing the negative deviation it has been on for a while now. The most robust deviation is actually in separations, while hires (the most robust indicator of a future recession, about 5 months in advance of indications in the unemployment rate) are still showing no deviation. Based on this model, which puts hires as a leading indicator, we should continue to see the unemployment rate fall through August of 2019 (5 months from March 2019). As always, click to enlarge ...




Wednesday, May 1, 2019

Economic weakness in the West?

I noticed that the unemployment rates for two west coast states — Washington state (where I live) and California (the largest state by population) — have shown a noticeable uptick recently. Now, state-level unemployment rates have large fluctuations. However, the unemployment rate for the West Census Region (defined here) is a bit smoother and is also showing that same turnaround. So I decided to try the dynamic information equilibrium model (DIEM) on it:


This shows the last few months of data rising a bit above the expected non-recession (dynamic equilibrium) path. I tried a few counterfactual estimates: a free parameter shock (A), a shock with a fixed center date at 2019.7 and other parameters free (B), and a typical magnitude/duration shock with a free center parameter (C).

First, it should be noted that typically the non-equilibrium shocks are underestimated in their magnitude during the leading edge, and then overestimated until the shock center has passed. This is probably the reason for scenarios A and B being small relative to the average size of a recession shock.
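To make the structure of these counterfactuals concrete, here's a minimal sketch of the general DIEM form I'm fitting: a log-linear dynamic equilibrium plus logistic non-equilibrium shock terms, each with a size, center date, and width. All parameter values below are illustrative placeholders, not the fitted values behind the figures.

```python
import numpy as np

def diem_log_series(t, alpha, log_ref, t_ref, shocks):
    """Sketch of the DIEM's general structure: the log of the series follows a
    linear trend (the dynamic equilibrium slope alpha) plus a sum of logistic
    non-equilibrium shock terms (size, center date, width)."""
    log_y = log_ref + alpha * (t - t_ref)
    for size, center, width in shocks:
        log_y = log_y + size / (1.0 + np.exp(-(t - center) / width))
    return log_y

# All numbers illustrative, not the fitted values for the West region:
t = np.arange(2010.0, 2021.0, 1.0 / 12.0)   # monthly grid

# non-recession (dynamic equilibrium) path: steadily falling unemployment
equilibrium_path = np.exp(diem_log_series(t, -0.08, np.log(10.0), 2010.0, []))

# a counterfactual recession shock centered at 2019.7 (like scenario B's fixed center)
counterfactual = np.exp(diem_log_series(t, -0.08, np.log(10.0), 2010.0,
                                         [(0.35, 2019.7, 0.2)]))
```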

Scenario A could potentially be showing us the end of the 2014 mini-boom (that end appears clearly in the WA unemployment rate, but at the end of 2015, not 2019). But the mini-boom shows up clearly in JOLTS hires data (again, here) — which we can look at for the West region. In that data (using the DIEM), it appears to end in mid-2016, far earlier than late-2018/early-2019:


It's not implausible that we're just seeing the end of the mini-boom (which does not appear to have ended for the country as a whole).

The latest JOLTS data also appears to be skirting the bottom edge of the error band for hires. Scenarios B and C both show what a counterfactual recession would look like that's consistent with the deviation from the DIEM since 2018, but — depending on the shock width — shocks to the hires time series precede shocks to the unemployment rate by about 5 months or so. In this case, the hires data is a bit noisy and might just not show a shock yet.
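One simple way to check that rough five-month lead (not how the DIEM arrives at it; the function below and the choice of log-differences are my own, just a sanity check you could run on the monthly JOLTS hires and unemployment rate series):

```python
import numpy as np
import pandas as pd

def best_lead(hires: pd.Series, unemployment: pd.Series, max_lag: int = 12) -> int:
    """Lag (in months) at which changes in log hires are most strongly
    anti-correlated with *later* changes in the log unemployment rate."""
    dh = np.log(hires).diff()
    du = np.log(unemployment).diff()
    # correlate hires changes at time t with unemployment changes at t + lag
    corrs = {lag: dh.corr(du.shift(-lag)) for lag in range(max_lag + 1)}
    # rising hires should precede falling unemployment, so take the most negative value
    return min(corrs, key=corrs.get)
```

Fed monthly hires and unemployment rate series, a lead in the neighborhood of 5 months would be consistent with the DIEM-based relationship quoted above.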

More data will be coming out in mid-May that might clarify things a little more. It will be interesting to see the unemployment data that comes out on Friday and whether or not this uptick will start to appear nationally.