Wednesday, February 28, 2018

Forecast performance of a quantity theory of labor


One of the dynamic information equilibrium model forecasts I've been tracking for about a year now to measure its performance is what I call the "N/L" or "NGDP/L" model [1] (specifically FRED GDP, i.e. nominal GDP, divided by FRED PAYEMS, i.e. total nonfarm payrolls). Revised GDP data came out today, so I thought it'd be a good time to check back in with the model [2]:


One way to think about this is as a measure of nominal productivity. We are coming out of the aftermath of the shock to the labor force following the great recession, so we can see a gradual increase back towards the long-run equilibrium.

If we use the shocks from this dynamic equilibrium model instead of the shocks to NGDP alone, we can see in a history "seismograph" that they basically coincide with the shocks to the inflation measures.


There's a good reason for this: this is effectively a model of Okun's law (as described here) if we identify the "abstract price" with the price level P:

$$
P \equiv \frac{dNGDP}{dL} = k \; \frac{NGDP}{L}
$$

which can be rearranged

$$
\begin{align}
L & = k \; \frac{NGDP}{P} = k \; RGDP\\
\frac{d}{dt} \log L & = \frac{d}{dt} \log RGDP
\end{align}
$$

to show changes in employment (and therefore unemployment) are directly related to changes in real GDP.
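
To make this concrete, here is a minimal sketch (in Python, not the code used to produce the charts in this post) of checking that relationship directly on FRED data. The choice of GDPC1 for real GDP, the quarterly resampling of PAYEMS, and the date range are my assumptions, and it requires pandas_datareader and a network connection:

```python
# Minimal sketch: check the Okun's-law-style relation d/dt log L ~ d/dt log RGDP
# implied above. Assumes pandas_datareader is installed and FRED is reachable.
import numpy as np
import pandas as pd
from pandas_datareader import data as web

start, end = "1960-01-01", "2018-01-01"
rgdp = web.DataReader("GDPC1", "fred", start, end)["GDPC1"]      # real GDP (quarterly)
payems = web.DataReader("PAYEMS", "fred", start, end)["PAYEMS"]  # nonfarm payrolls (monthly)

L_q = payems.resample("QS").mean()                               # payrolls on a quarterly grid
logs = pd.concat([np.log(rgdp), np.log(L_q)], axis=1, join="inner").dropna()
dlog = logs.diff().dropna()

# If L = k * RGDP, the two log-growth series should track each other with slope ~ 1.
slope, intercept = np.polyfit(dlog["GDPC1"], dlog["PAYEMS"], 1)
print(f"slope of d(log L) on d(log RGDP): {slope:.2f}")
print(f"correlation: {dlog['GDPC1'].corr(dlog['PAYEMS']):.2f}")
```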

...

Footnotes

[1] Also, it's a "quantity theory of labor" per the title because the model implies log NGDP ~ k log L.

[2] Here is the complete model:


Tuesday, February 27, 2018

Women in the workforce and the Solow paradox

Paddy Carter sent me a link on Twitter to a study [pdf] using a different model that came to conclusions similar to the view I've been expressing on this blog:
The increase in female employment and participation rates is one of the most dramatic changes to have taken place in the economy during the last century.
From their conclusions:
Furthermore, the unexplained portion [of the rise in women's employment] is quite large and positive; in other words, for cohorts born before 1955, the simulations overpredict female employment and for more recent cohorts, underpredict it. Therefore, there must have been other changes taking place among married women by cohort. We have shown above that it is consistent with the model to claim that technological progress in household production or a change in social norms has brought down the costs of working outside the home.
As part of my continuing series of dynamic information equilibrium "seismograms" (previously here), I put together another version of the story of women entering the workforce [1] as a driver of dramatic changes in the economy:


A big lesson is about causality. The positive shock to the level of women in the labor force precedes a (smaller relative) shock to the level of men in the labor force, both of which precede shocks to output and finally inflation (both CPI and PCE shown). Women entering the workforce caused a general economic boom — which drew additional men into the workforce and increased output and prices.

In the study linked above, the authors speculate that some of the effect was due to "technological progress in household production" (e.g. household labor-saving devices like washing machines and dishwashers), which made me think of the 'Solow paradox' ("You can see the computer age everywhere but in the productivity statistics."). What if the reason that household technology shows up in the economy is that it enabled more people to enter the labor force, while computers were mostly used by people already in the labor force [2]? This idea can be taken a step further to suggest that maybe the high GDP growth and inflation of the 1970s was due to the fact that a significant fraction of work that isn't counted in GDP statistics (household production) was automated, freeing people to participate in work that was counted in GDP. That is to say that if household production were counted in GDP, it is possible that there might not have been a "great inflation".

This is of course speculative. However, it is a good "thought experiment" that keeps you from assuming GDP is some ideal measure and reminds you that the "events" that appear in the GDP data may well be artefacts of the measurement methodology [3].

...

Update

Diane Coyle makes the case [pdf] for the possibility I mention in footnote [2]: a transition to more "digital production" may be behind the recent low productivity.

...

Update 28 February 2018

Commenter Anti below mentions inflation expectations, so I thought I'd add the dynamic information equilibrium model of the price level implied by the University of Michigan inflation expectations data [4]. I've added the result to the macroeconomic seismogram:


Note that the shock to inflation expectations follows the shock to measured inflation (making a simple backward-looking martingale a plausible model).

...

Footnotes

[1] Here are the CLF models for men and women:



The NGDP model is from here; the inflation models were used in my first history seismogram.

[2] This brings up the question of whether current home production that isn't counted in GDP — much of which is done on computers — is behind the recent "low growth" of a lot of developed economies.

[3] And even of the economic system itself, since households (as well as firms) are typically more like miniature centrally planned economies.

[4] The model fit is pretty good (dynamic equilibrium is α = 0.03, or basically 3% inflation):



Thursday, February 15, 2018

Dynamic equilibrium in wage growth


I saw some data from the Atlanta Fed [1] on wage growth that looked remarkably suitable for a dynamic information equilibrium model (also described in my recent paper). One of the interesting things here is that it is a dynamic equilibrium between wages ($W$) and the rate of change of wages ($dW/dt$) so that we have the model $dW/dt \rightleftarrows W$:

$$
\frac{d}{dt} \log \frac{d}{dt} \log W = \frac{d}{dt} \log \frac{dW/dt}{W} \approx \gamma + \sigma_{i} (t)
$$

where $\gamma$ is the dynamic equilibrium growth rate and $\sigma_{i} (t)$ represents a series of shocks. This model works remarkably well:


The shock transitions are in 1992.0, 2002.4, 2009.4, and 2014.7, which all follow the related shocks to unemployment. A negative shock to employment drives down wage growth (who knew?), but it also appears that wage growth has a tendency to increase at about 4.2% per year [2] unless there is a positive shock to employment (such as in 2014), when it can increase faster. The most recent downturn in the data is possibly consistent with the JOLTS leading indicators showing a deviation; however, since the wage growth data seems to lag recessions, it is more likely that this is a measurement/noise fluctuation.
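
For the curious, here is a rough sketch of the functional form being fit, written in Python with a single logistic shock and a synthetic series standing in for the Atlanta Fed data. This is my reconstruction of the dynamic equilibrium form, not the actual fitting code, and the parameter values are illustrative:

```python
# Dynamic equilibrium sketch: log of the wage growth series is a line of slope gamma
# plus logistic "shock" steps. One-shock version fit with scipy; data is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def dynamic_eq(t, gamma, c, a, t0, w):
    """log X(t) = gamma*t + c + a / (1 + exp(-(t - t0)/w))"""
    return gamma * t + c + a / (1.0 + np.exp(-(t - t0) / w))

rng = np.random.default_rng(0)
t = np.arange(2000.0, 2018.0, 0.25)                            # quarterly time axis
true_log = dynamic_eq(t, 0.042, np.log(4.0) - 0.042 * 2000, -0.35, 2009.4, 0.5)
wage_growth = np.exp(true_log + rng.normal(0, 0.01, t.size))   # "observed" growth rate (percent)

p0 = [0.04, np.log(wage_growth[0]) - 0.04 * t[0], -0.3, 2009.0, 1.0]
params, _ = curve_fit(dynamic_eq, t, np.log(wage_growth), p0=p0)
print("gamma (equilibrium growth of the growth rate, per year):", round(params[0], 3))
print("shock center:", round(params[3], 1), "  shock size (log):", round(params[2], 2))
```

The fits shown above use several shocks (a sum of logistic terms) rather than one, but the structure is the same.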

I added the wage growth series to the labor market "seismogram" collection, and we can see a fall in wage growth typically follows a recession:


...

Update 20 March 2021

It's not groundbreaking, but I should add that since $W \rightleftarrows NGDP$, the model is basically (by the transitive property of information equilibrium) that increases in NGDP are informationally equivalent to increases in the rate of growth of wages — a growing economy opens the opportunity set for wage growth.

At least until this happens.

(I've felt that the $\frac{dW}{dt} \rightleftarrows W$ formulation was a bit abstract, but realized I've never updated the post with how I thought about it in a more concrete way.)

...

Footnotes:

[1] The time series is broken before 1997, but data goes back to 1983 in the source material. I included data back to 1987. However, the data prior to the 1991 recession does not contain the complete 1980s recession(s), so the fit to that recession shock would be highly uncertain; I left it out.

[2] Wage growth is typically around 3.0% lately, so a 4.2% increase in that rate would mean that after a year wage growth would be about 3.1% and after two years about 3.3% in the absence of shocks.

Are interest rates inexplicably high?

The interest rate models of the long and short term rates are predicting average interest rates below the currently observed rates. For example, in this forecast:


Now, the forecast is for the average trend of monthly rates while I'm showing actual daily interest rate data, so we can expect to see occasional deviations even if the model is working correctly.

But how can we tell the difference between some expected theoretical error and a deviation? I decided to look at the elevated recent data in the light of the models' typical error. In the case of the long rate above, we're in the normal range:


The short rate is on a significant deviation:


However, these errors basically assume that the model error is roughly constant in percentage terms (i.e. a 10% error means a 100 basis point error on a 10% interest rate, while a 10% error means a 10 basis point error on a 1% interest rate). This isn't exactly true because the data is reported only to the nearest basis point, but that finite-precision effect should only come into play near log(0.01) ~ -4.6. The deviation here is possibly due to the Federal Reserve's implied precision of 25 basis points, where log(0.25) ~ -1.4. Since the Fed doesn't make changes of less than a quarter of a percentage point (25 basis points), and the short rate typically sticks close to the Fed funds rate, we'd expect data near or below log(0.25) as shown on the graph to have larger error than points above log(0.25).
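
As a quick numerical check of that argument (my own illustration, not part of the model), a fixed one basis point reporting step is a large jump in log space for low rates and a negligible one for high rates:

```python
# Illustration: a fixed 1 basis point (0.01 percentage point) reporting step is a
# big relative (log-space) jump at low rates and a tiny one at high rates.
import numpy as np

for rate in [0.25, 1.0, 2.5, 10.0]:        # interest rates in percent
    jump = np.log(rate + 0.01) - np.log(rate)
    print(f"rate {rate:5.2f}%   log(rate) = {np.log(rate):6.2f}   1 bp step in log space = {jump:.4f}")
```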

I don't see any particular reason to abandon these models without a more significant deviation.

Wednesday, February 14, 2018

Comparing CPI forecasts to data

New CPI data is out today, and here is the latest data point as both continuously compounded annual rate of change and year-over-year change. The latest uptick is consistent with a general upward trend after the post-recession shock to the labor force.



Tuesday, February 13, 2018

Some historical myths about Einstein and relativity

Which "thought experiment" leads you to this geometry?

One of the things I've noticed ever since I started doing some "freelance" (or "maverick" or "nutcase") economic research is how many strange accounts of how special relativity came about are out there in the world. It's a story frequently invoked by people from all walks of life, from economists to philosophers to general fans of science, as an example of an ideal scientific process. However, the story invoked is often at odds with what actually happened or with how physicists today view the outcome.

The popular re-telling actually has many parallels with the popular but erroneous [1] re-telling of how 70s inflation "proved Friedman was right" in macroeconomics — even to the point where some practitioners themselves believe the historical myths. The popular (but false) narrative goes something like this: Michelson and Morley conclusively disproved the idea of the aether, and in order to solve the resulting problems, Einstein used intuition and some thought experiments about moving clocks to derive a new theory of physics that refuted the old Newtonian world.

This should immediately raise some questions. 1) What problems with Newtonian physics would be caused by showing the aether (which doesn't exist) doesn't exist? 2) Why do physicists still use Newtonian physics? 3) Isn't Einstein famous for the equation E = mc² — which thought experiment leads to that?

The real story is more like this: Maxwell had produced an aether-based framework that was unifying the physics of light waves, electricity, and magnetism, but there were counterintuitive aspects of this framework, all having to do with moving charges and light sources, that involved a bunch of ad hoc mathematical modifications like length contraction, models of the aether, and an inconsistency in the interpretation of Maxwell's equations; Einstein came up with a general principle that unified all of these ad hoc modifications, made the aether models unnecessary, and resolved the asymmetry.

This answers my questions 1) through 3) above. 1) The aether was shown to be unnecessary, not erroneous. 2) Newtonian physics is a valid approximation when velocity is small compared to the speed of light. 3) E = mc² is a result of Lorentz invariance (i.e. math), not the thought experiments that help us get over the counterintuitive aspects of Lorentz invariance.

Now I am not a historian, so you should take this blog post as you would any amateur's. I did an undergraduate research project on the motivations for special relativity as part of my interdisciplinary science honors program [2], presented the result in a seminar, and I'm fairly familiar with the original papers (translated from German and available in this book). I also spent a bit of time talking with Cecile DeWitt about Einstein, but I'd only really use this to confirm the popular notion that Einstein had a pretty robust sense of humor so direct quotes should be considered with that in mind.

Let's begin!

Myth: Einstein was "bad at math"

This takes many forms from denial that the theoretical advances Einstein made were extremely advanced math at the time, to that he was actually bad at math leading him to his "thought experiments". This myth likely arises from a quote from a 1943 letter in response to a high school student (Barbara Wilson) who had called Einstein one of her heroes (emphasis mine):
Dear Barbara: 
I was very pleased with your kind letter. Until now I never dreamed to be something like a hero. But since you have given me the nomination, I feel that I am one. It's like a man must feel who has been elected by the people as President of the United States. 
Do not worry about your difficulties in mathematics; I can assure you that mine are still greater.
This probably was just said as encouragement, and Einstein might even have been thinking about his own crash course in differential geometry and comparing himself to mathematicians he knew like his teacher Minkowski. Einstein was something of a mathematical prodigy when he was younger and all of his work on relativity is mathematically challenging even for modern physics students. It would be hard to look at mathematics like this and say the person who was able to use it to produce an empirically successful theory of gravity was "bad at math". Also, here's the blackboard he left after his death:

You forgot to contract the indices on the Christoffel symbols.

Update 19 January 2019 (H/T Beatrice Cherrier). Apparently the math was so obscure at the time that only Einstein and his close, mostly German, colleagues really understood it, and due to English-German animosity in WWI it took some time to reach English physicists. The article that's from also has other things related to the rest of this post.

Myth: Einstein's thought experiments led to relativity

There are quite a few versions of this idea, but really it is more the reverse. Math led Einstein to conclusions he used thought experiments to understand (i.e. explain to himself and others) because of how counterintuitive they were. Maxwell's equations and their Lorentz invariance led Einstein to effectively promote a symmetry of electromagnetism to a symmetry of the universe. Einstein later used Minkowski's mathematical representation of a 4-dimensional spacetime as the framework for what would become general relativity.

It's somewhat ironic because Mach — who coined our modern use of "thought experiment" and from whom Einstein had learned "relativity" — believed that human intuition was accurate because it was honed by evolution. But why would evolution provide humans with the capacity to intuitively understand the bending of space and time (or the quantum fluctuations at the atomic scale)? Einstein turned that upside-down, and used Mach's thought experiments to instead explain counterintuitive concepts like time dilation and length contraction. I think a lot of people confuse Einstein's and Mach's ideas of "thought experiments", which led to this myth [3]. You can read more about this here.

I once had a commenter on this blog who decided to argue against even direct quotes from Einstein saying he got the idea of space-time for general relativity from Minkowski's 4-dimensional mathematics. Although some things in physics get named for the wrong person (the Lorentz force wasn't first derived by Lorentz), it's called Minkowski space-time for a reason.

This is a powerful narrative for some reason; I suspect it is the math-phobic environment that seems unique to American discourse. It is fine as an American to freely admit you are bad at math and still think of yourself as somehow "cultured" or "intellectual" (or in fact to elevate your status). The myth that Einstein didn't need math to come up with relativity plays into that.

Myth: The aether was disproved just before (or by) relativity

As I talked about here, there were actually several different theories of the aether (e.g. aether dragging), and various negative results over 50 years, from Fizeau's experiment to Michelson and Morley's, were often seen as confirmation of particular versions. Experiments continued for many years after Einstein's 1905 paper [3], and despite the modern narrative that Michelson and Morley's experiment led to special relativity, it was really more about mathematical theory than experiment [4].

I'm not entirely convinced that the aether has been completely "disproved" in the popular imagination or even among physicists anyway. We frequently see general relativity and gravitational waves explained through the "rubber sheet" analogy, which might as well be called an "aether sheet". If the strong and weak nuclear forces hadn't been discovered in the meantime, it is entirely possible that Kaluza and Klein's 5-dimensional theory that combined general relativity and electromagnetism would have become the dominant "standard model", and the aether could have been re-written in history as what space-time is made of [5].

What the #$@& is this substance that's oscillating here?

Myth: Special relativity "falsified" Newtonian physics

This one can be partially blamed on Karl Popper, but also on various representations and interpretations of Popper. I've frequently found descriptions of Popper's idea of falsification that say something like "Eddington's 1919 experiment falsified Newton's theory of gravity and caused it to be replaced with Einstein's". For example, here:
Popper argues, however, that [General Relativity] is scientific while psychoanalysis is not. The reason for this has to do with the testability of Einstein’s theory. As a young man, Popper was especially impressed by Arthur Eddington’s 1919 test of GR, which involved observing during a solar eclipse the degree to which the light from distant stars was shifted when passing by the sun. Importantly, the predictions of GR regarding the magnitude shift disagreed with the then-dominant theory of Newtonian mechanics. Eddington’s observation thus served as a crucial experiment for deciding between the theories, since it was impossible for both theories to give accurate predictions. Of necessity, at least one theory would be falsified by the experiment, which would provide strong reason for scientists to accept its unfalsified rival.
As best as I can tell, Popper only thought that Eddington's experiment demonstrated the falsifiability of Einstein's general relativity (e.g. here [pdf]): Eddington's experiment could have come out differently, meaning GR was falsifiable. I have never been able to find any instance of Popper himself saying Newton's theory was falsified (falsifiable, yes, but not falsified). Popper was a major fanboy for Einstein, which doesn't help — it's hard to read Popper's gushing about Einstein and not believe he thought Einstein had "falsified" Newton. Also, it's important to note that general relativity isn't required for light to bend (just the equivalence principle), but the relativistic calculation predicts twice the purely "Newtonian" effect. That is to say that light bending alone doesn't "falsify" Newtonian physics, just the particular model of photon-matter gravitational scattering.

In any case, both Newtonian gravity and Newtonian mechanics are used today by physicists unless one is dealing with a velocity close to the speed of light or in the presence of significant gravitational fields (or at sufficient precision to warrant it such as in your GPS which includes some corrections due to general relativity). The modern language we use is that Newtonian physics is an effective theory.

More myths?

I will leave this space available for more myths that I encounter in my travels.

...

Footnotes

[1] Read James Forder on this.

[2] Dean's Scholars at the University of Texas at Austin

[3] I sometimes jokingly point out that there is a privileged frame of reference that observers would agree on: the Big Bang rest frame. We only recently discovered our motion with respect to it in the 1990s. This idea also complicates some of the "thought experiments" used to explain special relativity (i.e. an absolute clock could be defined as one ticking in the rest frame of the CMB).

[4] I blame Popper for this:
Famous examples are the Michelson-Morley experiment which led to the theory of relativity
Einstein actually begins [pdf] with the "asymmetries" in Maxwell's equations, and relegates the aether experiments to an aside:
Examples [from electrodynamics], together with the unsuccessful attempts to discover any motion of the earth relatively to the “light medium,” suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest.
The paper itself is titled On the electrodynamics of moving bodies, further emphasizing that Einstein's motivation was more understanding the "asymmetries" of Maxwell's equations and Lorentz's electrodynamics. Einstein's paper basically reformulates Lorentz's "stationary aether" electrodynamics, but does it without recourse to the aether.

Experiments like Michelson and Morley's (such as Fizeau's 50 years prior, and a long list of others) were part of a drumbeat of negative results of measurements of motion with respect to the aether. In a sense, Einstein is telling us the aether (and therefore any attempt to measure our motion with respect to it) is basically moot — not that some experiment "disproved" it:
The introduction of a “luminiferous ether” will prove to be superfluous inasmuch as the view here to be developed will not require an “absolutely stationary space” provided with special properties, nor assign a velocity-vector to a point of the empty space in which electromagnetic processes take place.
[5] For example: "In the early 1800s Fresnel came up with the wave theory of light where the electromagnetic vibrations occurred in a medium called the luminiferous aether that we now refer to as space-time after Kaluza and Klein's unification of the two known forces in the universe: gravity and electromagnetism."

Monday, February 12, 2018

Economic seismograms: labor and financial markets


Steve Randy Waldman wrote a tweet asking about whether the stock market falls imperfectly predicted recessions or caused them, to which I responded saying the former in the "Phillips curve era" and the latter in the "asset bubble era" (both described here). But I thought I'd show a dynamic information equilibrium history chart that helps illustrate this a bit better for the US data. I first started making these graphs a few months ago partially inspired by this 85 foot long infographic from the 1930s; I thought they provided a simpler representation of the important takeaways from the dynamic information equilibrium models (presentation here or see also my paper) that I plan on using in my next book. Be sure to click on the graphics to expand them.

The light orange bars are NBER recessions. The darker orange bars represent the "negative" shocks (in the sense that you'd consider it a bad change in the measure — the unemployment rate goes up or the stock market goes down), with the wider bars indicating a longer duration shock. The blue bars are "positive" shocks (unemployment rate goes down, stock market goes up). The models shown here are the S&P 500, unemployment rate, JOLTS (quits, openings, hires), and prime age Civilian Labor Force participation rate.
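
For readers curious about how these charts are put together, here is a minimal plotting sketch in Python. It is not the code behind the figures, and the shock centers, widths, and recession dates below are illustrative placeholders rather than fitted values:

```python
# Minimal "seismogram" sketch: each row is one time series, each bar is a shock drawn
# at its center with a width proportional to its duration; values are placeholders.
import matplotlib.pyplot as plt

# (center year, width in years, sign) -- illustrative, not the fitted shocks
shocks = {
    "S&P 500":      [(2000.7, 1.5, "-"), (2008.5, 1.0, "-")],
    "Unemployment": [(2001.3, 1.0, "-"), (2008.8, 1.2, "-"), (2014.0, 1.0, "+")],
    "JOLTS hires":  [(2008.6, 1.0, "-")],
}
nber = [(2001.2, 0.7), (2007.9, 1.6)]            # recession (start, duration) in years

fig, ax = plt.subplots(figsize=(8, 2.5))
for start, dur in nber:                          # light background bars for recessions
    ax.axvspan(start, start + dur, color="navajowhite", alpha=0.5)
for row, (name, ss) in enumerate(shocks.items()):
    for center, width, sign in ss:
        color = "tab:orange" if sign == "-" else "tab:blue"
        ax.broken_barh([(center - width / 2, width)], (row - 0.3, 0.6), facecolors=color)
ax.set_yticks(range(len(shocks)))
ax.set_yticklabels(list(shocks.keys()))
ax.set_xlim(1998, 2018)
plt.tight_layout()
plt.show()
```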

As you can see in the top graph, major shocks to the S&P 500 precede recessions (and unemployment shocks) in the Phillips curve era (the 1960s to roughly the 1980s) and are basically concurrent with recessions (and unemployment shocks) in the asset bubble era (late 90s to the present).

At the bottom of this post, I focused on the latter five labor market measures. This graph illustrates the potential "leading indicators" in the JOLTS data, with hires coming first, openings second, and quits third. I don't know if the order is fixed (if there is a recession coming up, openings appears to be leading a bit more than hires). The other interesting piece is that shocks (in both directions [1]) to prime age CLF participation lag shocks to unemployment. There's an intuitive "story" behind this: people become unemployed, search for awhile, and then leave the labor force.


PS I thought I'd include these measures illustrating my contention (described here) that the "great inflation" of the 1970s was primarily a demographic phenomenon of women entering the workforce, in order to have a single post to reference for some of my more outside-the-mainstream conjectures. I present two measures of inflation (CPI and PCE) as well as the civilian labor force (total size) alongside the employment population ratio for men and women.



...

Footnotes

[1] You may be asking why there's a positive shock to unemployment, but no (apparent) shock to any of the JOLTS measures. That's an excellent question. The answer probably lies in the fact that shocks to unemployment are made up of a combination of smaller shocks to the other measures as well as a shock to the matching function itself. Therefore the shocks to hires and openings might be too small to see in those (much noisier) measures. One way to think about it is that the unemployment rate is a sensitive detector of changes in hires, openings, and the matching function.

Wednesday, February 7, 2018

What is the chance of seeing deviations in three JOLTS measures?

JW Mason had a post the other day wherein he said:
The probability approach in economics. Empirical economics focuses on estimating the parameters of a data-generating process supposed to underlie some observable phenomena; this is then used to make ceteris paribus (all else equal) predictions about what will happen if something changes. Critics object that these kinds of predictions are meaningless, that the goal should be unconditional forecasts instead (“economists failed to call the crisis”). Trygve Haavelmo’s writings on empirics from the 1940s suggest third possibile goal: unconditional predictions about the joint distribution of several variables within a particular domain.
To that end, I thought I'd look at the joint probabilities of the JOLTS data time series falling below the model estimates. First, let's look at some density plots of the deviation from the model (these are percentage points) for JOLTS hires (HIR), openings (JOR), and quits (QUR) for the data from 2004-2015 and then place the data from January 2017 to the most recent (Dec 2017) on top of it (points):


Can we quantify this a bit more? I looked at two measures using the full 3-dimensional distribution: the probability of finding a point that is further out from the center, as well as the probability that at least one of the data series has a worse negative deviation than the given point, and plotted both of those measures versus the distance from zero:



The first measure doesn't account for the correlation between the different series very well, but does give a sense of how far out these points are from the center of the distribution. The second measure gives us a better indication of not only the joint probabilities but the correlation between them — even if one of the three series is far from the center, it can be mitigated by one that is closer, especially if they are correlated.
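
Here's a minimal sketch of how such measures can be computed (my own reconstruction using hypothetical deviation data, not the actual numbers behind the plots): a Mahalanobis-distance rank for "further out from the center", and an empirical count for "at least one series worse":

```python
# Two joint-probability measures on (hires, openings, quits) model deviations.
import numpy as np

rng = np.random.default_rng(1)
# Columns: deviations of (hires, openings, quits) from the model, in percentage points.
hist = rng.multivariate_normal(mean=[0, 0, 0],
                               cov=[[1.0, 0.6, 0.5],
                                    [0.6, 1.0, 0.4],
                                    [0.5, 0.4, 1.0]],
                               size=2000)
current = np.array([-0.5, -1.5, -0.3])           # hypothetical latest deviations

cov_inv = np.linalg.inv(np.cov(hist, rowvar=False))
def mahal(x):
    d = x - hist.mean(axis=0)
    return np.sqrt(np.einsum("...i,ij,...j->...", d, cov_inv, d))

p_further = np.mean(mahal(hist) > mahal(current))
p_any_worse = np.mean((hist < current).any(axis=1))
print(f"P(historical point further from center): {p_further:.2f}")
print(f"P(at least one series worse than observed): {p_any_worse:.2f}")
```

The second measure uses the empirical joint distribution directly, so correlated deviations across the three series are automatically discounted.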

While there is a 19% chance that one of the hires, openings, or quits data could've come in worse than it did on Tuesday based on the data from 2004-2015, that's not all that small of a probability, leaving open the possibility that the data is simply on a correlated jog away from the model. This is basically capturing the fact that most of the deviation is coming from the openings data while the other two are showing smaller deviations:


Tuesday, February 6, 2018

JOLTS data ... and that market crash?

The latest JOLTS data does seem to continue the deviation from the dynamic information equilibrium we might see during the onset of a new shock (shown here with the original forecast and updated counterfactual shock in gray; post-forecast data is in black):




I will admit that the way I decided to implement the counterfactual shock (as a Taylor expansion of the shock function, which looks roughly exponential on the leading edge) might have some limitations if we proceed into the shock proper, because adding successive terms causes the longer ranges of the forecast to wildly oscillate back and forth, as can be seen here for a sine function. Using the full logistic function isn't necessarily a solution because it produces a series of under- and over-estimates (see here). Basically, forecasting a function that grows exponentially at first can be hard (a small numerical illustration of the truncation problem follows the graph below). One other measure is the joint function of openings and unemployment making up the Beveridge curve, which is starting to show a deviation from the expected path as well (moving almost perpendicularly to it):


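Here is that numerical illustration of the truncation problem (my own toy example, using the known Taylor series of a logistic function about its center rather than the actual counterfactual shock): low truncation orders are tolerable near the center, but every added term makes the behavior away from the center worse.

```python
# Toy illustration: truncated Taylor expansions of a logistic shock behave badly
# away from the expansion point, and adding terms makes the tails *worse*.
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Taylor series about the center, from sigma(x) = (1 + tanh(x/2))/2:
# sigma(x) ~ 1/2 + x/4 - x^3/48 + x^5/480 - 17 x^7/80640 + ...
coeffs = {1: 1/4, 3: -1/48, 5: 1/480, 7: -17/80640}

def taylor(t, order):
    out = np.full_like(t, 0.5)
    for p, c in coeffs.items():
        if p <= order:
            out += c * t**p
    return out

t = np.linspace(-6, 6, 241)
for order in (1, 3, 5, 7):
    err = np.abs(taylor(t, order) - sigmoid(t))
    print(f"order {order}: max error for |t|<=2: {err[np.abs(t) <= 2].max():.3f}, "
          f"for |t|<=6: {err.max():.1f}")
```
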
This brings me to the discussion around the latest market crash which included a lot of "the market is not the economy" and a pretty definitive "literally zero percent chance we are in a recession now" from Tim Duy. The only thing I would bring up is that the JOLTS data is a possible leading indicator of a recession and that data is not obviously saying "no recession" — and is in fact hinting at one (in the next year or so).

Coincidentally, I just updated the S&P 500 model I've been tracking and the latest drop puts us almost exactly back at the dynamic equilibrium (red, data and ARMA process forecast is blue, post-forecast data is black):


Which is to say that we're right where we'd expect to be — not on some negative deviation from equilibrium (just a correction to a positive deviation). I think it is just coincidental that the market fell to exactly the dynamic equilibrium model center; I wouldn't read too much into that. The fluctuations we see are well within the historical deviations from the dynamic equilibrium (red band is the 90% band).

...

Update 7 February 2018

I thought I'd add in the interest rate model forecast that's been going on for over three years as well. Note that the model prediction is for monthly data, so the random noise in daily data will have a somewhat larger spread, but it is still a bit high (which is one of the possible precursors of recession, connected to yield curve inversion in the model, see also here or here):


Sunday, February 4, 2018

Long term exercises in hubris: forecasting the S&P 500

I've been tracking the S&P 500 forecast made with the dynamic information equilibrium model. The latest mini-boom and subsequent fall are still within the normal fluctuations of the market:


However, I wouldn't be surprised if the massive giveaway to corporations in the latest Republican tax cut didn't in fact constitute a "shock" (dashed line in the graph above). Also relevant: the multi-scale self-similarity of the S&P 500 in terms of dynamic equilibrium.

...

Update 5 February 2018

Ha!

Also, the close today brings us almost exactly back to the dynamic equilibrium:


Also bitcoin continues to fall (this is not a forecast, but rather a model description):

...

Update 26 February 2018

Continued update of S&P 500 and bitcoin:



Saturday, February 3, 2018

African American unemployment spike

There's almost a sense of dramatic irony that after the State of the Union speech last week where credit was taken for the stock market and African American unemployment, both reversed themselves in the most recent data. While the spike in unemployment is outside the 90% confidence bands for the dynamic information equilibrium model for black unemployment, I do think it is just a fluctuation (statistical or measurement error):


We'd expect 90% of the individual measurements to fall inside the bands, so occasionally we should see one fall outside. It's not an actual increase in human suffering, and in fact is consistent with the continued decline in unemployment seen by the model. The unemployment rate is somewhat of a lagging indicator of recessions as well, so we should expect to see a decline in one or more JOLTS measures first if this is the leading edge of a recession.

However.

We should always keep our minds open to alternative theories, and along with the spike in hate crimes since the 2016 election it is possible that employers have felt more empowered to discriminate against African Americans. JOLTS data is not broken out by race, and so a racially biased decline in hires could well be hidden in the data (e.g. it could be partially responsible for the potential decline we are currently seeing in the aggregate measures — why would JOLTS hires fall when the "conventional wisdom" is that the economy is doing "great"?). This "leading" indicator wouldn't be as good of a leading indicator for a racially biased recession. In the past two recessions, the shocks to unemployment hit African Americans a couple months later (the centers are at 2002.0 vs 2001.8, and 2009.0 vs 2008.8), so a recession where black unemployment leads would be anomalous.

I don't think that is what is happening (it's just a single data point after all), but it can't be ruled out using available data. And after the experience of the past two years, I wouldn't put money on the better angels of white Americans' nature.

Friday, February 2, 2018

Unemployment and labor force participation (models vs data)

The latest employment situation data is out and the unemployment rate holds steady at 4.1%. This is still in line with the dynamic information equilibrium model (here or in my recent paper) as we begin the model's second year of accurate forecasting:


The data is also still in line with some of the latest forecasts from the Fed and FRBSF (but not their earlier ones):


Note that the unemployment rate seems to be a lagging indicator compared to JOLTS data (out next Tuesday 6 February 2018), so while there is some evidence in the JOLTS hires data of a possible turnaround it won't show up in the unemployment rate for several months.

Also out is the latest labor force participation data which doesn't help us distinguish between the two models (with and without a small positive shock in 2016) as it's consistent with both:


And finally there is the novel "Beveridge curve" connecting labor force participation and unemployment rate:


Update:

In light of this post by JW Mason, I decided to add the error bands to the "Beveridge" curve above based on the individual errors. It's not exactly looking at the probability of the joint distribution of multiple variables, but it's a step in that direction.


Thursday, February 1, 2018

When did we become gluten intolerant?

I don't know about you all, but I've been doing this since the early 2000s.

The dynamic information equilibrium approach I talk about in my recent paper doesn't just apply to economic data. The idea of comparing the information content of observing one event relative to observing another event has rather general application. As an example, I will look at search term frequency. Now if the English language were unchanging, given that there are a huge number of speakers, we'd expect relative word frequencies to remain constant and the distributions to be relatively stable. Changes to the language would show up as "non-equilibrium shocks" — a change in the relative frequency of use that may or may not reach a new equilibrium. A given word becomes more or less common and therefore has a different information content when that word is observed (a "word event").

We might be able to see some of these shocks in Google trends data — a collection of "word events" entered as search terms. It's only available since 2004, so we really can only look at language changes that happen within a few years. Longer changes (e.g. words falling into disuse) won't show up clearly, but this time series is well-suited for looking at fads.

I wanted to try this because I read an offhand comment somewhere (probably on Twitter) that said something like "everyone suddenly became gluten intolerant in 2015" [1]. What does the search data say?


The gluten transition in the US is centered near January 2009, but takes place over about 6 years (using the full width at half maximum for the shock). It "begins" in the mid-2000s and we seem to have achieved a new equilibrium over the past couple years.
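
For anyone who wants to reproduce this kind of fit, here's a minimal sketch in Python. A synthetic series stands in for the Google Trends CSV export, and the parameter values are made up rather than the fitted ones from the chart above:

```python
# Fit a single logistic "shock" to (log) search frequency; data here is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def shock_model(t, base, a, t0, s):
    """Log search frequency: constant level plus one logistic transition."""
    return base + a / (1.0 + np.exp(-(t - t0) / s))

rng = np.random.default_rng(2)
t = np.arange(2004, 2018, 1 / 12)                       # monthly samples
data = np.exp(shock_model(t, 2.0, 1.5, 2009.0, 0.8) + rng.normal(0, 0.05, t.size))

params, _ = curve_fit(shock_model, t, np.log(data), p0=[2.0, 1.0, 2008.0, 1.0])
base, a, t0, s = params
# For a logistic step, the full width at half maximum of its time derivative is
# 2*ln(3 + 2*sqrt(2)) * s ~ 3.53 * s, which is the "width" quoted in the post.
print(f"transition center: {t0:.1f}, FWHM: {3.53 * s:.1f} years")
```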

How about avocado toast? That happened around 2015 in the US:


However, I did notice on Twitter that there were a lot more (and earlier) references to avocado toast from Australians. In fact, I think it was a mention in Australian media that made me realize it wasn't just the breakfast I had made myself for years after having been given it by a Chilean friend (it has long been a common dish in Chile, where it's "palta"). Was this hunch visible in the data? Yes — almost a full year earlier:


So anyway, I just wanted to show a fun application of the information equilibrium framework. It applies to a lot of situations where there is some concept of balance between different things: supply and demand, words and their language, cars and the flow of traffic, neurons and the cognitive state, or electrons and information.

...

Update 2 February 2018

The "macro wars" (Nov 2007–Mar 2011):


...

Footnotes:

[1] Update: found it.
As a casual student of American food faddism, something that is still more than alive and well today (Yes, it’s an amazing coincidence that a sizable percentage of the educated liberal upper middle class all became gluten intolerant over a 3 year period. Must be pollution or something), I always love stories about our ridiculous food history.
It's a 6-year period above, but the definition of the "width" of a transition is somewhat arbitrary (I used the full width at half maximum above).