Thursday, July 12, 2018

One purpose of information theory


Information theory turns 70 this year; Claude Shannon's famous paper A Mathematical Theory of Communication [pdf] was published in 1948 and has been lauded as one of the foundations of the "digital age". One of the first things it did was allow engineers to design communication networks that worked in the presence of noise. As a subject, it's far more general, though.

Unfortunately, Shannon's entropy (often referred to as information entropy, and then shortened to just information) is frequently confused with the colloquial term "information", which brings connotations of data, of knowledge, of specific sets of symbols with specific meaning. But as Shannon and Weaver stressed in their book a year later, we must not confuse information in the information-theory sense with meaning. This collision of terminology is amplified when it encounters economics, where information economics deals specifically with the economic value of meaningful information.

I believe the best way to understand this difference is to understand what information theory illuminates. Information theory gives us a way to quantify concepts when we have limited knowledge about what underlies those concepts. For example, information theory is essentially a more general framework that encompasses thermodynamics in physics — thermodynamics is the science of how collections of atoms behave despite our not having remotely enough knowledge about the trillions upon trillions of atoms to model them individually. We give up talking about what a single atom in a gas is doing for what an atom could be doing and with what probability. We cease talking about what atoms are doing and instead talk about the realm of possibilities (the state space) and the most likely states.

Now thermodynamics is a much more specific discipline than information theory, not least because it fixes a particular relationship between energy and the (log of the) size of the state space through the Boltzmann constant k (thermodynamic entropy S is related to the state space via S = k log W, where W counts the size of that state space). But the basis of thermodynamics is the ability to plead ignorance about the individual atoms, and it is exactly that ability that information theory formalizes and generalizes.
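To make that connection concrete (this is the textbook identity, nothing specific to this post): Shannon's entropy for a probability distribution over microstates reduces to Boltzmann's formula when all W microstates are equally likely:

    H = -\sum_i p_i \log p_i
      \;\;\xrightarrow{\;p_i \,=\, 1/W\;}\;\;
    H = \log W
    \qquad\text{so that}\qquad
    S = k \log W = k H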

Information theory helps us build efficient communications systems because it allows us to plead ignorance about the messages that will be sent with them. I have no idea which sequence of 280 characters you are going to tweet, but information theory assures us it will be faithfully transmitted over fiber optic cables or radio waves. And if I am ignorant of the message you send, how can its meaning be important — at least in terms of information theory?

Maximum entropy methods in e.g. earth science let us plead ignorance about the exact set of chemical and physical processes involved in the carbon or water cycles to estimate the total flux of thermodynamic energy in the Earth-Sun system. Maximum entropy lets us program a neural network to identify pictures of cats without knowing (i.e. setting) the specific connections of the hundreds of individual nodes in the hidden layer — I mean, it's hidden! In a similar fashion, I've been trying to use information theory to allow me to plead ignorance about how humans behave but still come up with quantitative descriptions of macroeconomic systems [1].
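As a minimal sketch of what "pleading ignorance" looks like in practice (a toy example, not any of the applications above): if all we know about a system is its average value, the maximum entropy distribution over its states is the Gibbs/exponential one, and the single unknown parameter can be found numerically.

    import numpy as np
    from scipy.optimize import brentq

    x = np.arange(1, 11)          # a toy discrete state space with values 1..10
    target_mean = 3.0             # the only thing we claim to know about the system

    def mean_at(beta):
        # mean of the maximum entropy (Gibbs) distribution p_i ~ exp(-beta * x_i)
        w = np.exp(-beta * x)
        return (w / w.sum()) @ x

    # solve for the Lagrange multiplier that matches the known average
    beta = brentq(lambda b: mean_at(b) - target_mean, -5.0, 5.0)
    p = np.exp(-beta * x)
    p /= p.sum()
    print(f"beta = {beta:.3f}, distribution = {np.round(p, 3)}")

Everything else about the distribution (its variance, its tails) follows from maximizing our ignorance subject to the one thing we do know.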

But that's why the information in information theory isn't about meaning. One purpose of information theory is to give us a handle on things we don't have complete knowledge of (so a fortiori can't know the meaning of): the motions of individual atoms, the thousands of node connections in a neural network, the billions of messages sent over the internet, or (maybe) the decisions of millions of humans in a national economy. If we're pleading ignorance, we can't be talking about meaning.

...

Update 13 July 2018

First, let me say this blog post was inspired by this Twitter thread with Lionel Yelibi. And second, there's a related post about Cesar Hidalgo's book Why Information Grows and his "crystals of imagination". Hidalgo has a parable about a wrecked Bugatti Veyron that tries to get the point across that the value of objects is related to the arrangement of atoms (i.e. specific realizations of state space). However, in that particular case the information about value is not entirely encoded in the atoms but also (and maybe even primarily) in human heads: someone who didn't know what a Bugatti was would not value it in the millions. They might still value a car at closer to tens of thousands of dollars (although even that estimate is based on my own experience and memory of prices).

...

Footnotes:

[1] In a sense, information equilibrium could be seen as the missing concept for economic applications because it gives a possible way to connect the information in two different state spaces, which is critical for economics (connecting supply with demand, jobs with vacancies, or output with input).
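Schematically (glossing over the derivation in the paper), matching the information entropy of two state spaces A and B up to an information transfer index k gives the condition and its power-law solution:

    \frac{dA}{dB} = k\,\frac{A}{B}
    \qquad\Longrightarrow\qquad
    A = A_0 \left(\frac{B}{B_0}\right)^{k}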


Wage growth (and finishing out bitcoin)

The Atlanta Fed wage growth data has been updated for June 2018, and is pretty much in line with the dynamic information equilibrium model I've been tracking since February:


...

Post script/post mortem

Also, while not useful for forecasting, the bitcoin exchange rate model did provide a decent post hoc description of the data over the past several months, though it put the average rate of decline a little high: the model uses about −2.6/y (almost exactly 100 times the dynamic equilibrium depreciation rate of gold of −0.027/y), while the actual empirical decline from the 18 December 2017 peak to the most recent 11 July 2018 measurement here was only −1.8/y:


It's possible there's another shock in the data [1] earlier this year, but as I said on this blog, constantly adding shocks (even if they're really there) doesn't really validate the model. We'd need to validate the framework on other data and use that validity to motivate an unstable bitcoin exchange rate with tons of shocks.
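For reference, a minimal sketch of the functional form being fit in these exchange rate models (with made-up parameter values, not the actual fit): a constant log slope for the dynamic equilibrium plus logistic "shocks", with the log-linear transform used below simply subtracting that slope back out.

    import numpy as np

    def diem_log(t, alpha, c, shocks):
        """log of the series: dynamic equilibrium alpha*t + c plus logistic shocks,
        each specified by (amplitude a, width w, center t0)."""
        y = alpha * t + c
        for a, w, t0 in shocks:
            y += a / (1.0 + np.exp(-(t - t0) / w))
        return y

    t = np.linspace(2015.0, 2018.5, 500)
    alpha = -2.6                                                 # made-up equilibrium decline
    log_rate = diem_log(t, alpha, 10.0, [(8.0, 0.1, 2017.2)])    # one made-up shock

    # the "proper" frame below: remove the equilibrium decline to expose the stair-steps
    transformed = log_rate - alpha * t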

Update

Here's what happens when you include that shock:


Note that in the "proper" frame (a log-linear transform that removes the dynamic equilibrium decline), the stair-step appearance (noted here and in my paper) is more obvious:



...

Footnotes:

[1] We could further motivate this shock centered in April 2018 by noting that the rate of decline from the 5 May 2018 peak to 11 July 2018 was −2.8/y, and the rate of decline from 18 December 2017 to 18 March 2018 was −3.2/y, meaning the lower overall rate of decline of −1.8/y from December to July was mostly due to the bump in April 2018.
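For clarity, these rates are just annualized changes in the log of the exchange rate between two dates. Here is the calculation with placeholder prices (not the actual data), with dates converted to decimal years:

    import numpy as np

    def annualized_log_rate(p0, p1, t0, t1):
        # annualized rate of change of log(price) between decimal-year dates t0 and t1
        return (np.log(p1) - np.log(p0)) / (t1 - t0)

    # placeholder prices: 18 Dec 2017 ~ 2017.96, 11 Jul 2018 ~ 2018.53
    print(annualized_log_rate(19000.0, 6400.0, 2017.96, 2018.53))   # ~ -1.9/y with these numbers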

July update of CPI (with June data)

The latest CPI data is out today, and we see the continued end of the "lowflation" period in the US that followed the 2008 recession and the global financial crisis. Overall, there's not a lot of news here, so I'll just post the graphs with the latest post-forecast data (black) compared to the forecast/model (red) for both the continuously compounded annual rate of change and the year-over-year change (as always, click to enlarge):


Here are some zoomed-in versions:


The error bands are one standard deviation (~70% confidence) of the model errors on the fit data (blue). The dashed red line is the (minimally) revised estimate of the post-recession shock parameters.

...

PS I forgot to include separations in the JOLTS data release earlier this week, so I'm posting it now. Also, I decided to use the interest rate spread estimate of the counterfactual recession timing (2019.7) in the static graphs instead of the previous arbitrary one (2019.5). The animations still show the effect of changing that timing on the counterfactual forecast. I'll also show the JOLTS openings rate with this updated timing guess:



Tuesday, July 10, 2018

Counterfactual 2019 recession update (JOLTS data)

Unfortunately the latest data from JOLTS isn't that informative — we're effectively in the same place we were last month with a continued correlated deviation from the dynamic information equilibrium model "no recession" counterfactual for JOLTS job openings. Here are the counterfactual forecasts updated with the latest data:


The quits and hires are showing trend behavior as before (click to expand):


A correspondent on Twitter did point me to NFIB data as an additional source — it tells a similar story to the JOLTS data with somewhat higher uncertainty:


Their measure is the fraction of firms reporting at least one unfilled job opening in their survey.

The median interest rate spread among several measures continues to decline. I added an AR process estimate of the future median monthly rate spread based on the linear model. It seems to show that yield curve inversion is unlikely before the recession hits in this pseudo-cycle:
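Roughly, the construction is: detrend the monthly spread with the linear model, fit an AR process to the residuals, and project forward. A sketch with made-up data (using statsmodels; the actual calculation may differ in its details):

    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    rng = np.random.default_rng(0)
    t = np.arange(120)                                            # months of made-up data
    spread = 3.0 - 0.03 * t + rng.normal(0.0, 0.1, size=t.size)   # declining spread + noise

    slope, intercept = np.polyfit(t, spread, 1)                   # the linear model
    resid = spread - (slope * t + intercept)

    ar = AutoReg(resid, lags=3).fit()                             # AR process on the residuals
    future_t = np.arange(t[-1] + 1, t[-1] + 25)                   # 24 months ahead
    forecast = slope * future_t + intercept + ar.predict(start=len(resid), end=len(resid) + 23)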



And here's a somewhat more zoomed-in version:


PS Here's the updated JOLTS opening animation showing different counterfactual recession centers from 2018.5 to 2019.5:

As well as the Beveridge curve (latest point is the white dot with black outline):



Friday, July 6, 2018

Economic growth in India


Via DM, I was asked about the path of India's GDP in the dynamic information equilibrium model (DIEM). The result is in the graph above. I had to cobble together some annual data from the Reserve Bank of India's statistics page alongside the quarterly data available on FRED. What is interesting is that India shows a different pattern of DIEM "shocks" from Anglophone and European countries. The first shock is a negative one centered in 1952, spanning the years 1949 to 1955; the likely "cause" is India's independence in 1947. Some may want to attribute this to India adopting a socialist system, but its first five year plan doesn't happen until 1951 — halfway through this shock. Plus, the following period shows roughly constant growth. At any rate, more study is needed.

The DIEM description of the data shows a period of "equilibrium" growth between 1960 and 1980 that would match up with Raj Krishna's "Hindu rate of [real] growth" of about 3.5%. The 9.9% per annum nominal GDP growth would require inflation of about 6.4% during that period to produce 3.5% real growth, which is about right: log CPI going from 0.9 in 1960 to 2.2 in 1980 gives an average of 0.065 ≈ 6.5% per year.
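Spelling that arithmetic out (continuously compounded rates, so real growth is approximately nominal growth minus inflation):

    \pi_{\text{required}} \approx 9.9\% - 3.5\% = 6.4\%,
    \qquad
    \pi_{\text{CPI}} \approx \frac{\log \mathrm{CPI}_{1980} - \log \mathrm{CPI}_{1960}}{1980 - 1960}
      = \frac{2.2 - 0.9}{20} = 0.065 = 6.5\%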

The period from 1980 to 1995 was a large positive shock. It could be attributed to the sixth five year plan and its economic liberalization; however, in most of the other economies I've looked at, the major factor is demographic. This could be e.g. people leaving agriculture for industry. It could also be the surge in deficit spending around the same time (slightly before). Again, I have to look more closely at other time series to understand what is happening here.

Finally, there's another surge that occurs from 2006 to 2012 (with the global financial crisis appearing as a blip in the middle). This neatly corresponds to the eleventh five year plan (2007-2012), and overlaps with the construction of the "Golden Quadrilateral" road system.

Since 2012, India appears to be back on its equilibrium growth path of 9.9% per year (nominal GDP), which is expected to continue into the future.

I always like looking at the data for countries other than the US — laziness and the ease of accessing FRED data are big reasons most of the models have been tested on US data. Additionally, the political economy of the US tends to bring up more US-centric questions.

I'm also not very well informed about the political economy and economic history of other countries. This is both good and bad. It's good because it means I don't go into modeling the data with a preconceived economic history; it's bad because I don't necessarily have decent intuitive explanations for what the models uncover. I'd appreciate any information in the comments about the economic history of India beyond my rudimentary knowledge above.

Unemployment up to 4.0% (but still consistent with forecast)

The dynamic information equilibrium model (DIEM) forecast (the model is detailed in my paper) is still going strong since it was made back in January of 2017 (1.5 years ago). Last month's data was outside the 90% confidence intervals, but this month has us back up to 4.0% as a bit of mean reversion [1]. Here's the comparison with the FRBSF forecasts as well as the Fed's (annual average) forecasts:



The color-coded arrows in the second graph show when the Fed forecasts were made.

Note that if there is going to be a recession in 2019 (via JOLTS data indicator, via yield curve indicator), we'd expect the path of the unemployment rate to follow something like the September 2017 forecast from the FRBSF and the Fed — the unemployment rate will be significantly above the DIEM forecast error allowing e.g. this "recession detection" algorithm to posit a recession.
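As a toy sketch of what such a detection rule could look like (this is not the algorithm linked above, just an illustration): flag a recession once the data has been above the forecast band for several consecutive months.

    import numpy as np

    def flag_recession(data, forecast, sigma, n_sigma=2.0, run_length=3):
        """Toy detector: True once `run_length` consecutive points exceed the forecast
        by more than n_sigma standard deviations of the forecast error."""
        above = data > forecast + n_sigma * sigma
        run = 0
        for hit in above:
            run = run + 1 if hit else 0
            if run >= run_length:
                return True
        return False

    # made-up unemployment data against a flat 4.0% forecast with a 0.2 pp error band
    data = np.array([3.9, 4.0, 4.0, 4.1, 4.3, 4.5, 4.8, 5.2, 5.5, 5.9])
    print(flag_recession(data, forecast=4.0, sigma=0.2))   # True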

These kinds of forecasts always fill me with conflict. A recession is a terrible hardship for many people. However, I'm also excited to see if the model works. I tend to rationalize it by saying that forecasting recessions is a bit like forecasting earthquakes or volcanic eruptions — it can help people prepare — but we should always remember that economic time series are metrics for real world hardship.

...

Footnotes:

[1] I heard the news on NPR, and I found the "very serious" talk by economic journalists and economists about the "meaning" of the jump as if something had changed or that it was due to various factors ... hilarious. Even without the dynamic equilibrium model, this increase was well within the random fluctuations/measurement error.

Tuesday, July 3, 2018

Why hate on beauty?

I've been seeing the media coverage surrounding the release of Sabine Hossenfelder's new book Lost in Math: How Beauty Leads Physics Astray, and I think we now have a physics example of the econ critique trope. I tweeted about it the other day:
I'd almost put this as directly analogous to an econ-critical economist saying economists are too enamored with beautiful theories when cursory inspection of your average DSGE model would generate almost any adjective except beautiful.
At the time I had read Hossenfelder's blog post about "beauty" in physics, but I just found the link to Andrew Gelman's blog post about it, and it seems to have picked up sufficient steam that I really think I need to say that Hossenfelder is mis-characterizing physics — in much the same way econ-critical economists tend to mis-characterize economics in their criticism by playing on public perceptions of the field.

Hossenfelder defines beauty in physics as "simplicity, naturalness, and elegance" and proceeds to discuss each in turn; I will do the same.

Simplicity

I think Hossenfelder is playing on the prejudices of a general audience here. If people know any "simple" theoretical physics models, they probably are aware of Maxwell's equations or Einstein's general relativity. Actually, Andrew Gelman gives the list the typical member of the target audience might give:
Newton’s laws, relativity theory, quantum mechanics, classical electromagnetism [i.e. Maxwell's equations], the second law of thermodynamics, the ideal gas law: these all do seem beautiful to me.
People have tattoos of some of these equations! I knew someone (not a physicist) who had a Schrodinger equation tattoo. There are several Maxwell's equations T-shirts. I always thought they should write ℒ = tr F ∧ ★F using differential forms instead of the 19th century vector form people are most familiar with. The thing is that nearly all of those theories were known by the first half of the 20th century. The most recent one on the list is quantum mechanics, and quantum mechanics is over 100 years old (the Schrodinger equation itself turned 90 a couple of years ago). These are not recent theories. Yet I think Hossenfelder is playing on the fact that her audience has these examples on the tips of their tongues (availability heuristic).

Of course, missing from that list is quantum field theory, the content-less method for maintaining [pdf] "analyticity, unitarity, cluster decomposition, and symmetry". But quantum Yang-Mills theory and examples of it like QED and QCD have a kind of beautiful simplicity. At least when you write them down as a Lagrangian (they're both described by the classical Yang-Mills Lagrangian in the previous paragraph, but QCD has additional non-commutative matrix indices). Computing 600 diagrams to get a few more decimal places in the calculation of the magnetic moment isn't really "simple", and there's nothing "simple" about non-perturbative QCD for which one of the major approaches (lattice QCD) has all the beauty and simplicity of your undergrad implementation of Runge-Kutta integration.
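For reference, here is that "simple" Lagrangian in both the differential-form notation above and the more familiar component form (conventions and overall normalizations vary):

    S_{\mathrm{YM}} \sim \int \mathrm{tr}\, F \wedge \star F
    \qquad\Longleftrightarrow\qquad
    \mathcal{L}_{\mathrm{YM}} = -\tfrac{1}{4}\, F^{a}_{\mu\nu} F^{a\,\mu\nu},
    \qquad
    F^{a}_{\mu\nu} = \partial_\mu A^{a}_\nu - \partial_\nu A^{a}_\mu + g f^{abc} A^{b}_\mu A^{c}_\nu

The non-abelian term with the structure constants f^{abc} is the part that makes the "simple" Lagrangian hide all that non-perturbative complexity.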

The underlying jab, of course, is at string theory. Hossenfelder studies quantum gravity, for which there are few candidate theories that make sense. String theory requires six or seven additional unobserved dimensions, and loop quantum gravity violates Einstein's special relativity (Lorentz invariance). I personally like Verlinde's wisecrack [1] — his paper about entropic gravity that in a sense says quantum gravity doesn't exist.

But as anyone who has actually studied string theory would know (the UW introduced its first string theory class while I was there), the string theories aren't exactly "simple" — much like how the simplicity of the Yang-Mills Lagrangian hides complex non-perturbative physics, string theories are incredibly complicated to actually write down and perform calculations with. Sometimes a "simple" idea comes up (T-duality, the AdS/CFT correspondence), but the preponderance of papers in purportedly simple string theory look like this [pdf]:


This is not simple in any way that would be considered "beautiful" (I'm not knocking this paper!), so obviously beauty as "simplicity" is not always a driving factor in research. And most string theory looks like this! It makes me wonder if Hossenfelder is playing on the fact that very few people reading her book have ever actually done a calculation with a Virasoro algebra — even among physicists. 

Naturalness

Hossenfelder's technical description of naturalness is fine (dimensionless parameters being of order 1), but the direction of inference from naturalness is wrong. A lack of naturalness is usually a sign of a puzzle, but if a theory describes empirical data well enough, no one rejects the theory. An example: QCD. The QCD Lagrangian, from a theoretical perspective (based on Weinberg's paper I used as a citation for the content-less-ness of quantum field theory above), should have another term that allows QCD to violate CP symmetry (charge-parity symmetry, which the CPT theorem ties to time-reversal symmetry). This is called the strong CP problem. For some reason, the coefficient of that term, if it's not zero, is really small. Unnaturally small. It's small enough that the axion was proposed as a possible solution. A similar consideration happens in general relativity, which should have a cosmological constant; however, that constant is unnaturally small (at least relative to the scale we think should set it — which likely means it should be some other scale).
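For concreteness, the term in question is the θ-term; it is allowed by the general arguments above, but measurements of the neutron electric dipole moment bound its coefficient at roughly |θ| ≲ 10⁻¹⁰:

    \mathcal{L}_{\theta} = \theta\, \frac{g^2}{32\pi^2}\, G^{a}_{\mu\nu} \tilde{G}^{a\,\mu\nu},
    \qquad
    \tilde{G}^{a\,\mu\nu} \equiv \tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma} G^{a}_{\rho\sigma}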

But in no way is this lack of naturalness cause to reject general relativity or QCD (which are both wildly empirically successful in other areas), or consider either any less "beautiful". At least I thought the theory was 'beautiful'; my thesis was about a potential approach to non-perturbative QCD that could be measured in nuclear physics experiments. Naturalness as beauty has not led physics astray in its study of QCD or general relativity.

In fact, if you had some new theory of quantum gravity and the only thing in your way were a lack of naturalness in parameter values that fit empirical data well, that would be a major breakthrough. I can't imagine any physicist who would reject it. The issue is that there's no theory predicting new effects that have been (or could be) measured, so there are no fitted parameters about which to make any naturalness judgment at all.

Elegance

Hossenfelder's definition of elegance seems to be a redundant restatement of the other aspects of beauty ("Elegance is the fuzziest aspect of beauty. ... By no way do I mean to propose [elegance] as a definition of beauty; it is merely a summary of what physicists mean when they say a theory is beautiful." — beautiful via the other two criteria, I guess?)

I'd agree her example of general relativity is elegant. It's also simple from a certain perspective. In fact, going by the effective theory approach, general relativity is the simplest non-trivial curved space-time theory we could write down:

Gμν + Λ gμν = α Tμν

That basically says space-time curvature + cosmological constant = energy-momentum. Of course, there's a big naturalness problem right there in that cosmological constant: Why is it so small? But we don't reject the theory. Einstein thought the Λ = 0 version was more elegant. However, Einstein also thought the equations were so complicated (And they are! The notation above hides so much! [2]) that it would be difficult to find any closed form solutions. (Schwarzschild found such a solution within a few months.) As with a century of quantum mechanics, time can alter our perspective. Once thought hopelessly complex, Einstein's field equations are now routinely solved, and people make precision measurements of their novel effects.

Hossenfelder also mentions grand unification as elegant, but grand unification is more a set of circumstantial evidence than a "theory". The charges of the various quantum field theories in the standard model change with energy in such a way that they almost coincide at a huge energy called the Grand Unified Theory (GUT) scale. Adding supersymmetry (which is mentioned as a separate case of elegance) makes them coincide much better, and there are a variety of "grand unified theories" (really, various models). Of course, most of them are untestable at current energies, and many predict the same observable things at energies we can reach. I guess it's elegant that the supersymmetry required to make string theories consistent also makes the GUT scale work out better! But then, supersymmetry has never been observed. The only real parameter it has is the number of supersymmetries, so you could maybe say N = 50 would be an unnatural number of supersymmetries. But then 26 dimensions falls naturally out of bosonic string theory [3], so who's to really say? Having fewer supersymmetries would fall more under Occam's razor (which Hossenfelder mentions) than under simplicity and naturalness as metrics of beauty getting in the way.

As mentioned above, string theory (her other example of "elegance") isn't really "simple" if you're actually trying to work with it. It's really neat that e.g. the string boundary conditions become real dynamical objects ("branes") in string theory, but that weirdness is more why some physicists say that string theory is actually 23rd century alien math we discovered accidentally in the 20th century (that's a vaguely remembered comment whose source I forget). Essentially, string theories are theories of more than just strings, and we do not fully know how much more yet.

Conclusion

The subtitle of Hossenfelder's book is "How Beauty Leads Physics Astray", and the implicit judgment is on string theory. Much like how hating on DSGE models [4] is popular in pop-econ, it seems hating on string theory is popular in pop-physics. Hossenfelder, along with Peter Woit, is a well-known blogger who frequently critiques string theory (I think there might be a connection between blog popularity and critique). Hossenfelder's thesis is that too many resources are devoted to it. In a world with limited resources, this is definitely good to question. Woit just seems to think that it is too popular in pop-physics for how little he thinks it has accomplished (I think; I'm not sure, as his various writings come across as pessimism rather than criticism; he mostly seems like that cranky grad student who finds the pessimism in any discussion).

I don't really get Hossenfelder's and Woit's impatience. It's been 40 years! they cry. Is the critique of too much emphasis on "beauty" just masking impatience? (Actually, one of the blurbs on the Amazon site says "Sabine Hossenfelder is impatient for new waves of discovery.") Did you think we'd go from the Standard Model to a theory of everything in your lifetime? It was 200 years before we started to upend Newtonian physics with quantum mechanics. Again, string theory is 23rd century alien math. 

The lack of empirical confirmation of anything specific to string theory is actually more a reason not to devote resources to any kind of high energy fundamental physics at all. There are no plans for dramatically larger accelerators than the LHC, so if you think there isn't going to be confirmation or rejection of string theory that means it's unlikely there will be any confirmation of any theory at the same scale.

In the end, criticism like this is the kind I dislike. We should do something different! Ok, what? Um, I don't know. Don't tell us we are on the wrong path, show it by finding a more fruitful path. Hossenfelder addresses this in a separate blog post:
As far as quantum gravity is concerned, string theorist’s main argument seems to be “Well, can you come up with something better?” Then of course if someone answers this question with “Yes” they would never agree that something else might possibly be better. And why would they – there’s no evidence forcing them one way or the other.
This seems an odd retort to the expectation to show something different is useful — a kind of tu quoque where something purportedly better has no evidence either. It's also a bit disingenuous because a string theory calculation did come up with the Hawking-Bekenstein area law. And even if there are some possible issues with that (firewalls), string theory still unifies all the forces and gravity into a single framework. Let's go back to Hossenfelder's own blog:
String theory arguably has empirical evidence speaking for it because it is compatible with the theories that we know, the standard model and general relativity. The problem is though that, for what the evidence is concerned, string theory so far isn’t any better than the existing theories. There isn’t a single piece of data that string theory explains which the standard model or general relativity doesn’t explain.


The reason many theoretical physicists prefer string theory over the existing theories are purely non-empirical. They consider it a better theory because it unifies all known interactions in a common framework and is believed to solve consistency problems in the existing theories, like the black hole information loss problem and the formation of singularities in general relativity. Whether it is actually correct as a unified theory of all interactions is still unknown. And short of a uniqueness proof, no non-empirical argument will change anything about this.
I would say this — being compatible with all the empirically successful theories while unifying them, but just not giving anything extra — is a remarkable feather in string theory's cap. In a sense, Maxwell's equations unified a bunch of known electric and magnetic phenomena into a single framework in this same way. Later they were discovered to have some issues that Einstein solved with his 1905 special relativity paper (note that those issues were in fact the impetus for Einstein's paper and why it's titled "On the Electrodynamics of Moving Bodies").

Also, don't tell me that the better path is loop quantum gravity, because it violates Lorentz invariance. For all the failings of string theory, at least it doesn't violate one of the most empirically successful theories to come out of physics. In fact, if string theory were languishing and all the resources were going to loop quantum gravity, I'd be totally on board with Hossenfelder/Woit style criticism of loops and calls for redirection of resources to other areas.

But then, who's to say resources are being misdirected? I don't know about Hossenfelder or Woit, but even when I was studying boring (to quantum gravity people, at least) QCD I frequently wrote down attempts to come up with pieces of a possible final "theory of everything". I speculated about the common coefficients in quantum field theory calculations being related to Galois groups — something that is currently being studied. I had a wild idea to re-make the idea of smooth manifolds in mathematics as essentially leading approximations to some new underlying space and tried to understand its topology (I think I just re-invented fractional derivatives, though). I was always messing around with possibilities — random ideas that were essentially funded by my nuclear theory research position. I imagine most string theory practitioners do the same thing. Einstein did his work while being "funded" by the patent office. Heck, some string theory concepts like holography may be illuminated by incredibly simple models that seem to have started out as just messing around. I could imagine training in string theory would be decent background for understanding a final theory, whatever it turns out to be.

The trick is to keep new students funded and engaged. I don't think the specific projects that get funded are necessarily that important. Can you imagine? A big government agency picking research projects and their choice is what ends up as the final "theory of everything"? Not to go all libertarian on you, but that kind of top-down direction seems unlikely to generate breakthroughs. Breakthroughs often come while you were studying something else [5]. String theory itself started as essentially a side project looking at the details of a simple model of mesons (that was later rejected for the better QCD). Who knew funding that would lead to a string theory-industrial complex that Hossenfelder claims is eating all the resources?

I guess I'm saying: who really knows where innovation comes from? Why is motivation through beauty not a source of innovation? Why is wasting resources on string theory not a source of innovation? Maybe even writing books about wasting resources on string theory is a source of innovation to those that read them.

In the end, any final "theory of everything" that describes all matter, energy, space, and time is going to be beautiful regardless of what it looks like because it describes all matter, energy, space, and time.

...

Footnotes:

[1] This is an insider joke; a reference to "Witten's wisecrack" (as described by Sidney Coleman) that said in natural units, the perturbative expansion of QED in the coupling constant of about e ~ 1/3 was no worse than the large-Nc expansion of QCD with Nc = 3.

[2] Expanding a bit:
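The Einstein tensor unpacks into the Ricci tensor and scalar, those unpack into Christoffel symbols, and those unpack into derivatives of the metric (the standard formulas, written here in LaTeX, with R = g^{μν} R_{μν}):

    G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu},
    \qquad
    R_{\mu\nu} = \partial_\lambda \Gamma^{\lambda}_{\mu\nu}
               - \partial_\nu \Gamma^{\lambda}_{\mu\lambda}
               + \Gamma^{\lambda}_{\lambda\sigma} \Gamma^{\sigma}_{\mu\nu}
               - \Gamma^{\lambda}_{\nu\sigma} \Gamma^{\sigma}_{\mu\lambda},
    \qquad
    \Gamma^{\lambda}_{\mu\nu} = \tfrac{1}{2} g^{\lambda\sigma}
      \left(\partial_\mu g_{\nu\sigma} + \partial_\nu g_{\mu\sigma} - \partial_\sigma g_{\mu\nu}\right)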



[3] It's weird, but the zeta-regularized sum of the natural numbers is ζ(−1) = −1/12, and in order to make the required cancellation work, you end up with 2 − 2/ζ(−1) = 26 (if I remember correctly). Also, the Casimir force is attractive because this number is negative.
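Sketching the heuristic argument: the zero-point energy of the D − 2 transverse oscillators involves the divergent sum over mode numbers, which zeta regularization assigns the value ζ(−1); requiring the first excited string state to be massless (a = 1) then fixes the dimension:

    -a = \frac{D-2}{2}\sum_{n \ge 1} n \;\longrightarrow\; \frac{D-2}{2}\,\zeta(-1) = -\frac{D-2}{24}
    \qquad\Rightarrow\qquad
    a = \frac{D-2}{24}, \quad a = 1 \;\Rightarrow\; D = 2 - \frac{2}{\zeta(-1)} = 26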

[4] DSGE models are actually pretty general — just a few canonical elements seem to be included out of inertia (Phillips curves, Euler equation).

[5] Not to say information equilibrium should be hailed as a "breakthrough" (yet!), but it came about from studying compressed sensing.

PCE inflation and checking forecast validity

This post isn't going to be very exciting, but sometimes science is just doing the legwork. PCE price level (inflation) data was released last week — the forecast is doing fine:

Monday, June 25, 2018

Yield curve inversion and a future recession

There was a recent article out on the internet about yield curve inversion. Using the spread between Moody's AAA rate (blue, a decent proxy for the 10-year rate with less noise) and the 3-month secondary market rate (purple), we can see that from the 1950s until today, a low spread has been associated with recessions:


However, in the aftermath of the Great Depression, inversion of this measure wasn't a good indicator. It has only become an indicator since the 1950s. The past few recessions have all been preceded by a closing of this measure, but the degree of closing has gotten smaller since the 80s (actual inversion before the 90s has turned into just entering the error band):


Looking at the recent data and assuming the dynamic equilibrium model is correct along with a linear trend in rate increases, we see that the indicator will enter the error band sometime before 2020:


However, the period of time the spread spends inside that error band ranges from a few months to a year (yield curve inversion is usually described as an indicator that a recession will happen within a year). So unless we have other data, we won't be able to predict the timing of this future recession. We do have other indicators, and this extrapolation is consistent with them.

...

Update 26 June 2018

I've done a better analysis of the estimate of the US recession onset via the yield curve inversion indicator by aggregating several different measures of the spread (collected here). I looked at the median (yellow), average (blue), and a principal component analysis (green). These gave nearly identical results:


Since those were practically identical, I used the median for the subsequent calculations. I then extracted the slope of the approach to the three previous recessions (early 1990s, early 2000s, and the "Great Recession" of 2008, dropping the first year after the start of the previous recession) using a linear model, and used that slope to estimate the most likely recession onset (first quarter of the NBER recession) for a future recession (assuming the current decline in spreads will eventually lead to a recession). That value is 2019.7 ± 0.3 (two standard deviation error). This is what the current approach to the recession looks like in that context (the previous three recessions are shown in blue, yellow, and green and labeled by the end of the NBER quarter — i.e. 0.5 is the end of calendar Q2):


The blue band represents the 90% confidence interval on the single-prediction errors of the linear model (dashed line). Since the declaration of an NBER recession typically lags the first indications of a recession in unemployment and JOLTS data, we should be seeing the first signs in those data series in the next 6 months to a year. And since we are already seeing some signs in the JOLTS data, these indicators all seem consistent.

Note that the above analysis in this update is "model agnostic" in the sense that it just relies on the empirical regularity of a trend toward yield curve inversion between recessions, with no specific model of how yield curve inversion works or which way causality goes. It does imply a certain inevitability of a recession. Since the mean spread rose to about 3 percentage points after each recession, and the slope is −0.36 percentage points per year, this implies about 8.3 years between recessions — which is roughly what a Poisson process estimate says based simply on the frequency of recessions (λ ~ 0.126/y, or an inter-arrival time of 7.9 years as mentioned here).
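The back-of-the-envelope version of that consistency check, with the numbers quoted above:

    post_recession_spread = 3.0       # percentage points, typical level after a recession
    slope = -0.36                     # percentage points per year, fitted approach slope
    print(post_recession_spread / abs(slope))   # ~ 8.3 years between recessions

    poisson_rate = 0.126              # recessions per year from the historical frequency
    print(1.0 / poisson_rate)         # ~ 7.9 years mean inter-arrival time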

...

Update 3 July 2018

Apparently I mislabeled the variables medianData and meanData in my code, switching them up. Anyway, the above result uses the median, not the mean (average). I also added post-forecast daily data in red, which is more rapidly updated than the monthly data time series.

Tuesday, June 19, 2018

Q: When is bad methodology fine?

A: When you clearly say it's bad methodology.

For example, I am currently playing around with housing starts data and the dynamic information equilibrium model (DIEM). It looks like only the data from about 1990 on can be described by the model (which interestingly matches up with a similar situation for the ratio of consumption to investment).

However, I noticed something in the data: if you delete the leading edges of recessions, the DIEM works further back. It's possible that a step response is involved; here's the log-linear transform of the data:


It's totally bad methodology to just willy-nilly delete segments of data by eye, and I wouldn't create a forecast with this model result that I'd take seriously. I won't even transform back to the original data representation to help prevent this graph from being used for other purposes. But sometimes I notice prima facie interesting things, and as this blog effectively operates as my lab notebook [1] I try to document them. They could turn out to be nothing! Why? Bad methodology!

Footnotes


[1] There's apparently an "open notebook" movement that I guess I've been a part of since 2013.