Saturday, July 18, 2020

Dynamic information equilibrium and COVID-19


Since I've gotten questions, I thought I'd put together a brief explainer on the Dynamic Information Equilibrium Model (DIEM) and its application to the path of COVID-19.

Prologue

I wrote a preprint on the DIEM a couple years ago (posted at SSRN), and gave a talk about the approach at the UW economics department (see here). The primary application was to labor markets, specifically the unemployment rate. However, the model has many other applications in economics (and the original information equilibrium approach has applications to physics). So how did I end up applying this model to COVID-19? It started from laziness.

Back in April, I was looking at the various models of COVID-19 out there, in particular the IHME model. I wanted to compare its performance to the data, but instead of coding the model up myself I took a screenshot and digitized the data. Digitizing adds error, and digitizing exponentially falling functions creates all kinds of problems, so I instead fit the IHME forecasts with a DIEM model since I had the code readily available.

It turned out to do a decent job of describing the IHME models — and when the IHME forecasts showed discrepancies with the observed data, the DIEM tracked the data better. Once I thought about the foundations of the DIEM, the reason it worked became clear.

DIEM

The DIEM is an application of "information equilibrium" — the idea that one process $A$ can be the source of information for another process $B$ such that it takes the same number of bits of (information theory) information to specify $A$ as it does to specify $B$. In a sense, if $A$ is in information equilibrium with $B$ then the two are informationally equivalent. Information equilibrium constrains what a process that matches e.g. $A$ with $B$ can look like.

That's all very abstract, but in economics we have demand for a good being matched with supply (creating a transaction) or job openings being matched with unemployed people (creating a hire) — in equilibrium. In the case of COVID-19, we have virus + healthy person $\rightarrow$ sick person.

Like any communication channel transferring information, these matches can fail to happen. When voices are garbled on a cell phone call, the information specifying the sound waves going into the speaker's phone fails to be completely transferred to the sound waves coming out of the listener's phone. Information equilibrium is something of an idealized state that can be interrupted by non-equilibrium. It may seem vacuous to say sometimes you have equilibrium and sometimes you have non-equilibrium, but the information theory underlying it gives us some useful handles (e.g. failures to fully sample the underlying space, correlations, or other changes in information entropy).

Dynamic information equilibrium asks what information equilibrium can tell us when the processes $A$ and $B$ are growth processes.

\[
\begin{align}
A & \sim e^{a t}\\
B & \sim e^{b t}
\end{align}
\]

Just because they are "growth" processes, that doesn't mean they are growing — they could be shrinking or $A$ could be growing and $B$ could be shrinking.

If you go to the paper you can get the details of the mathematics (including how this generalizes to ensembles of processes), but the key result is that information equilibrium requires

\[
\frac{d}{dt} \log \frac{A}{B} \simeq (k - 1) b \equiv \alpha
\]

where $k$ measures the relative information content of events in process $A$ versus events in process $B$. What this says is that if you look at the data on a log plot versus time, it will consist mostly of stretches where the rate of growth or decline is constant — a straight line (i.e. exponential growth or decay with a constant log-linear slope).
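To sketch where that result comes from (the paper has the full derivation): the information equilibrium condition is $dA/dB = k \, A/B$, which integrates to $A \sim B^{k}$, so that

\[
\frac{d}{dt} \log \frac{A}{B} = (k - 1) \frac{d}{dt} \log B \simeq (k - 1) b
\]

using $B \sim e^{b t}$ in the last step.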

Only mostly, though. What makes the DIEM a model and not a theory is that there's an assumption about what happens in non-equilibrium. In the original application of the model to the unemployment rate, there was an assumption that the straight line isn't interrupted by non-equilibrium too much — that non-equilibrium events are sparse in the time series data. If that weren't true, it'd be impossible to measure $\alpha$, and your model of non-equilibrium would be everything. In labor markets, recessions are the sparse non-equilibrium events in the unemployment rate and the recovery is the equilibrium:


Adding in a logistic step function to handle the recession shocks gives us a description of the unemployment rate (and other economic variables) over time:
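To make that concrete, here's a minimal Python sketch of a DIEM fit: a constant log-linear slope plus one logistic shock, run on synthetic data (the parameter values and data are illustrative, not the actual code behind the fits shown here).

```python
import numpy as np
from scipy.optimize import curve_fit

def diem(t, alpha, c, a, t0, w):
    """Dynamic equilibrium (log-linear slope alpha) plus one logistic
    non-equilibrium shock of size a, centered at t0, with width w."""
    return alpha * t + c + a / (1.0 + np.exp(-(t - t0) / w))

# Synthetic "unemployment rate" data: an equilibrium decline interrupted
# by a single recession shock (all values illustrative).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 240)
log_u = diem(t, -0.09, np.log(8.0), 0.5, 5.0, 0.2)
log_u = log_u + rng.normal(0.0, 0.01, t.size)

# Fit the model to log(data); p0 is a rough initial guess.
p, _ = curve_fit(diem, t, log_u, p0=[-0.05, 2.0, 0.3, 4.5, 0.5])
print(f"estimated dynamic equilibrium slope: {p[0]:+.3f}/y")
```

Additional shocks are handled by adding more logistic terms, each with its own amplitude, center, and width.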


COVID-19

It turns out that the DIEM is a really good model of the data for COVID-19 cases and deaths, and the forecast from April for the path of the outbreak in the US was remarkably accurate — at least until the 2nd surge in the most recent data (i.e. a non-equilibrium event):



The model works well for most countries, for example here are Italy and the UK (click to enlarge):


The fact that we can't really see that 2nd surge until it starts is due to the model being too simple to predict non-equilibrium events. It can, however, be used to see when a non-equilibrium event is getting started and then monitor its progress. For example, back on May 20th I was predicting the beginning of a 2nd surge in Florida based on the DIEM model of cases there (and I later added a 2nd non-equilibrium shock, which can be handled using e.g. this algorithm):
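The linked algorithm is what I actually use; as a rough stand-in, here's a simple Python sketch of the general idea (the window and threshold values are illustrative assumptions): fit the equilibrium slope on a trailing window of the data and flag when a new observation deviates too far from the extrapolation.

```python
import numpy as np

def detect_shock_onset(t, log_y, window=30, z_thresh=4.0):
    """Return the first index where log_y deviates from a trailing
    log-linear (dynamic equilibrium) fit by more than z_thresh
    standard deviations of the in-window residuals, else None."""
    for i in range(window, len(t)):
        # Fit the equilibrium slope on the trailing window only.
        fit = np.polyfit(t[i - window:i], log_y[i - window:i], 1)
        resid = log_y[i - window:i] - np.polyval(fit, t[i - window:i])
        # Deviation of the newest point from the extrapolated fit.
        z = (log_y[i] - np.polyval(fit, t[i])) / (resid.std() + 1e-12)
        if abs(z) > z_thresh:
            return i
    return None
```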



Another limitation of the model is that it makes the explicit assumption that the number of events $n$ you're seeing is large ($n \gg 1$). This means the model does not work well when there are only a few cases or deaths, or during the initial onset of an outbreak. For example, here is South Korea:


Related to the $n \gg 1$ assumption, we basically start an outbreak at $t_{0}$ in the midst of a non-equilibrium shock, with dynamic equilibrium valid for $t \gt t_{0}$. This is effectively treated in the model as if a previous outbreak had recently ended (so that dynamic equilibrium is also valid for $t \lt t_{0}$). A model that dealt with the initial outbreak would almost certainly have to incorporate specifics of the individual virus and the networks it travels in, which is beyond the scope of information equilibrium — itself a "shortcut" for describing complex systems.

Other observations

One of the things the model predicts is that after a 2nd (or 3rd) surge, the data should return to the previous log-linear path unless something has changed. This appears to be happening for several regions — Germany and King County, WA for example:


It remains to be seen whether this holds up. In Sweden, the rate of decline after the 2nd surge in cases seems to have improved and is now comparable to Germany's:


Previously, Sweden's rate of decline in cases of $\alpha \simeq$ 2% per day was approximately the same as most of the US — about half the rate of 4-5% apparent in most of Europe as well as in NY state (dominated by counts from NYC). Did people in Sweden change behavior in the face of that 2nd surge? It's an open question. [See update 25 July 2020 below.]

Another thing to keep in mind is that these are reported cases and deaths. With testing increasing in many countries, more and more cases are discovered. This results in an obvious difference between the rate of decline for cases in the US versus that for deaths:



Other countries have much more similar rates of decline for the two measures. For the US, this means the rate of decline for cases is somewhat lower than it would be if testing were widely available. That is to say observed $\alpha_{US} \simeq \alpha_{US}^{\text{cases}} + \alpha_{US}^{\text{testing}}$. It also means the observed rate of decline for cases must decrease at some point in the future (e.g. once testing far outpaces transmission). As it is, the "case fatality rate" (CFR) appears to be heading to zero:


This theoretically should flatten out at some point at the true population CFR (although it's complicated since more deaths can occur during a surge because hospitals are at capacity). Estimated CFRs are of order 0.1%, so this point is likely far in the future for the US.
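To spell that out in DIEM terms: the ratio of two log-linear processes is itself log-linear,

\[
\frac{D(t)}{C(t)} \sim \frac{e^{\alpha_{d} t}}{e^{\alpha_{c} t}} = e^{(\alpha_{d} - \alpha_{c}) t}
\]

so as long as deaths fall faster than (testing-inflated) cases, the measured CFR heads exponentially toward zero, flattening only when the two rates converge.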

Summary

The DIEM is an incredibly simple model. In the senses above — too simple. However, it has also proven useful for estimating the long-run path of COVID-19 in several regions. In the places it applies, a given pandemic can be seen as an instance of a universal process, with its specific parameters aggregating the effects of multiple aspects of society, from policy to social networks to details of the specific virus.

Overall, we should keep in mind that the combination of policy, epidemiology, and social behavior is a social system. There might be empirical regularities from time to time, but humans can always change their behavior and thus change outcomes.

...

Update 21 July 2020

Minor edits and updated Sweden, Germany and US ratio graphs with more recent data.

...

Update 25 July 2020

The assumption of sparseness mentioned above may have failed us in the estimation of the dynamic equilibrium rate for Sweden — the first and second surges were too close together to properly measure it. It would resolve some inconsistencies (i.e. Sweden seeming to have a higher rate than the rest of Europe before the 2nd surge, Sweden oddly shifting to a rate more consistent with the rest of Europe after the 2nd surge). Here is the model using the most recent data (as of 11am PDT) to estimate the dynamic equilibrium $\alpha$ compared to the original fit (click or tap to enlarge):


Seismograms

Another way to visualize multiple DIEMs is via what I call "seismograms", which display the temporal information about the parameters (the shock width and the shock timing) on a timeline, like this one for several US states (click or tap to enlarge — the blue is only to differentiate the US aggregate, not the direction of the shock as in other uses):


The translation is fairly straightforward — a longer shock is represented by a wider band placed at the center (in time) of a non-equilibrium shock (red-ish above the line, gray below). In the link above, you can add amplitude/magnitude information by scaling the color, but this version just emphasizes time. Here's a graphical version of how these translate, from my book:
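For reference, here's a minimal matplotlib sketch of drawing bands like these (the shock centers and widths below are made-up placeholders, not fitted values):

```python
import matplotlib.pyplot as plt

# Illustrative shocks: (label, center of shock in time, width in years, red-ish?)
shocks = [
    ("US", 2020.25, 0.10, True),
    ("FL", 2020.45, 0.15, True),
    ("WA", 2020.20, 0.08, False),
]

fig, ax = plt.subplots(figsize=(8, 2))
for row, (label, center, width, reddish) in enumerate(shocks):
    # A longer shock gets a wider band, centered (in time) on the shock.
    ax.barh(row, width, left=center - width / 2, height=0.6,
            color="firebrick" if reddish else "gray")
ax.set_yticks(range(len(shocks)))
ax.set_yticklabels([s[0] for s in shocks])
ax.set_xlabel("date")
plt.tight_layout()
plt.show()
```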



...

Update 9 September 2020

The "return to equilibrium" has turned out to be remarkably accurate for the US:


A 3rd surge may be getting started in the US (associated with schools opening for the new year) — zoomed in on the gray box in the previous graph:


In Sweden, there is a 3rd surge ending ...


Also, the predicted path of deaths in the US using cases turned out to be fairly accurate with only the lag being uncertain in advance:


The ratio of deaths to cases for the US has returned to the "equilibrium" of a decline, due to a likely combination of effects from demographics to increased testing (the latter seeming like the primary contribution):




...

Data sources:

International data from European CDC

US state data from the COVID Tracking Project

Friday, May 1, 2020

What's in a name?


That which we call a model by any other name would describe as well ... or not
Shakespeare, I think.

I'm in the process of trying to distract myself from obsessively modeling the COVID-19 outbreak, so I thought I'd write a bit about language in technical fields.

David Andolfatto didn't think this twitter thread was very illuminating, but at its heart is something that's a problem in economics in general — and not just macroeconomics. It's certainly a problem in economics communication, but I also believe it's a kind of professional-economics version of "grade inflation" where "hypotheses" are inflated into "theorems" and "ideas" [1] are inflated into "models".

Now every economist I've ever met or interacted with is super smart, so I don't mean "grade inflation" in the sense that economists aren't actually good enough. I mean it in the sense that I think economics as a field feels that it's made up of smart people so it should have a few "theorems" and "models" in the bag instead of only "hypotheses" and "ideas" — like how students who got into Harvard feel like they deserve A's because they got into Harvard. Economics has been around for centuries, so shouldn't there be some hard won truths worthy of the term "theorem"?

This was triggered by his claim that Ricardian equivalence is a theorem (made again here). And I guess it is — in economics. He actually asked what definitions were being used for "model" and "theorem" at one point, and I responded (in the manner of an undergrad starting a philosophy essay [2]):
the·o·rem 
a general proposition not self-evident but proved by a chain of reasoning; a truth established by means of accepted truths 
mod·​el 
a system of postulates, data, and inferences presented as a mathematical description of an entity or state of affairs
I emphasized those last clauses with asterisks in the original tweet because they are important aspects that economics seems to either leave off or claim very loosely. No other field (as far as I know) uses "model" and "theorem" as loosely as economics does.

The Pythagorean theorem is established from Euclid's axioms (including the parallels axiom, which is why it's only valid in Euclidean space) that include things like "all right angles are equal to each other". Ricardian equivalence (per e.g. Barro) is instead based on axioms (assumptions) like "people will save in anticipation of a hypothetical future tax increase". This is not an accepted truth; therefore Ricardian equivalence so proven is not a theorem. It's a hypothesis.

You might argue that Ricardian equivalence as shown by Barro (1974) is a logical mathematical deduction from a series of axioms — just like the Pythagorean theorem — making it also a theorem. And I might be able to meet you halfway on that if Barro had just written e.g.:

$$
A_{1}^{y} + A_{0}^{o} = c_{1}^{o} + (1 - r) A_{1}^{o}
$$

and proceeded to make a bunch of mathematical manipulations and definitions — calling it "an algebraic theorem". But he didn't. He also wrote:
Using the letter $c$ to denote consumption, and assuming that consumption and receipt of interest income both occur at the start of the period, the budget equation for a member of generation 1, who is currently old, is [the equation above]. The total resources available are the assets held while young, $A_{1}^{y}$, plus the bequest from the previous generation, $A_{0}^{o}$. The total expenditure is consumption while old, $c_{1}^{o}$, plus the bequest provision, $A_{1}^{o}$, which goes to a member of generation 2, less interest earnings at rate $r$ on this asset holding.
It is this mapping from these real world concepts to the variable names that makes this a Ricardian Equivalence hypothesis, not a theorem, even if that equation was an accepted truth (it is not).

In the Pythagorean theorem, $a$, $b$, and $c$ aren't just nonspecific variables, but are lengths of the sides of a triangle in Euclidean space. I can't just call them apples, bananas, and cantaloupes and say I've derived a relationship between fruit such that apples² + bananas² = cantaloupes² called the Smith-Pythagoras Fruit Euclidean Metric Theorem.

There are real theorems that exist in the real world in the sense I am making — the CPT theorem comes to mind as well as the noisy channel coding theorem. That's what I mean by economists engaging in a little "grade inflation". I seriously doubt any theorems exist in social sciences at all.

The last clause is also important for the definition of "model" — a model describes the real world in some way. The Hodgkin-Huxley model of a neuron firing is an ideal example here. It's not perfect, but it's a) based on a system of postulates (in this case, an approximate electrical circuit equivalent), and b) presented as a mathematical description of a real entity.

Reproduced from Hodgkin and Huxley (1952)
The easiest way to do part b) is to compare with data but you can also compare with pseudo-data [3] or moments (while its performance is lackluster, a DSGE model meets this low bar of being a real "model" as I talk about here and here). *Ahem* — there's also this.

Moment matching itself gets the benefit of "grade inflation" in macro terminology. I'm not saying it's necessarily wrong or problematic — I'm saying a model that matches a few moments is too often inflated to being called "empirically accurate" when it really just means the model has "qualitatively similar statistics".

One of the problems with a lack of concern with describing a real state of affairs is that you can end up with what Paul Pfleiderer called chameleon models — models that are proffered for use in policy, but when someone questions the reality of the assumptions the proponent changes the representation (like a chameleon) to being more of a hypothesis or plausibility argument. You may think using a so-called "model" that isn't ready for prime time can be useful when policy makers need to make decisions, but Pfleiderer put it well in a chart:



But what about toy models? Don't we need those? Sure! But I'm going to say something you're probably going to disagree with — toy models should come after empirically successful theory. I am not referring to a model that matches data to 10-50% accuracy or even just gets the direction of effects right as a toy model — that's a qualitative model. A toy model is something different.

I didn't realize it until writing this, but apparently "toy model" on Wikipedia is a physics-only term. The first line is pretty good:
In the modeling of physics, a toy model is a deliberately simplistic model with many details removed so that it can be used to explain a mechanism concisely.
In grad school, the first discussion of renormalization in my quantum field theory class used a scalar (spin-0) field. At the time, there were no empirically known "fundamental" scalar fields (the Higgs boson was still theoretical) and the only empirically successful uses of renormalization were QED and QCD — both theories with spin-1 gauge bosons (photons or gluons) and spin-½ fermions (electrons or quarks). Those details complicate renormalization (e.g. you need a whole different quantization process to handle non-Abelian QCD). The scalar field theory was a toy model of renormalization of QED — used in a class to teach renormalization to students about to learn QED that had already been shown to be empirically accurate to 10s of decimal places.

The scalar field theory would be horribly inaccurate if you tried to use it to describe the interactions of electrons and photons.

The problem is not that many economic "toy models" are horribly inaccurate, but rather that they don't derive from even qualitatively accurate non-toy models. Often it seems no one even bothers to compare the models (toy or not) to data. It's like that amazing car your friend has been working on for years but never seems to drive — does it run? Does he even know how to fix it?

At this stage, I'm often subjected to all kinds of defenses — economics is social science, economics is too complex, there's too much uncertainty. The first and last of those would be arguments against using mathematical models or deriving theorems at all, which a fortiori makes my point that the words "model" and "theorem" are inflated from their common definition in most technical fields.

David's defense is (as many economists have said) that models and theorems "organize [his] thinking". In the past, my snarky comment on this has been that economists must have really disorganized minds if they need to be organizing their thinking all the time with models. Zing!

But the thing is we have a word for organized thought — idea [4]:
i·de·a 
a formulated thought or opinion
But what's in a name? Does it matter if economists call Ricardian equivalence a theorem, a hypothesis, or an idea? Yes — because most humans' exposure to a "theorem" (if any) is the Pythagorean Theorem. People will think that the same import applies to Ricardian Equivalence, but that is false equivalence.

Ricardian Equivalence is nowhere near as useful as the Pythagorean Theorem, to say nothing about how true it is. Ricardian Equivalence may be true in Barro's model — one that has never been compared to actual data or shown to represent any entity or state of affairs. In contrast, you could right now with a ruler, paper, and pencil draw a right triangle with sides of length 3, 4, and 5 inches [5].

I hear the final defense now: But fields should be allowed their own jargon — and not policed by other fields! Who are you fooling? 

Well, it turns out economists are fooling people — scientists who take the pronouncements of economics at face value. I write about this in my book (using two examples of E. coli and capuchin monkeys):


We have trusting scientists going along with rational agent descriptions put out there by economists when these rational agent descriptions have little to no empirical evidence in their favor — and even fewer accurate descriptions of a genuine state of affairs. In fact, economics might do well to borrow the evolutionary idea of an ecosystem being the emergent result of agents randomly exploring the state space.

...

PS

My "to be fair" items so that I'm not just "calling out economics" are "information" in information theory and "theory" in physics. The former is really unhelpful — I know it's information entropy, but people who know that often shorten it to just information and people who don't think information is like knowledge despite the fact that information entropy is maximized for e.g. random strings.

In physics, any quantum field theory Lagrangian is called a "theory" even if it doesn't describe anything in the real world. It is true that the completely made up ones don't get names like quantum electrodynamics, but rather "φ⁴ theory". If it were economics, that scalar field φ would get a name like "savings" or "consumption".

...

Footnotes:

[1] I had a hard time coming up with the word here — my first choice was actually "scratch work". Also "concepts" or "musings".

[2] ... at 2am in a 24 hour coffee shop on the Drag in Austin.

[3] "Lattice data" (for QCD) or data generated with VAR models (in the case of DGSE) are examples of pseudo-data.

[4] Per [1], this is also why I thought "concept" would work here:
con·cept

something conceived in the mind
[5] This is actually how ancient Egyptians used to measure right angles — by creating 3-4-5 unit triangles [pdf].

Friday, April 24, 2020

Seven years later ...

On the 7th anniversary of this blog, we are finding ourselves in the midst of a deadly pandemic and the biggest macroeconomic shock since possibly the Great Depression. I hope everyone out there is staying healthy, practicing good mitigation, and still has a job.

The next seven years on the blog are going to be different — gone will be the days of tracing the path of the macroeconomic equilibrium, replaced with following the first non-equilibrium shock since the information equilibrium framework was formalized. Will we see a sharp rise in unemployment followed by the typical decline we've seen over the past century in US data? Will there be a step response? I hope the economy recovers from this shock faster than it has in the past, but I am not optimistic.


...

PS The post title is a MST3K reference to "The Final Sacrifice". Here's to wondering if there is beer on the sun.


Sunday, April 12, 2020

What does this physicist think of economists?**

I have had fringe contact with more macroeconomics than usual as of late, for obvious reasons (e.g. I have been producing macroeconomic models that outperform mainstream models by orders of magnitude), and I do understand this is only one corner of the discipline. I don’t mean this as a complaint dump, because most of physics suffers from similar problems due to being a similarly male-dominated field, but here are a few limitations I see in the mainstream economic models put before us:

1. They do not sufficiently grasp that social forces and unpredictable human nature are more powerful than economic forces and “rational agents”. In the short run you try economic stimulus, but in the long run you learn that not giving Republicans cover to dismantle democracy through “public choice” protects you the most. Or you move from doing “unemployment insurance” to “paying companies to keep people on the payroll” once you get that job search and matching is driven more by social relationships than economic theory. In this regard the economic models end up being too pessimistic about human brains (reduced to a 1-dimensional utility function!), and it seems that “the econophysics complaints about the economists” (yes there is such a thing) are largely correct on this count. On this question econophysics models (e.g.) really do better, though not the models of everybody.

2. They do not sufficiently incorporate people's humanity. An economic stimulus plan, for instance, may be freakishly amoral, which leads to adjustments along the way, and very often those adjustments are stupid policy moves suggested by impatient billionaires. This is not built into the economic models I am seeing, even though there is a large independent branch of sociology research. It is hard for them to understand, I guess? Still, it means that economic models will be too alien, rather than too human. Economists might protest that it is not the purpose of their science or models to incorporate social change and morality, but these factors are relevant for prediction, and if you try to wash your hands of them (no Easter pun intended) you will be wrong a lot.

3. The concept of scope, specifically the part that tells us that effective theories of the same system at different scales may have little relationship to each other at leading order — so much so that they may have incommensurate domains of validity. Economists seem super-unaware of this, certainly much less aware of it than physicists are these days, though it seems to be more of a "la-la-la-I-can't-hear-you" pursuit of tractable macro models aggregating "rational agents" than an earnest attempt to understand the complex system they are purportedly researching. That is really hard, either in physics or economics. Still, on the predictive front, without a good understanding of scope and scale a lot will go askew, as indeed it does in economics.

The economic models also do not seem to incorporate Richard Feynman-like bias offset techniques. Don't fool yourself, and you're the easiest person to fool! But economists still feel like opining about subjects well outside their domain of expertise without considering that their political priors may strongly influence their ideas. Some of their “ideas” are shown to be horribly misguided through the subsequent scrutiny. Economists might claim these factors already are incorporated in the variables they are modeling, since they claim to incorporate human behavior. Ideally you may wish to incorporate the past work of the modeler themselves (i.e. the past light cone of the observer's causal wavefunction) in the model's Bayesian prior probability, so that they do not see everything as a nail when all they have is a hammer. I have not yet seen a Dunning-Kruger-aware dimension in economic models, though you might argue many economists are “Dunning-Kruger” in their public rhetoric, blurting out what they think is good for us rather than actually learning about it first. The institutional modesty of physicists (whole theories are predicated on the principle that “we are not special in the universe”) is slightly subtler.

4. Selection bias from the failures coming first. The early macroeconomic models were calibrated from the Great Depression, because what else could they do? Then came the Great Recession, which was also a mess. It is the messes which are visible first, at least on average. So some of the models may have been too pessimistic at first. These days we have Japan, South Korea, and a bunch of Nordic states that haven't quite "blown up" with several million people making initial unemployment claims and the literal Depression-era food lines we see here. If the early models had access to all of that data, presumably they would be more predictive of the entire situation today. But it is no accident that the failures (like Richard Epstein) will be more visible in the media early on.

And note that right now some of the very worst countries (United States, possibly the United Kingdom?) are not far enough along on the data side to yield useful inputs into the models. So currently those macro models might be picking up too many semi-positive data points of functioning governments and not enough from failed states or “train wrecks,” and thus they are too optimistic.

On this list, I think my #1 comes closest to being an actual criticism, the other points are more like observations about doing science in a messy, imperfect world. In any case, when economic models are brandished, keep these limitations in mind. But the more important point may be for when critics of economic models raise the limitations of those models. Very often the cited criticisms are chosen selectively, to support some particular agenda, when in fact the biases in the economic models almost certainly run in one direction — towards the interests of billionaires (see a. below).

Which is how a lot of macro men think it should be.

Now, to close, I have a few rude questions directed at economists that nobody seems willing to publicly acknowledge, but actually we all already know the answers to them:

a. As a class of scientists, how much are economists paid by vested interests (e.g. GMU/Mercatus, Hoover Institution, Cato)? Is being wrong or right better for their salaries?

b. How smart are they? What are their average GRE scores?

c. Are they hired into thick, liquid academic and institutional markets? Or does it take five years to publish a paper? And how meritocratic are those markets? Is it just people from five schools who are allowed to get jobs or publish?

d. What is their overall track record on predictions, whether before or during this crisis?

e. On average, what is the political orientation of economists? And compared to other academics?  Do they use the market social welfare function when they make non-trivial recommendations?

f. We know, from physics, that if you are a French physicist, being a Frenchman predicts your space-time location better than does being a physicist (there is an old PRL paper on this somewhere). Is there a comparable phenomenon in economics?

g. How well do they understand how to model any system, relative to say what an undergrad physics major would know?

h. Are there “defunct economists” in the manner that John Maynard Keynes charges there are “defunct economists”? If so, what do you have to do to earn that designation? And are the defunct sometimes right, or right on some issues? How meta-rational are those who allege defunct-ism? Are they meta-meta-rational? How about meta-meta-meta-rational?

i. How many of them have studied Douglas Hofstadter's now 40-year-old meta-work on emergence and meta-fiction? Meta.

Just to be clear, as ITE readers will know, I have not been criticizing the mainstream macroeconomic recommendations of stimulus. But still those seem to be questions worth asking.

...

** PS This is a mix of parody (because it's risible) and critique (because economics doesn't really work that well compared to even epidemiology) of this.

PPS #NotAllEconomists

PPPS Made a couple edits and slight changes (references to public choice theory, Japan).

PPPPS Update: Cowen is now saying the "debate" is becoming "emotional". That a) is exactly one point I am making here — his preferred approach to economics lacks empathy, morality, and humanity, and b) is what purportedly "rational" men often say about women, which is another.

People are literally lining up in Depression-era food lines, and Tyler wants to debate whether or not epidemiology journals should be colonized by economists.

PPPPPS I do want to emphasize that this is a parody — a physicist adopting the same self-regard and sneering tone Cowen shows towards epidemiology (but with the additional layer of irony being that physicists have produced a lot more empirically accurate theories than macroeconomists have). I think a lot of economists do good work. Unfortunately, a lot of economists (especially those more right & libertarian leaning) need to learn to, in the words of Kendrick Lamar, "be humble / sit down".

Tuesday, April 7, 2020

JOLTS data — and the twig crack that caused the avalanche?

Back from a long hiatus — things were crazy at the real job trying to get set up to work from home for a month or longer. Happy to report my family and I are doing well, and I hope everyone out there is staying healthy.

The drop in the JOLTS job openings rate I noted in the previous post (from February) has continued and it appears we're showing a definite deviation:


While you may be thinking "Yes, the COVID-19 shock", I should point out that this data is from February 2020 — and the deviation starts with data from December 2019. As I put it in a tweet from last month's data: What if there was a recession brewing and COVID-19 just triggered the market, like the old trope of a tree branch breaking causing an avalanche?

I saw that in the 2008 recession the JOLTS measures were some of the earliest indicators in the labor market, with job openings leading the shock to the unemployment rate by 4-6 months. That was based on a single shock, but the hires data averages about a 5-month lead using multiple shocks (in both directions) from the 1990s recession to today.
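One simple way to estimate a lead like that (a sketch of the generic technique, not necessarily the calculation behind the numbers above) is to scan for the lag that maximizes the correlation between the two series:

```python
import numpy as np

def best_lead(x, y, max_lag=12):
    """Return the lag (in samples, e.g. months) at which series x best
    correlates with y, scanning lags 0..max_lag (x[t] vs y[t + lag]).
    In practice you'd log-difference trending series first."""
    n = min(len(x), len(y))
    x = np.asarray(x, dtype=float)[:n]
    y = np.asarray(y, dtype=float)[:n]
    corrs = [np.corrcoef(x[: n - lag or None], y[lag:])[0, 1]
             for lag in range(max_lag + 1)]
    best = int(np.argmax(corrs))
    return best, corrs[best]
```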

And last month's unemployment rate showed the first signs of a non-equilibrium shock with March 2020 data by either the Sahm rule or my "recession detection algorithm" threshold:


December 2019 to March 2020 is 4 months — right in line with the previous recession.
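For reference, the Sahm rule itself is simple enough to compute in a few lines. A sketch from its published definition (the official real-time indicator uses unrevised data): it triggers when the three-month average unemployment rate rises at least 0.5 percentage points above its minimum over the preceding twelve months.

```python
import numpy as np

def sahm_triggered(u, threshold=0.5):
    """u: monthly unemployment rate in percent, oldest first. True if the
    latest 3-month average exceeds the minimum 3-month average over the
    prior 12 months by at least `threshold` percentage points."""
    u = np.asarray(u, dtype=float)
    if len(u) < 15:  # need a 3-month average plus 12 prior months of them
        return False
    three_mo = np.convolve(u, np.ones(3) / 3, mode="valid")
    return three_mo[-1] - three_mo[-13:-1].min() >= threshold
```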

Now I understand it seems odd — how could JOLTS data predict a pandemic? Or as I put it in my twitter thread referenced above — how could the yield curve predict a pandemic? Even the "limits to wage growth" [1] hypothesis predicts a recession!

But in this view, the pandemic was just a coordinating signal. Often, these coordinating signals come from the Fed — an interest rate hike, lack of a cut, or even letting a financial institution fail — and coordination causes recessions (we all cut back on spending, we all sell our stocks, etc). Because the pandemic signal was so sudden and so unambiguous, we got a much sharper signal in the unemployment rate than usual and a bit of a compressed period between JOLTS and unemployment. For example, total separations is only barely registering a signal (it's there) while hires shows nothing yet (click to enlarge):


COVID-19 was the twig crack that caused an avalanche that was already building.

I've seen that some people think the recovery will be rapid. I doubt this because we are seeing a shock to the labor market — for example, initial claims spiked into the millions. A typical "surprise information shock" that evaporates has a distinct pattern:


It would look something like the red dashed line in this graph of S&P 500 data (I also show a non-equilibrium shock the size of the 2008 recession as a counterfactual recession path for reference):


However, unemployment is already rising, and historically it falls at basically the same rate across the entire history of the data. This "remarkable recovery regularity" became the basis for the dynamic information equilibrium model (first here, then here). This implies we are unlikely to see a sudden shift back to low unemployment, but rather something more like this:


I added a step response (i.e. "ringing artifacts" or overshooting) to this qualitative non-equilibrium shock because the shock seems pretty sharp; however, it's possible it won't happen, as the step response has been gradually disappearing over time in US data. It's also possible it won't be this big — though some people like James Bullard are saying 30% unemployment is possible, so it might be even bigger. But even the rise to 4.4% already in the data will take 3 years to get back to 3.5% along the dynamic equilibrium path.

It's going to be a long slog.

...

Footnotes:

[1] In the past several decades, when wage growth exceeds the nominal GDP growth trend, there has generally been a recession.

Saturday, February 29, 2020

Market updates for a bad week

Now, I don't really look at the information equilibrium models for markets as particularly informative (looking at the error band spreads should tell you all you need to know about that), so this should be taken with a grain of salt. And always remember: I'm a crackpot physicist, not a financial adviser.

The stock market and recession shocks

With that out of the way, here's what the recent drop in the markets looks like on the S&P 500 model:


Curiously, we seem to be back in the post-Tariff equilibrium after the past few months of out-performing that expectation. We are at the edge of the 90% band estimated on post-recession data (blue), and entering into the 90% band estimated on the entire range of data since 1950 (lighter green).

I should also note that in the past, the recession process has never really been a single drop straight down. It's a series of drops over the course of months, with some moments of recovery:


That does not mean the recent drop is not the start of such a series, just that it's entirely possible this could turn around.

Is this a prelude to a recession? Maybe, maybe not. For one thing, it will be different from the past two "asset bubble era" recessions (dot-com in 2001 and housing in 2008) given there's no discernible asset bubble in recent GDP data:


It would be an example of a non-Minsky recession! (At least if this isn't some kind of shadow economy crisis — the housing boom didn't show up in the stock market but did show up in GDP, while the dot-com boom showed up in both.)

We are basically reaching the threshold of my "limits to growth" hypothesis — that recessions happen when wage growth exceeds GDP growth, thus eating into profits. (In fact, this interacts with the asset bubbles: the bubbles boost GDP, allowing wage growth to go higher than it would have without them.) This graph shows the DIEM trend of GDP (blue line), the wage growth DIEM (green line), as well as projected paths (dashed) and a recession counterfactual (dotted green).


But there's another aspect that I've been carrying along since this blog started — that spontaneous drops in "entropy" (i.e. agents bunching up in the state space by all doing the same thing, like panicking) are behind recessions. These spontaneous falls, by the way, make economics entirely different from thermodynamics, where they're disallowed by the second law (atoms don't get scared and cower in the corner). This human social behavior would ostensibly be triggered by news events that serve to coordinate behavior — unexpected Fed announcements, bad employment reports, yield curve inversion, or in this case a possible global pandemic. Or all of the above? I imagine if the Fed comes out of its next meeting in March with no interest rate cut, it might make a recession inevitable.

Interest rates and inversion

The 10-year rate is back at the bottom of the range of expected values from this 2015 (!) forecast:


And the interest rate spreads are trending back down and inverting again:


One thing to note is that while the median daily spread data (red) dipped into the range of turnaround points seen before a recession in the past three recessions several months ago, the monthly (average) spread data (orange) did not go that low (n.b. the monthly average is what I used to derive the metrics). We also didn't see interest rates rise into the range seen before a recession (which tends to be caused by the Fed lowering interest rates in the face of bad economic news). An inversion or near miss followed by another inversion is not exactly unknown in the time series data, either.

JOLTS openings and the market

The latest JOLTS job openings data released earlier this month (data for December 2019) showed a dramatic drop even compared to the 2019 re-estimate of the dynamic equilibrium rate:


This appears to be right on schedule with the market correlation I noticed a few months ago:


The recent drop (December 2019) matches up with the drop at the beginning of that same year (Jan 2019) in the aftermath of the December 2018 rate hike. If this correlation holds up, the job openings rate will rise back up and then fall again in January 2021 (about 11 months after Feb 2020). But the other possibility is that this is the first sign of a recession — the JOLTS measures all appear to lead the unemployment rate (via e.g. the Sahm Rule) as an indicator. However, JOLTS is noisier (which requires a higher threshold parameter) and later than the unemployment rate. JOLTS comes out 2 months after the data it represents (the data for December 2019 came out mid-February 2020, while the unemployment rate for December 2019 came out the first week of January 2020), so whether it's a better indicator than the unemployment rate remains to be seen — there's a complex interplay of noise, data availability, and revisions (!) that makes me think we should just stick to Sahm's rule.

I'll be looking forward to the (likely revised!) JOLTS data coming with the Fed's March meeting.

...

Update 8 March 2020

I updated the graphs with a few more days of data, including a Fed rate cut (visible in the spread data). Click to enlarge:





Friday, February 28, 2020

Dynamic equilibrium: health care CPI

Since I've been looking at a lot of health care data recently, I thought I'd run the US medical care CPI component through the dynamic information equilibrium model (DIEM). It turns out to have roughly the same structure as CPI overall (click to enlarge):


A big difference is that the dynamic equilibrium growth rate is α = 0.035/y, basically a full percentage point above the α = 0.025/y rate for all items. Since that gap has been in play since (at least) the 1950s, medical care prices have risen roughly twice as much as prices as a whole over the same period.
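That factor of two is just the one-percentage-point gap compounded over roughly seventy years:

\[
e^{(0.035 - 0.025) \times 70} = e^{0.7} \approx 2
\]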

I was curious — was the US an outlier (lol)? I ran the model over a bunch of HICP health data from Eurostat (whose UI is at best silly, at worst pathological) for several countries (Sweden, France, the Netherlands, Switzerland, Germany, the UK, Estonia, Italy, Turkey, Denmark, and Spain). This is definitely a graph you have to click to enlarge:


They're all remarkably similar to each other except France, which came out with an equilibrium rate of α = 0. That could be wrong due to the recent data being in the middle of a non-equilibrium shock — time will tell.

I also compared the dynamic equilibrium to the DIEM model of the CPI (HICP) for all items for each country which produces an interesting plot:


It looks like the US is not much of an outlier on that graph — but that's a bit misleading, since the possible inflation rates can't really deviate too much above the diagonal y = x line; otherwise headline inflation (i.e. all the components) would rapidly be overtaken by health care price inflation (one of those components). In fact, for nearly every country the health care rate came out basically equal to the headline rate. You can see it if we plot the difference versus the headline CPI rate:



Most of the countries are clustered right around zero, with the outliers being the US and France. France is an outlier because its health care price inflation has been basically zero for the past decade, meaning the difference graphed above is essentially the negative of the inflation rate of about 2%. The US is an outlier in the other direction — by 4 standard deviations if we leave out France and the US in estimating the distribution.

If this is correct, US health care prices rise nearly a percentage point faster than prices overall, meaning they are nearly 30% higher today ($e^{0.01 \times 24} \approx 1.3$) than they would be had they grown like any other country's over 1996-2020 (the Eurostat data range). And these are prices — not income, profits, or consumption.

...

Appendix

Here are all the individual graphs. The dashed lines have slopes equal to the dynamic equilibrium rate, but in some of the graphs they appear well off the data. That's because in some cases the different levels add the shocks in different ways (e.g. the 2nd shock is positive but the 3rd shock is negative), so they add or subtract differently for each country. Couple that with the fact that I determine the sign of a shock in the parameter estimation by the sign of the width, not the amplitude — while for the dashed guide lines I set the sign by hand — and, well, you see the result: dashed lines appearing at seemingly random places across the graphs (albeit with the right slope). Anyway, click to enlarge.