Monday, December 19, 2022

Where to find me these days


This post acts as a collection of links to find me in various places for various content. This econ blog has been moved (along with the archives) over to my substack Information Equilibrium. It's free to sign up (and definitely more likely to get to you than twitter ever was).

https://infoeqm.substack.com/

I also have an account on mastodon:

@infotranecon@econtwitter.net

I started a bluesky:

@newqueuelure

Update: I've deactivated my twitter! (I previously posted there as @infotranecon for econ and politics and @newqueuelure for sci fi and game stuff.)

Saturday, September 10, 2022

Is the credibility revolution credible?

Noah Smith made a stir with his claim that historians make theories without empirical backing — something I think is a bit of a category error. Even if historians' "theories" really are "it happened in the past, so it can happen again" — that the study of history gives us a sense of the available state space of human civilization — any single historical observation is such a small piece of that state space as to carry zero probability on its own. You'd have to resort to some kind of historical anthropic principle — that the kinds of states humans have seen in the past are the more likely ones — when the range of theoretical outcomes is comparable to the string theory landscape [1]. A claim that dependent on its assumptions could not rise to the level of a theory in an empirical science.

Regardless of the original point, some of the pushback came in the form of pot/kettle tu quoque "economics isn't empirical either", to which on at least one occasion Noah countered by citing Angrist and Pischke (2010) on the so-called credibility revolution.

Does it counter, though? Let's dig in.

Introduction

I do like the admission of inherent bias in the opening paragraph — the authors cite some critiques of econometric practice and wonder whether their own grad school work was going to be taken seriously. However, they never point out that this journal article is something of an advertisement for the authors' book "Mostly Harmless Econometrics: An Empiricist's Companion" (they do note that it's available).

The main thrust of the article is the authors' claim that the quality of empirical research design has improved over time with the use of randomization and natural experiments alongside increased scrutiny of confounding variables. They identify these improvements and give examples of where they are used. Almost no effort is made to quantify the improvement or to show that the examples are representative (many are Angrist's own papers) — I'll get into that more later. First, let's look at research design.

Research design

The first research design is the randomized experiment. It's an approach borrowed wholesale from medicine (which is why terms like "treatment group" show up). Randomized experiments have their own ethical issues, documented by others, that I won't go into here. The authors acknowledge this, so let's restrict the discussion to randomized experiments that pass ethical muster. Randomization relies on major assumptions about the regularity of every other variable — in a human system that set of assumptions is enormous, so explicit or implicit theory is often used to justify isolating the relevant variables. I talk more about theoretically isolating variables in the context of Dani Rodrik's book here — suffice to say this is where a lot of non-empirical rationalization can enter the purportedly empirical randomized experiment.

The second is natural experiments. Again, these rely on major assumptions about the regularity of every other variable, usually supplied by theory. The authors discuss Card (1990), which is the good way to do this — showing there was no observable effect on wages from a labor surge in Miami.

However, contra Angrist and Pischke's thesis, Card draws conclusions based on the implicit theory that a surge of labor should normally cause wages to fall, so there must be some just-so story about Miami being specifically able to absorb the immigration — a conclusion you would not derive from the empirical evidence alone. It's true that Card was working in an environment where supply and demand in the labor market was the dominant first-order paradigm, so he likely had to make some concession to it. This is why the paper is best read as "no observable effect on wages" — but it's not really a credibility revolution, because it doesn't take the empirical data seriously enough to say maybe the theory is wrong. As a side note, it's easily fixed with a better framework for understanding supply and demand.

The authors go on to cite Jacob (2004), which concludes in the abstract that public housing had no effect on educational outcomes because demolishing public housing projects in Chicago had no effect on educational outcomes — purportedly because the people relocate to similar areas. However, this conclusion 1) misrepresents the effect actually seen in the paper (no observable effect on outcomes for children under 14, but older students were more likely to drop out), and 2) misses the possibility that the effort was insufficient in the first place. The lack of funding for, and subsequent decay and demolition of, public housing may create a scenario where the limited effort expended is insufficient to produce a positive effect at all — which does not mean public housing is ineffective on educational outcomes as a policy. The dose of medicine may be too small, but this natural experiment assumes the dose was sufficient in order to draw its conclusion.

There is also no analysis at the macro scale, where demolition of public housing projects is one among a myriad of public policy choices that disproportionately harm Black people — the demolition is just one obstacle among many, so negative educational outcomes may be due to, e.g., a series of similar upsets (public housing demolished, a parent loses a job, parents are denied a mortgage because of racism) where not every kid experiences the same subset. The aggregate effect is negative educational outcomes overall, so you need a lot more data to tease out the effect of any single factor. Imagine a desert with poisonous snakes, no water, no shade, choking sand, and freezing nights — solving one of those problems and seeing people still die does not mean solving that one problem was ineffective.

The authors proceed in the introduction to look at the historical development of exploiting various "natural experiments", such as variation across states. However, I've pointed out the potential issues here with regard to a particular study of the minimum wage using the NY/PA border as the natural experiment. The so-called credibility revolution often involves making these assumptions ("obviously, straight lines on maps have no effects") without investigating a bit more (like I did at the link, using Google Maps as a true empiricist).

Credibility and non-empirical reasoning

Empirical credibility is like the layers of an onion. Using natural experiments and randomized quasi-experiments peels back one layer of non-empirical reasoning, but the next layer appears to be the armchair theory assumptions used to justify the interpretations of those "experiments". Per the link above about the issues with isolating variables, it is possible to do this using armchair theory if you have an empirically successful theoretical framework that already tells you how to isolate those variables — but that's not the case in economics.

The biggest things in the empirical sciences that prevent this infinite regress/chicken and egg problem are successful predictions. If you regularly and accurately predict something, that goes a long way towards justifying the models and methods used because it relies on the simple physical principle that we cannot get information from the future (from all us physicists out there: you're welcome). But at a very basic level, what will turn around the credibility of economics is the production of useful results. Not useful from, say, a political standpoint where results back whatever policy prescription you were going to advocate anyway — useful to people's lives. I've called this the method of nascent science.

This does not necessarily include those studies out there that say natural experiment X says policy Y (with implicit assumptions Z) had no effect in the aggregate data. Raising the minimum wage seems to have no observable effect in the aggregate data, but raising the minimum wage is useful to the people who get jobs working at minimum wage at the individual level. Providing health care randomly in Oregon may not have resulted in obvious beneficial outcomes at the aggregate level, but people who have access to health care are better off at the individual level. In general, giving people more money is helpful at the individual level. If there's no observable aggregate effect in either direction (or even a 1-sigma effect in either direction), that's not evidence we shouldn't do things that obviously help the people that get the aid.

The article discusses instrumental variables, but says that economists went from not explaining why instrumental variables were correct at all to creating just-so stories for them or just being clever (even, dare I say it, contrarian) about confounding variables. Calling it a "credibility revolution" when other economists finally think a bit harder and start to point out serious flaws in research design — when there was no reason these flaws couldn't have been pointed out before the 1980s — is a bit of an overstatement. From the looks of it, it could be equally plausible that economists only started to read each other's empirical papers in the 80s [2].

You can see the lack of empirical credibility in the way one of these comments in the article is phrased.
For example, a common finding in the literature on education production is that children in smaller classes tend to do worse on standardized tests, even after controlling for demographic variables. This apparently perverse finding seems likely to be at least partly due to the fact that struggling children are often grouped into smaller classes.
It's not that they proved this empirically. There's no citation. It just "seems likely" to be "at least partly" the reason. Credibility revolution! Later they mention that "State-by-cohort variation in school resources also appears unrelated to omitted factors such as family background" in a study from Card and Krueger. No citation. No empirical backing. Just "appears unrelated". Credibility revolution! It should be noted that this subject is one of Angrist's common research topics — he has a paper, discussed below, that says smaller class sizes are better — so he could easily be biased in these rationalizations in defense of his own work. Credibility revolution!

There's another one of these "nah, it'll be fine" assurances that I'm not entirely sure is even correct:
... we would like students to have similar family backgrounds when they attend schools with grade enrollments of 35–39 and 41–45 [on either side of the 40 students per class cutoff]. One test of this assumption... is to estimate effects in an increasingly narrow range around the kink points; as the interval shrinks, the jump in class size stays the same or perhaps even grows, but the estimates should be subject to less and less omitted variables bias.  
I wracked my brain for some time trying to think of a reason omitted variable bias would be reduced when comparing sets of schools with enrollments of 38-39 vs 41-42 as opposed to sets of schools with enrollments of 35-39 vs 41-45. They're still sets of different schools. By definition, your omitted variables do not know about the 40-students-per-class cutoff, so they should not have any particular behavior around that point. It just seems like your error bars get bigger because you're using a subsample. Plus, the point where you have the most variation in your experimental design is the change in class size from an average of ~40 to an average of ~20 as enrollment goes from 40 to 41 — meaning individual students have the most impact precisely at the point where you are trying to extract the biggest signal of your effect. See the figure below. Omitted variable bias is increased at that point due to the additional weight of individual students in the sample! It could be things you wouldn't even think of because they apply to a single student — like an origami hobby or being struck by lightning.

Average class size in the case of a maximum of 40 students versus enrollment.
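As a quick illustration of where that variation comes from, here's a minimal sketch (mine, not from the paper) of the class-size rule implied by a 40-student cap: a grade of E students gets split into ceil(E/40) classes, so the average class size is E/ceil(E/40). The sawtooth means going from an enrollment of 40 to 41 cuts the average class size roughly in half.

import numpy as np
import matplotlib.pyplot as plt

# Average class size under a 40-student cap: enrollment E is split into
# ceil(E / 40) classes (an assumption of this sketch, following the usual
# description of the rule), so average size = E / ceil(E / 40).
enrollment = np.arange(1, 161)
avg_class_size = enrollment / np.ceil(enrollment / 40)

plt.step(enrollment, avg_class_size, where="post")
plt.xlabel("Grade enrollment")
plt.ylabel("Average class size")
plt.title("Average class size vs. enrollment with a 40-student cap")
plt.show()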

The authors then (to their credit) cite Urquiola and Verhoogen (2009), showing the exact same method fails in a different case. However, they basically handwave away the possibility that it could apply to the former result, based on what could only be called armchair sociology about the differences between Israel (the first paper, from one of the authors of the "credibility revolution" article [Angrist]) and Chile (the 2009 paper).

After going through the various microeconomic studies, they move on to macro, growth econ, and industrial organization, where they tell us 1) the empirical turn hasn't really taken hold (the credibility revolution coming soon), and 2) if you lower your standards significantly you might be able to say a few recent papers have the right "spirit".

Conclusion

Angrist and Pischke (2010) is basically "here are some examples of economists doing randomized trials, identifying natural experiments, and pointing out confounding variables", but it doesn't make the case that this is a causal factor behind improving empirical accuracy, building predictive models, or producing results that are replicable or generalizable. They don't make the case that the examples are representative. It's a good thing that pointing out obvious flaws in research design is en vogue in econometrics, and increased use of data is generally good when the data is good. However, I still read NBER working papers all the time that fail to identify confounding variables and instead read like a just-so story for why the instrumental variable or natural experiment is valid, using non-empirical reasoning. Amateur sociology still abounds, from journal articles to job market papers. The authors essentially try to convince us of a credibility revolution and the rise of empirical economics by pointing to examples — which is ironic, because that is not exactly good research design. The only evidence they present is the increasing use of the "right words", but as we can see from the examples above, you can use the right words and still have issues.

In the end, it doesn't seem anyone is pointing out the obvious confounding variable here — the widespread use of computers and access to the internet increased the amount of data, the size of regressions, and the speed with which they could be processed [3] — which could lead to a big increase in the number of empirical papers (figures borrowed from the link below) without an increase in the rate of credibility among those results [4]. And don't get me started about the lack of a nexus between "empirical" and "credible" in the case of proprietary data or the funding sources of the people performing or promoting a study.





So is this evidence of a credibility revolution? Not really.

But per the original question, is this a counter to people saying that economics isn't empirically testable science? It depends.

I mean it's not an empirically testable science in the sense of physics or chemistry, where you can run laboratory experiments that isolate individual effects. You can make predictions about measured quantities in economics that can be empirically validated, but that isn't what is being discussed here, and for the most part it does not seem to be done in any robust and accountable way. Some parts of econ (micro/econometrics) have papers that have the appearance of empirical work, but 1) not all fields do, and 2) there's still a lot of non-empirical rationalization going into e.g. the justification of the instrumental variables.

I would say that economics is an evidentiary science — it utilizes empirical data and (hopefully) robust research design in some subfields, but the connective tissue of the discipline as a whole remains, as always, "thinking like an economist": a lot of narrative rationalization that runs the gamut from logical argument to armchair sociology to just-so stories, used to justify anything from an entire theory down to a single instrumental variable. Data does not decide all questions; data informs the narrative rationalization — the theory of the case built around the evidence.

A lot of the usefulness of looking at data in e.g. natural experiments is where they show no effect — or no possibility of a detectable effect. This can help us cut out the theories that are wrong or useless. Unfortunately, this has not led to a widespread reconsideration of e.g. the supply and demand framework being used in labor markets on topics from the minimum wage to immigration. If economics was truly an empirical science, the economic theory taught in Econ 101 would be dropped from the curriculum.

...

[1] I have more of a "history is part of the humanities" view — the lessons of history are essentially the lessons of fictional stories, fables, myths and legends, but more evidentially grounded. You learn about what it is to be a human and exist in human society, but it's not a theory of how humans and institutions behave (that's political science or psychology). A major useful aspect of history in our modern world is to counter nationalist myth-making that is destructive to democracy.

A metaphor I think is useful to extend is that if "the past is a foreign country", then historians write our travel guides. A travel guide is not a "theory" of another country but an aid to understanding other humans.

[2] A less snarky version of this is that the field finally developed a critical mass of economists who both had the training to use computers and could access digital data to perform regressions in a few minutes instead of hours or days — and therefore could have a lot more practice with empirical data and regressions — such that obvious bullshit no longer made the cut. Credibility revolution!

[3] The authors do actually try to dismiss this as a confounding variable, but end up just pointing out that flawed studies existed in the 70s and 80s without showing that those flawed results depended on mainframe computers (or even used them). But I will add that programming a mainframe computer (batch processes run overnight, with lots of time spent verifying the code lest an exception cost you yet another day and the funding dollars spent on another run) does not get you to the understanding generated by the immediate feedback of running a regression on a personal computer.

[4] p-hacking and publication bias are pretty good examples of the possibility of a reverse effect on credibility from increased data and the ability to process it. A lot of these so-called empirical papers could not have their results reproduced in e.g. this study.

Friday, April 22, 2022

Outbrief on Dynamic Information Equilibrium as a COVID-19 model

With the US just sort of giving up on doing anything about COVID-19 and letting it spread, it's become too depressing to continue to track the models day after day. On the radio yesterday I heard that King County (which includes the Seattle area, where I live) isn't going to focus on tracking cases anymore — so I imagine the quality of the data is going to drop precipitously in the coming months unless there's a new, more deadly variant. Therefore I'm going to stop working on the models, and this post is an outbrief of the successes and failures of using the Dynamic Information Equilibrium Model (DIEM) for COVID-19.
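For reference, here's roughly what a DIEM-style curve looks like for this kind of data — a constant slope in log space (the dynamic equilibrium) plus logistic "shocks" for the surges. This is my own minimal sketch with made-up parameters, not the fitting code used for the forecasts below.

import numpy as np
import matplotlib.pyplot as plt

def diem_log(t, alpha, shocks):
    # Dynamic-equilibrium-style curve in log space: a constant slope
    # (the dynamic equilibrium rate) plus logistic shock terms.
    # `shocks` is a list of (amplitude, center, width) tuples.
    y = alpha * t
    for a, t0, w in shocks:
        y += a / (1.0 + np.exp(-(t - t0) / w))
    return y

t = np.linspace(0, 600, 1201)  # days
# Illustrative parameters only: two surges on top of a -0.02/day decline
log_cases = diem_log(t, alpha=-0.02, shocks=[(8.0, 150.0, 10.0), (6.0, 400.0, 8.0)])

plt.plot(t, log_cases)
plt.xlabel("days")
plt.ylabel("log(cases), arbitrary units")
plt.title("Sketch: surges followed by a constant (exponential) rate of decline")
plt.show()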

*  *  *

We'll start with the big failure at the end — the faster than expected rate of decline (the dynamic equilibrium) in several places after the omicron surge. The examples here are New York and Texas. Red points are after the model parameters were frozen, gray points before.



You can see the latest BA.2 surge in March and April of 2022. These of course result in over-predictions of the cumulative cases:




The (purportedly) constant rate of decline was one of the major components of the model which means there is something serious that the model is not helping us understand. There are several possibilities that don't mean the model is useless: 1) the sparse surge assumption is violated, 2) we're seeing a more detailed aspect of the model, 3) ubiquitous vaccination/exposure, 4) omicron is different, or 5) aggregating constituencies introduces more complex behavior.

1. The first possibility I discussed in an update to the original blog post on using the DIEM for COVID-19, as well as in this Twitter thread. The basic idea was that surges were happening too close together to get a good measurement of the dynamic equilibrium rate of decline. Sweden was the prototypical case in the summer of 2020, and it started to become visible in the US in early 2021. The faster rate of decline was like seeing the actual rate for the first time without another surge happening. We can see that a new estimate of the rate of decline from all the data makes NY work just fine:


Early on, this can be a compelling rationale. However, the issue is that we can't just keep using that excuse — ok, now we have a good estimate! No wait, now we do. Additionally, the rate didn't change in other countries (see 4) which had just as much sparseness (or rather lack thereof).

2. The constant rate of decline is actually an approximation in the DIEM. In my original paper, the dynamic equilibrium is related to the information transfer index k which can drift slowly over time as the virus spreads:


Again, the question comes up as to why it changed in US states but not in the EU (see 4) — so this also isn't very compelling.

3. Ubiquitous vaccination or exposure is how outbreaks are limited in epidemiological models such as the SIR model, where the S stands for susceptible — i.e. the unvaccinated or those who never had the virus. While the US pretty much let the virus spread unchecked, such that almost everyone likely got it and gained some protection from getting it again, this doesn't explain why we don't see the faster rate of decline from omicron in e.g. Europe or even in smaller constituencies of the US, e.g. King County, WA (see 4).

4. Moving on to the fourth possibility — omicron is different — we can probably discount it to some degree because in several places, the constant dynamic equilibrium prediction worked just fine. For example the EU:


Although the surge size was underestimated, we can see not only does the omicron surge return to the same rate of decline but so does the BA.2 variant surge (indicated by diagonal lines in the graph). We also see it in the EU member France:



While the decline in the omicron surge was interrupted by the BA.2 variant surge, we can see that the model (which cannot predict new surges, only detect them) was doing fine until that point. So saying "omicron was different" is not a good answer — it would be ad hoc to say it is different for one place and not another. In fact, it wasn't even different in parts of the US — King County in Washington State (which contains Seattle) also appeared to follow the predicted rate of decline until the BA.2 surge:


So that's another excuse we can't get away with in any scientific sense.

5. The last possibility is one that I came up with as an alternative to 1) back in early 2021: we're seeing the aggregate of several surges, which can behave differently than a single surge. Part of this is borne out in the data — the surges for smaller constituencies (cities, counties) generally have faster rates of recovery than larger ones (states, countries). The dynamic equilibrium we see at the aggregate level is a combination of these faster surges. In the graph below, the slow rate is made up of several smaller surges with a faster rate (exponential rates shown with dashed lines), combined with a network structure — i.e. some power law in the size of the surges due to starting in big cities and diffusing to smaller ones.

We see the faster rates at the lower level and a slower rate at the aggregate level. If there is some temporal alignment of those local surges — a holiday, a big event, or (in the omicron case) introduction of a faster spreading variant of the virus — it can align some of those "sub-surges" and briefly show us the intrinsic faster rate at the lower level in the aggregate data:


This is probably the best explanation — the US is a lot more spread out and rural than the EU, and so has a lot more subcomponents from a modeling standpoint. This does require more effort to model than just information theory and virus + healthy person → sick person, which means that at best the DIEM is a leading order approximation. This is more satisfying in the sense that the DIEM is supposed to be a leading order approximation — epidemiology and economics are complex subjects and we should be surprised that the DIEM worked as well as it has for COVID-19 and the unemployment rate.
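Here's a toy simulation of that mechanism (my own sketch, with entirely made-up parameters): many localities each decline at a fast rate, surge onsets are staggered as the virus diffuses from larger to smaller places, and the aggregate curve over the onset window declines more slowly than any individual locality.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
t = np.linspace(0, 300, 601)      # days
fast_rate = 0.08                  # per-locality decline rate (assumed)
n_local = 40

# Power-law-ish surge sizes (big cities first and biggest) with onsets
# staggered over ~4 months -- all numbers are illustrative assumptions.
sizes = 1000.0 / (1.0 + np.arange(n_local)) ** 1.2
onsets = np.sort(rng.uniform(0, 120, n_local))

aggregate = np.zeros_like(t)
for size, t0 in zip(sizes, onsets):
    local = size * np.exp(-fast_rate * (t - t0))
    local[t < t0] = 0.0           # this locality hasn't surged yet
    aggregate += local

plt.semilogy(t, aggregate)
plt.xlabel("days")
plt.ylabel("aggregate cases (log scale)")
plt.title("Fast local declines can look like a slower aggregate decline")
plt.show()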

*  *  *

There was one model failure that was just weird: the UK.

In July of 2021 the rate of decline was so fast, but ended so quickly, that I put in by hand the one and only negative shock — then immediately afterward, the case counts just went sideways. The more recent data has given us back the surge structure apparent in the rest of the world where case counts are high enough, but the second half of 2021 in the UK is just inexplicable in the model with any kind of confidence.

*  *  *

So what is the model good for? Well, first off, it's incredibly simple — surges followed by a constant (exponential) rate of decline. And that constant rate of decline seems to be a reasonable first-order approximation; it's a starting point. We can see it held ("fixed α" on the graph) in Florida from mid-2021 until recently, when the faster rate of decline noted above appeared:


It was also good at detecting surges getting started. The original example was Florida in May of 2020:



And again in June of 2021:


Note that despite getting the slope wrong, you can still see the new surge getting started in late February as a change from the straight line decline (on a log graph) after the omicron surge in this example from New York:


Looking at the log graph for a deviation from exponential (i.e. straight line) decline as a sign of a new surge became more common (at least on Twitter). Those of us monitoring our log plots saw surges getting started while the media seemed to react only when it saw an actual rise in cases — typically 2-3 weeks later. It's the closest I've felt to having an actual crystal ball [1].
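A minimal version of that eyeball test in code (my own sketch, not the model's actual machinery; the window lengths and threshold are arbitrary): fit a straight line to recent log(cases), extrapolate it, and flag a surge when the newest points sit well above the extrapolated decline.

import numpy as np

def surge_starting(cases, fit_window=28, test_window=5, threshold_sigma=2.0):
    # Flag a possible new surge when recent case counts deviate upward
    # from the straight-line (exponential) decline on a log plot.
    y = np.log(np.asarray(cases, dtype=float))
    t = np.arange(len(y))
    # Fit the exponential decline on the data just before the test window
    fit = slice(-(fit_window + test_window), -test_window)
    slope, intercept = np.polyfit(t[fit], y[fit], 1)
    sigma = np.std(y[fit] - (slope * t[fit] + intercept))
    # Compare the newest points to the extrapolated decline
    resid = y[-test_window:] - (slope * t[-test_window:] + intercept)
    return bool(resid.mean() > threshold_sigma * sigma)

# Synthetic example: noisy exponential decline, then several days of growth
rng = np.random.default_rng(0)
decline = 1000 * np.exp(-0.03 * np.arange(60)) * rng.lognormal(0, 0.05, 60)
uptick = decline[-1] * np.exp(0.08 * np.arange(1, 6)) * rng.lognormal(0, 0.05, 5)
print(surge_starting(np.concatenate([decline, uptick])))  # flags the uptick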

So in that sense, the DIEM has been useful in understanding the COVID-19 pandemic. It's a simple first order approximation that can help detect when surges are getting started.

...

Footnotes:

[1] Side note: because of the typical duration of surges (3-4 weeks), the point when the media became focused on a surge tended to be the start of the inflection point signaling the beginning of the surge recovery.


Sunday, November 7, 2021

Comparing the DIEM and the FRB/US model

Per a question in my Twitter DMs, I thought I'd do a comparison between the Dynamic Information Equilibrium Model (DIEM) and the FRB/US model of the unemployment rate. I've not done this comparison that I can recall. I've previously looked at point forecast comparisons between the different Fed models (e.g. here for 2014). In another post, I took the DIEM model through the Great Recession following along with the Fed Greenbook forecasts. And in an even older post (prior to the development of the DIEM), I looked at inflation forecasts from the FRB/US model.

The latest and most relevant Fed Tealbook forecast from the FRB/US model that seems to be available is here [pdf] — it's from December 2015. I've excerpted the unemployment rate forecast (along with several counterfactuals) in the following graphic:


Now something that should be pointed out is that the FRB/US model does a lot more than the unemployment rate — GDP, interest rates, inflation, etc. While there are separate models in the information equilibrium framework covering a lot of those measures, the combination of the empirically valid relationships into a single model is still incomplete (see here). However, in the DIEM the unemployment rate is essentially unaffected by other variables in equilibrium (declining at the equilibrium rate of d/dt log u ≃ −0.09/y [0]). Therefore, whatever the other variables in an eventual information equilibrium macro model, comparing the unemployment rate forecasts alone should be valid — especially since we are going to look at the period 2016 to 2020 (prior to the COVID shock).
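To give a concrete sense of what "declining at the equilibrium rate" means, here's a back-of-the-envelope projection from late 2015 (my own sketch; the starting value is approximate, not the model's fitted output):

import numpy as np

# DIEM equilibrium (absent a recession shock): d/dt log u = alpha, so
# u(t) = u0 * exp(alpha * (t - t0)).
alpha = -0.09                 # per year, from the post
u0, t0 = 5.1, 2015.75         # roughly the unemployment rate at the end of Q3 2015
years = np.arange(2016, 2021)
u = u0 * np.exp(alpha * (years - t0))
for year, rate in zip(years, u):
    print(year, round(rate, 2))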

Setting the forecast date to be the end of Q3 of 2015 just prior to the meeting, we can see the central forecast for the DIEM does worse at first but is better over the longer run (click to enlarge):


The big difference lies not just in the long run but in the error bands, with the DIEM being much narrower. These are apples-to-apples error bands as we can see the baseline FRB/US forecast in the graph at the top of the post is conditional on a lack of a recessionary shock (those are the red and purple lines above) [1].

Additional differences come in what we don't see. Looking at the latest 2018 update to the FRB/US model, we get information about the impulse responses to a 100 bp increase in the Fed funds rate. Now the Fed usually doesn't do 100 bp changes (typically 25 bp), so this is a large shock. But it also creates a forecast path that looks nothing like anything we have seen in the historical data [3]:

We really don't see increases in the unemployment rate that look like this:


And while, yes, this 0.7 pp increase in the unemployment rate follows a 100 bp increase in the Fed funds rate when we usually see only 25 bp increases [4], it would, per the Sahm Rule, indicate a recession — and therefore almost certainly further increases beyond the initial 0.7 pp.
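For reference, the Sahm Rule indicator is simple enough to write down directly — it triggers when the three-month moving average of the unemployment rate rises 0.5 percentage points or more above its low over the previous twelve months. A minimal sketch:

import pandas as pd

def sahm_rule(unemployment_rate: pd.Series) -> pd.Series:
    # Sahm Rule recession indicator for a monthly unemployment rate series:
    # True when the 3-month moving average rises 0.5 pp or more above its
    # minimum over the preceding 12 months.
    three_mo = unemployment_rate.rolling(3).mean()
    prior_low = three_mo.rolling(12).min().shift(1)
    return (three_mo - prior_low) >= 0.5

# Example with a made-up series: flat at 3.6%, then a 0.7 pp rise
u = pd.Series([3.6] * 15 + [4.0, 4.3, 4.3])
print(sahm_rule(u).iloc[-1])  # True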

Now it is true we don't see the unemployment rate falling continuously, asymptotically approaching zero, as would be indicated in the DIEM. However, we also haven't had a period of 40 years uninterrupted by a recession required for it to happen.

The FRB/US model has hundreds of parameters for hundreds of variables — yet it doesn't even qualitatively capture the behavior of the empirical data for the unemployment rate. This is likely due to what Noah Smith called "big unchallenged assumptions" — in order to get an unemployment rate path that looks more like the data, trade-offs would have to be made on other variables that make them look far worse. You could probably come up with something that looks a lot like the FRB/US model with a giant system of linear equations with several lags. Simultaneously fitting all of the variables you chose to model can create results that look not entirely implausible when you look at all the model outputs as a group, but that individually do not qualitatively describe what we see. The reason? You chose (and probably constrained) several variables to have relationships that are empirically invalid — therefore any fit is going to have variables that come out looking wrong.

I imagine the impact of an increase in the Fed funds rate on the unemployment rate is one of those chosen relationships. Looking through the historical data, there is no particular evidence that raising rates causes unemployment to rise nor vice versa. In fact, it seems rising unemployment causes the Fed to start lowering rates [5]! But forcing such a relationship, after estimating all the model parameters, likely contributes to not just forecasting error, but making the unemployment rate do strange things.

...

Footnotes

[0] See also Hall and Kudlyak (2020) which arrives at the same rate of decline.

[1] Here's what a recession shock of the same onset would look like in the DIEM (click to enlarge):

I should add there is a bit of nuance here as this is under the assumption of an historical level of temporary layoffs in a recession. The COVID recession had the largest surge of temporary layoffs in the entire time series data set — and that created different dynamics [2] (click to enlarge):


[2] The latest data appears to be showing a positive shock due to the stimulus.

[3] I based the counterfactual "no rate increase" on the typical flattening we see in other FRB/US forecasts. It's really not a big difference to just use the last 2020 data point and say the future is constant as the counterfactual (click to enlarge):

[4] Here's an estimate from a 25 bp increase in the Fed funds rate where I just took the shock and multiplied it by 0.25 (click to enlarge):


[5] More here.


Saturday, July 31, 2021

The recession of 2027

 From my "Limits to wage growth" post from roughly three years ago:

If we project wage growth and NGDP growth using the models, we find that they cross-over in the 2019-2020 time frame. Actually, the exact cross-over is 2019.8 (October 2019) which not only eerily puts it in October (when a lot of market crashes happen in the US) but also is close to the 2019.7 value estimated for yield curve inversion based on extrapolating the path of interest rates. ...

This does not mean the limits to wage growth hypothesis is correct — to test that hypothesis, we'll have to see the path of wage growth and NGDP growth through the next recession. This hypothesis predicts a recession in the next couple years (roughly 2020).

We did get an NBER declared recession in 2020, but since I have ethical standards (unlike some people) I will not claim this as a successful model prediction as the causal factor is pretty obviously COVID-19. So when is the next recession going to happen? 2027.

Let me back up a bit and review the 'limits to wage growth' hypothesis. It says that when nominal wage growth reaches nominal GDP (NGDP) growth, a recession follows pretty quickly after. There is a Marxist view that when wage growth starts to eat into firms' profits, investment declines, which triggers a recession. That's a plausible mechanism! However, I will be agnostic about the underlying cause and treat it purely as an empirical observation. Here's an updated version of the graph from the original post (click to enlarge). We see that recessions (beige shaded regions) occur roughly where wage growth (green) approaches NGDP growth (blue) — indicated by the vertical lines and arrows.


Overall, the trend of NGDP growth gives a pretty good guide to where these recessions occur with only the dot-com bubble extending the lifetime of the 90s growth in wages. In the previous graph, I also added some heuristic paths prior to the Atlanta Fed time series as a kind of plausibility argument of how this would have worked in the 60s, 70s, and 80s. If we zoom in on the recent data (click to enlarge) we can see how the COVID recession decreased wage growth:


This is the most recent estimate of the size of the shock to wage growth with data through June 2021 (the previous estimate was somewhat larger). If we show this alongside trend NGDP growth (about 3.8%, a.k.a. the dynamic equilibrium) we see the new post-COVID path intersects it around 2027 (click to enlarge):


Now this depends on a lack of asset boom/bust cycles in trend NGDP growth — which can push the date out by years. For example, by trend alone we should have expected a recession in 1997/8; the dot-com boom pushed the recession out to 2001 when NGDP crashed down below wage growth. However, this will be obvious in the NGDP data over the next 6 years — it's not an escape clause for the hypothesis.
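The arithmetic behind the 2027 figure is just a crossover calculation. Here's an illustrative version (the post-COVID wage growth level is an assumption for this sketch, not the fitted value from the model):

import numpy as np

# Post-shock wage growth recovering at the equilibrium log rate of +0.04/y
# toward the ~3.8%/y NGDP trend. The starting level below is illustrative.
ngdp_trend = 3.8            # percent per year (dynamic equilibrium, from the post)
wage_growth_now = 3.0       # assumed post-COVID wage growth level, mid-2021
log_rate = 0.04             # per year (from the post)

# wage_growth(t) = wage_growth_now * exp(log_rate * (t - 2021))
t_cross = 2021 + np.log(ngdp_trend / wage_growth_now) / log_rate
print(round(t_cross, 1))    # ~2026.9 with these assumed inputs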

Epilogue

One reason I thought about looking back at this hypothesis was a blog post from David Glasner, writing about an argument about the price stickiness mechanism in (new) Keynesian models [1]. I found myself reading lines like "wages and prices are stuck at a level too high to allow full employment" — something I would have seen as plausible several years ago when I first started learning about macroeconomics — and shouting (to myself, as I was on an airplane) "This has no basis in empirical reality!"

Wage growth declines in the aftermath of a recession and then continues at its prior log growth rate of 0.04/y. Unemployment rises during a recession and then continues at its prior rate of decline of −0.09/y [2]. These two measures are tightly linked. Inflation falls briefly about 3.5 years after a decline in labor force participation — and then continues to grow at 1.7% (core PCE) to 2.5% (CPI, all items).

These statements are entirely about rates, not levels. And if the hypothesis above is correct, the causality is backwards. It's not the failing economy reducing the level of wages that can be supported at full employment — the recession is caused by wage growth exceeding NGDP growth, which causes unemployment to rise, which then causes wage growth to decline about 6 months later.

Additionally, since both NGDP and wages here are nominal, monetary policy won't have any impact on this mechanism. And empirically, it doesn't. While the social effect of the Fed may stave off panic in a falling market and rising unemployment, once the bottom is reached and the shock is over, the economy (over the entire period for which we have data) just heads back to its equilibrium −0.09/y log decline in unemployment and +0.04/y log increase in wage growth.

Of course this would mean the core of Keynesian thinking about how the economy works — in terms of wages, prices, and employment — is flawed. Everything that follows from The General Theory from post-Keynesian schools to the neoclassical synthesis to new Keynesian DSGE models to monetarist ideology is fruit of a poisonous tree.

Keynes famously said we shouldn't fill in the values:

In chemistry and physics and other natural sciences the object of experiment is to fill in the actual values of the various quantities and factors appearing in an equation or a formula; and the work when done is once and for all. In economics that is not the case, and to convert a model into a quantitative formula is to destroy its usefulness as an instrument of thought. 

No wonder his ideas have no basis in empirical reality!

...

Update 19 November 2021

The stimulus of 2021 seems to have pushed up both GDP growth and wage growth. In fact, wage growth appears to have returned to its prior equilibrium:

If this trend continues and the BIF (and/or BBB, if passed) doesn't bring GDP growth above its historical 3.8% average outside of shocks, then that brings the recession date back to ... around now. Looking at PCE (consumption) instead of GDP (as the former is updated more frequently than the latter, but both show almost the exact same structure), we are back to being above that long run growth limit (click to enlarge):


Zooming in on the more recent years (click to enlarge):




PS: New arctan axes just dropped.

...

Footnotes:

[1] Also, wages / prices aren't individually sticky. The distribution of changes might be sticky (emergent macro nominal rigidity), but prices or wages that change by 20% aren't in any sense "sticky".

[2] Something Hall and Kudlyak (Nov 2020) picked up on somewhat after I wrote about it (and even used the same example).

Sunday, April 25, 2021

Implicit assumptions in Econ 101 made explicit

One of the benefits of the information equilibrium approach to economics is that it makes several implicit assumptions explicit. Over the past couple of days, I was part of an exchange with Theodore on twitter (starting here) where I learned something new about how people who have studied economics think about it — and about those implicit assumptions. Per his blog, Theodore works in economic consulting, so I imagine he has some advanced training in the field.

The good old supply and demand diagram used in Econ 101 has a lot of implicit assumptions going into it. I'd like to make a list of some of the bigger implicit assumptions in Econ 101 and how the information transfer framework makes them explicit.

I. Macrofoundations of micro

Theodore doesn't think the supply and demand curves in the information transfer framework [1] are the same thing as supply and demand curves in Econ 101. Part of this is probably a physicist's tendency to see systems that are isomorphic in their effects as the same thing. Harmonic oscillators are basically the same thing even if the underlying models — from a pendulum, to a spring, to a quantum field [pdf] — arise from different degrees of freedom.

One particular difference Theodore sees is that in the derivation from the information equilibrium condition $I(D) = I(S)$, the supply curve has parameters that derive from the demand side. He asks:

For any given price you can draw a traditional S curve, independent of [the] D curve. Is it possible to draw I(S) curve independent of I(D)?

Now Theodore is in good company. A University of London 'Econ 101' tutorial that he linked me to also says that they are independent:

It is important to bear in mind that the supply curve and the demand curve are both independent of each other. The shape and position of the demand curve is not affected by the shape and position of the supply curve, and vice versa.

I was unable to find a similar statement in any other Econ 101 source, but I don't think the tutorial statement is terribly controversial. But what does 'independent' mean here?

In the strictest sense, the supply curve in the information transfer framework is independent of demand-side variables because you effectively integrate out the demand degrees of freedom to produce it, leaving only supply and price. Assuming constant $S \simeq S_{0}$ when integrating the information equilibrium condition:

$$\begin{eqnarray}\int_{D_{ref}}^{\langle D \rangle} \frac{dD'}{D'} & = & k \int_{S_{ref}}^{\langle S \rangle} \frac{dS'}{S'}\\
& = & \frac{k}{S_{0}} \int_{S_{ref}}^{\langle S \rangle} dS'\\
& = & \frac{k}{S_{0}} \left( \langle S \rangle - S_{ref}\right)\\
\log \left( \frac{\langle D \rangle}{D_{ref}}\right) & = & \frac{k}{S_{0}} \Delta S
\end{eqnarray}$$

If we use the information equilibrium condition $P = k \langle D \rangle / S_{0}$, then we have an equation free of any demand-side variables [2]:

$$\text{(1)}\qquad \Delta S = \frac{S_{0}}{k} \log \left(\frac{P S_{0}}{k D_{ref}}\right)
$$

There's still that 'reference value' of demand $D_{ref}$, though. That's what I believe Theodore is objecting to. What's that about?

It's one of those implicit assumptions in Econ 101 made explicit. It represents the background market required for the idea of a price to make sense. In fact, we can show this more explicitly by recognizing that the argument of the log in Eq. (1) is dimensionless. We can define a quantity with units of price (per the information equilibrium condition) $P_{ref} = k D_{ref} / S_{0}$ such that:

$$
\text{(2)}\qquad \Delta S = \frac{S_{0}}{k} \log \left(\frac{P}{P_{ref}}\right)
$$

This constant sets the scale of the price. What units are prices measured in? Is it 50 € or 50 ¥? In this construction, the price is defined around a market equilibrium price in that reference background. The supply curve is the behavior of the system for small perturbations around that market equilibrium, when demand reacts faster than supply, such that the information content of the supply distribution stays approximately constant at each value of price (just increasing the quantity supplied) and the scale of prices doesn't change (for example, due to inflation).
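To make Eq. (2) concrete, here's a quick plot of the resulting supply curve around the reference price (the parameter values are arbitrary, chosen only to draw the curve):

import numpy as np
import matplotlib.pyplot as plt

# Eq. (2): Delta S = (S0 / k) * log(P / P_ref)
S0, k, P_ref = 100.0, 1.5, 10.0      # arbitrary illustrative values
P = np.linspace(5.0, 30.0, 200)
delta_S = (S0 / k) * np.log(P / P_ref)

# Conventional orientation: quantity on the x-axis, price on the y-axis
plt.plot(delta_S, P)
plt.axhline(P_ref, linestyle="--", linewidth=0.8)
plt.xlabel("ΔS (change in quantity supplied)")
plt.ylabel("P")
plt.title("Supply curve from Eq. (2): upward sloping around P_ref")
plt.show()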

This is why I tried to ask about what the price $P$ meant in Theodore's explanations. How can a price of a good in the supply curve mean anything independently of demand? You can see the implicit assumptions of a medium of exchange, a labor market, production capital, and raw materials in his attempt to show that the supply curve is independent of demand:

The firm chooses to produce [quantity] Q to maximize profits = P⋅Q − C(Q) where C(Q) is the cost of producing Q. [T]he supply curve is each Q that maximizes profits for each P. The equilibrium [market] price that firms will actually end up taking is where the [supply] and [demand] curves intersect.

There's a whole economy implicit in the definition $\text{profits} = P Q - C(Q)$. What are the units of $P$? What sets its scale? [4] Additionally, the profit maximization implicitly depends on the demand for your good.

I will say that Theodore's (and the rest of Econ 101's) explanation of a supply curve is much more concrete in the sense that it's easy for any person who has put together a lemonade stand to understand. You have costs (lemons, sugar) and so you'll want to sell the lemonade for more than the cost of the lemons based on how many glasses you think you might sell. But one thing it's not is independent of a market with demand and a medium of exchange.

Some of the assumptions going into Theodore's supply curve aren't even necessary. The information transfer framework has a useful antecedent in Gary Becker's paper Irrational Behavior in Economic Theory [Journal of Political Economy 70 (1962): 1-13], which uses effectively random agents (i.e. maximum entropy) to reproduce supply and demand. I usually just stick with the explanation of the demand curve because it's far more intuitive, but there's also the supply side, which was concisely summarized by Cosma Shalizi:

... the insight is that a wider range of productive techniques, and of scales of production, become profitable at higher prices. This matters, says Becker, because producers cannot keep running losses forever. If they're not running at a loss, though, they can stay in business. So, again without any story about preferences or maximization, as prices rise more firms could produce for the market and stay in it, and as prices fall more firms will be driven out, reducing supply. Again, nothing about individual preferences enters into the argument. Production processes which are physically perfectly feasible but un-profitable get suppressed, because capitalism has institutions to make them go away.

Effectively, as we move from a close-in production possibilities frontier (lower prices) to a far-out one (higher prices), the state space is simply larger [5]. This increasing size of the state space with price is what is captured in Eqs. (1) and (2), but it critically depends on setting a scale of the production possibilities frontier via the background macroeconomic equilibrium — we are considering perturbations around it. 
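For a sense of how far the "random agents" idea goes, here's a toy version of the demand-side of Becker's argument (the one I called more intuitive above): agents pick a random point on their budget constraint, with no preferences and no maximization, and the average quantity of a good still falls as its price rises. The numbers are purely illustrative.

import numpy as np

rng = np.random.default_rng(1)

def mean_quantity(price, budget=100.0, n_agents=100_000):
    # Becker-style 'irrational' agents for two goods: each agent spends a
    # uniformly random share of its budget on good 1 (at `price`) and the
    # rest on good 2. No preferences, no optimization.
    share = rng.uniform(0.0, 1.0, n_agents)     # budget share on good 1
    return float(np.mean(share * budget / price))

for p in (1.0, 2.0, 4.0):
    print(p, round(mean_quantity(p), 1))
# Average quantity of good 1 falls as its price rises: a downward-sloping
# demand curve from the budget constraint (state space) alone.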

David Glasner [6] has written about these 'macrofoundations' of microeconomics, e.g. here in relation to Econ 101. A lot of microeconomics makes assumptions that are likely only valid near a macroeconomic equilibrium. This is something that I hope the information transfer framework makes more explicit.

II. The rates of change of supply and demand

There is an assumption about the rates of change of the supply and demand distributions made leading to Eq. (1) above. That assumption about whether supply or demand is adjusting faster [2] when you are looking at supply and demand curves is another place where the information transfer framework makes an implicit Econ 101 assumption explicit — and does so in a way that I think would be incredibly beneficial to the discourse. In particular, beneficial to the discussion of labor markets. As I talk about at the link in more detail, the idea that you could have e.g. a surge of immigration and somehow classify it entirely as a supply shock to labor, reducing wages, is nonsensical in the information transfer framework. Workers are working precisely so they can pay for things they need, which means we cannot assume either supply or demand is changing faster; both are changing together. Immediately we are thrown out of the supply and demand diagram logic and instead are talking about general equilibrium.

III. Large numbers of inscrutable agents

Of course there is the even more fundamental assumption that an economy is made up of a huge number of agents and transactions. This explicitly enters into the information transfer framework twice: once to say distributions of supply and demand are close to the distributions of events drawn from those distributions (Borel law of large numbers), and once to go from discrete events to the continuous differential equation.

This means supply and demand cannot be used to understand markets in unique objects (e.g. art), or where there are few participants (e.g. labor market for CEOs of major companies). But it also means you cannot apply facts you discern in the aggregate to individual agents — for example see here. An individual did not necessarily consume fewer blueberries because of a blueberry tax, but instead had their own reasons (e.g. they had medical bills to pay, so could afford fewer blueberries) that only when aggregated across millions of people produced the ensemble average effect. This is a subtle point, but comes into play more when behavioral effects are considered. Just because a behavioral explanation aggregates to a successful description of a macro system, it does not mean the individual psychological explanation going into that behavioral effect is accurate.

Again, this is made explicit in the information transfer framework. Agents are assumed to be inscrutable — making decisions for reasons we cannot possibly know. The assumption is only that agents fully explore the state space, or at least that the subset of the state space that is fully explored is relatively stable with only sparse shocks (see the next item). This is the maximum entropy / ergodic assumption.

IV. Equilibrium

Another place where implicit assumptions are made explicit is equilibrium. The assumption of being at or near equilibrium, such that $I(D) \simeq I(S)$, is even in the name: information equilibrium. The more general approach is the information transfer framework, where $I(D) \geq I(S)$ and e.g. prices fall below ideal (information equilibrium) prices. I've even distinguished these in notation, writing $D \rightleftarrows S$ for an information equilibrium relationship and $D \rightarrow S$ for an information transfer one.

Much like the concept of macrofoundations above, the idea behind supply and demand diagrams is that they are for understanding how the system responds near equilibrium. If you're away from information equilibrium, then you can't really interpret market moves as the interplay of supply and demand (e.g. for prediction markets). Here's David Glasner from his macrofoundations and Econ 101 post:

If the analysis did not start from equilibrium, then the effect of the parameter change on the variable could not be isolated, because the variable would be changing for reasons having nothing to do with the parameter change, making it impossible to isolate the pure effect of the parameter change on the variable of interest. ... Not only must the exercise start from an equilibrium state, the equilibrium must be at least locally stable, so that the posited small parameter change doesn’t cause the system to gravitate towards another equilibrium — the usual assumption of a unique equilibrium being an assumption to ensure tractability rather than a deduction from any plausible assumptions – or simply veer off on some explosive or indeterminate path.

In the dynamic information equilibrium model (DIEM), there is an explicit assumption that equilibrium is only disrupted by sparse shocks. If shocks aren't sparse, there's no real way to determine the dynamic equilibrium rate $\alpha$. This assumption of sparse shocks is similar to the assumptions that go into understanding the intertemporal budget constraint (which also needs to have an explicit assumption that consumption isn't sparse).

Summary

Econ 101 assumes a lot of things — from the existence of a market and a medium of exchange, to being in an approximately stable macroeconomy that's near equilibrium, to the rates of change of supply and demand in response to each other, to simply the existence of a large number of agents.

This is usually fine — introductory physics classes often assume you're in a gravitational field, near thermodynamic equilibrium, or even that the cosmological constant is small enough that condensed states of matter exist. Econ 101 is trying to teach students about the world in which they live, not an abstract one where an economy might not exist.

The problem comes when you forget these assumptions or pretend they don't exist. A lot of 'Economism' (per James Kwak's book) or '101ism' (see Noah Smith) comes from not recognizing that the conclusions people draw from Econ 101 depend on many background assumptions that may or may not be valid in any particular case.

Additionally, when you forget the assumptions you lose understanding of model scope (see here, here, or here). You start applying a model where it doesn't apply. You start thinking that people who don't think it applies are dumb. You start thinking Econ 101 is the only possible description of supply and demand. It's basic Econ 101! Demand curves slope down [7]! That's not a supply curve!

...

Footnotes:

[1] The derivation of the supply and demand diagram from information equilibrium is actually older than this blog — I had written it up as a draft paper after working on the idea for about two years after learning about the information transfer framework of Fielitz and Borchardt. I posted the derivation on the blog the first day eight years ago.

[2] In fact, a demand curve doesn't even exist in this formulation because we assumed the time scale $T_{D}$ of changes in demand is much shorter than the time scale $T_{S}$ of changes in supply (i.e. supply is constant, and demand reacts faster) — $T_{S} \gg T_{D}$. In order to get a demand curve, you have to assume the exact opposite relationship $T_{S} \ll T_{D}$. The two conditions cannot be simultaneously true [3]. The supply and demand diagram is a useful tool for understanding the logic of particular changes in the system inputs, but the lines don't really exist — they represent counterfactual universes outside of the equilibrium.

[3] This does not mean there's no equilibrium intersection point — it just means the equilibrium intersection point is the solution of the more general equation valid for $T_{S} \sim T_{D}$. And what's great about the information equilibrium framework is that the solution, in terms of a supply and demand diagram, is in fact a point because $P = f(S, D)$ — one price for one value of the supply distribution and one value of the demand distribution.

[4] This is another area where economists treat economics like mathematics instead of as a science. There are no scales, and if you forget them sometimes you'll take nonsense limits that are fine for a real analysis class but useless in the real world where infinity does not exist.

[5] For some fun discussion of another reason economists give for the supply curve sloping up — a 'bowed-out' production possibilities frontier — see my post here. Note that I effectively reproduce that using Gary Becker's 'irrational' model by looking at the size of the state space as you move further out. Most of the volume of a high dimensional space is located near its (hyper)surface. This means that selecting a random path through it, assuming you can explore most of the state space, will land near that hypersurface.

[6] David Glasner is also the economist who realized the connections between information equilibrium and Gary Becker's paper.

[7] Personally, I like Noah Smith's rejoinder about this aspect of 101ism — econ 101 does say demand curves slope down, but not necessarily with a slope $| \epsilon | \sim 1$. They could be almost completely flat. There's nothing in econ 101 to say otherwise. PS — I had a conversation about demand curves with our friend Theodore earlier this year as well.