Saturday, July 31, 2021

The recession of 2027

 From my "Limits to wage growth" post from roughly three years ago:

If we project wage growth and NGDP growth using the models, we find that they cross-over in the 2019-2020 time frame. Actually, the exact cross-over is 2019.8 (October 2019) which not only eerily puts it in October (when a lot of market crashes happen in the US) but also is close to the 2019.7 value estimated for yield curve inversion based on extrapolating the path of interest rates. ...

This does not mean the limits to wage growth hypothesis is correct — to test that hypothesis, we'll have to see the path of wage growth and NGDP growth through the next recession. This hypothesis predicts a recession in the next couple years (roughly 2020).

We did get an NBER-declared recession in 2020, but since I have ethical standards (unlike some people) I will not claim this as a successful model prediction, because the causal factor is pretty obviously COVID-19. So when is the next recession going to happen? 2027.

Let me back up a bit and review the 'limits to wage growth' hypothesis. It says that when nominal wage growth reaches nominal GDP (NGDP) growth, a recession follows pretty quickly after. There is a Marxist view that when wage growth starts to eat into firms' profits, investment declines, which triggers a recession. That's a plausible mechanism! However, I will be agnostic about the underlying cause and treat it purely as an empirical observation. Here's an updated version of the graph from the original post (click to enlarge). We see that recessions (beige shaded regions) occur roughly where wage growth (green) approaches NGDP growth (blue) — indicated by the vertical lines and arrows.


Overall, the trend of NGDP growth gives a pretty good guide to where these recessions occur, with only the dot-com bubble extending the lifetime of the 90s growth in wages. In the previous graph, I also added some heuristic paths prior to the Atlanta Fed time series as a kind of plausibility argument for how this would have worked in the 60s, 70s, and 80s. If we zoom in on the recent data (click to enlarge) we can see how the COVID recession decreased wage growth:


This is the most recent estimate of the size of the shock to wage growth with data through June 2021 (the previous estimate was somewhat larger). If we show this alongside trend NGDP growth (about 3.8%, a.k.a. the dynamic equilibrium) we see the new post-COVID path intersects it around 2027 (click to enlarge):


Now this depends on a lack of asset boom/bust cycles in trend NGDP growth — which can push the date out by years. For example, by trend alone we should have expected a recession in 1997/8; the dot-com boom pushed the recession out to 2001 when NGDP crashed down below wage growth. However, this will be obvious in the NGDP data over the next 6 years — it's not an escape clause for the hypothesis.
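As a back-of-the-envelope version of that crossover arithmetic (the numbers below are illustrative stand-ins for the sketch, not the fitted model output), you just grow wage growth at its dynamic equilibrium rate until it reaches trend NGDP growth:

```python
import numpy as np

# Illustrative assumptions (not the fitted model): post-COVID wage growth
# starts near 3.0% in mid-2021 and climbs at the dynamic equilibrium
# log rate of +0.04/y; trend NGDP growth is ~3.8%.
wage_growth_2021 = 3.0   # percent, assumed starting point for the sketch
ngdp_trend = 3.8         # percent, dynamic equilibrium of NGDP growth
alpha = 0.04             # 1/y, log growth rate of wage growth
t0 = 2021.5

# wage_growth(t) = wage_growth_2021 * exp(alpha * (t - t0))
# crossover when wage_growth(t) == ngdp_trend
t_cross = t0 + np.log(ngdp_trend / wage_growth_2021) / alpha
print(f"Crossover year: {t_cross:.1f}")  # ~2027 with these assumed numbers
```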

Epilogue

One reason I thought about looking back at this hypothesis was a blog post from David Glasner about a debate over the price stickiness mechanism in (new) Keynesian models [1]. I found myself reading lines like "wages and prices are stuck at a level too high to allow full employment" — something I would have seen as plausible several years ago when I first started learning about macroeconomics — and shouting (to myself, as I was on an airplane) "This has no basis in empirical reality!"

Wage growth declines in the aftermath of a recession and then continues with its prior log growth rate of 0.04/y. Unemployment rises during a recession and then continues with its prior rate of decline −0.09/y [2]. These two measures are tightly linked. Inflation falls briefly about 3.5 years after a decline in labor force participation — and then continues to grow at 1.7% (core PCE) to 2.5% (CPI, all items).

These statements are entirely about rates, not levels. And if the hypothesis above is correct, the causality is backwards. It's not the failing economy reducing the level of wages that can be supported at full employment — the recession is caused by wage growth exceeding NGDP growth, which causes unemployment to rise, which then causes wage growth to decline about 6 months later.

Additionally, since both NGDP and wages here are nominal, monetary policy won't have any impact on this mechanism. And empirically, it doesn't. While the social effect of the Fed may stave off the panic in a falling market and rising unemployment, once the bottom is reached and the shock is over the economy (over the entire period for which we have data) just heads back to its equilibrium −0.09/y log decline in unemployment and +0.04/y log increase in wage growth.

Of course this would mean the core of Keynesian thinking about how the economy works — in terms of wages, prices, and employment — is flawed. Everything that follows from The General Theory, from post-Keynesian schools to the neoclassical synthesis to new Keynesian DSGE models to monetarist ideology, is fruit of the poisonous tree.

Keynes famously said we shouldn't fill in the values:

In chemistry and physics and other natural sciences the object of experiment is to fill in the actual values of the various quantities and factors appearing in an equation or a formula; and the work when done is once and for all. In economics that is not the case, and to convert a model into a quantitative formula is to destroy its usefulness as an instrument of thought. 

No wonder his ideas have no basis in empirical reality!

...

Footnotes:

[1] Also, wages / prices aren't individually sticky. The distribution of changes might be sticky (emergent macro nominal rigidity), but prices or wages that change by 20% aren't in any sense "sticky".

[2] Something Hall and Kudlyak (Nov 2020) picked up on somewhat after I wrote about it (and even used the same example).

Sunday, April 25, 2021

Implicit assumptions in Econ 101 made explicit

One of the benefits of the information equilibrium approach to economics is that it makes several of the implicit assumptions explicit. Over the past couple days, I was part of an exchange with Theodore on Twitter that started here, where I learned something new about how people who have studied economics think about it — and those implicit assumptions. Per his blog, Theodore says he works in economic consulting, so I imagine he has some advanced training in the field.

The good old supply and demand diagram used in Econ 101 has a lot of implicit assumptions going into it. I'd like to make a list of some of the bigger implicit assumptions in Econ 101 and how the information transfer framework makes them explicit.

I. Macrofoundations of micro

Theodore doesn't think the supply and demand curves in the information transfer framework [1] are the same thing as supply and demand curves in Econ 101. Part of this is probably a physicist's tendency to see systems that are isomorphic in their effects as the same thing. Harmonic oscillators are basically the same thing even if the underlying models — from a pendulum, to a spring, to a quantum field [pdf] — result from different degrees of freedom.

One particular difference Theodore sees is that in the derivation from the information equilibrium condition $I(D) = I(S)$, the supply curve has parameters that derive from the demand side. He asks:

For any given price you can draw a traditional S curve, independent of [the] D curve. Is it possible to draw I(S) curve independent of I(D)?

Now Theodore is in good company. A University of London 'Econ 101' tutorial that he linked me to also says that they are independent:

It is important to bear in mind that the supply curve and the demand curve are both independent of each other. The shape and position of the demand curve is not affected by the shape and position of the supply curve, and vice versa.

I was unable to find a similar statement in any other Econ 101 source, but I don't think the tutorial statement is terribly controversial. But what does 'independent' mean here?

In the strictest sense, the supply curve in the information transfer framework is independent of the demand-side variables because you effectively integrate out the demand degrees of freedom to produce it, leaving only supply and price. Assuming constant $S \simeq S_{0}$ when integrating the information equilibrium condition:

$$\begin{eqnarray}\int_{D_{ref}}^{\langle D \rangle} \frac{dD'}{D'} & = & k \int_{S_{ref}}^{\langle S \rangle} \frac{dS'}{S'}\\
& = & \frac{k}{S_{0}} \int_{S_{ref}}^{\langle S \rangle} dS'\\
& = & \frac{k}{S_{0}} \left( \langle S \rangle - S_{ref}\right)\\
\log \left( \frac{\langle D \rangle}{D_{ref}}\right) & = & \frac{k}{S_{0}} \Delta S
\end{eqnarray}$$

If we use the information equilibrium condition $P = k \langle D \rangle / S_{0}$, then we have an equation free of any demand-side variables [2]:

$$\text{(1)}\qquad \Delta S = \frac{S_{0}}{k} \log \left(\frac{P S_{0}}{k D_{ref}}\right)
$$

There's still that 'reference value' of demand $D_{ref}$, though. That's what I believe Theodore is objecting to. What's that about?

It's one of those implicit assumptions in Econ 101 made explicit. It represents the background market required for the idea of a price to make sense. In fact, we can show this more explicitly by recognizing that the argument of the log in Eq. (1) is dimensionless. We can define a quantity with units of price (per the information equilibrium condition) $P_{ref} = k D_{ref} / S_{0}$ such that:

$$
\text{(2)}\qquad \Delta S = \frac{S_{0}}{k} \log \left(\frac{P}{P_{ref}}\right)
$$

This constant sets the scale of the price. What units are prices measured in? Is it 50 € or 50 ¥? In this construction, the price is set around a market equilibrium price in that reference background. The supply curve is the behavior of the system for small perturbations around that market equilibrium, when demand reacts faster than supply so that the information content of the supply distribution stays approximately constant at each value of price (just increasing the quantity supplied) and the scale of prices doesn't change (for example, due to inflation).
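To make Eq. (2) a bit more concrete, here's a minimal numerical sketch (the parameter values are arbitrary placeholders, not anything fit to data): the quantity supplied rises logarithmically with price around the reference scale $P_{ref}$.

```python
import numpy as np

# Arbitrary placeholder parameters for the sketch
k = 1.5          # information transfer index
S0 = 100.0       # background supply scale
P_ref = 10.0     # reference price set by the background market (k * D_ref / S0)

def delta_S(P):
    """Supply curve from Eq. (2): change in quantity supplied as a function of price."""
    return (S0 / k) * np.log(P / P_ref)

for P in [8.0, 10.0, 12.0, 15.0]:
    print(f"P = {P:5.1f}  ->  dS = {delta_S(P):+7.2f}")
# dS increases with P: an upward-sloping supply curve around the
# reference equilibrium (dS = 0 at P = P_ref).
```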

This is why I tried to ask about what the price $P$ meant in Theodore's explanations. How can a price of a good in the supply curve mean anything independently of demand? You can see the implicit assumptions of a medium of exchange, a labor market, production capital, and raw materials in his attempt to show that the supply curve is independent of demand:

The firm chooses to produce [quantity] Q to maximize profits = P⋅Q − C(Q) where C(Q) is the cost of producing Q. [T]he supply curve is each Q that maximizes profits for each P. The equilibrium [market] price that firms will actually end up taking is where the [supply] and [demand] curves intersect.

There's a whole economy implicit in the definition profits $ = P Q - C(Q)$. What are the units of $P$? What sets its scale? [4] Additionally, the profit maximization implicitly depends on the demand for your good.

I will say that Theodore's (and the rest of Econ 101's) explanation of a supply curve is much more concrete in the sense that it's easy for any person who has put together a lemonade stand to understand. You have costs (lemons, sugar) and so you'll want to sell the lemonade for more than the cost of the lemons based on how many glasses you think you might sell. But one thing it's not is independent of a market with demand and a medium of exchange.
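For comparison, here's the textbook construction Theodore describes reduced to a minimal sketch, with an assumed convex cost function (the cost parameters are placeholders for illustration):

```python
import numpy as np

# Assumed convex cost function for the sketch: C(Q) = c0 + c1*Q + c2*Q^2
c0, c1, c2 = 5.0, 1.0, 0.5

def textbook_supply(P):
    """Profit max: d/dQ [P*Q - C(Q)] = 0  =>  P = C'(Q) = c1 + 2*c2*Q."""
    return np.maximum(0.0, (P - c1) / (2 * c2))

for P in [1.0, 2.0, 4.0, 8.0]:
    print(f"P = {P:4.1f}  ->  Q supplied = {textbook_supply(P):5.2f}")
# Upward-sloping, and written without reference to demand. But note the
# price P still has to be denominated in something, which is the implicit
# background market discussed above.
```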

Some of the assumptions going into Theodore's supply curve aren't even necessary. The information transfer framework has a useful antecedent in Gary Becker's paper Irrational Behavior in Economic Theory [Journal of Political Economy 70 (1962): 1--13] that uses effectively random agents (i.e. maximum entropy) to reproduce supply and demand. I usually just stick with the explanation of the demand curve because it's far more intuitive, but there's also the supply side. That was concisely summarized by Cosma Shalizi:

... the insight is that a wider range of productive techniques, and of scales of production, become profitable at higher prices. This matters, says Becker, because producers cannot keep running losses forever. If they're not running at a loss, though, they can stay in business. So, again without any story about preferences or maximization, as prices rise more firms could produce for the market and stay in it, and as prices fall more firms will be driven out, reducing supply. Again, nothing about individual preferences enters into the argument. Production processes which are physically perfectly feasible but un-profitable get suppressed, because capitalism has institutions to make them go away.

Effectively, as we move from a close-in production possibilities frontier (lower prices) to a far-out one (higher prices), the state space is simply larger [5]. This increasing size of the state space with price is what is captured in Eqs. (1) and (2), but it critically depends on setting a scale of the production possibilities frontier via the background macroeconomic equilibrium — we are considering perturbations around it. 
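Here's a minimal simulation in that spirit. Everything in it (the cost distribution, unit capacities, the number of potential producers) is an assumption of the sketch rather than Becker's exact setup: producers draw random unit costs, and only the ones not running a loss at a given price stay in the market.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative assumption: a population of potential producers with random
# unit costs (their "production techniques"), each with capacity 1.
unit_costs = rng.lognormal(mean=1.0, sigma=0.5, size=100_000)

def quantity_supplied(price):
    """A technique stays in the market only if it isn't running a loss
    (price >= unit cost), per Becker/Shalizi. No maximization needed."""
    return np.count_nonzero(unit_costs <= price)

for price in [1.0, 2.0, 4.0, 8.0]:
    print(f"price = {price:4.1f}  ->  quantity supplied = {quantity_supplied(price):6d}")
# More of the randomly drawn "techniques" are viable at higher prices,
# so the curve slopes up: the state space of profitable production grows.
```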

David Glasner [6] has written about these 'macrofoundations' of microeconomics, e.g. here in relation to Econ 101. A lot of microeconomics makes assumptions that are likely only valid near a macroeconomic equilibrium. This is something that I hope the information transfer framework makes more explicit.

II. The rates of change of supply and demand

There is an assumption about the rates of change of the supply and demand distributions made leading to Eq. (1) above. That assumption about whether supply or demand is adjusting faster [2] when you are looking at supply and demand curves is another place where the information transfer framework makes an implicit Econ 101 assumption explicit — and does so in a way that I think would be incredibly beneficial to the discourse. In particular, beneficial to the discussion of labor markets. As I talk about at the link in more detail, the idea that you could have e.g. a surge of immigration and somehow classify it entirely as a supply shock to labor, reducing wages, is nonsensical in the information transfer framework. Workers are working precisely so they can pay for things they need, which means we cannot assume either supply or demand is changing faster; both are changing together. Immediately we are thrown out of the supply and demand diagram logic and instead are talking about general equilibrium.

III. Large numbers of inscrutable agents

Of course there is the even more fundamental assumption that an economy is made up of a huge number of agents and transactions. This explicitly enters into the information transfer framework twice: once to say distributions of supply and demand are close to the distributions of events drawn from those distributions (Borel law of large numbers), and once to go from discrete events to the continuous differential equation.

This means supply and demand cannot be used to understand markets in unique objects (e.g. art), or where there are few participants (e.g. labor market for CEOs of major companies). But it also means you cannot apply facts you discern in the aggregate to individual agents — for example see here. An individual did not necessarily consume fewer blueberries because of a blueberry tax, but instead had their own reasons (e.g. they had medical bills to pay, so could afford fewer blueberries) that only when aggregated across millions of people produced the ensemble average effect. This is a subtle point, but comes into play more when behavioral effects are considered. Just because a behavioral explanation aggregates to a successful description of a macro system, it does not mean the individual psychological explanation going into that behavioral effect is accurate.

Again, this is made explicit in the information transfer framework. Agents are assumed to be inscrutable — making decisions for reasons we cannot possibly know. The assumption is only that agents fully explore the state space, or at least that the subset of the state space that is fully explored is relatively stable with only sparse shocks (see the next item). This is the maximum entropy / ergodic assumption.

IV. Equilibrium

Another place where implicit assumptions are made explicit is equilibrium. The assumption of being at or near equilibrium such that $I(D) \simeq I(S)$ is even in the name: information equilibrium. The more general approach is the information transfer framework where $I(D) \geq I(S)$ and e.g. prices fall below ideal (information equilibrium) prices. I've even distinguished these in notation, writing $D \rightleftarrows S$ for an information equilibrium relationship and $D \rightarrow S$ for an information transfer one.

Much like the concept of macrofoundations above, the idea behind supply and demand diagrams is that they are for understanding how the system responds near equilibrium. If you're away from information equilibrium, then you can't really interpret market moves as the interplay of supply and demand (e.g. for prediction markets). Here's David Glasner from his macrofoundations and Econ 101 post:

If the analysis did not start from equilibrium, then the effect of the parameter change on the variable could not be isolated, because the variable would be changing for reasons having nothing to do with the parameter change, making it impossible to isolate the pure effect of the parameter change on the variable of interest. ... Not only must the exercise start from an equilibrium state, the equilibrium must be at least locally stable, so that the posited small parameter change doesn’t cause the system to gravitate towards another equilibrium — the usual assumption of a unique equilibrium being an assumption to ensure tractability rather than a deduction from any plausible assumptions – or simply veer off on some explosive or indeterminate path.

In the dynamic information equilibrium model (DIEM), there is an explicit assumption that equilibrium is only disrupted by sparse shocks. If shocks aren't sparse, there's no real way to determine the dynamic equilibrium rate $\alpha$. This assumption of sparse shocks is similar to the assumptions that go into understanding the intertemporal budget constraint (which also needs to have an explicit assumption that consumption isn't sparse).

Summary

Econ 101 assumes a lot of things — from the existence of a market and a medium of exchange, to being in an approximately stable macroeconomy that's near equilibrium, to the rates of change of supply and demand in response to each other, to simply the existence of a large number of agents.

This is usually fine — introductory physics classes often assume you're in a gravitational field, near thermodynamic equilibrium, or even that the cosmological constant is small enough that condensed states of matter exist. Econ 101 is trying to teach students about the world in which they live, not an abstract one where an economy might not exist.

The problem comes when you forget these assumptions or try to pretend they don't exist. A lot of 'Economism' (per James Kwak's book) or '101ism' (see Noah Smith) comes from not recognizing that the conclusions people draw from Econ 101 depend on many background assumptions that may or may not be valid in any particular case.

Additionally, when you forget the assumptions you lose understanding of model scope (see here, here, or here). You start applying a model where it doesn't apply. You start thinking that people who don't think it applies are dumb. You start thinking Econ 101 is the only possible description of supply and demand. It's basic Econ 101! Demand curves slope down [7]! That's not a supply curve!

...

Footnotes:

[1] The derivation of the supply and demand diagram from information equilibrium is actually older than this blog — I had written it up as a draft paper after working on the idea for about two years after learning about the information transfer framework of Fielitz and Borchardt. I posted the derivation on the blog the first day eight years ago.

[2] In fact, a demand curve doesn't even exist in this formulation because we assumed the time scale $T_{D}$ of changes in demand is much shorter than the time scale $T_{S}$ of changes in supply (i.e. supply is constant, and demand reacts faster) — $T_{S} \gg T_{D}$. In order to get a demand curve, you have to assume the exact opposite relationship $T_{S} \ll T_{D}$. The two conditions cannot be simultaneously true [3]. The supply and demand diagram is a useful tool for understanding the logic of particular changes in the system inputs, but the lines don't really exist — they represent counterfactual universes outside of the equilibrium.

[3] This does not mean there's no equilibrium intersection point — it just means the equilibrium intersection point is the solution of the more general equation valid for $T_{S} \sim T_{D}$. And what's great about the information equilibrium framework is that the solution, in terms of a supply and demand diagram, is in fact a point because $P = f(S, D)$ — one price for one value of the supply distribution and one value of the demand distribution.

[4] This is another area where economists treat economics like mathematics instead of as a science. There are no scales, and if you forget them sometimes you'll take nonsense limits that are fine for a real analysis class but useless in the real world where infinity does not exist.

[5] For some fun discussion of another reason economists give for the supply curve sloping up — a 'bowed-out' production possibilities frontier — see my post here. Note that I effectively reproduce that using Gary Becker's 'irrational' model by looking at the size of the state space as you move further out. Most of the volume of a high dimensional space is located near its (hyper)surface. This means that selecting a random path through it, assuming you can explore most of the state space, will land near that hypersurface.

[6] David Glasner is also the economist who realized the connections between information equilibrium and Gary Becker's paper.

[7] Personally, I like Noah Smith's rejoinder about this aspect of 101ism — econ 101 does say they slope down, but not necessarily with a slope $| \epsilon | \sim 1$. They could be almost completely flat. There's nothing in econ 101 to say otherwise. PS — I had a conversation about demand curves with our friend Theodore as well earlier this year.

Saturday, April 24, 2021

Eight years of blogging

I spent the past week on Twitter putting up a list of my favorite posts on the blog, as a kind of Irish wake for the form on the blog's 8th anniversary. As is typical on these anniversaries, a bit of statistical analysis or visualization — this time, the years of the selected favorites:


Looks like 2016-2017 was my peak by this count, at least in my own opinion. It's not just bias from the quantity of posts, either: the most prolific year was 2015, with an average rate of a post per day. The ratio of favorites to total posts has been increasing steadily — at least in terms of my own opinion, quality has been rising:

Here's the list (not in order) for posterity:

"It's people. The economy is made out of people."

"Solow has science backward"

"Good ideas do not need lots of invalid arguments in order to gain public acceptance"

"Maximum entropy better than game theory"

"Lazy econ critique critiques"

"Can a macro model be good for policy, but not for forecasting?"

"Remarkable recovery regularity and other observations"

"A Solow Paradox for the Industrial Revolution"

"Macro criticism, but not that kind"

"Ceteris paribus and method of nascent science"

"Things that changed in the 90s"

"The economic state space: a mini-seminar"

"Stocks and k-states"

"The ISLM model (reference post)"

"An information transfer traffic model"

"My introductory chapter on economics"

"Keen, chaos, and equilibrium"

"What mathematical theory is for"

"The Phillips curve and The Narrative"

"UK productivity and data interpretation"

"Qualitative economics done right, part 1"

"Goldilocks complexity"

"Neo-Fisherism and causality"

"Resolving the Cambridge capital controversy with MaxEnt"

"The irony of Paul Romer's mathiness"

"More like stock-flow inconsistent"

"Stock-flow consistency is tangential to stock-flow consistent model claims"

"Should the left engage with neoclassical economics?"

"Milton Friedman's Thermostat, redux"

"Efficient markets and the Challenger disaster"

"The irony of microfoundations fundamentalism"

"What if money was made of vinegar?"

"The 'quantity theory of labor' and dynamic equilibrium"

"No one saw this coming: Bezemer's misleading paper" ("Letter to Dirk Bezemer")

"Keen"

"DSGE, part 5 (summary)"

"Yes, I've read Duncan Foley. *Have you?*"

"Utility maximization and entropy maximization"

"Is the market intelligent?"

"DSGE Battle Royale: Christiano v. Stiglitz"

"The philosophical motivations"

"Wage growth in NY and PA

"Thought experiment"

"Dynamic equilibrium: unemployment rate"

"Labor market update: external validity edition"

"Efficient markets and the Challenger disaster"



Sunday, February 28, 2021

Ongoing evolution of time series after the COVID-19 shock

I haven't been blogging much the past two years — due to a) work taking nearly all of my energy, and b) the near daily update of data around COVID-19 being more conducive to Twitter than blogging. However, I thought it was time for a longer-term assessment of the economic time series in the context of Dynamic Information Equilibrium Models (DIEMs).

First, let's look at the consumer spending data from tracktherecovery.org (Raj Chetty and John Friedman's project using proprietary credit card data, with only bulk credit given to the "OI team" of undergrad RAs) — beginning with a little history (and links to Twitter). I originally put together several models back in early June of 2020 to describe the shock to consumer spending data. About two months later (end of July 2020) I added in long run growth because it would start to become an important factor as the recovery dragged on. Towards the end of October, I decided to rank the performance of the four different models. Basically, all of them performed about equally — except for the "entropic shock" with a complete return to equilibrium. This means that there was some persistent gap in spending that wasn't made up until after the most recent round of stimulus checks in January of 2021.

Here are the four models (click to enlarge).

Positive + negative shock

Step response (see here)

Negative "entropic" shock and return to equilibrium

Negative "entropic" shock and return to a different equilibrium

I illustrate both the "entropic" shock models in this post on evaporating information. The basic idea is that there's a shock to the time series and it either evaporates entirely or leaves some residual:



The full return to equilibrium model, looked at in a vacuum, does appear to describe the entire series more accurately; however, the errors are correlated and it did poorly at forecasting right up until a specific event (the January stimulus payments). We can chalk its apparent accuracy up to luck.

The entropic shock with return to a different equilibrium is the most parsimonious model (fewer independent parameters, lower RMS error, so lower AIC), but the two shock model is probably the best explanation — if we attribute the January 2021 deviation to stimulus payments we should include some identifiable effect from the April 2020 stimulus.
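(For reference, that comparison is just the usual AIC trade-off between fit error and parameter count. Here's a minimal sketch; the residuals and parameter counts below are placeholders, not the actual model fits.)

```python
import numpy as np

def aic(residuals, n_params):
    """Gaussian-error AIC: n * log(RSS/n) + 2k. Lower is better."""
    n = len(residuals)
    rss = np.sum(np.asarray(residuals) ** 2)
    return n * np.log(rss / n) + 2 * n_params

# Hypothetical example: two models fit to the same 200 data points,
# one with slightly lower RMS error but two extra parameters.
rng = np.random.default_rng(0)
resid_simple = rng.normal(scale=1.00, size=200)    # fewer parameters
resid_flexible = rng.normal(scale=0.97, size=200)  # more parameters, slightly better fit

print("simple  :", aic(resid_simple, n_params=4))
print("flexible:", aic(resid_flexible, n_params=6))
# The extra parameters have to buy a big enough error reduction to win.
```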

Speaking of that January 2021 stimulus, I posited a couple weeks ago that it looked like an evaporating shock itself. That seems less likely once we include more data:


In fact, a classic DIEM shock seems more appropriate:


Time will tell.

In the aggregate PCE data, it's hard to tell if we can see the January 2021 shock at all:



It's only updated monthly, so it doesn't have the timeliness or resolution of the Opportunity Insights data. In this time series we can also see the effect of the TCJA implemented at the end of 2017. The other impact of the TCJA seems to have been a negative shock to median sales prices of houses (last update here). There has been a recent bump up that may reflect a lack of inventory since the COVID shock hit, but again that's another place where time will tell.

I never got around to turning the case for the TCJA (and its changes to the mortgage interest deduction) being the source of that shock into a blog post after laying it out on Twitter at the end of January 2020 — just before the COVID shock. The shock begins about a year before the Fed rate increase of December 2018, the other (at least theoretically) plausible culprit. I'm sure there are just-so models where people expected the future rate increase because of the higher deficit spending brought on by the TCJA, but I personally prefer causality in my models.


On to another four letter acronym: ICSA (Initial Claims Seasonally Adjusted). I pretty much accurately described this "entropic shock" path back in June of 2020 (see previous post here), but now we have some additional evidence for some slight deviations in the model. First the big picture:


And now zooming in a bit, we can see two correlated deviations in the summer of 2020 and fall/winter of 2020 into 2021 pretty much match up with the surges in COVID-19 cases in the US (and the accompanying layoffs):


Although I talked about it in the previous post, I don't think I've shown my latest view of the unemployment rate based on Jed Kolko's "core" unemployment rate (U5 minus temporary layoffs) that I refined in a Twitter thread (U3 minus temporary layoffs, in order to compare to "headline" unemployment). New data is coming out next week, so now would be a good time to document my forecast.

Here's U3 and the U3 "core" rate with temporary layoffs removed:


As you can see, the relationship between "core" U3 and U3 had been relatively stable until the COVID-19 shock with its mass "temporary" (?) layoffs. The core rate continues to be well-described by the DIEM with its standard logistic (approximately Gaussian derivative) shocks, but the temporary layoffs required — like many of the time series here — an "entropic" shock in order to be able to describe headline (U3) unemployment. I believe this forecast will be pretty accurate barring any additional shocks:



That's it for the updates. One of the major lessons for modeling economic time series in the COVID era has been accounting for these rapidly evaporating "entropic" shocks — everything from the S&P 500 to the unemployment rate. Recession and demographic shocks are extremely slow moving by comparison.
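For reference, here's a minimal sketch of the two ingredients that keep showing up in these updates: a DIEM with logistic shocks on top of a log-linear dynamic equilibrium, plus an exponentially evaporating "entropic" component. All the parameter values below are illustrative assumptions, not fits to any of the series above.

```python
import numpy as np

def diem(t, alpha, shocks):
    """Log of a DIEM series: dynamic equilibrium rate alpha plus a sum of
    logistic (sigmoid) shocks with amplitude a, width b, and center t0."""
    log_y = alpha * t
    for a, b, t0 in shocks:
        log_y += a / (1.0 + np.exp(-(t - t0) / b))
    return log_y

def entropic_shock(t, a, tau, t0):
    """An evaporating shock: jumps by a at t0, then decays with time scale tau."""
    return a * np.exp(-(t - t0) / tau) * (t >= t0)

# Illustrative parameters only (roughly unemployment-rate-like numbers)
t = np.linspace(2018.0, 2023.0, 261)
log_u = diem(t, alpha=-0.09, shocks=[(0.35, 0.1, 2020.2)])   # "normal" recession part
log_u += entropic_shock(t, a=1.5, tau=0.3, t0=2020.2)        # temporary-layoff spike
u = 4.0 * np.exp(log_u - log_u[0])                           # anchor at ~4% pre-shock
```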

Thursday, December 10, 2020

Initial claims and other COVID-19 shocks

Back in June of 2020, I posted an estimate of the future path of initial claims [1] on Twitter (click to enlarge):


While the rate of improvement was overestimated, it captured the qualitative behavior quite well:


Being able to predict the qualitative behavior of the time series in the future is pretty good confirming evidence for a hypothesis — not least because there's no way you could have had access to data in the future without travelling through time. The underlying concept was that the rate of improvement after the initial spike would gradually fall back to the long term equilibrium (logarithmic rate) of about −0.1/y (which shows up as the line that is almost at zero):


The hypothesis is that while the initial part of the non-equilibrium shock was a sharp spike, there is an underlying component that is a more typical, more gradual, shock. One way to visualize it is in the unemployment rate via "core" unemployment (per Jed Kolko):


Here's a cartoon version. In the current recession, we're seeing something that hasn't been that apparent (or at least as rapid) in the data [2]. There's the normal recession (solid line) as well as a sharp spike (dashed):


Instead of the usual derivative that's a single (approximately Gaussian) shock (solid line), we have a more complex structure with a smoothly falling return to the usual dynamic equilibrium (here exaggerated to −0.2/y so it looks different from zero):


Zooming in on the box in the previous graph, we get the cartoon version of the data above (dashed curve) that eventually asymptotes to the long run dynamic equilibrium rate:


Since we haven't had a shock of this type before in the available data with mass temporary layoffs, it's at least not entirely problematic to suggest an ad hoc model like this one. The underlying "evaporation" of the temporary shock information is based on the entropic shocks that appear in the stock market (including for this exact same COVID-19 event as well as the December 2018 Fed rate hike): 





...

Footnotes:

[1] This is not what would be the technically correct model in terms of dynamic equilibrium, but over this short time scale the civilian labor force has been roughly constant since June. It doesn't really change the shape except for the initial slope which is lower because it is undersampled using only monthly CLF measurements instead of weekly ICSA measurements:


The "real" model isn't that different:


[2] It's possible the "step response" in the unemployment rate in the 1950s and 60s is a similar effect, but nowhere near as rapid.


Sunday, December 6, 2020

Qualitative economics done right, part N [1]


I seem to have involved myself in a Twitter dispute with economist Roger Farmer about what it means to make macro models — or more broadly "the nature of the scientific enterprise", as Roger the economist kindly tried to explain to me, a physicist. Unfortunately, due to his prolific use of the quote tweet, the argument is likely impossible to follow. You can see the various threads via this search.

Background

This started when I noted that Roger Farmer's claims about unemployment — in particular in the papers supporting his claims that he cites here (Farmer 2011) and here (Farmer 2015) — are inconsistent with the long run qualitative behavior of the unemployment rate data. That is to say, the models are not consistent with the empirical fact that the unemployment rate between recessions falls at a logarithmic rate of about −0.09/y in the US (BYDHTTMWFI: here's a recent NBER paper).

Let me say right off that I actually appreciate Roger Farmer's work — he does seem to think outside the box compared to the DSGE approach to macro that has taken over the field.

I am going to structure this summary as a series of claims that I am not making, because it seems many people have confused requiring qualitative agreement with data with demanding precision on par with measurements of the electron magnetic moment.

Here we go!

I am not saying models with RMS error ε ≥ x must be rejected.

The funniest part about this is that the figure I use to show the lack of qualitative agreement in Roger's model actually shows that the DIEM has worse RMS error over the range of data Roger shows in his graph [2].

The thing is it's easy to get low RMS error on past data simply by adding parameters to a fit. This does not necessarily work with projected data, but in general more parameters often yield a better fit to past data and sometimes a better short run projection.

However, my original claim that started this off was that his model based on shocks to the stock market in the supporting papers was "disconnected from the long run empirical behavior of the unemployment rate". It's true that if you take the shocks to the unemployment rate and add the dynamic equilibrium of the S&P 500 model, you get a short run correlation that lasts from 1998 to about 2010:



This correlation around the 2008 recession is pointed out in Farmer (2011) Figure 2. However, you only have to go back to the early 90s recession to get a counterexample to the idea that shocks to the S&P 500 match up with shocks to the unemployment rate.

Second, there is also no particular empirical evidence that the unemployment rate will flatten out at any particular level (be it the natural rate in neoclassical models, or in Farmer's models a rate based on asset prices). Third, Farmer's models do not show log-linear decline between recession shocks.

It is these three basic empirical facts about the unemployment rate that I was referencing when I made my claim in that initial tweet. Even if the RMS error is bad, a model of the unemployment rate is at least qualitatively consistent with the data if 1) the shocks are not entirely dependent on the stock market, 2) the rate does not flatten out at any level except possibly u = 0, and 3) the rate shows an average log-linear decline of −0.09/y between recessions (a fact that was called out in a recent NBER paper, BYDHTTMWFI).

What I am saying is that Roger's models are not qualitatively consistent with the data — think a model of gravity where things fall up — and should be rejected on those grounds. The unemployment rate literally levitates in his models. Additionally there exist models with lower RMS error and qualitative agreement with the data; the existence of those models should give us pause when considering Roger's models.

I am not calling for Roger Farmer to stop working on his models.

It's fine by me if he wants to give talks, write blog posts about his model, or think about improving it in the privacy of his own research notebook. I would prefer that he grapple with the fact that the models are not qualitatively consistent with the data instead of getting defensive and saying that they don't have to pass that low bar. I believe models that are not qualitatively consistent with the data should not be used for policy, though — and that is one of Roger's aims.

It's true that a lot of ideas start out kind of wrong — it's unrealistic to expect a model to match the data exactly right out of the gate. And that's fine! I've had a ton of bad ideas myself! But there is no reason we should expect half-baked ideas lacking qualitative agreement with the data to be taken seriously in the larger marketplace of ideas. 

So many comments on the feed were about working towards an insight or the models being just an initial idea that could be improved. Most of us don't get a chance to put even really good ideas in front of a lot of people, so why should we accept something that's apparently not ready for prime time just because it's from a tenured professor? I have a PhD and a lot of garbage models of economic systems that aren't even qualitatively accurate in my Mathematica notebook directory — should we consider all of those? In any case, "it may lead to future progress" is not a reason to say "oh, fine then" to models that aren't qualitatively consistent with empirical data.

What I am saying is that we should set the bar higher for what we consider useful models in macro than "it might qualitatively agree with data one day". We can leave discussion of those models out of journals and policy recommendations.

I am not saying we should apply the standards of physics to economics.

This goes along with people saying I shouldn't be applying "Popperian rejection" to economic models. First off, this misconstrues Popper, who was talking about falsifiability as a criterion separating scientific theories from pseudoscience. Roger's models are falsifiable — I don't think they are pseudoscience. However, Popper didn't really say much about models being falsified, despite the fact that lots of people think he did.

General Relativity is a better model than Newtonian gravity, but both models are falsifiable. We consider Newtonian gravity to be incorrect for strong gravitational fields, precise enough measurements in weak fields, or velocities close to the speed of light. We still use good old Newton all the time — I did just the other day for an orbital dynamics question at work. I fully understand the difference between a model that is an approximation and one that is supposed to be a precise representation of reality.

Popper, however, did not say anything about models that don't qualitatively agree with the data. That's because in most of science, such models are thrown out before they are ever published. Economics, especially macro, operates in a different mode where I guess they consider models that look nothing like the data. Ok, I know the time series data is an exponentially increasing amplitude sine wave and this model says it's a straight line, but hear me out!

If the standards for agreement with the data are below qualitative agreement with the data, then there's really no reason to throw out Steve Keen's models [3]. But that's the problem — there are models that agree with the data! David Andolfatto's simple model matches the data fairly well qualitatively! (It gets points 1 and 3 above and could be set to u* = 0 to get 2.) The existence of those models should set the bar for the level of empirical accuracy we should accept in macro models.

What I am saying is that there are existing models that more precisely match the data — and that is the standard I am using. It's not physics, but rather the performance of other economic models. If you have a model that has worse RMS error, but has better qualitative agreement with the data, then that's ok to bring to the table. Overall, there seems to be far too much garbage that is allowed in macro because, well, there apparently wouldn't be any macro papers at all if some basic standards were enforced. When I say these models that aren't even qualitatively consistent with the data should be thrown out, I'm not talking about Popperian rejection, I am talking about desk rejection.

One last point ... what is the use of a model that doesn't qualitatively agree with data?

I didn't have a way to phrase this one as something I'm not saying. I literally cannot fathom how you can extract anything useful from a model that does not qualitatively agree with the data. This is the lowest bar I can think of.

Yes this model looks nothing like the data but it's useful because I can use it to understand things based on ...

That ellipsis is where I cannot complete the sentence. Based on gut feelings? Based on divine revelation? If the model looks nothing like the data, what is anything derived from it derived from? The pure mathematical beauty of its construction?

It's like someone saying "Here's my model of a car!" and they show you a cat. Yes, this cat isn't qualitatively consistent with a car, but it's a useful first step in understanding a car. The cat gives me insights into how the car works. And you really shouldn't be using Popperian rejection of the cat model of a car because automobile engineering is not the same as physics. Making a detailed car model is unnecessary for figuring out how it works — a cat is perfectly acceptable. Eventually, this cat model will be improved and will get to a point where it matches car data well. The cat model also allows me to make repair recommendations for my car. You see the cat has a front and a back end, where the front has two things that match up with the car headlights, and yes the fuel goes in the front of the cat while it goes in the side of a car but that's at least qualitatively similar ...

...

Update 7 December 2020

I also realized Roger has made a major stats error here:

Jason @infotranecon I really don’t know where to start. 1. The unemployment rate is I(1) to a first approximation. 2. The S&P measured in real units is I(1) to a first approximation. The two series are cointegrated. The S&P Granger causes the unemployment rate.

Here's Dave Giles, econometrician emeritus extraordinaire:

If two time series, X and Y, are cointegrated, there must exist Granger causality either from X to Y, or from Y to X, or in both directions.
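To spell that out in practice: cointegration only tells you Granger causality runs in at least one direction; you still have to test each direction separately. Here's a minimal sketch of how one would check, with placeholder random series standing in for the actual data (it assumes statsmodels is installed).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import coint, grangercausalitytests

# Hypothetical placeholder data standing in for log real S&P 500 and the
# log unemployment rate, aligned to the same dates. Replace with the actual series.
rng = np.random.default_rng(1)
n = 300
log_sp500 = np.cumsum(rng.normal(0.005, 0.04, n))      # I(1)-like random walk
log_unemp = 0.5 * log_sp500 + rng.normal(0, 0.05, n)   # cointegrated by construction

# Engle-Granger cointegration test
t_stat, p_value, _ = coint(log_unemp, log_sp500)
print("cointegration p-value:", p_value)

# Granger causality must be tested in *each* direction separately:
# the test asks whether the second column helps predict the first.
data = pd.DataFrame({"u": log_unemp, "sp": log_sp500})
grangercausalitytests(data[["u", "sp"]], maxlag=4)  # does sp help predict u?
grangercausalitytests(data[["sp", "u"]], maxlag=4)  # does u help predict sp?
```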

...

Footnotes

[1] The title is a reference to my old series that led, among other places, to realizing Wynne Godley has been maligned by people who ostensibly support him, and that Dirk Bezemer fabricated quotes in his widely cited paper.

[2] I do find it problematic that Roger not only cuts off the data early compared to what was available at the time Farmer (2015) was published, but also cuts off data that was available at the time and would appear in the domain of his graph — data that emphasizes that the model does not qualitatively match the data. He also uses quarterly unemployment data, which further reduces the apparent disagreement.

[3] I mean c'mon!








Sunday, October 18, 2020

The four failure modes of Enlightenment values


I don't write about process as much these days — in part because I'm no longer working on my previous project that had me effectively commuting across the country every month to the middle of nowhere, and in part because I'm now working a much bigger project that barely leaves me enough time to update even the existing dynamic information equilibrium model forecasts. But recently there seems to be an upswing in calls for civility, declarations of incivility, and long sighs about how to criticize the "correct" way. I saw George Mason economist Peter Boettke tweet this out the other day; it includes a list of "rules" for how to criticize:
How to compose a successful critical commentary:
  1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, "Thanks, I wish I’d thought of putting it that way."
  2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
  3. You should mention anything you have learned from your target.
  4. Only then are you permitted to say so much as a word of rebuttal or criticism.
It seems fitting that Boettke would tweet this out given his defense of the racist economist/public choice theorist James Buchanan. It's pure "Enlightenment" rationalism — the same Enlightenment that gave us many advances in science, but also racism and eugenics. These rules are in general a great way to go about criticism — but if and only if certain norms are maintained. If those norms aren't maintained, these rules leave us vulnerable to what I've called viruses of the Enlightenment. To put it in the terms of my job: this process has not been subjected to failure mode and effects analysis (FMEA) and risk management.

This isn't intended to be a historical analysis of what the "Enlightenment" was, how it came to be, or its purpose, but rather how the rational argument process aspect is used — and misused — in discourse today. I've identified a few failure modes — the vulnerabilities of "Enlightenment" values.

Failure mode 1: Morally repugnant positions

I'm under the impression that, like bioethics, medical ethics, or scientific ethics, someone needs to convene an interdisciplinary ethics of rational thought. There are still occasions when science seems to think the pursuit of knowledge is an aim higher than any human ethics, and failures run the gamut from the recent protests against building another telescope on Mauna Kea (part of a longer series of protests) to unethical human experiments.

Rationalism seems to continue to hold this view — that anything should be up for discussion. But we've long since discovered that science can't just experiment on people without considering the ethics, so why should we believe rationalism can just say whatever it wants?

Unfortunately, since we are humans and not rational robots, the discussion of some ideas might itself spread or exacerbate morally repugnant beliefs. This is contrary to the stated purpose of "Enlightenment values" — open discussion that leads to the "best" ideas winning out in the "marketplace of ideas". And if that direct causality breaks (open discussion → better ideas), the rationale for open discussion is weakened [0]. Simply repeating a lie or conspiracy theory is known to strengthen belief in it — in part from the familiarity heuristic. And we know that simply changing the framing of a question on polls can change people's agreement or disagreement. Right wing publications try to launder their ideas by simply getting mainstream publications to acknowledge them, pulling them out of the "conservative ecosystem" — as Steve Bannon has specifically talked about (see here).

Rule #1 fails to acknowledge our humanity. Simply repeating a morally repugnant idea can help spread it, and at the very least requires the critic to carry water for it. I cannot be required to restate someone's position that's favorable to racism, because that requires giving racism my voice and immorally helping its cause.

For example, Boettke's defense of Buchanan requires him to carry water for Buchanan. If we consider the possibility that Nancy MacLean's claims of a right-wing conspiracy to undermine democracy and promote segregation are true (I am not saying they are, and people I respect — e.g. Henry Farrell — strongly disagree with that interpretation of the evidence), then carrying that water should be held to a level of ethical scrutiny a bit higher than, say, discussing the differences between Bayesian and frequentist interpretations of probability.

This is not to say we shouldn't talk about Buchanan or racism. It's not like we don't experiment with human subjects (e.g. clinical trials). It's just that when we do, there are various ethical questions that need to be formally addressed from informed consent to what we plan to learn from that experiment. A human experiment where we ask the question about whether humans feel pain from being punched in the face is not ethical even if we have consent from the subjects because the likelihood of learning something from it is almost zero. "I'm just asking questions" here is not a persuasive ethical argument.

This is in part why I think shutting down racists from speaking on college campuses isn't problematic in any way. Would we authorize a human experiment where we engage in a campaign of intimidation of minorities just to measure the effects? We already know about racist thought — it's not like these are new ideas. They're already widely discussed — that's how students on campuses know what to protest. And in terms of ethical controls, we might well consider that the moral risk managed solution consistent with intellectual discourse is to have these “speakers” write their “ideas” down, have the forum led by someone who is not a famous racist, or possibly is even opposed to the “ideas” [3].

Failure mode 2: Over-representation of the elite

A year or so ago on Twitter, I criticized Roger Farmer's acceptance of Hayek's interpretation that prices contain information (for more detail on my take, you can check out my Evonomics article). Farmer subsequently unfollowed me, which likely decreases the engagement I get through Twitter’s algorithms.

Now my point here is not that one is obligated to listen to every crackpot (such as myself) and engage with their “ideas”. It’s that we cannot feasibly exist in a world where all expression is heard and responded to — regardless of how misguided or uninformed. And who would want that?

But it does mean participation via the (purportedly) egalitarian Enlightenment ideals of “free speech” and “free expression” in the marketplace of ideas is already limited. And the presumption of “equals” engaging in mutual criticism behind Boettke's “rules” artificially limits the bounds of criticism further. Already elites pick and choose the criticism they engage with — giving them an additional power of “permission” distorts the power balance even more.

Unfortunately public speech and public attention ends up being rationed the same way most scarce resources are rationed — by money. The elite gatekeepers at major publications push the opinions and findings of their elite comrades through the soda straw of public attention. We hear the opinions of millionaires and billionaires as well as people who find themselves in circles where they occasionally encounter billionaires far more often than is academically efficient. Bloomberg and Pinker talking about free speech. MMT. Charles Murray.

Bloomberg writing at bloomberg.com is a particularly egregious example of breaking the egalitarian norm. Bloomberg's undergraduate education is in electrical engineering from the 1960s and he has a business degree from the same era. He has no particular qualifications to judge the quality of discourse, the merits of the freedom of speech, or who should be forced to tolerate right wing intimidation on college campuses. He is in the position he is in because he made a great deal of money which enabled him to take a chance on running for office and becoming mayor of New York.

That said, I don't have particular expertise in this area — but then I don't get to write at bloomberg.com.

As such, “cancelling” the speech of these members of the elite mitigates this bias almost regardless of the actual reason for the cancellation simply because they’re over-represented.

More market-oriented people might say having billions of dollars must mean you’ve done at least something right and therefore could result in being over-represented in the marketplace of ideas. That's an opinion you can argue — in the marketplace of ideas — not implement by fiat. Now this is just my own opinion, but I think having too much money seems to make people less intelligent. Maybe life gets too easy. Maybe you lose people around you that disagree with you because they're dependent on your largess. Lack of intellectual challenge seems to turn your brain to mush in the same way lack of physical activity turns your body to mush. You might have started out pretty sharp, but — whatever the reason — once the cash piles up it seems to take a toll. I mean, have you listened to Elon Musk lately? However, even if you believe having billions of dollars means you have something worthwhile to say, that is not the Enlightenment's egalitarian ethos. King George III had a lot more money than any of the founders of the United States, but it's not like they felt compelled to invite him or his representatives to speak at the signing of the Declaration of Independence.

While everyone has a right to say what they want, that right does not grant everyone a platform. The “illiberal suppression” of speech can be a practical prioritization of speech. "Cancelling" can mitigate systemic biases, enabling a less biased, more genuine discourse. Why should we have to listen to the same garbage arguments over and over again? Even if they aren’t garbage, why the repetition? And even if the repetition is valid, why must we have the same people doing the repeating? [1] An objective function optimized for academic discussion should prioritize novel ideas, not the same people rehashing racism, sexism, or even “enlightenment” values for 30 years.

It's true that novelty for novelty's sake creates its own bias in academia — journals are biased towards novel results rather than confirmation of last year's ideas, creating a whole new set of problems. In addition to novel ideas, verifiability and empirical accuracy would also be good heuristics. Expertise or credentials in a particular subject is often a good heuristic for priority, but like the other heuristics it is just that — a heuristic. Knowing when to break with a heuristic is just as valuable as the heuristic itself.

In any case, just assuming elites and experts should be free from criticism unless it meets particular forms of "civility", or that their "ideas" should be granted a platform free from being "cancelled", does not further the spirit of the Enlightenment values that most of us agree on — that what's true or optimal ought to win out in the marketplace of ideas.

Failure mode 3: Rational thought and academic research is not free speech

Something obvious in the norms of Boettke's list is that he appears to recognize rational argument differs from free speech. "Free speech" does not require you to speak in some prescribed manner — that would ipso facto fail to be free speech.

However, the ordinary process by which old ideas die off through rational argument seems to be conflated with suppressing free speech these days. Having your paper on race and IQ rejected for publication because it rehashes the old mistakes and poor data sets is normal rational progress, not the suppression of free speech. "Just asking questions" needs to come to grips with the fact that lots of those questions have been asked before and have lots of answers. Just as we don't need to continuously rehash 19th century aether theory, we don't need to continuously rehash 19th century race science [2].

When shouts of "free speech" are used as a cudgel to force academic discussion of degenerative research programs in Lakatos' sense, it represents a failure mode of "Enlightenment" values and science in general. In order for science and the academy to function, it needs to rid itself these degenerative research programs regardless of whether rural white people in the United States continue to support them. If these research programs turn out to not be degenerative — well, there's a pretty direct avenue back into being discussed via those new results showing exactly that. Assuming they follow ethical research practices, of course.

Failure mode 4: People don't follow the spirit of the rules

Failure to follow the spirit of these rules tends to be rampant in any "school of thought" that claims to challenge orthodoxy, from race science to Austrian economics. Feynman's famous "cargo cult science" commencement address is a paean to the spirit of the rules of science (and "Enlightenment" values generally), but unlike Boettke's rules for others, Feynman asks fledgling scientists to direct the rules inward — "The first principle is that you must not fool yourself — and you are the easiest person to fool."

This failure mode is far less severe than the ones involving racism, unethical human experiments, or plutocracy, but it is far more common. Certainly, the "straw man" application of Rule #1 falls into this. But one of the most frustrating is the one many of us feel when engaging with e.g. MMT acolytes — they never acknowledge that you have "re-express[ed] your target's position ... clearly, vividly, and fairly."

Randall Wray and William Mitchell (e.g.) simply never acknowledge that any criticism is valid or accurate. Criticism is dismissed as an ad hominem attack rather than addressed. If "successful" critical commentary (per the "rules") requires the subjects to grant you permission, any criticism can be shut down by a claim that the critic doesn't know what they are talking about.

This failure to follow the spirit of the rules appears in numerous ways, from claims that simply expressing a counterargument isn't civil discourse, to the failure of someone espousing racist views to admit that those views are actually racist [4], to general hypocrisy. However, the end effect of failing to follow the spirit of the rules is to give the speaker the power to decide which facts or counterarguments are allowed and which aren't. That's not really how "Enlightenment values" are supposed to work.

Being granted permission by the subject of criticism is also generally unnecessary for actual progress. Humans — especially established public figures — rarely listen to criticism. Upton Sinclair, Bertrand Russell, and Max Planck captured different dimensions of this (a rationale, a mechanism, and the real course of progress) in pithy quotes (respectively):
It is difficult to get a man to understand something, when his salary depends upon his not understanding it! 
If a man is offered a fact which goes against his instincts, he will scrutinize it closely, and unless the evidence is overwhelming, he will refuse to believe it. If, on the other hand, he is offered something which affords a reason for acting in accordance to his instincts, he will accept it even on the slightest evidence. 
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.
This is how the world has always been. The audience for your criticism is never the subject of the criticism, but rather the next generation. Explaining your subject's position before criticizing it is done as part of Feynman's "leaning over backward" — for yourself — not to gain legitimacy.

Other failure modes

I wanted to collect my thoughts on free speech, "cancelling", and the terrible state of "the discourse" in one essay. This list is not meant to be exhaustive, and I may expand it in the future when I have new examples that don't fit in the previous four categories. For example, you might think that academic journals are a form of intellectual gatekeeping — and I'd agree — but I believe that falls under failure mode 2: the over-representation of the elite, not a separate category. There are also genuine workarounds in that case that everyone uses (arXiv, SSRN). You may also disagree with the particular choice of basis — and I'm certain another orthonormal set of failure modes could span the same failure effect space.

Also, just because I talk about MMT alongside Public Choice and racism doesn't mean I equate them. There are similarities (both get a leg up through the support of billionaires), but I am trying to find examples from across a broad spectrum of politics and political economy. There are major failures and minor ones. However, I think the examples I've chosen most clearly illustrate these failure modes.

I have been sitting on this essay for nearly a year. I was motivated to action by a tweet from Martin Kulldorff, a professor at the Harvard Medical School, about how Scott Atlas was "censored" [5] for spreading misinformation about the efficacy of various coronavirus mitigations (from masks to lockdowns). Atlas is on the current administration's "Coronavirus Task Force" and a fellow at the Hoover Institution — a front for right-wing views funded by billionaires. There is literally no universe in which this is a true egalitarian "Enlightenment" discussion — from the elite over-representation of Harvard and the billionaires at Hoover to the lack of disclosure of conflicts of interest (failure modes 2 and 4, respectively). That far too many people think Atlas being "censored" is against the spirit of the Enlightenment is exactly how it can fail.

... 

Footnotes:

[0] This is similar to the argument against markets as mechanisms for knowledge discovery — information leakage in the causal mechanism breaks it.

[1] More on this here. Why do we have to hear specifically Charles Murray talk about race and IQ? (TL;DR because it's not about ideas, but rather signalling and authority.)

[2] Personally, I think IQ tests should include a true/false question that asks if you think there's nothing wrong with believing the racial or ethnic group to which you belong has on average a higher IQ than others. Answering "true" would indicate you're probably bad at recognizing the self-bias that is critical to scientific inquiry, and it should reduce your score by at least 1/2. As George Bernard Shaw said, "Patriotism is your conviction that this country is superior to all other countries because you were born in it." Racism is at its heart your conviction that your race is superior to all other races because you were born into it — the rest is confirmation bias.

[3] In Star Trek: The Next Generation "Measure of a Man" (S:2 E:9), Commander Riker is tasked with prosecuting the idea that the android Lt. Commander Data is not a person, but rather Federation property — something with which Riker personally disagrees.

[4] I have never really understood this. Unless you're hopelessly obtuse, you must know if you have racist views. Why would you be upset about other people identifying them as such? The typical argument being supported by racist views is that racism is correct and right! A racist (who happens to be white by pure coincidence) who believes that other non-white people have lower IQs through some genetic effect is trying to support racism. I have so much more respect for racists like the pudgy white British man who appears at the beginning of The Filth and the Fury (2000) and openly admits he is racist. That's the Enlightenment!

[5] In no way is this censorship, and calling it that is risible idiocy. The tweets were removed by Twitter, a private company, not by the US government. And Atlas still has access to multiple platforms — including amplification by elite Harvard professors, which is what is actually happening.