Sunday, February 9, 2020

How should we approach how people think?

A common objection to the information equilibrium approach that I've run into over the years is that economics is about incentives at the micro level, or about how people react to policy changes at the macro level. My snarky response to exactly that objection on Twitter earlier today was this:
The approach can be thought of as assuming people are too (algorithmically) complex for us to know how they respond to incentives, as opposed to [the] typical [economics] approach where you not only assume you know how individuals think but write down simplistic equations for it.
Let me expand on what this means in a (slightly) less snarky way. We'll set up a simple scenario where people are given 7 choices (the "opportunity set") and must select one.

The typical approach in economics

In economics, one would set up a utility function u(x) that represents your best guess at what a typical person thinks — how much "utility" (worth or value) they derive from each option. Let's say it looks like this:


You've thought about what people think (maybe using your own experience or various thought experiments) and you assign a value u(x) for each choice x. While I've made the above function overly simplistic, it's still assigning a value to each choice.

You would then set up an optimization problem over the choices, derive the first order conditions (which are basically the derivatives in the various dimensions you are considering), and find the maximum (i.e. the location of the zero of the derivatives, or a point on the boundary of your opportunity set).
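As a minimal sketch of what that looks like in code (the u(x) values here are invented for illustration and aren't the ones in the figure):

    import numpy as np

    # Hypothetical utility values for the 7 choices (invented for illustration)
    u = np.array([1.0, 2.0, 3.5, 4.5, 5.0, 4.0, 2.5])   # peaks at choice 5

    # "Maximizing utility" just means taking the argmax over the opportunity set
    choice = int(np.argmax(u)) + 1   # +1 because the choices are labeled 1..7
    print(choice)                    # -> 5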


That's the utility-maximizing point, and you find that your (often sole, "representative") agent selects choice 5. Often, 100% of the agents in your model (or 100% of a single representative agent) will select that choice. Everyone is the same. Sometimes you can have heterogeneous agents, and while each type of agent will make a different selection, every agent of a given type will make the same selection.

Of course we can allow error, and there are random utility discrete choice models [pdf] that effectively allow random choices among the various utility options such that in the end we have e.g. most people choosing 5 with a few 4's or 6's (for 1000 agents):


But basically the approach assumes that when confronted with a choice you are able to construct a really good model — a u(x) — of how a person will respond.
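A random utility (logit) version of that might look like the sketch below; the utilities and the "choice precision" beta are assumptions I've picked just to get mostly 5's with a few 4's and 6's:

    import numpy as np

    rng = np.random.default_rng(42)
    u = np.array([1.0, 2.0, 3.5, 4.5, 5.0, 4.0, 2.5])   # same illustrative utilities
    beta = 4.0                                           # assumed "choice precision"

    # Logit choice probabilities: p(x) proportional to exp(beta * u(x))
    p = np.exp(beta * u)
    p /= p.sum()

    # 1000 agents each draw one of the 7 choices
    choices = rng.choice(np.arange(1, 8), size=1000, p=p)
    print(np.bincount(choices, minlength=8)[1:])         # counts for choices 1..7

Most of the 1000 draws land on choice 5, with the remainder mostly on 4 and 6 because of the utilities I assumed.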

This of course sets up a problem known as the Lucas critique: if you change policy to try to exploit something you learned this way, people can adapt, making your original model — your original utility function — moot. For example, if you make option 5 illegal, the model as it stands says people will start choosing 4 or 6 in roughly equal numbers. But maybe agents will adapt and choose 2 instead?

The response to the Lucas critique is generally to get ever deeper inside people's heads — to understand not just their utility functions but how their utility functions will change in response to policy, to get at the so-called deep parameters also known as microfoundations.

The approach in information equilibrium

In the information equilibrium approach, when asked what a person will choose out of 7 options, you furrow your brow, look up to the sky, and then give one of these:

¯\_(ツ)_/¯

One agent will choose option 4 (with ex post probability 1):


Another will choose option 1:


If you ask that agent again, maybe they'll go with option 2 now:


Why? We don't know. Maybe they had medical bills come up between the two choices. Maybe that first agent really loves Bernie Sanders and Bernie said to choose option 4. Again:

¯\_(ツ)_/¯

If we have millions of people in an economy (here, only 1000), then you're going to get a distribution over the choices. And if you have no prior information about that choice (i.e. ¯\_(ツ)_/¯), then you're going to get a uniform distribution (with probability ~ 1/7 for 7 choices — about 14%):


In this case, economics becomes more about the space of choices, the opportunity set — not about what individual people are thinking. And the size of that opportunity set can be measured with information theory, hence information equilibrium (where we equate different spaces of choices). It turns out there is a direct formal mathematical relationship to the utility approach above, except that instead of utility being about what individuals value, it's about the size of that space of options.
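For concreteness, here's the information-theory bookkeeping for the uniform case; the entropy formula is standard, and the 1000-agent simulation is just an illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n_choices = 7

    # With no prior information, each choice carries log2(7) bits of information
    max_entropy_bits = np.log2(n_choices)                # ~2.81 bits per choice

    # 1000 agents choosing uniformly at random recover roughly that entropy
    choices = rng.integers(1, n_choices + 1, size=1000)
    p_hat = np.bincount(choices, minlength=n_choices + 1)[1:] / 1000
    empirical_entropy = -np.sum(p_hat * np.log2(p_hat))
    print(max_entropy_bits, empirical_entropy)           # both close to 2.81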

In the information equilibrium approach, we depend on two assumptions that set up the basis of equilibrium:

  1. The distribution (and it doesn't have to be uniform) is stable except for a sparse subset of times.
  2. Agents fully map (i.e. select) the available choices (again, except for a sparse subset of times).

The "sparse subset" is just the statement that we aren't in disequilibrium all the time. If we are, we never see the macroeconomic state associated with that uniform distribution and we can't get measurements about it. We have to see the uniform distribution for long enough to identify it. Agents also have to select the available choices, otherwise we'll miss information about the size of the opportunity set.

But information equilibrium also allows for non-equilibrium. Since we aren't making assumptions about how people think, they could suddenly all make the same choice, or be forced into the same choice. These "sparse non-equilibrium information events", or more simply "economic shocks", cause observables to deviate from their equilibrium values. The dynamic information equilibrium model (DIEM) makes some assumptions about what these sparse shocks look like (e.g. they have a finite duration and amplitude), and it gives us a pretty good model of the unemployment rate [1]:


Those 7 choices above are translated into this toy model as jobs in various sectors (with one "sector" being unemployment).
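Here's a rough sketch of the functional form behind a DIEM fit like the unemployment one above: a constant slope in log space (the dynamic equilibrium) plus a small number of logistic-shaped shocks. The parameter values are invented for illustration, not the actual unemployment fit:

    import numpy as np

    def logistic_shock(t, amplitude, center, width):
        """One 'sparse' shock: a smooth step in log space with finite size and duration."""
        return amplitude / (1.0 + np.exp(-(t - center) / width))

    def diem(t, log_u0, alpha, shocks):
        """Dynamic information equilibrium: constant log slope alpha plus a sum of shocks."""
        log_u = log_u0 + alpha * t
        for amplitude, center, width in shocks:
            log_u += logistic_shock(t, amplitude, center, width)
        return np.exp(log_u)

    t = np.linspace(0, 20, 241)   # years since 2000, roughly monthly resolution
    u = diem(t, log_u0=np.log(5.0), alpha=-0.09,
             shocks=[(0.35, 1.5, 0.3), (0.6, 8.8, 0.4)])   # two toy "recessions"
    print(u[::60])                # a few sample points of the toy "unemployment rate"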

This approach also gives us supply and demand (this is the connection to Gary Becker's 1962 paper Irrational Behavior and Economic Theory [pdf], see also here). We don't have 7 discrete choices here, but rather a two-dimensional continuum of bundles of two different goods (say, blueberries and raspberries) bounded by a budget constraint (black line). The average is given by the black dot. As the price of one good goes up, on average people consume less of it.
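Here's a sketch of that Becker-style argument with made-up prices and budget: agents pick bundles uniformly at random from the budget set, and raising the price of one good shrinks the set in that direction, so the average bundle contains less of it. You get a downward-sloping demand curve without any utility maximization.

    import numpy as np

    rng = np.random.default_rng(1)

    def mean_consumption(p1, p2, budget, n=100_000):
        """Average bundle of agents choosing uniformly at random from the budget set."""
        # Rejection sampling: draw from the bounding box, keep the affordable bundles
        x1 = rng.uniform(0, budget / p1, size=n)
        x2 = rng.uniform(0, budget / p2, size=n)
        ok = p1 * x1 + p2 * x2 <= budget
        return x1[ok].mean(), x2[ok].mean()

    # Raise the price of good 1 (say, blueberries) and its average consumption falls
    for p1 in (1.0, 2.0, 4.0):
        avg1, avg2 = mean_consumption(p1=p1, p2=1.0, budget=10.0)
        print(p1, round(avg1, 2), round(avg2, 2))
    # The average x1 for a uniform triangle is budget/(3*p1): ~3.33, ~1.67, ~0.83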


And again, people might "bunch up" in that opportunity set (i.e. make similar choices and not fully map it), and that gives us non-equilibrium cases where supply and demand fails:


In both of these "failures" of equilibrium (recessions, bunching up in the opportunity set), I am under the impression that sociology and psychology will be more important drivers than what we traditionally think of as economics [2].

But what about that "algorithmically" in parentheses in my original tweet? That's a reference to algorithmic complexity. The agents in the utility approach are not very algorithmically complex — they choose 5 either all the time or at least almost all the time:

{5, 5, 5, 5, 5, 5, 5, 4, 6, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5}

This could be approximated by a computer program that outputs 5 all the time. The agents in the information equilibrium approach are far more complex:

{4, 7, 3, 4, 1, 6, 6, 5, 4, 2, 3, 1, 1, 3, 6, 3, 4, 2, 2, 6}

As you make this string of numbers longer and longer, the only way a computer program can reproduce it is to effectively incorporate the string of numbers itself. That's the peak of algorithmic complexity — algorithmic randomness. That's what I mean when I say I treat humans as so complex they're random. No computer program could capture the set of choices a real human made except as a list of all those choices.
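A crude way to see the difference is to use compressed length as a stand-in for algorithmic complexity; this is a heuristic sketch, not a real Kolmogorov complexity calculation:

    import zlib
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    simple = bytes([5] * n)                                   # the utility-style agent
    complex_ = bytes(rng.integers(1, 8, size=n).tolist())     # the shrug-style agent

    # Compressed length is a (rough) upper bound on algorithmic complexity
    print(len(zlib.compress(simple)))     # tiny: effectively "print 5, n times"
    print(len(zlib.compress(complex_)))   # grows with n: you basically store the list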

In a sense, you can think of the two approaches, utility and information equilibrium, as starting from two limits of human complexity — so simple you can capture what they think with a function versus so complex that you can't do it at all. I imagine the truth is somewhere in between, but given the empirical failure of macroeconomics (I call getting beaten by an AR process a failure) it's probably closer to the complex side than the simple side.

And that approach turns economics on its head — instead of being about figuring out what choices people will make, it's about measuring the set of choices made by people.

...

Footnotes:

[1] That's been pretty accurate at forecasting the unemployment rate for three years now (click to enlarge, black is post-forecast data):


[2] In fact, I wrote a book about how the post-war evolution of the US economy seems to be more about social changes than monetary and fiscal policy. Maybe it's not correct, but it at least gives some perspective on how different macroeconomic analysis can be from the way it is conducted today.

Thursday, January 2, 2020

"It takes a model to beat a model"


I came across this old gem on Twitter (here), and Jo Michell sums it up pretty well in the thread:
It takes a model to beat a model has to be one of the stupider things, in a pretty crowded field, to come out of economics. ... I don’t get it. If a model is demonstrably wrong, that should surely be sufficient for rejection. I’m thinking of bridge engineers: ‘look I know they keep falling down but I’m gonna keep building em like this until you come up with a better way, OK?’
There are so many failure modes of the maxim "it takes a model to beat a model":

Formally rejecting a model with data. Enough said.

Premature declaration of "a model". More often than is optimal, various bits of math in econ are declared "models" before they have been shown to be empirically accurate. Now, empirical accuracy doesn't necessarily mean getting variables right to within 2% (although it can) — it can mean 10%, or even just getting the qualitative form of the data correct. I have two extended discussions on the failure to do this here (DSGE) and here (Keen). The failure mode here is that something (e.g. DSGE) is declared a model using a lower bar than is applied to, say, cursory inspection of data or linear fits.

Rejecting a model as useless even without formal rejection. I wrote about this more extensively here, but the basic idea is that a model a) can be way too complex for the data it's trying to explain (this inherently makes a model hard to reject, because as a good heuristic you need ~ 20 or so data points per parameter to make a definitive call, so you can always add parameters and say "we'll wait for more data"), or b) can give the same results as another model that is entirely different (either use Occam's razor, or just give one of these ¯\_(ツ)_/¯ to both models). The latter case can be seen as "a tie goes to no one". Essentially — heuristic rejection.

Rejecting a model with functional fits. Another one I've written more extensively about elsewhere, but if you have a complicated model with more parameters than a functional fit that more accurately represents the data, you can likely reject that more complicated model. One of the great uses of functional fits is to reduce the relevant complexity (relevant dimension) of your data set. Without any foreknowledge, the dimension d of a data set is on the order of the number n of data points (d ~ n) — the worst case is that you describe every data point with a parameter. However, if you can fit that data (within some error) to a function with k parameters where k < d, then you can (informally) reject any model that describes the same data set with p parameters (within the same error) where k < p < d as likely too complex. That functional fit doesn't even have to come from anywhere! (Note: this is effectively how quantum mechanics got its first leg up from Planck — lots of people were fitting the blackbody spectrum with fewer and fewer parameters until Planck gave us his one-parameter fit with Planck's constant.)
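As a toy version of that parameter-counting logic (entirely made-up data, just to illustrate the k < p < d bookkeeping): if a 2-parameter fit already describes 50 noisy points within the noise, a 6-parameter model that does no better is a candidate for informal rejection.

    import numpy as np

    rng = np.random.default_rng(3)

    # d ~ 50 data points generated from a simple underlying trend plus noise
    x = np.linspace(0, 10, 50)
    y = 2.0 + 0.5 * x + rng.normal(0, 0.3, size=x.size)

    def rms_residual(deg):
        """RMS error of a polynomial fit with (deg + 1) parameters."""
        coeffs = np.polyfit(x, y, deg)
        return np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))

    simple_fit = rms_residual(1)        # k = 2 parameters
    complicated_fit = rms_residual(5)   # p = 6 parameters
    print(round(simple_fit, 3), round(complicated_fit, 3))
    # If the p-parameter model doesn't beat the k-parameter fit within the noise
    # (sigma ~ 0.3 here), the extra parameters aren't buying any explanatory power.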

Failing to accept a model as rejected. One of the most maddening ways the "it takes a model to beat a model" maxim is deployed is by people who just don't accept that a model has been rejected or that another model outperforms it. This is more a failure mode of "enlightenment rationality" which assumes good faith argument from knowledgeable participants [1].

I make no particular argument that these represent an orthogonal spanning set (in fact, the 4th has non-zero projection along the 3rd). However, it's pretty clear that the maxim is generally false. In fact, it's pretty much the converse [2] of a true statement — if you have a better model, then you can reject a model — and as we all learned in logic the converse is not always true.

...

Update 14 January 2020

Somewhat related, there is also the idea that "there's always a least bad model" — to use Michell's analogy, there's always a least bad bridge. But there isn't. Sometimes there's just a shallow bit to ford.

Paul Pfleiderer takes on the compulsion to have something that gets called a "model" in his presentation here:


Making a theoretical argument with a model that isn't empirically accurate and relies on unrealistic assumptions is basically the same thing as making an empirical argument with made-up data.

My impression is that this compulsion is deeply related to "male answer syndrome" in the male-dominated field of economics.


...

Footnotes:

[1] Note that this is not necessarily a failure mode of science, which is a social process, but rather the application of that macro-scale social process to individual agents. Science does not require any agent to change their mind, only that on average at the aggregate level more accurate descriptions of reality survive over less accurate ones (e.g. Planck's maxim — people holding onto older ideas die and a younger generation grows up accepting the new ideas). The "enlightenment rationality" interpretation of this is that individuals change their minds when confronted with rational argument and evidence, but there is little evidence this occurs in practice (sure, it sometimes does).

[2] In logical if-then form, "it takes a model to beat a model" is: if you reject a model, then you have a better model.

Tuesday, December 24, 2019

Random odds and ends from December

I thought I'd put together a collection of some of the dynamic information equilibrium models (DIEMs) that only went out as tweets over the past couple weeks.

I looked at life expectancy in the US and UK (for all these, click to enlarge):


The US graph appears to show discrete shocks for antibiotics in the 40s & 50s, seatbelts in the 70s, and airbags in the 90s & 2000s, along with a negative shock for the opioid crisis. At least those are my best guesses! In the UK, there's the English Civil War (~ 1650s) and the British agricultural revolution (late 1700s). Again — my best guess.

Another long term data series is share prices in the UK:


Riffing on a tweet from Sri Thiruvadanthai, I made this DIEM for truck tonnage data — it shows the two phases of the Great Recession in the US (the housing bubble bursting and the financial crisis):


There's also PCE and PI (personal consumption expenditures and personal income). What's interesting is that the TCJA shows up in PCE but not PI — though that's likely due to the latter being a noisier series.


Here's a zoom in on the past few years:


Bitcoin continues to be something well-described by a DIEM, but with so many shocks it's difficult to forecast with the model:


We basically fail the sparseness requirement necessary to resolve the different shocks — the logistic function stair-step fails to be an actual stair-step:


A way to think about this is that the shocks show up in the slope of this time series as a bunch of Gaussian-like bumps. When they get too close to each other and overlap, it's hard to resolve the individual shocks.
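A quick sketch of that resolution problem, with arbitrary widths and amplitudes: two well-separated bumps in the slope show up as two peaks, but push the centers to within about a width of each other and the sum has only one peak, so the individual shocks can't be resolved.

    import numpy as np

    def gaussian(t, center, width):
        return np.exp(-0.5 * ((t - center) / width) ** 2)

    def n_peaks(separation, width=1.0):
        """Count the local maxima in the sum of two unit-amplitude 'shocks'."""
        t = np.linspace(-10, 10, 2001)
        y = gaussian(t, -separation / 2, width) + gaussian(t, separation / 2, width)
        interior = (y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])
        return int(interior.sum())

    print(n_peaks(separation=6.0))   # 2 -- the shocks are sparse enough to resolve
    print(n_peaks(separation=1.5))   # 1 -- the "stair-step" merges into a single ramp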

That's all for now, but I might update this with additional graphs as I make them — I'm in the middle of a terrible cold and distracting myself by fitting the various time series I come across.

Saturday, December 14, 2019

Dynamic equilibrium: consumer sentiment

I looked at the University of Michigan's consumer sentiment index for signs of dynamic information equilibrium, and it turns out to be generally well described by it in the region for which we have monthly data [1]:


The gray dashed lines are the dynamic equilibria. The beige bands are the NBER recessions, while the gray bands are the shocks to consumer sentiment. There might be an additional shock in ~ 2015 (the economic mini-boom) but the data is too noisy to clearly estimate it.

Overall, this has basically the same structure as the unemployment rate — and in fact the two models can be (roughly) transformed onto each other:



The lag is 1.20 y fitting CS to U and −1.24 y fitting U to CS, meaning that shocks to sentiment lead shocks to unemployment by about 14-15 months. This makes it comparable to the (much noisier) conceptions metric.
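The numbers above come from fitting the transformed DIEMs to each other. A different, simpler way to get a ballpark lead or lag between two series is plain cross-correlation; here's a sketch on purely synthetic data, not the actual sentiment and unemployment series:

    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic monthly series: "unemployment" is an inverted, noisy, 15-month-lagged
    # copy of "sentiment" (i.e. sentiment leads)
    months, true_lag = 600, 15
    signal = rng.normal(size=months + true_lag)
    sentiment = signal[true_lag:]
    unemployment = -signal[:months] + rng.normal(scale=0.5, size=months)

    def best_lag(leader, follower, max_lag=36):
        """Lag (in samples) maximizing |correlation| between leader and shifted follower."""
        lags = np.arange(1, max_lag + 1)
        corrs = [abs(np.corrcoef(leader[:-k], follower[k:])[0, 1]) for k in lags]
        return int(lags[np.argmax(corrs)])

    print(best_lag(sentiment, unemployment))   # recovers ~15 months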

Of course, this is not always true — in particular in the conceptions data the 1991 recession was a "surprise" and in the sentiment data the 2001 recession was a surprise. It's better to visualize this timing with an economic seismogram (that just takes those gray bands on the first graph and puts them on a timeline, colored red for "negative"/bad shocks and blue for "positive"/good shocks):


As always, click to enlarge.

Note that in this part of the data (and, as we'll see, the rest of the data), CS seems to largely match up with the stock market. I've added in the impossibly thin shock in the S&P 500 data in October of 1987 (the largest percentage drop in the S&P 500 on record, "Black Monday", a loss of ~ 20%), along with a boom right before it that looks a bit like the situation in early 2018. Previously, I'd left that shock out because it's actually very close to being within the noise (it's a positive and a negative shock that are really close together, so it's difficult to resolve and looks like a random blip).

If we subtract out the dynamic equilibrium for consumer sentiment and the S&P 500, and then scale and shift the latter, we can pretty much match them except for the period between the mid 70s and the late 90s:


Remarkably, that period is also when a lot of other stuff was weird, and it matches up with women entering the workforce. It does mean that we could just drop down the shocks from the S&P 500 prior to 1975 into the consumer sentiment bar in the economic seismogram above.
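Mechanically, that subtract-and-rescale step is roughly a log-linear detrend of each series followed by a least-squares scale and shift; here's a sketch on synthetic stand-ins, not the actual CS or S&P 500 fits:

    import numpy as np

    rng = np.random.default_rng(11)
    t = np.arange(500.0)

    # Synthetic stand-ins: two exponentially growing series sharing the same shocks
    shared = np.cumsum(rng.normal(size=t.size))
    log_cs = 0.0005 * t + 0.02 * shared + rng.normal(scale=0.01, size=t.size)
    log_sp = 0.0030 * t + 0.05 * shared + rng.normal(scale=0.02, size=t.size)

    def detrend(log_series, time):
        """Remove the log-linear 'dynamic equilibrium' trend."""
        slope, intercept = np.polyfit(time, log_series, 1)
        return log_series - (slope * time + intercept)

    cs_dev, sp_dev = detrend(log_cs, t), detrend(log_sp, t)

    # Least-squares scale and shift of the S&P deviations onto the sentiment deviations
    A = np.column_stack([sp_dev, np.ones_like(sp_dev)])
    (scale, shift), *_ = np.linalg.lstsq(A, cs_dev, rcond=None)
    print(round(scale, 3), round(shift, 3))   # scale ~ 0.02/0.05 = 0.4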

I don't know if anyone has looked at this specific correlation before over this time scale — I haven't seen it, and was a bit surprised at exactly how well it worked!

...

Update 22 December 2019

Noah Smith tweeted a bunch of time series of surveys, so I took the opportunity to see how well the DIEM worked. Interestingly, there may be signs of running into a boundary (either the 100% hard limit, or something more behavioral — such as the 27% 'crazification factor'). Click to enlarge as always. First, the Gallup poll asking whether now is a good time to get a quality job:


And here is the poll result for the question about the economy being the most important issue in the US:


Both of these series are highly correlated with economic measures — the former with the JOLTS job openings rate (JOR), the latter with the unemployment rate:

 

...

Footnotes:

[1] Since many shocks — especially for recessions & the business cycle — have durations on the order of a few months, if the data is not resolved at monthly or quarterly frequency then the shocks can be extremely ambiguous. As shown later in the post (the S&P 500 correlation), we can look at some of the other lower resolution data as well.