Sunday, February 9, 2020

How should we approach how people think?

A common objection to the information equilibrium approach I've run into over the years is that economics is about incentives at the micro level, or about how people react to policy changes at the macro level. My snarky response to exactly that objection earlier today on Twitter was this:
The approach can be thought of as assuming people are too (algorithmically) complex for us to know how they respond to incentives, as opposed to [the] typical [economics] approach where you not only assume you know how individuals think but write down simplistic equations for it.
Let me expand on what this means in a (slightly) less snarky way. We'll set up a simple scenario where people are given 7 choices (the "opportunity set") and must select one.

The typical approach in economics

In economics, one would set up a utility function u(x) that represents your best guess at what a typical person thinks — how much "utility" (worth or value) they derive from each option. Let's say it looks like this:


You've thought about what people think (maybe using your own experience or various thought experiments) and you assign a value u(x) for each choice x. While I've made the above function overly simplistic, it's still assigning a value to each choice.

You would then set up an optimization problem over the choices, derive the first order conditions (which are basically the derivatives in the various dimensions you are considering), and find the maximum (i.e. the location of the zero of the derivatives, or a point on the boundary of your opportunity set).
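To make this concrete, here's a minimal sketch in Python (the quadratic u(x) is invented for illustration): over a discrete opportunity set, the first order conditions reduce to an argmax.

```python
# Hypothetical utility function over the 7 choices, peaking at x = 5
u = {x: -(x - 5) ** 2 for x in range(1, 8)}

# For a discrete opportunity set, "optimization" is just an argmax
choice = max(u, key=u.get)
print(choice)  # the representative agent selects 5
```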


That's the utility-maximizing point, and you find that your (often sole, "representative") agent selects choice 5. Often, 100% of agents in your model (or 100% of a single representative agent) will select that choice. Everyone is the same. Sometimes you can have heterogeneous agents, and while each type of agent will make a different selection, every agent of a given type will make the same one.

Of course we can allow error, and there are random utility discrete choice models [pdf] that effectively allow random choices among the various utility options such that in the end we have e.g. most people choosing 5 with a few 4's or 6's (for 1000 agents):
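A minimal sketch of a random utility model of the logit flavor, using the same made-up quadratic utility: adding standard Gumbel noise to each option's utility before taking the argmax yields mostly 5's with a few 4's and 6's.

```python
import collections
import math
import random

random.seed(0)

# Same hypothetical utility as before, peaking at choice 5
u = {x: -(x - 5) ** 2 for x in range(1, 8)}

def noisy_choice():
    # Adding i.i.d. standard Gumbel noise to each utility and taking the
    # argmax reproduces the multinomial logit choice probabilities
    return max(u, key=lambda x: u[x] - math.log(-math.log(random.random())))

counts = collections.Counter(noisy_choice() for _ in range(1000))
print(counts.most_common(3))
```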


But basically the approach assumes that when confronted with a choice you are able to construct a really good model — a u(x) — of how a person will respond.

This of course sets up the problem known as the "Lucas critique": if you change policy to try to exploit something you learned this way, people can adapt, rendering your original model — your original utility function — moot. For example, if you make option 5 illegal, the model as-is says people will start choosing 4 or 6 in roughly equal numbers. But maybe agents will adapt and choose 2 instead?

The response to the Lucas critique is generally to get ever deeper inside people's heads — to understand not just their utility functions but how their utility functions will change in response to policy, to get at the so-called deep parameters also known as microfoundations.

The approach in information equilibrium

In the information equilibrium approach, when asked what a person will choose out of 7 options, you furrow your brow, look up to the sky, and then give one of these:

¯\_(ツ)_/¯

One agent will choose option 4 (with ex post probability 1):


Another will choose option 1:


If you ask that agent again, maybe they'll go with option 2 now:


Why? We don't know. Maybe they had medical bills between the choices. Maybe that first agent really loves Bernie Sanders and Bernie said to choose option 4. Again:

¯\_(ツ)_/¯

If we have millions of people in an economy (here, only 1000), then you're going to get a distribution over the choices. And if you have no prior information about that choice (i.e. ¯\_(ツ)_/¯), then you're going to get a uniform distribution (with probability ~ 1/7 for 7 choices — about 14%):
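A quick simulation of that maximum-ignorance case, with the agent count and choices from the toy scenario: 1000 agents each pick uniformly at random among the 7 options, and every option ends up with roughly 1/7 of the agents.

```python
import collections
import random

random.seed(0)

# 1000 agents, each shrugging: a uniform random pick among 7 choices
choices = [random.randint(1, 7) for _ in range(1000)]
freq = collections.Counter(choices)

for x in range(1, 8):
    print(x, freq[x] / 1000)  # each share is near 1/7, about 0.143
```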


In this case, economics becomes more about the space of choices — the opportunity set — and less about what individual people are thinking. And the size of the opportunity set can be measured with information theory, hence information equilibrium (where we equate different spaces of choices). It turns out there is a direct formal mathematical relationship to the utility approach above, except instead of utility being about what individuals value, it's about the size of that space of options.
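As a sketch of how information theory measures the size of the opportunity set: the entropy of the maximum-ignorance (uniform) distribution over 7 options is log2(7) ≈ 2.8 bits, the most information a single choice can carry; it depends only on the number of options, not on what anyone values.

```python
import math

# Entropy of the uniform (maximum ignorance) distribution over 7 choices
p = [1 / 7] * 7
entropy = -sum(q * math.log2(q) for q in p)

print(entropy)       # log2(7) ≈ 2.807 bits, set by the size of
print(math.log2(7))  # the opportunity set alone
```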

In the information equilibrium approach, we depend on two assumptions that set up the basis of equilibrium:

  1. The distribution (and it doesn't have to be uniform) is stable except for a sparse subset of times.
  2. Agents fully map (i.e. select) the available choices (again, except for a sparse subset of times).

The "sparse subset" is just the statement that we aren't in disequilibrium all the time. If we are, we never see the macroeconomic state associated with that uniform distribution and we can't get measurements about it. We have to see the uniform distribution for long enough to identify it. Agents also have to select the available choices, otherwise we'll miss information about the size of the opportunity set.

But information equilibrium also allows for non-equilibrium. Since we aren't making assumptions about how they think, people could suddenly all make the same choice, or be forced into the same choice. These "sparse non-equilibrium information events", or more simply "economic shocks", cause observables to deviate from their equilibrium values. The dynamic information equilibrium model (DIEM) makes some assumptions about what these sparse shocks look like (e.g. they have a finite duration and amplitude), and it gives us a pretty good model of the unemployment rate [1]:


Those 7 choices above are translated into this toy model as jobs in various sectors (with one "sector" being unemployment).
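A toy sketch of the DIEM functional form with entirely made-up parameters: the log of the unemployment rate declines at a constant rate between shocks, and each shock adds a logistic step of finite duration and amplitude.

```python
import math

def diem_log_u(t, alpha=-0.09, c=-1.0, shocks=((8.8, 0.6, 0.35),)):
    # Toy dynamic information equilibrium model for log unemployment:
    # constant decline at rate alpha plus logistic (sigmoid) shocks,
    # each specified as (center, width, amplitude). Here t is years from
    # an arbitrary reference date; all parameter values are invented.
    val = alpha * t + c
    for t0, w, a in shocks:
        val += a / (1 + math.exp(-(t - t0) / w))
    return val

# Log unemployment declines between shocks and jumps up during one
print(diem_log_u(5.0), diem_log_u(8.0), diem_log_u(9.5), diem_log_u(12.0))
```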

This approach also gives us supply and demand (this is the connection to Gary Becker's 1962 paper Irrational Behavior in Economic Theory [pdf], see also here). We don't have 7 discrete choices here, but rather a 2-dimensional continuum between two different goods (say, blueberries and raspberries) bounded by a budget constraint (black line). The average is given by the black dot. As the price of one good goes up, on average people consume less of it.
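Becker's point can be sketched with a rejection-sampling toy model (all numbers made up): agents are spread uniformly over the triangular opportunity set under the budget constraint, and the average consumption of a good falls when its price rises, with no utility function anywhere in sight.

```python
import random

random.seed(0)

def avg_consumption(price1, price2=1.0, budget=10.0, n=10000):
    # Agents spread uniformly over the opportunity set
    # {(x1, x2) : price1*x1 + price2*x2 <= budget, x1, x2 >= 0},
    # sampled by rejection from the bounding box
    total, count = 0.0, 0
    while count < n:
        x1 = random.uniform(0, budget / price1)
        x2 = random.uniform(0, budget / price2)
        if price1 * x1 + price2 * x2 <= budget:
            total += x1
            count += 1
    return total / n

print(avg_consumption(1.0))  # average demand for good 1 at price 1
print(avg_consumption(2.0))  # lower average demand when its price doubles
```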


And again, people might "bunch up" in that opportunity set (i.e. make similar choices and not fully map it), which gives us non-equilibrium cases where supply and demand fail:


In both of these "failures" of equilibrium (recessions, bunching up in the opportunity set), I am under the impression that sociology and psychology will be more important drivers than what we traditionally think of as economics [2].

But what about that "algorithmically" in parentheses in my original tweet? That's a reference to algorithmic complexity. The agents in the utility approach are not very algorithmically complex — they choose 5 either all the time or at least almost all the time:

{5, 5, 5, 5, 5, 5, 5, 4, 6, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5}

This could be approximated by a computer program that outputs 5 all the time. The agents in the information equilibrium approach are far more complex:

{4, 7, 3, 4, 1, 6, 6, 5, 4, 2, 3, 1, 1, 3, 6, 3, 4, 2, 2, 6}

As you make this string of numbers longer and longer, the only way a computer program can reproduce it is to effectively incorporate the string of numbers itself. That's the peak of algorithmic complexity — algorithmic randomness. That's what I mean when I say I treat humans as so complex they're random. No computer program could capture a set of choices a real human made except a list of all those choices.
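The compressibility of those two choice sequences can be checked directly; this sketch uses zlib as a stand-in for the shortest program reproducing each string. The utility-style agent's choices compress to almost nothing, while the random agent's choices barely compress at all.

```python
import random
import zlib

random.seed(0)

# The simple agent: always chooses 5
simple = bytes([5] * 1000)
# The "shrug" agent: uniform random choices among the 7 options
complex_ = bytes(random.randint(1, 7) for _ in range(1000))

print(len(zlib.compress(simple)))    # a handful of bytes
print(len(zlib.compress(complex_)))  # close to the entropy bound
```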

In a sense, you can think of the two approaches, utility and information equilibrium, as starting from two limits of human complexity — so simple you can capture what they think with a function versus so complex that you can't do it at all. I imagine the truth is somewhere in between, but given the empirical failure of macroeconomics (I call getting beaten by an AR process a failure) it's probably closer to the complex side than the simple side.

And that approach turns economics on its head — instead of being about figuring out what choices people will make, it's about measuring the set of choices made by people.

...

Footnotes:

[1] That's been pretty accurate at forecasting the unemployment rate for three years now (black is post-forecast data):


[2] In fact, I wrote a book about how the post-war evolution of the US economy seems to be more about social changes than monetary and fiscal policy. Maybe it's not correct, but it at least gives some perspective of how different macroeconomic analysis can be from the way it is conducted today.

7 comments:

  1. I'm looking forward to digging into this one a bit more. It seems you're covering some familiar territory (i.e. Gary Becker), which is just fine with me. Also, I like animations!

    1. Thanks Tom. Mostly what's different here is the direct comparison between a utility maximization approach and information equilibrium along with comparing both in terms of algorithmic complexity.

  2. I love this post. Modelwise, I also believe that you can recover 'optimizing' aggregates from random individual behavior (for large N) through statistical mechanics, something like this
    https://en.wikipedia.org/wiki/Laplace%27s_method

    The aggregate optimizes some quantity not because all individuals are rational optimizers, but because this optimum dominates the expectation.

    I'm still looking for a way to frame this observation that might convince economists...

  3. You have a lot more confidence in academic publishing standards than I do, if you think this sort of problem only shows up in analyses by internet randos.

    Though perhaps things are a little better in economics than in other fields.

    1. I think you might have meant this comment for this post?

      https://informationtransfereconomics.blogspot.com/2020/02/leaning-over-backwards-health-care.html

      But I don't believe it's necessarily better in peer reviewed journals — the focus of the "internet rando" comment was that a) we have no idea what RCA's background is and he purposely hides his political motivations, and b) you'd think someone who had some stats and a reputation to keep might realize their mistake when it's pointed out that R^2 doesn't work that way — but an internet rando won't.

  4. Jason: “How should we approach how people think?”

    Good question. As I have said, different people think differently, even basing their thinking on different concepts, so who decides what thinking is approved? Perhaps to start, we should look at the answers different people give to specific questions. This would at least help figure out the range of thought and who is arguing with whom on each question.

    Jason: “… [the] typical [economics] approach where you not only assume you know how individuals think but write down simplistic equations for it …”

    The problem here is that “the typical economics approach” is the mainstream economics approach and the libertarian approach. It is not the Keynesian approach or the Marxist approach.

    Try this approach. Write down a relevant economic question of your choice. Write down a list of economic “schools of thought” in order from the political right to the political left. Write down the answer that each school of thought would give to your question. Add in your answer. Discuss the results.

    For example.

    Q: Is thinking based on a representative individual central to macroeconomics?

    A:

    Libertarian: Yes
    Monetarist: Yes
    New Keynesian (Mainstream): Yes
    Post Keynesian: No
    MMT: No
    Marxist: No
    Jason: No

    For this example, I will add in my answer:
    Jamie: No

You need to understand how other people would answer your question before you criticise them. Note that you are questioning mainstream economics here – not economics in general. Many heterodox economists talk about "methodological individualism" in a disparaging tone. In essence, they are making the same argument as you, using different words.

Yet one of your most common conclusions is that we should listen to the likes of Noah Smith (from the mainstream) when he dismisses the heterodox even though, in this case, your argument is with Noah Smith and the mainstream, and the heterodox people are on your side. That makes no sense, especially as your approach is itself heterodox – as all "heterodox" means is anything outside the mainstream.

    One of the most interesting questions in economics for me is “why did the New Keynesians and the Post Keynesians fall out when they are both, nominally, in the Keynesian tradition”? One of the prime reasons was methodological individualism. Basically, the New Keynesians got into bed with the libertarians (who are fundamentalist individualists) in an attempt to create a “science” that covered multiple schools of thought.


Comments are welcome. Please see the Moderation and comment policy.

Also, try to avoid the use of dollar signs as they interfere with my setup of mathjax. I left it set up that way because I think this is funny for an economics blog. You can use € or £ instead.
