Friday, March 18, 2016

The irony of microfoundations fundamentalism

This post from Daniel Little on Joshua Epstein gets at the assumptions behind approaches that rely exclusively on agent-based modeling:
... the over-reach of the ABM camp comes down to this: the claims of exclusivity and general adequacy of the simulation-based approach to explanation. ABM fundamentalists claim that only simulations from units to wholes will be satisfactory (exclusivity), and they claim that ABM simulations can always be designed for any problem that are generally adequate to grounding an explanation (general adequacy). Neither proposition can be embraced as a general or universal claim.

I am fine with agent-based modeling in general (I don't want to exclude any approach), but I not only doubt that it is the only productive approach; I seriously doubt it would ever lead to any result where the agents actually matter. Little misses this latter argument, which applies not only to simulations but also to strict requirements for microfoundations.

I've stated this argument many times, most directly here. In general, if you have a microstate model with a million complex agents, each with a thousand parameters, you have a billion-dimensional microstate problem (m ~ 1,000 × 1,000,000 = 10⁹). There are three possible outcomes:

The macrostate is a billion-dimensional problem (M ~ m)
The macrostate is a bit simpler (M < m)
The macrostate is a much smaller problem (M << m)
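The M << m case is easy to see in a toy simulation (a hypothetical sketch, not from the post; the agent counts are scaled down and the "aggregate output" function is an assumption for illustration): two completely different microstates of a large agent population yield nearly identical macro observables, because the aggregate only "sees" one dimension of each agent.

```python
import numpy as np

rng = np.random.default_rng(0)

# scaled-down stand-in for 10^6 agents x 10^3 parameters
n_agents, n_params = 100_000, 100
m = n_agents * n_params  # microstate dimension: 10^7 here

def macro_observable(micro):
    # a toy aggregate: each agent contributes via just one of its
    # parameters, so the other 99 are irrelevant to the macrostate
    return micro[:, 0].mean()

# two entirely different microstates...
a = rng.normal(1.0, 0.5, size=(n_agents, n_params))
b = rng.normal(1.0, 0.5, size=(n_agents, n_params))

# ...produce nearly the same macro observable (M << m)
print(abs(macro_observable(a) - macro_observable(b)))
```

The averaging concentrates the macro observable to within ~1/√n of its expected value, so almost all of the 10⁷ microstate dimensions wash out.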

Epstein, the ABM fundamentalist, says this is the fundamental question of ABM (requoted from Little's post):
To the generativist, explaining macroscopic social regularities, such as norms, spatial patterns, contagion dynamics, or institutions requires that one answer the following question: How could the autonomous local interactions of heterogeneous boundedly rational agents generate the given regularity?

If the dimension of the macrostate space is M ~ m, how did you find these macroscopic regularities? Not only is the search space large, but the very idea of a macro regularity, say X ~ f(Y), is that you've reduced the dimension of your state space. Instead of X, Y, and Z, you only have Y and Z because X = f(Y). The existence of regularities implies dimensional reduction.

If the dimension of the problem is reduced at the macro scale (i.e., there exist macro regularities), then M < m and some dimensions of the microstate don't matter for the macrostate. That the macrostate is tractable at all suggests it's not just M < m, but M << m. And if you've already ceded M < m, why can't you allow M << m? And if M is not much smaller than m, how are you ever going to have a tractable model?

...

Let me try a Socratic dialog between two characters: Micro, an agent-based or microfoundations fundamentalist, and Macro, someone who is not.

Micro: You have to have agents to explain macroscopic regularities.

Macro: So you've found some regularities?

Micro: Yes -- in my own work on ethnic and civil conflicts, and there are regularities in macroeconomics like Okun's law.

Macro: How did you ever find macro regularities if the search space is billion dimensional?

Micro: What? A billion dimensions?

Macro: For argument's sake, say we have a high fidelity simulation with a million agents with a thousand parameters. That's a billion dimensions.

Micro: Well, there are some behavioral simplifications in the agents themselves that get at the heart of what we are trying to study. They don't have a thousand parameters.

Macro: So there is some dimensional reduction at the micro scale?

Micro: There has to be in order to make the simulations tractable -- we're not going to simulate a million fully intelligent agents.

Macro: So you've found some dimensional reduction in the problem. Why can't you find it at the macro scale?

Micro: I didn't say that.

Macro: Yes, you did. You said it when you said you have to have agents to explain macro regularities. That implies that you can't find the dimensional reduction at the macro scale.

Micro: The dimensional reduction at the micro scale leads to dimensional reduction at the macro scale as well.

Macro: Then I should be able to discover regularities at the macro scale without having a microfounded model?

Micro: No, um ... well, yes you can discover them but you don't really know how they work without agents.

Macro: But you said you make dimensional reductions at the micro scale through assumptions.

Micro: You have to, in order to make the models tractable.

Macro: Well, then how do you know that the reduced micro space you've selected out of the full micro space (for tractability and other reasons) maps precisely onto the reduced macro space, the subset of the full macro space that represents the observed macro regularities?

Micro: ... um ...

[Ed. note: I can't actually think of any coherent response to this question besides "You're right; I can't know that."]

Macro: One possible way you can be confident this works is if much of the micro state space leads to the same subset of the macro space represented by the regularities.

Micro: ... but that means agents don't matter ...

Macro: Isn't it ironic, don't you think? A possible way that you can be confident that your modeling choices for your agent-based simulations don't impact the macro outcome is if your modeling choices, and hence your agents, don't matter?

...

Update

1. Nice! A few typos perhaps?

"with [a] million agents with a thousand parameters. That's [a] billion dimensions."

"Well, then how to you know" should be "Well, then how do you know"

"Isn't [it] ironic, don't you think?"

Jason, is it possible you could predict the outcome of some of these large million agent models with a formulation of IE designed to do so? That might make for an interesting paper right there.