The title is a bit of a joke: information equilibrium basically assumes humans are pretty ignorant about the details of economic processes ‒ a manifestly realistic assumption, in my opinion. Anyway, I was reading Brad DeLong's blog post of contrition; it includes a lot of assumptions he came to re-think after the financial crisis. This inspired me to write down the assumptions that go into the information equilibrium approach generally, but specifically because of a particular point DeLong makes that I will use below.
First, information equilibrium between observable process variables A and B (e.g. GDP and total employment, supply of toilet paper and demand for toilet paper) is not just assumed. It is first shown to be an empirically accurate description in the past and present, and then assumed to hold in the future based on what is essentially Hume's uniformity of nature assumption. The Lucas critique is frequently brought up in this context, essentially asserting the opposite. However, the uniformity of nature assumption is bolstered by the generality of the assumptions underlying an information equilibrium relationship.
These assumptions are:
- We are generally ignorant of the micro processes behind the two observable process variables A and B that are in information equilibrium. We only assume that the micro processes fully explore their respective micro process state spaces, are uncorrelated, and represent a large number of selections from those state spaces.
- We are completely ignorant of the micro processes behind the transfer system of information from one process variable to the other.
The first assumption says that if rolling 6-sided dice generates the process variable A, then those dice land on every one of their six sides, each roll is independent of the others, and there are lots of dice.
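As a minimal sketch of that dice picture (the die count, number of rolls, and random seed here are illustrative choices, not part of the framework), we can check that a large number of independent rolls fully occupies the six-sided state space with roughly uniform frequencies:

```python
import random
from collections import Counter

random.seed(0)

N_ROLLS = 10_000  # "a large number of selections" from the state space
rolls = [random.randint(1, 6) for _ in range(N_ROLLS)]
counts = Counter(rolls)

# Full exploration: every face of the state space {1..6} is occupied.
print(sorted(counts.keys()))  # -> [1, 2, 3, 4, 5, 6]

# Uniformity: each face appears with empirical frequency near 1/6.
for face in range(1, 7):
    print(face, round(counts[face] / N_ROLLS, 3))
```

The point is only that nothing about any individual roll needs to be known; full exploration plus large numbers is enough to pin down the aggregate distribution.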
The first assumption is also what I mean when I sometimes say I assume humans are so complex, they can be treated as random. Randomly selecting states in a state space is not functionally different from complex agents fully exploring that state space via some unknown, algorithmically complex process.
The piece about being uncorrelated will raise some eyebrows for lots of reasons, but it isn't as unrealistic an assumption as it seems. First, because it really isn't assumed -- it just separates information equilibrium from non-ideal information transfer (below). And second, because what really matters is temporary correlation. A large fraction of people in the US go through a generally correlated lifestyle: they are born, go to school, get a job, and work for some period of time. Many schools in the US are on a schedule with summers off, which correlates some of our graduation dates (high school in May or June of year x, college in May or June of year x + 4). This is not the important correlation in the assumption above; the state space for that micro process is, in a sense, inaccessible.
The important (and regime-switching between ideal and non-ideal information transfer) uncorrelated behavior is that you and I don't buy toilet paper or sell a stock at the same time. If we do, that's important -- in the case of a stock, it can trigger a sell-off. Generally in equilibrium there are buyers and sellers of both toilet paper and stocks.
When I say uncorrelated, I mean agents (micro processes) are going about their business without regard to what a majority of other agents are doing. If they correlate, then we're in "information disequilibrium" (discussed below).
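A toy simulation of this regime switch might look like the following (the agent count, the 50/50 sell probability, and the ±1 buy/sell coding are hypothetical choices for illustration, not anything from the framework itself): independent agents produce buys and sells that roughly cancel, while correlated agents all act together and produce a sell-off:

```python
import random

random.seed(1)

N_AGENTS = 1000

def net_sales(p_sell, correlated):
    """Each agent sells (+1) or buys (-1). When correlated, every agent
    copies a single common draw instead of acting independently."""
    if correlated:
        common = 1 if random.random() < p_sell else -1
        return common * N_AGENTS
    return sum(1 if random.random() < p_sell else -1 for _ in range(N_AGENTS))

# Uncorrelated agents: buys and sells roughly cancel (equilibrium);
# the net is small relative to N_AGENTS.
print(abs(net_sales(0.5, correlated=False)))

# Correlated agents: everyone acts together -> a sell-off (or buying
# panic); the net equals N_AGENTS in magnitude.
print(abs(net_sales(0.5, correlated=True)))
```

In the uncorrelated case the net flow scales like the square root of the number of agents; in the correlated case it scales like the number of agents itself, which is the "information disequilibrium" regime.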
Dynamic information equilibrium
This is a special case of information equilibrium where the large number of micro processes and selections from the state space grows exponentially in the long run, except during a finite number of disequilibrium shocks. It is not so much assumed as empirically tested. The same assumptions as information equilibrium apply; their generality lends weight to the uniformity of nature assumption underlying the approach's forecasts.
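As a rough illustration of exponential growth interrupted by a finite number of shocks (the 5% growth rate, shock timing, and shock size below are made-up numbers, not estimates of anything), such a series is log-linear everywhere except at the shocks, so log-differences recover a constant growth rate outside them:

```python
import math

# Synthetic series: continuous growth at rate g, with one negative
# shock at t = 50 (all numbers hypothetical).
g, shock_t, shock_size = 0.05, 50, -0.3

series, level = [], 1.0
for t in range(100):
    level *= math.exp(g + (shock_size if t == shock_t else 0.0))
    series.append(level)

# Log-differences of the series; drop the one that spans the shock.
log_diffs = [math.log(b / a) for a, b in zip(series, series[1:])]
ordinary = [d for i, d in enumerate(log_diffs) if i + 1 != shock_t]

# Outside the shock, the growth rate is constant and equal to g.
print(round(sum(ordinary) / len(ordinary), 3))  # -> 0.05
```

The empirical question is then whether observed series really do look like a constant-growth trend punctuated by a small number of such shocks.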
Information disequilibrium (non-ideal information transfer)
Regarding that DeLong post, here is his point that inspired me to write down these assumptions:
... the discovery that the rating agencies had failed in their assessment of lower-tail risk to make the standard analytical judgment: that when things get really bad all correlations go to one.
In a sense, the information transfer framework operationalizes this observation as a "founding principle": when things get "bad", those micro processes correlate and information equilibrium fails. Generally speaking, there should be only a finite number of discrete "bad" periods, and they will show up as deviations from information equilibrium.
Again, it is not so much an assumption as a usefulness criterion: if things are persistently "bad", then information equilibrium isn't a useful framework.
I would like to point out that the general uniformity of nature assumption is essentially empty. If nature fails to be uniform in the particular way you thought it was uniform (i.e. your model fails), this does not in any sense disprove that some uniformity no one has thought of exists. That is to say, the lack of observation of a particular uniformity does not prove that no uniformity exists; therefore, assuming some uniformity exists cannot be disproved except via an exhaustive search over all possible uniformities.
Non-ideal information transfer lets us say a few things about these "bad" periods ‒ in particular, they will be bounded by the equilibrium solution.