## Monday, June 30, 2014

### Output and price level behavior across several economies

I thought I'd aggregate the price level vs monetary base data (which I've shown before at this link) and the nominal output vs monetary base data into graphs, and compare them with their expected behavior in the information transfer model (using 100 aggregated economies, each built from 100 random markets, based on the results at this link).

The result is pretty remarkable:

Note that this not only reproduces the relative curvatures of the theoretical graphs, but also their variance (the nominal output vs monetary base curve being tighter than the price level vs monetary base curve). Another interesting aspect is that the overall behavior only becomes apparent when looking at the time series for many countries.

This model unifies "the Great Stagnation"/"secular stagnation" and "neo-Fisherite" models in a larger synthesis. The slowing of economic growth and prolonged low interest rates leading to lower inflation are two facets of the same general behavior of economies. The concept of convergence (smaller economies experiencing "catch-up" growth) also falls under this general behavior.

### Hard core information transfer economics

In his post from yesterday, Noah Smith had a link that reminded me of something from the philosophy of science. It inspired me to lay out the "hard core" of the information transfer economics research program, since it is fairly simple:

1. Demand is a source of information that is transferred to the supply, and a price is a detector of information transfer.

2. The dynamics of supply, demand and the price are governed by the differential equation (and definition):

$$p \equiv \frac{dD}{dS} = \frac{1}{\kappa} \; \frac{D}{S}$$

There are two approaches to macroeconomics I've been taking. One is that macro is just like micro: you can write down aggregate demand and aggregate supply functions just like you'd do for a single good market. The second is that macro is the sum of micro: i.e. macroeconomic observables are expectation values of microeconomic observables in an ensemble of micro markets. The former may be a good approximation to the latter within some realm of validity. The first approach is an "auxiliary hypothesis" in the Lakatosian sense. The second adds some assumptions around the partition function as auxiliary hypotheses.

The idea of a changing $\kappa$ in the price level is another "auxiliary hypothesis" (this has some support from the sum of micro approach). The idea that the "price" of a treasury bond with interest rate $r$ is $r^{c}$ is another auxiliary hypothesis.

## Saturday, June 28, 2014

### Is the supply curve flat?

Answering the question in the title, no, but during the course of writing the past few posts, I'd looked at the wikipedia article on general equilibrium. I saw this random bit about Sraffa:
> Anglo-American economists became more interested in general equilibrium in the late 1920s and 1930s after Piero Sraffa's demonstration that Marshallian economists cannot account for the forces thought to account for the upward-slope of the supply curve for a consumer good.
What follows is a just-so story about the mechanism by which changes in output should affect the price. It appears as though Sraffa's argument entirely ignores the premise of Marshallian supply and demand diagrams (there is a single good in a single market) by asserting that there are other goods and factors of production. The use of "first order" in the wikipedia explanation of the mechanism is also pretty laughable, since Sraffa didn't include a single equation, much less any scales we could use to say that anything is "first order". I tried to read Sraffa but was instantly filled with a grim sense of philosophers talking about what change is. I then Googled around and found this, which helped a bit. I concluded that information theory is the proper way to deal with the entire situation.

What are we talking about when we draw supply and demand curves anyway? Let's go back to the beginnings of this blog. Supply (S) and demand (D) are related by the equation:

$$\text{(1) }P = \frac{dD}{dS} = \frac{1}{\kappa} \; \frac{D}{S}$$

where P is the price. The general solution (general equilibrium) to this is:

$$\text{(2) } \frac{D}{D_{ref}} = \left( \frac{S}{S_{ref}} \right)^{1/\kappa}$$
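As a quick numerical check (mine, not from the post), here is a sketch verifying that the general solution (2) satisfies equation (1), using the illustrative values $\kappa = 2$ and $D_{ref} = S_{ref} = 1$:

```python
import numpy as np

# Illustrative values (my choice, not from the post): kappa = 2, D_ref = S_ref = 1
kappa, D_ref, S_ref = 2.0, 1.0, 1.0

S = np.linspace(0.5, 2.0, 1001)
D = D_ref * (S / S_ref) ** (1.0 / kappa)   # general equilibrium solution (2)

# Equation (1) says dD/dS should equal (1/kappa) D/S; check by finite differences
dD_dS = np.gradient(D, S)
P = D / (kappa * S)
assert np.max(np.abs(dD_dS[1:-1] - P[1:-1])) < 1e-3
```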

A demand curve is what you get when you look at the partial equilibrium by holding demand constant $D = D_{0}$ (an "exogenous" constant information source) and relating it back to the price to get a pair of equations:

$$P = \frac{1}{\kappa} \; \frac{D_{0}}{\langle S \rangle}$$

$$\Delta D \equiv D - D_{ref} = \frac{D_{0}}{\kappa} \log \frac{\langle S \rangle}{S_{ref}}$$

Symmetrically, a supply curve is what you get when you look at the partial equilibrium by holding supply constant (an "exogenous" constant information destination)

$$P = \frac{1}{\kappa} \; \frac{\langle D \rangle}{S_{0}}$$

$$\Delta S \equiv S - S_{ref} = \kappa S_{0} \log \frac{\langle D \rangle}{D_{ref}}$$

What are the angle brackets for? They're there to remind us that the variable is "endogenous" (the expected value inside the model) while the other variable is exogenous. These angle bracket variables are important to the discussion above because they parameterize our position along a supply or demand curve. Here are the supply (red) and demand (blue) curves along with the "fully endogenous" general equilibrium solution (gray dotted):
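The three curves can be sketched numerically. Here is a minimal version, assuming the illustrative values $\kappa = 1$ and all reference/exogenous quantities equal to 1 (my choices, not the post's):

```python
import numpy as np

# Illustrative parameters (my choices, not the post's): kappa = 1, references = 1
kappa, D0, S0, D_ref, S_ref = 1.0, 1.0, 1.0, 1.0, 1.0

# Demand curve: hold demand fixed at D0, parameterize by the endogenous <S>
S_end = np.linspace(0.5, 2.0, 50)
P_demand = D0 / (kappa * S_end)                 # P = D0 / (kappa <S>)
delta_D = (D0 / kappa) * np.log(S_end / S_ref)  # Delta D along the demand curve

# Supply curve: hold supply fixed at S0, parameterize by the endogenous <D>
D_end = np.linspace(0.5, 2.0, 50)
P_supply = D_end / (kappa * S0)                 # P = <D> / (kappa S0)
delta_S = kappa * S0 * np.log(D_end / D_ref)    # Delta S along the supply curve

# "Fully endogenous" general equilibrium (the gray dotted curve)
S_ge = np.linspace(0.5, 2.0, 50)
D_ge = D_ref * (S_ge / S_ref) ** (1.0 / kappa)
P_ge = D_ge / (kappa * S_ge)

# Demand curve slopes down in price, supply curve slopes up
assert P_demand[0] > P_demand[-1] and P_supply[0] < P_supply[-1]
```

With $\kappa = 1$ the general equilibrium price is constant, which makes the contrast with the sloped partial equilibrium curves easy to see.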

The demand curve seems to make intuitive sense -- at least at first. If the price goes up, the quantity of a good demanded goes down. But you get into trouble if you try this reasoning the other way: if the price goes down, the quantity demanded goes up? Maybe. Maybe not. If you weren't getting enough at the current price, it might. That depends on your utility, though. Sometimes this is referred to as the diminishing marginal utility of a good: the price you are willing to pay goes down the more widely available a good is. But there we've gone and switched up the independent variable again. The first half (effect of a price change) looks at price as independent, while the second half (diminishing marginal utility) looks at $\langle S \rangle$ as the independent variable [1].

Both of these get the partial equilibrium analysis in the wrong order mathematically. What we have is an exogenous (independent) change in demand. If demand increases and is satisfied (which we assume by taking $\langle S \rangle$ as endogenous/dependent), the price goes down.

This seems like a totally non-intuitive way to think about it. What is really going on here?

What we have is a system in contact with a "demand bath" [2] (or better yet, an "aggregate demand bath"). You could also call it an "information bath". If that bath didn't exist, adding (satisfied) demand would make the price go up, per equation (2) above (and it would move along the gray dotted curve in the figure above). What the bath is doing is sucking up any extra demand (source information) that we are creating by moving along the demand curve, so that "demand" is the same before and after the shift. Since there is "no change" in demand (any change is mitigated by the bath), the next variable in the chain, $\langle S \rangle$ effectively becomes the changing independent variable. This means the explanation is that decreasing/increasing the supply increases/decreases the price at constant demand (in the presence of a demand bath).

So why does the supply curve slope upwards? This time we're in contact with a "supply bath", so "the supply" is basically the same after we move along the curve. Moving along the supply curve is a change in $\langle D \rangle$. This means the explanation is that decreasing/increasing the demand decreases/increases the price at constant supply (in the presence of a supply bath).

Therefore there is no actual change in the supply along a supply curve so there is no bidding up factors of production or lack thereof per Sraffa. What we're doing is increasing the demand, so the price goes up.

That is to say supply and demand curves are kind of misnomers. A supply curve is the behavior of the price at constant supply, but is parameterized by increasing or decreasing demand. A demand curve is the behavior of the price at constant demand, but is parameterized by increasing or decreasing supply. [3]

[1] Yes, economists take price to be the independent variable, but in the formulation above it is more natural that either supply or demand (or both) are the independent variable(s). The price is the derivative of the demand with respect to the supply (the marginal change in demand for a marginal change in supply).

[2] This whole description is based on an isothermal expansion/compression of an ideal gas in contact with a thermal bath.

[3] Shifts "of" the supply and demand curves (as opposed to shifts "along" them) are effectively changes in the information bath.

## Friday, June 27, 2014

### Towards Arrow-Debreu-McKenzie equilibrium, part N of N

After the results of this post on the macroeconomic partition function, I'm abandoning Arrow-Debreu-McKenzie equilibrium. Not because it is hard, but because it is likely meaningless for macroeconomics.

Let's look at what the ADM equilibrium says with regards to a partition function in thermodynamics. It effectively says there exists some set of occupation numbers such that the energy of the system is the total energy, or more generally, that there exists a microstate consistent with an observed macrostate. The SMD theorem then tells us that only limited properties of that microstate survive to the macrostate. In some sense, the SMD theorem should be intuitive: if you have a system with N degrees of freedom that is described by n << N degrees of freedom at the macro scale, then the subset of the properties of the N degrees of freedom that survive as properties of the n degrees of freedom is likely to be small.

The other consequence of the SMD theorem should also be intuitive. If your macro system appears to be described by n << N degrees of freedom, then it seems highly likely that among the total number of microstates, large subsets of the microstates are going to be described by a given macro state -- i.e. the equilibrium (the microstate satisfying macro constraints) is not going to be unique. For example, in an ideal gas, you can reverse the direction of the particle velocities and obtain another equilibrium (actually, all spatial, rotational and time-reversal symmetries lead you to other equilibria).

The reason economists think ADM is useful is probably due to their obsession with initial endowments. The ADM theorem goes part way to answering the question: which set of prices lets households and firms reach their final desired endowments given their initial endowments? The theorem says that there exists a set of prices that does that, and that is good to know! But these prices clear the markets in period two and then they've finished their job [1]. This is a bit like worrying about how the energy gets redistributed to each atom in a gas when two gases are mixed.

In an economy, the equilibria are more restricted than energies among the atoms in a gas and it's not trivial to show that they exist (or that they are Pareto efficient). I'm not knocking ADM. However, the existence seems meaningless for a real economy. As soon as a new product is invented, you're heading to another equilibrium. As soon as someone gets paid, you're heading for another equilibrium (if that someone would like to have more goods and services instead of holding cash). In reality, there may be a detailed balance that keeps the equilibria in an equivalence class described by e.g. a given NGDP growth rate. But that's the rub! Macroeconomics is the study of the behavior of those equivalence classes, not the instances of them!

That is to say macroeconomics is the study of the properties of ensemble averages (equivalence classes of microstates). Or another way, what we're interested in is:

$$\langle P \rangle = \langle a m^{a-1}\rangle = \frac{\sum_{i} a_{i} m^{a_{i}-1} e^{-a_{i} \log m}}{\sum_{i} e^{-a_{i} \log m}}$$

$$= \frac{\sum_{i} a_{i} m^{-1}}{\sum_{i} e^{-a_{i} \log m}} = \frac{1}{m} \frac{\sum_{i} a_{i}}{Z(m)}$$

not the particular configuration of the $i^{th}$ market.
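The simplification in the last step ($m^{a_{i}-1} e^{-a_{i}\log m} = 1/m$) is easy to check numerically. A sketch with an arbitrary random draw of the $a_{i}$ (my choices of seed, range, and $m$):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.0, 2.0, 100)   # random market exponents (illustrative choice)
m = 3.0                          # money supply in arbitrary units

Z = np.sum(np.exp(-a * np.log(m)))                  # partition function Z(m)
# Direct ensemble average of the operator a m^(a-1):
P_direct = np.sum(a * m**(a - 1.0) * np.exp(-a * np.log(m))) / Z
# Simplified form: m^(a-1) e^(-a log m) = 1/m, so <P> = (1/m) sum(a) / Z
P_simplified = np.sum(a) / (m * Z)

assert abs(P_direct - P_simplified) < 1e-12
```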

This is not to say the individual configurations are meaningless in general. You might have a very small number of markets. You might have a strongly interacting system. You might care about the effect of some policy or other in a particular market. But inasmuch as you are studying macroeconomics, the existence of an ADM equilibrium does not help you reach understanding.

[1] Footnote added 10/3/2014: David Glasner quotes Franklin Fisher: "To only look at situations where the Invisible Hand has finished its work cannot lead to a real understanding of how that work is accomplished.", which is similar to my sentiment.

### The macroeconomic partition function and the information transfer index

I've made some remarks in the past about how analogies between physics and economics only have merit inasmuch as they are useful. You might think: here he goes down the rabbit hole with partition functions. And that may be so. However, this is going to be a very useful analogy. Continuing from this post where I realized that the sum (defining nominal output or NGDP)

$$\text{(1) } N(m) =\sum_{i} n_{i} = \sum_{i} m^{a_{i}} = \sum_{i} e^{a_{i} \log m}$$

had the form of a partition function

$$Z(\beta) = \sum_{i} e^{-\beta E_{i}}$$

If you check out the link to the previous post, you'll notice there is a change to the first equation (I traded $-\log 1/m$ for $\log m$). The reason is that $N(m)$ should be the expectation value of an operator, not the partition function itself. I'll define the macroeconomic partition function to be:

$$\text{(2) } Z(m) \equiv \sum_{i} \frac{1}{n_{i}} = \sum_{i} m^{-a_{i}} = \sum_{i} e^{-a_{i} \log m}$$

Where the $n_{i}$ are the demands in the individual markets and $m$ is the money supply (it doesn't matter which aggregate at this point). The individual markets are the solutions to the equations:

$$\text{(3) }\frac{d n_{i}}{d m} = a_{i} \frac{n_{i}}{m}$$

as was shown in this post. One interesting thing is that the defining quality of these micro markets -- equation (3), which leads to supply and demand diagrams -- is homogeneity of degree zero in the supply and demand functions, one of the few properties that survive aggregation in the Sonnenschein-Mantel-Debreu theorem (see my notes here).

The expectation value of the exponent $a_{i}$ is:

$$\langle a \rangle = -\frac{\partial \log Z(m)}{\partial \log m} = \frac{\sum_{i} a_{i}e^{-a_{i} \log m}}{\sum_{i} e^{-a_{i} \log m}}$$

which is related to the information transfer index $\kappa = 1/\langle a \rangle$. Additionally, the nominal economy will be the number of markets $N_{0}$ [1] times the expectation value of an individual market $m^{a_{i}}$, i.e.

$$\langle N(m) \rangle = N_{0} \frac{\sum_{i} m^{a_{i}} e^{-a_{i} \log m}}{\sum_{i} e^{-a_{i} \log m}}$$

$$= N_{0} \frac{\sum_{i} 1}{\sum_{i} e^{-a_{i} \log m}} = \frac{N_{0}^{2}}{Z(m)}$$

First there is an interesting new analogy with thermodynamics: $\log m$ is playing the role of $\beta = 1/kT$. As $m$ gets larger the states with higher $a_{i}$ (high growth markets) become less probable, meaning that a large economy (with a large money supply) is more like a cold thermodynamic system. As an economy grows, it cools, which leads to slower growth ("the great stagnation" or "secular stagnation") and as we shall see a bending of the price level vs money curve (low inflation in economies with large money supplies).

Let's take $N_{0} =$ 10 (left) and 100 (right) random markets with uniformly distributed $a_{i} \in [0,2]$ (this basically allows any $\kappa \geq 1/2$, see for instance here) and plot the information transfer index $1/\langle a \rangle$, the price level $\langle a m^{a -1}\rangle$ and the nominal output $\langle N(m) \rangle$:
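A sketch of this computation (my own reimplementation; the seed and evaluation points are illustrative, and the plots themselves are omitted):

```python
import numpy as np

rng = np.random.default_rng(42)   # arbitrary seed
N0 = 100
a = rng.uniform(0.0, 2.0, N0)     # uniformly distributed a_i in [0, 2]

def ensemble(m):
    """Return (kappa, price level, nominal output) at money supply m."""
    w = np.exp(-a * np.log(m))            # Boltzmann-like weights e^(-a_i log m)
    Z = w.sum()                           # partition function Z(m)
    a_avg = np.sum(a * w) / Z             # <a> = -d log Z / d log m
    P = np.sum(a * m**(a - 1.0) * w) / Z  # price level <a m^(a-1)>
    N = N0**2 / Z                         # <N(m)> = N0^2 / Z(m)
    return 1.0 / a_avg, P, N

kappa_small, _, _ = ensemble(1.01)
kappa_large, _, _ = ensemble(100.0)
# kappa = 1/<a> rises with the money supply: the economy "cools" as it grows
assert kappa_large > kappa_small
```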

In the first pair of graphs we can see the economies start out well described by the quantity theory ($\kappa = 1/2$) and move towards higher $\kappa$ as the money supply increases. However, the first part of that observation is because the lowest value of $\kappa$ allowed was 1/2 (i.e. $a_{max} = 2$); the move toward higher $\kappa$ is a more robust observation. We can see the bending of the price level vs money supply in the second pair of graphs. In the third pair of graphs, we can see the trend towards lower growth relative to the growth in the money supply.

The question I had was: how well does this oversimplified picture work with real data? We have $N_{0}$ markets of equal size at $m = 1$ (arbitrary units), so how well could it possibly work? Pretty well, actually. In fact, after normalizing the price level and scaling the money supply, the function $P = \langle a m^{a-1} \rangle$ almost exactly matches "the" information transfer model I've been using, which incorporates an information transfer index that was only "motivated" by information theory, not actually derived from it. In the current post, $\kappa$ arises out of the individual microeconomic markets.

Here is the price level with the basic information transfer model and the partition function version:

There is only a small deviation in the 1960s. Finally, here is the information transfer index:

The green line represents "data", i.e. assuming the functional form of $P(NGDP, M0)$, the value of $\kappa$ that gives us the exact CPI.

With the partition function approach, we can see that reduced inflation with a large money supply (a thermodynamically colder system) and reduced growth are emergent properties. They do not exist for the individual markets. An economy with a larger money supply is more likely to be realized as a large number of lower growth states (higher entropy) than a smaller number of high growth states. This approach also resolves some of the issues at the end of this post (the relationship between $\kappa$ and the exponents $a_{i}$).

[1] In the future, I might want to try to treat an economy where the number of markets is changing -- much like the case of particle number in a quantum field.

## Thursday, June 26, 2014

### Random markets and partition functions

Continuing from this post, I realized that the sum (defining nominal output or NGDP)

$$N(m) =\sum_{i} n_{i} = \sum_{i} m^{a_{i}} = \sum_{i} e^{- a_{i} \log 1/m}$$

has the form of a partition function

$$Z(\beta) = \sum_{i} e^{- \beta E_{i}}$$

Where $\beta = 1/k T$ corresponds to $\log 1/m$ and the energies of the states $E_{i}$ correspond to the $a_{i}$; the Boltzmann factor corresponds to the maximum entropy probability distribution. If we take that analogy at face value, the expected value of the random $a_{i}$ with maximum entropy would be

$$\langle a \rangle = - \frac{\partial \log Z(\beta)}{\partial \beta} = \frac{\sum_{i} a_{i} m^{a_{i}}}{\sum_{i} m^{a_{i}}}$$

Since $N \sim m^{\langle a \rangle}$, we now have an exponent that varies with $m$ -- exactly what we have observed with the exponent $\kappa(N, M)$! (see here or here). The resulting function $N \sim m^{\langle a \rangle}$ now overestimates the result of adding the markets together. Here are the results for uniformly distributed $a_{i}$ where $a_{i} \in [0,1]$, $[0,2]$ and $[0,4]$ (the plot of $\langle a \rangle$ appears alongside the corresponding graph of $N \sim m^{\langle a \rangle}$):
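The comparison can be reproduced along these lines (a sketch; the seed and the choices $a_{i} \in [0,2]$, $m = 10$ are mine):

```python
import numpy as np

rng = np.random.default_rng(1)     # arbitrary seed
a = rng.uniform(0.0, 2.0, 1000)    # 1000 random market exponents

m = 10.0
Z = np.sum(m ** a)                 # partition function with beta = log(1/m)
a_avg = np.sum(a * m ** a) / Z     # <a> = -d log Z / d beta

N_sum = Z                          # direct sum N(m) = sum_i m^(a_i)
N_approx = len(a) * m ** a_avg     # N ~ m^<a>, normalized so both equal 1000 at m = 1

# The m^<a> form overestimates the direct sum away from the reference point
assert N_approx > N_sum
```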

One thing to note: in statistical mechanics, higher temperature means that more of the higher energy states are occupied, while the observation in economics appears to be that higher $m$ (money supply) means that more of the lower $a_{i}$ states are occupied in order to produce this figure. I'll have to look into this a little more to fully understand how it works. However, this may be a pretty important result, at least for information transfer economics. Stay tuned!

### A thousand more random markets

A quick post extending the previous post: if the exponents are log-normally distributed instead of uniformly distributed, you get the same deviation at high and low values of $m$ for the same reasons. However, the deviation is much smaller:

## Wednesday, June 25, 2014

### A thousand random markets

In this post [1], I set up a framework with a large number of markets $p_{i}: n_{i} \rightarrow s_{i}$ mediated by money so that we obtain (for the individual markets) the differential equations

$$\frac{dn_{i}}{dm} = a_{i} \frac{n_{i}}{m}$$

The solutions to these differential equations are

$$\frac{n_{i}}{n_{i}^{ref}} = \left( \frac{m}{m^{ref}} \right)^{a_{i}}$$

In [1], I made the approximation that the $a_{i}$ could be replaced by their average $\bar{a}$ and therefore the sum of the markets obeyed the approximate differential equation

$$\frac{dN}{dm} \simeq \bar{a} \frac{N}{m}$$

where

$$N = \sum_{i} n_{i}$$

Now I ask the question: how well does this work? First, here is the sum of 10 random markets (with 10,000 random evaluations, blue points) where we take $n_{i} \sim m^{a_{i}}$ with a uniformly distributed $a_{i} \in [0,1]$. The approximate aggregate differential equation has solution $N \sim m^{\bar{a}}$ (shown in red):

A region that represents 10% variation is shown in gray. We can see that this solution works pretty well, but I found that if we summed up 1000 random markets, a systematic deviation for higher values of $a_{i}$ and higher/lower values of $m$ begins to show up. Here are the results for $a_{i} \in [0, 0.5]$, $a_{i} \in [0,1]$, $a_{i} \in [0,2]$, and $a_{i} \in [0,4]$, which have $\bar{a} =$ 0.25, 0.5, 1.0 and 2.0, respectively.

A systematic deviation appears for small/large values of $m$ that is more apparent for larger values of $a_{i}$.  The source of this is not mysterious: for larger values of $m$, $m^{\text{max } a_{i}}$ tends to dominate while for smaller values of $m$, $m^{\text{min } a_{i}}$ tends to dominate. Still, for 10% shifts from the reference point $(m^{ref}, N^{ref})$, it remains a remarkably good approximation.
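This deviation is easy to reproduce (a sketch; the seed and evaluation points are my choices):

```python
import numpy as np

rng = np.random.default_rng(7)
a = rng.uniform(0.0, 4.0, 1000)     # a_i in [0, 4], so abar is near 2.0
abar = a.mean()
m_ref = 1.0
N_ref = float(len(a))               # all markets have equal size at m = 1

def N_exact(m):
    return np.sum(m ** a)           # direct sum over the individual markets

def N_approx(m):
    return N_ref * (m / m_ref) ** abar   # aggregate solution N ~ m^abar

# Near the reference point the approximation is good...
assert abs(N_exact(1.1) / N_approx(1.1) - 1.0) < 0.1
# ...but at large m the markets with the largest a_i dominate the sum
assert N_exact(10.0) > N_approx(10.0)
```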

The markets with high values of $a_{i}$ would, in the long run, come to dominate the economy (e.g. if the market for apples went as $n_{apples} \sim m^{4}$, the entire economy would quickly become just apples). This doesn't appear to happen in diversified economies [2] (it might be true of e.g. oil-based economies), which implies there is a constraint on the values of the $a_{i}$. The interesting thing is that this constraint appears to make the macro formulation (the aggregate market) more accurate than the sum of individual markets -- i.e. there is an enforcement mechanism that makes the individual markets behave more like the average in diversified economies.

Is there an effect due to

$$a_{i} = \frac{\log \nu_{i}}{\log M}$$

so that we should use

$$\bar{a} = \frac{\log \bar{\nu}}{\log M}$$

instead? Does that then have a relationship with $\kappa (M, N)$? A uniform logarithmic distribution is related to e.g. Benford's law. I will look into all of this in a future post.

[2] This is almost a circular definition: diversified economies are economies that haven't had one commodity or product take over their economy. However, it seems that diversified economies stay diversified -- that is the sense of the statement.

## Monday, June 23, 2014

### Towards Arrow-Debreu-McKenzie equilibrium, part 2 of N

Some random notes on aggregation, especially in light of the Sonnenschein-Mantel-Debreu theorem (not saying anything new about the theorem which has been around a long time, but looking at it from the perspective of the information transfer model). This is not a coherent post.

A.

The SMD theorem says that the only things inherited from the individual demand functions are (from wikipedia) continuity, homogeneity of degree zero, Walras's law, and a boundary condition on demand as prices go to zero.
Note: homogeneity of degree zero is a fundamental aspect of the information transfer framework, and the information transfer equation can be seen as essentially the simplest relationship of supply and demand that maintains homogeneity of degree zero.

This means that an aggregated system of equations that satisfy homogeneity of degree zero:

$$p_{ij} = \frac{\partial n_{i}}{\partial s_{j}} = \frac{1}{\kappa_{ij}}\; \frac{n_{i}}{s_{j}}$$

must result in a function that satisfies

$$P = \frac{d N}{d S} = \frac{1}{\kappa_{1}}\; \frac{N}{S} + \frac{1}{\kappa_{2}}\; \frac{N^{2}}{S^{2}} + \frac{1}{\kappa_{3}} \frac{N}{S} \frac{d N}{d S} + \frac{N}{\kappa_{4}} \frac{d^{2} N}{d S^{2}} + \cdots$$

We keep only the first term in the information transfer model.

Consequence (from wikipedia): the uniqueness of the equilibrium is not guaranteed: the excess-demand function may have more than one root – more than one price vector at which it is zero (the standard definition of equilibrium in this context).

I don't know if this is a correct simplified explanation, but imagine a world where fuel was slightly more expensive and cars were slightly less expensive. Depending on the relative price this could still clear the market with the same amount of money being spent in aggregate on cars and fuel.

However! If there is less fuel and more cars, then there might be fewer ways to associate gallons of fuel with cars (lower entropy since the fuel is fungible) and you could select the actual equilibrium based on maximum entropy production.

More wikipedia: Second, by the Hopf index theorem, in regular economies the number of equilibria will be finite and all of them will be locally unique.

That means you can potentially sort them all by entropy.

B.

I was reading this: New Developments in Aggregation Economics, Pierre Andre Chiappori, Ivar Ekeland (2010) [http://www.columbia.edu/~pc2167/Aggregation.pdf], and I thought this was an odd quote:
> They also show that these restrictions, if fulfilled, are sufficient to generically recover the underlying economy - including individual preferences. These results, however, require that individual endowments be observable; indeed, when only aggregate endowments are observable, a non-testability result can be proved.
>
> The conclusion that emerges from this literature is that, in contrast with prior views, general equilibrium theory does generate strong, empirically testable predictions. The subtlety, however, is that tests can only be performed if data are available at the micro (here individual) level. One of the most interesting insights of new aggregation theory may be there - in the general sense that testability seems to be paramount when micro data are available, but does not seem to survive (except maybe under very stringent auxiliary assumptions) in a 'macro' context, when only aggregates can be observed.
Emphasis mine. You can test microfounded models if micro data is available, but there are no testable implications for the macro variables?

C.

aggregation (econometrics), Thomas Stoker (2006) [http://web.mit.edu/tstoker/www/Stoker_Aggregation_Palgrave.pdf]

This paper claims that, short of a "full schedule" (i.e. full individual/household information on income transfers and spending), there are only two possibilities in which aggregation works:

1. Each household spends in exactly the same way,
2. The distribution of income transfers is restricted in a convenient way.

This section makes it seem like the "aggregation problem" is entirely based on the idea that the dot product doesn't factor

$\frac{1}{n} \sum_{i} a_{i}b_{i} \neq \frac{c}{n} \sum_{i} b_{i}$

in general, where $c$ is a constant.

Point 1 above basically says that you can work around the issue by taking all the $a_{i}$ to be constant or equal to their average.

$\frac{1}{n} \sum_{i} a_{i}b_{i} = \frac{\bar{a}}{n} \sum_{i} b_{i}$

Point 2 says that you can work around the issue by taking $b_{i} = s_{i}\bar{b}$ so that

$\frac{1}{n} \sum_{i} a_{i}b_{i} = \frac{\bar{b}}{n} \sum_{i} a_{i}s_{i}$

And the latter sum is taken to be a new weighted average, $\bar{a}_{w}$. However, there are a couple of other ways to deal with this:

$\frac{1}{n} \sum_{i} a_{i}b_{i} = \frac{1}{n} |a| |b| \cos \theta$

where $\theta$ is the angle between the vectors $a$ and $b$, and $\cos \theta$ becomes another macro parameter. There is also the possibility of using the trace:

$\frac{1}{n} \sum_{i} a_{i}b_{i} = \frac{1}{n} \text{tr } a \otimes b$

which is invariant under a wide variety of transformations on the vectors $a$ and $b$. It is also related to a differential volume element of the volume defined by the matrix $a \otimes b$ (i.e. $\det a \otimes b$). I.e. the micro effect on the vector elements can be seen as a macro effect on volumes (i.e. a different theory).
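Both identities are easy to check numerically (a quick sketch with arbitrary random vectors):

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(size=5)
b = rng.normal(size=5)
n = len(a)

dot = np.dot(a, b) / n

# Macro parameterization: (1/n) |a| |b| cos(theta)
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
macro = np.linalg.norm(a) * np.linalg.norm(b) * cos_theta / n

# Trace form: (1/n) tr(a outer b); the trace of the outer product is the dot product
trace_form = np.trace(np.outer(a, b)) / n

assert abs(dot - macro) < 1e-12
assert abs(dot - trace_form) < 1e-12
```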

## Saturday, June 21, 2014

### Towards Arrow-Debreu-McKenzie equilibrium, part 1 of N

I was inspired by Noah Smith's review of Big Ideas in Macroeconomics to see what I could do about showing an Arrow-Debreu-McKenzie-style equilibrium in the information transfer model. I'm going to just attack the problem in bits and pieces. This first piece uses some assumptions I would hope to be able to eliminate in a future approach, but I thought I'd put the ideas down so I can reference them later.

The starting point is at this link which describes the basic idea of a money-mediated economy in an information transfer framework. We'll try to show how a macroeconomy is built out of many small markets (indexed with a subscript $i$). We'll start with the equation:

$$\frac{s_{i}}{ds_{i}} \log \sigma_{i} = \frac{c_{i} m}{dm} \log M = \frac{n_{i}}{dn_{i}} \log \nu_{i}$$

where $s_{i}$, $n_{i}$ are the quantities supplied/demanded in the $i^{\text{th}}$ market and $m$ is the total amount of money in the economy. The $\log$'s keep track of the information units. This gives us the differential equations:

$$\text{(1) } \frac{dn_{i}}{dm} = a_{i} \frac{n_{i}}{m}$$

$$\text{(2) }\frac{ds_{i}}{dm} = b_{i} \frac{s_{i}}{m}$$

$$\text{(3) }p_{ij} = \frac{dn_{i}}{ds_{j}} = k_{ij} \frac{n_{i}}{s_{j}} \rightarrow p_{i} = \frac{dn_{i}}{ds_{i}} = k_{i} \frac{n_{i}}{s_{i}}$$

Where

$$\text{(4) } a_{i} = \frac{\log \nu_{i}}{c_{i} \log M} \text{ and } b_{i} = \frac{\log \sigma_{i}}{c_{i} \log M} \text{ and } k_{i} = \frac{\log \nu_{i}}{\log \sigma_{i}}$$

We've already incorporated our first assumption (one that I think may be key to understanding all of the macroeconomics of money): all the $dm_{i} = dm$, so that the infinitesimal element of money is the same across all sub-markets, which implies that all the $m_{i} = c_{i} m + d_{i}$ (a linear transformation). If we then assume that the amount of money transferring information for any individual good is small relative to the total amount of money on average, we can take $m \gg d_{i}/c_{i}$ so that $m_{i} \simeq c_{i} m$, and then subsume the $c_{i}$ into the definitions of $a$ and $b$ above. (This is connected to a maximum ignorance assumption and is related to the equipartition theorem: the money is on average equally distributed among the markets so that no $m_{i}$ dominates the distribution.) I'd like to do this more rigorously in the future (e.g. using distributions and integrating over them).

In the last differential equation (3), we defined the prices $p_{i}$ and made the assumption of non-interacting markets ($dn_{i}/ds_{j} = p_{i} \delta_{ij}$, also made at this link) -- i.e. the prices for a particular good don't depend strongly on the prices for other goods or services. I'd like to relax this assumption in the future, but Arrow-Debreu appears to make it as well [pdf]. At the end of this post, I make a hand-waving argument in terms of a geometric interpretation of the trace. That gives you, kind reader, something to look forward to because you are about to get slapped in the face with a bunch of algebra.

Let's define aggregate demand using a weighted sum (with weights $w_{i}$):

$$\text{(5) }N \equiv \sum_{i} w_{i} p_{i} n_{i}$$

I put in the weights because measures of NGDP actually do this (e.g. some weights on food and energy are zero for some measures of CPI). You can also see why I used $n$ for the demand. Now let's take a derivative with respect to money:

$$\frac{dN}{dm} = \frac{d}{dm} \sum_{i} w_{i} p_{i} n_{i}$$

$$\text{(6) } \frac{dN}{dm} = \sum_{i} w_{i} \frac{dp_{i}}{dm} n_{i} + w_{i} p_{i} \frac{dn_{i}}{dm}$$

Now using equations (1) and (3) above we can show

$$\frac{dp_{i}}{dm} = \frac{d^{2}n_{i}}{dm ds_{i}} = \frac{d}{ds_{i}}\frac{dn_{i}}{dm}$$

$$\frac{dp_{i}}{dm} = \frac{a_{i}}{m} \frac{dn_{i}}{ds_{i}} - \frac{a_{i}n_{i}}{m^{2}} \frac{dm}{ds_{i}}$$

with a little algebra, we finally obtain:

$$\frac{dp_{i}}{dm} = \frac{a_{i} p_{i}}{m} \left( 1 - \frac{1}{b_{i} k_{i}} \right)$$

so that equation (6) becomes, after substituting the differential equation (1) for the second term and taking $m$ outside the sum:

$$\text{(7) } \frac{dN}{dm} = \frac{1}{m} \sum_{i} w_{i} p_{i} n_{i} \left( 2 a_{i} - \frac{a_{i}}{b_{i} k_{i}} \right)$$

note that

$$\frac{a_{i}}{b_{i} k_{i}} = \frac{\frac{\log \nu_{i}}{c_{i} \log M}}{\frac{\log \sigma_{i}}{c_{i} \log M} \frac{\log \nu_{i}}{\log \sigma_{i}}} = 1$$

so that substituting into equation (7) we obtain the result:

$$\text{(8) } \frac{dN}{dm} = \frac{1}{m} \sum_{i} w_{i} p_{i} n_{i} \left( 2 a_{i} - 1 \right)$$

The piece outside the parentheses is our original aggregate demand $N$, but we can't just ignore the term in the parentheses. We'll resort to an averaging argument. If the number of markets is large, we can use the law of large numbers to say that $a_{i} \simeq \bar{a}$ (maximum ignorance about the actual distribution of the $a_{i}$) so that:

$$\text{(9) } \frac{dN}{dm} \simeq \frac{2 \bar{a} - 1}{m} \sum_{i} w_{i} p_{i} n_{i} = (2 \bar{a} - 1) \frac{N}{m}$$

and we can define $\kappa = 1/(2 \bar{a} - 1)$ to make the connection with the aggregate money market equation:

$$P = \frac{dN}{dm} = \frac{1}{\kappa} \; \frac{N}{m}$$

where $P$ is the price level. This shows that a macroeconomy can be built up from a bunch of individual markets in a relatively straightforward way. There are some criticisms brought up by economists regarding the so-called aggregation problem (and aggregate demand in particular). Those appear to come down to challenges to the assumptions that $a_{i} \simeq \bar{a}$ and $dn_{i}/ds_{j} = p_{i} \delta_{ij}$ (i.e. that agent preferences change with income and significantly affect the distribution of the $a_{i}$, and that relative prices matter, respectively). The first can be defended with a maximum entropy argument: if aggregate models appear to work in the sense that e.g. GDP seems to be meaningful (recessions are a real thing -- e.g. Okun's law appears valid on average), then the n-dimensional space of agents does seem to be reduced to a lower dimensional space consisting of GDP and unemployment rates, and the extra dimensions (the agents) aren't particularly relevant.

The second challenge is more serious at first glance, which is why I'd like to drop the assumption in the future. However, the trace

$$\text{tr } p = \sum_{i} p_{ii}$$

is an invariant of matrices under similarity transformations. Also, via Jacobi's formula, the trace is basically the differential of the determinant, which means it represents an infinitesimal volume measure ($\det p$ is the volume spanned by the column vectors of $p$). That volume represents the size of the aggregate economy, so the trace represents an infinitesimal change in the size of the economy -- and that depends only on the diagonal elements of $p$, so the relative prices $p_{ij}$ don't matter. At least that's the reasoning I'd like to use to prove that the $p_{ij}$ don't really matter, only the $p_{ii} \equiv p_{i}$.
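To make the Jacobi's formula step slightly more explicit, the standard identity is

$$\frac{d}{dt} \det A(t) = \text{tr} \left( \text{adj}(A(t)) \, \frac{dA(t)}{dt} \right)$$

so that near the identity, $\det (I + \epsilon X) = 1 + \epsilon \, \text{tr} \, X + O(\epsilon^{2})$: the trace gives the first-order (infinitesimal) change in the volume measured by the determinant.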

PS There probably is some way to represent all of this in terms of fiber bundles with money representing the common differential element moving various vectors of goods over the manifold that is the economy. I'm going to try to keep Chris House happy, though.

PPS The values of $\bar{a}$ are about 3/2 to 7/6 for values of $\kappa$ being 1/2 to 3/4.

PPPS After going through all this, I did the same calculation using the quantity supplied (see picture at the top of the post) and got $\kappa = 1/\bar{a}$. This may have something to do with the right hand side of the supply and demand equations giving us $p_{i} s_{i} = k_{i} n_{i}$ such that the demand $n_{i}$ already contains the price (the units of $n_{i}$ are total value demanded, not total quantity demanded). This makes the whole derivation above from the demand side much easier:

$$\frac{dN}{dm} = \frac{d}{dm} \sum_{i} w_{i} n_{i} = \sum_{i} w_{i} \frac{dn_{i}}{dm}$$

$$= \sum_{i} w_{i} a_{i} \frac{n_{i}}{m} \simeq \bar{a} \frac{N}{m}$$

and gives consistent measures of $\kappa = 1/\bar{a}$ approaching the problem from the supply side and from the demand side. I'm thinking that's actually the right way to do it.
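As a final sanity check, the aggregation result (9) is easy to verify numerically. Here's a minimal Python sketch with made-up constants: each market contributes $p_{i} n_{i} = A_{i} m^{2 a_{i} - 1}$ (the scaling implied by equation (8)), the $a_{i}$ are narrowly distributed, and the numerical derivative $dN/dm$ is compared against $(1/\kappa) N/m$ with $\kappa = 1/(2 \bar{a} - 1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_mkts = 100
a = rng.normal(1.3, 0.05, n_mkts)    # a_i narrowly distributed around a-bar
A = rng.uniform(0.5, 1.5, n_mkts)    # per-market constants (made up)
w = np.full(n_mkts, 1.0 / n_mkts)    # equal weights

def N(m):
    # aggregate demand: each market contributes p_i n_i = A_i m^(2 a_i - 1),
    # the scaling implied by equation (8)
    return np.sum(w * A * m ** (2 * a - 1))

m, h = 10.0, 1e-5
dNdm = (N(m + h) - N(m - h)) / (2 * h)   # numerical derivative dN/dm
kappa = 1.0 / (2 * a.mean() - 1)         # kappa = 1/(2 a-bar - 1)
# dN/dm should come out close to (1/kappa) N/m, i.e. equation (9)
```

The agreement is only approximate, and it improves as the spread of the $a_{i}$ shrinks -- which is precisely the $a_{i} \simeq \bar{a}$ assumption.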

## Wednesday, June 18, 2014

### "The" information transfer model

I thought I'd write down a reference post for "the" information transfer model. By putting "the" in quotes, I mean the specific collection of equations for macroeconomics. This is a particular set of solutions to the differential equations defined by the general information transfer framework.

In the following the notation $p:D \rightarrow S$ is shorthand for saying the demand $D$ transfers information to the supply $S$ that is detected by the price $p$.

The price level and the money market

The price level $P$ is given by the solution to the differential equation for endogenous (see here) $N$ and endogenous $M$ (i.e. the model sets them both together) in the market $P:N \rightarrow M$ where $N$ is NGDP and $M$ is the currency in circulation (empirically, see here) [link works now]. The solution to the differential equation gives us $N \sim M^{1/\kappa}$ so that we come to:

$$P(N, M) = \frac{\alpha}{\kappa (N, M)} \left( \frac{M}{M_{0}} \right)^{1/\kappa(N,M) - 1}$$

The function $\kappa$ is taken to be

$$\kappa (N, M) = \frac{\log M/(\gamma M_{0})}{\log N/(\gamma M_{0})}$$

based on empirical results and some motivation from the underlying theory (see here, here). The parameters $\alpha$ and $M_{0}$ are fit to empirical data along with the parameter $\gamma$. However, if $\gamma$ is fit to the price level of one country and kept constant across other countries, all of the countries will be placed on the same two dimensional price level "surface" under a change of variables $P(N, M) \rightarrow P(\kappa(N, M), \sigma (M)) = P(\kappa, \sigma)$ where $\sigma \equiv M/M_{0}$ (see here).
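As a concrete sketch (in Python), here are the $\kappa$ and price level functions written out; the parameter values below are placeholders for illustration, not fitted values for any actual country:

```python
import numpy as np

def kappa(N, M, M0, gamma):
    # IT index: kappa(N, M) = log(M/(gamma M0)) / log(N/(gamma M0))
    return np.log(M / (gamma * M0)) / np.log(N / (gamma * M0))

def price_level(N, M, alpha, M0, gamma):
    # P(N, M) = (alpha/kappa) * (M/M0)^(1/kappa - 1)
    k = kappa(N, M, M0, gamma)
    return (alpha / k) * (M / M0) ** (1.0 / k - 1.0)

# placeholder values (NOT fitted parameters for any country)
P = price_level(N=16000.0, M=1200.0, alpha=1.0, M0=500.0, gamma=0.5)
```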

Examples:

The other markets

The remaining markets are all "exogenous" (see here) demand and "exogenous" supply (information source and destination).

The labor market

The markets involving labor are $P:N \rightarrow L$ and $P:N \rightarrow U$, where $L$ is the total number of employed people and $U$ is the total number of unemployed people; these result in the equations

$$P = \frac{1}{\kappa_{L}}\; \frac{N}{L}$$

$$P = \frac{1}{\kappa_{U}}\; \frac{N}{U}$$

The first one gives us a form of Okun's law (see here). The "natural rate" of unemployment -- the rate at which $U$ seems to fluctuate around when the above parameters $\kappa_{L}$ and $\kappa_{U}$ are fit to data -- is given by $u^{*} \simeq \kappa_{U}/\kappa_{L}$.

Examples:

[The last graph -- of the UK unemployment rate -- has messed up labels. The model is the blue line (the "natural rate"), the gray line is data and the axis is the unemployment rate, not price level. H/T Tom Brown.]

The interest rate market

The interest rate market is given by $r^{c}:N \rightarrow M_{r}$ where $c$ is a (currently unexplained, update 5/22/2015: explained) fudge factor relating the interest rate price $r^{c}$ to the actual market nominal interest rate $r$. The resulting equation for exogenous $N$ and $M_{r}$ is:

$$r^{c} = \frac{1}{\kappa_{r}}\; \frac{N}{M_{r}}$$

or

$$c \log r = \log \left( \frac{1}{\kappa_{r}}\; \frac{N}{M_{r}} \right)$$

The parameters $c$ and $\kappa_{r}$ are the same for both long and short term interest rates. One fits $c$ and $\kappa_{r}$ to the data for one interest rate market (the long term interest rate market uses currency in circulation for $M_{r_{long}}$ and the 10 year rate for $r = r_{long}$) and the same parameters work for the other market (i.e. the short term interest rate which uses the full monetary base including reserves for $M_{r_{short}}$ and the 3-month secondary market rate, the interbank rate or the effective Fed funds or other short term interest rate for $r_{short}$ depending on the country). When I do the fit, my current modus operandi is to simultaneously fit both markets with the same $c$ and $\kappa_{r}$. Note $M_{r_{long}} = M$ above in the price level/money market.
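The fit can be sketched as a linear regression in log space: taking logs of the interest rate equation gives $\log r = (1/c) \log (N/M_{r}) - (1/c) \log \kappa_{r}$, so the slope and intercept of a straight-line fit determine $c$ and $\kappa_{r}$. Here's a toy recovery check (synthetic data generated from known parameters, not real rates):

```python
import numpy as np

# generate synthetic data from known parameters to verify the recovery
c_true, kappa_true = 2.8, 10.0
N  = np.array([5e3, 7e3, 1e4, 1.4e4])    # NGDP (made up)
Mr = np.array([3e2, 4.5e2, 7e2, 1.1e3])  # monetary aggregate (made up)
r = (N / (kappa_true * Mr)) ** (1.0 / c_true)  # rates implied by the model

# c log r = log(N/(kappa_r Mr))  =>  log r = (1/c) log(N/Mr) - (1/c) log kappa_r
slope, intercept = np.polyfit(np.log(N / Mr), np.log(r), 1)
c_fit = 1.0 / slope
kappa_fit = np.exp(-intercept * c_fit)
```

In practice one would fit both the long and short rate markets simultaneously with the same $c$ and $\kappa_{r}$, as described above.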

Examples:

Shifts

I look at the effect of shifts in the variables by taking:

$$M_{r_{long}} \rightarrow M_{r_{long}} + \delta M \text{ and } N \rightarrow N \frac{P(N, M_{r_{long}} + \delta M)}{P(N, M_{r_{long}})}$$

if M is the currency in circulation for a purely monetary shift and

$$M_{r_{long}} \rightarrow M_{r_{long}} \text{ and } N \rightarrow N + \delta N$$

for a purely exogenous NGDP shift (including fiscal expansion or exogenous shocks).

If we use the monetary base (i.e. in the short term interest rate market) $MB = M_{r_{short}}$, then in this baseline version of the information transfer model there is no effect on the price level or NGDP. This doesn't rule out an impact on output via the IS-LM model (see e.g. here); in that case the shift "re-appears" in the model above as an exogenous NGDP shock (I will devote a future blog post to explaining that better). However, I am not as confident in that conclusion, so I'm leaving the details out of this reference post.

Examples:

### Great review of Big Ideas in Macroeconomics on Noahpinion

In other words, maybe Prescott is wrong, and an emergent "macroeconomics" does exist that can't easily be predicted by modeling people as "purposeful decision makers", as Athreya insists we do throughout most of Big Ideas. If that's the case, then departures from the Walrasian equilibrium base case can't be modeled by imagining how individuals would behave in response to the missing markets.
The whole of part 3 is really interesting stuff. It makes me want to see what I can do with using the information transfer framework and Arrow-Debreu-McKenzie equilibrium ... Does maximum entropy select a particular macro equilibrium when they aren't unique? There are actually gazillions (a technical term) of equilibria in an ideal gas (e.g. trade spatial positions of any subset of atoms and you get another equilibrium), but they're all described by a single set of macro state variables related by the ideal gas law: pV = nRT.

## Tuesday, June 17, 2014

### What if money was made of vinegar?

[Figure caption: This is not vinegarworld.]
Yep. Weird title. This post is largely based on Roderick Dewar's 2009 paper Maximum entropy production as an inference algorithm that translates physical assumptions into macroscopic predictions: Don't shoot the messenger. Yep. Long title. This post was largely motivated by an Anonymous commenter commenting on my comment:
If [microfoundations survive aggregation to into a macroeconomic model as anything other than a coefficient], the resulting model is likely intractable.
Anonymous gave me some great references on heterogeneous agent models. But he or she also prompted me to start thinking about what I meant by that statement more carefully. Turns out Dewar already did the heavy lifting on that, so all I have to do is apply what he said about complex dynamical systems like the climate to economics.

We observe that the pressure and volume of a gas are related in such a way that, when we keep the temperature constant, p ~ 1/V. Since we are able to reliably reproduce this macroscopic behavior through the control of a small number of macroscopic degrees of freedom even though we have absolutely no control over the microstates, the microstates must be largely irrelevant. This is not to say you can't derive the ideal gas law from assumptions about atoms. It's just that nature has found a computationally efficient model for dealing with the N-dimensional (number of atoms) problem with just two dimensions (p, V).

We observe that the price level P grows with money M in the long run such that log P ~ log M (the quantity theory of money). Since this is observed across many natural experiments, I mean, countries where we have no control over the micro degrees of freedom, I mean, people, the microfoundations must be largely irrelevant. This is not to say that a microfounded theory won't get you log P ~ log M, it's just that the information theory approach that leads you there is more computationally efficient, reducing the N heterogeneous agents to two degrees of freedom (P, M).

This is what I mean by intractable in that long-ago post. If the dimension of the problem doesn't fall significantly from d ~ N to d ~ 1, you're not going to have enough computational resources.

Another way to say this is that if macroeconomics is a universal discipline, then microfoundations don't matter. Robert Lucas said they do matter; e.g. the German policy environment matters and Germans behave in a different way because of it. Lucas is saying you really should study German Macroeconomics or Ethiopian Macroeconomics much like universities have departments of German Studies. [The microfoundations really do seem to matter in politics and culture.]

That last paragraph was a bit hyperbolic to make a point. It should really apply to particular macroeconomic relationships. If you think the quantity theory of money is universal, then you think microfoundations for the quantity theory are irrelevant. If you think the Phillips curve isn't universal, then microfoundations could potentially be relevant. But the fact that it was observed to exist in several countries is evidence that the microfoundations are irrelevant (and they seem to be).

Ah, but what if money was made of vinegar?

Um, wait. Let me back up and start again with Dewar. Climate scientists were able to get remarkably accurate results about cloud cover, polar ice cover and thermal gradients while ignoring the microfoundations (the physics and chemistry of water) and assuming maximum entropy production (MEP) by the Earth's water cycle. See the picture at the top of this post. An enterprising climate scientist asked: but what if the ocean was made of vinegar? Since MEP enthusiasts got the result without using the properties of water, isn't MEP wrong because it would give the same result when using vinegar and we know oceans of vinegar would give drastically different results? That's where Dewar says no. MEP is a tool; it's not wrong or right on its own. The fact that you can use MEP to create an accurate picture of the water cycle means that the chemical details of water don't matter that much. MEP must be capturing the essential physics of water relevant to climate. The fact that the same calculation in vinegarworld (probably) gets the macrostate wrong means some of the microfoundations of vinegar matter.

So what if money was made of vinegar -- such that e.g. people at home could make it themselves and central banks ceased to matter? Well, that would change the system, and it's possible you'd end up with something besides the quantity theory of money.

However, if you can derive results like log P ~ log M that seem to have some universal validity without specifying what money is, but rather from information theory (our version of MEP) just using the facts that money carries information and there is a lot of it around, then it must not really matter how money "works" (i.e. the microfoundations of money). Information theory must be capturing the essential economics of fiat currency.

And the converse is if you can't capture the essential economics of fiat currency in this way (or via some other dimensional reduction), the real model of fiat currency is likely intractable.

## Saturday, June 14, 2014

### Some prediction fun (ECB edition)

Neo-Fisherite Edward Lambert makes a prediction of lower inflation as the ECB lowers interest rates via the "Fisher effect" (inflation is low because nominal interest rates are low and expected to stay low), which is similar to what I called the Fisher effect where expected inflation influences nominal interest rates -- however, I think the effect was largely confined to the 1970s and 1980s and to long term interest rates.

I'll jump in and make a prediction of lower inflation in the Euro zone as well (I have been saying since last November that we're in for lower inflation in the EU), but via the diminishing effect of monetary expansion (aka the "information trap"). Here is another post on the idea in relation to the neo-Fisherite rebellion. And this is another one.

However, due to the short time scale the EU has been in its current configuration, it's quite a bit more uncertain where the EU is in terms of the information transfer model. Check out the wild behavior of the monetary base and the currency component (the Euro was introduced in 1999 as a non-physical currency and in 2002 as a physical currency, but other currencies continued to be used for a while afterwards):

It appears that only the post-crisis (2009 and beyond) Euro data reflects an EU and Eurozone in a stable configuration, so I just ran the models for that. Here are the CPI and the long term (10 year) and short term (interbank) interest rates:

And here are the resulting IS curves for the long term interest rate (analogous to this analysis, each line is taken at the start of each year, with the red one being 2014):

You can see the IS curves and the "zero bound" sliding backwards from 2009 (curves 2005 through 2014 are shown), a sign of being in a liquidity trap. But they are still downward sloping, not yet vertical.

But I still want to jump in and make a prediction that we'll get disinflation in the Eurozone as they go forward with their policies.

### Reconciling expectation and information

There was a brief back and forth between Scott Sumner and me in comments on his post on whether money is tighter or looser given a drop in interest rates. Sumner said (the edited version):
The fed funds change is different. Whereas a change in IOR affected the demand for base money, a change in the fed funds rate is an effect of a change in the supply of base money. That’s easier money in the short run, ceteris paribus. But whether it is actually easier money depends on how the action impacts the expected future path of monetary policy.
The short run effect is the liquidity effect, which I describe in more detail at this link. I commented that the long run effect was the same result as I gave in this post, except the "depends" clause was different -- in the information transfer (IT) model the dependence is measured by the size of the base relative to the size of the economy (e.g. NGDP). Sumner disagreed with it initially, but then agreed with my contention that if the base is small, the future path of monetary policy is likely expansionary (i.e. looser money, leading to higher interest rates). I'd like to say more about that with this post.

But first, I have to correct some sloppy language on my part. I have a tendency to say "the size of the base relative to NGDP" or write "MB/NGDP" when I actually mean a) not MB but M0, the currency component of the base (i.e. without reserves) and b) not the ratio but the ratio of the logarithms log M0/log NGDP (aka the IT index, denoted by the Greek letter kappa κ elsewhere in this blog). You can see what I mean when I plot the two ratios here:

We can see that κ goes from a smaller value in the 1960s to a larger value today, whereas the simple ratio goes down and then up (it is normalized to 2011). The IT index κ is more relevant in the model because the price level is determined by the base via log P ~ (1/κ - 1) log M0 which means that P ~ constant for κ = 1. Another way to look at the IT index is that it is the fraction of a unit of source information that can be transferred with the unit of account, i.e. 1/logM0 NGDP = log M0/log NGDP (using my poor man's subscripting technique to represent the logarithm in base M0) analogous to Sumner's share of the economy that can be bought with one dollar (value of money = 1/NGDP).
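A quick illustration of the distinction with made-up numbers: the IT index κ = log M0/log NGDP can rise monotonically even while the simple ratio M0/NGDP falls and then rises, just as in the plot:

```python
import numpy as np

# made-up (M0, NGDP) pairs loosely mimicking 1960s -> today (not real data)
M0   = np.array([50.0, 200.0, 1200.0])
NGDP = np.array([1.0e3, 5.0e3, 1.7e4])

it_index = np.log(M0) / np.log(NGDP)   # kappa = log M0 / log NGDP
ratio    = M0 / NGDP                   # the naive "MB/NGDP" ratio

# it_index rises monotonically while the simple ratio falls, then rises
```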

Now how do we reconcile the expectation of future base growth ("loose money") with the size of the base M0? Plot the numerator (blue) and denominator (red) of the IT index, and assume a uniform probability distribution (maximum ignorance) for the possible values of log M0, with the denominator (log NGDP, i.e. where κ = 1 if log M0 = log NGDP) as the upper bound and the quantity theory of money (κ = 1/2) [1] as the lower bound (dashed gray). Then, if the IT index is below κ = 3/4, more states in log M0 space exist above the current value of the base than below it -- which is where things stood in 1970. See the relative length of the green line segments in the diagram below:

We can put this in Sumner's language from the quote above. In 1970, there are more places for log M0 to be above its current value than below, meaning a larger base and therefore looser money is expected. That means interest rates will tend to rise as an effect of base expansion. In 2005, a larger base and a smaller base are closer to being equally likely (i.e. base expansion is less likely than it was in the 1970s), meaning that interest rates will tend to stay constant or fall as an effect of base expansion.
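Here's a minimal sketch of that state-counting argument (the numbers are illustrative, not actual data): with a uniform distribution on [½ log NGDP, log NGDP], the probability that the base expands is just the fraction of the interval above the current log M0:

```python
def prob_base_expands(log_M0, log_NGDP):
    """Fraction of accessible log M0 states above the current value,
    assuming a uniform distribution between the quantity theory bound
    (kappa = 1/2) and the kappa = 1 bound."""
    lo, hi = 0.5 * log_NGDP, log_NGDP
    return (hi - log_M0) / (hi - lo)

# illustrative log NGDP; kappa = 0.60 (1970-like) vs kappa = 0.74 (2005-like)
p_1970 = prob_base_expands(0.60 * 27.0, 27.0)   # well above 1/2
p_2005 = prob_base_expands(0.74 * 27.0, 27.0)   # close to 1/2
```

At κ = 3/4 this probability is exactly 1/2, matching the crossover described above.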

The uniform distribution over the values of κ ∈ [1/2, 1], of course, is an oversimplified picture. The actual probability distribution over the possible expected values would likely involve more understanding of human behavior (in response to Mike Freimuth's comments here) -- the distribution is probably peaked at the value of the (log of) the base and goes to zero at (the log of) NGDP as well as zero (log 0 = -∞), but its exact form would depend on how humans discount rare events (see e.g. prospect theory).

The actual cross-over point, based on the behavior of IS curves (i.e. the effect of expansionary policy on interest rates), seems to be in the late 1970s or early 1980s. This is where the IS curves turn over and loose money means lower interest rates.

[1] If we have log P ~ (1/κ - 1) log M0 then κ = 1/2 means that  (1/(1/2) - 1) log M0 = (2 - 1) log M0 = log M0 = log P, i.e. P ~ M0 (the quantity theory of money) and the rate of growth of the price level is equal to the rate of growth of the base. [Added in update 6/14/2014]

## Friday, June 13, 2014

### Is this what Noah Smith is referring to?

Noah Smith wrote a piece for Bloomberg View:
[Shinzo Abe] went the opposite direction of Europe, and -- unlike the U.S. -- he gave every indication that the shift toward monetarism was permanent. The result: Japan has escaped deflation.

It took me awhile to track down the deflation victory point (it's not yet in the data from FRED):

See it? It's the one point at the end above the 1-sigma model error band from April 2014. It is actually quite remarkable (a 2-sigma jump), and for the sake of the people of Japan I hope it holds up. I mentioned in an earlier post that this rise over the past few months could be due to the impact of fiscal policy. Before someone says I scaled the axes to make the jump look small, note that the US CPI traversed the entire range in the same time period (2% inflation would take you from 0.9 in 1994 to 1.1 in 2014).

A jump in CPI due to monetary policy should affect interest rates, but nothing remarkable seems to be happening. Interest rates are low given the model result (again 1-sigma error band shown):

## Thursday, June 12, 2014

### Krugman, Keynes and the liquidity trap

There was some lively discussion in the comments on this post about the liquidity trap. I realized 1) I hadn't worked out the details of what happens in the information transfer model to the level of detail of that post and 2) I found the liquidity trap model from Krugman's 1998 Brookings paper to be essentially silent on the question that led me to write the post in the first place: I wanted to find out if the ECB raising rates in 2011 or keeping them above zero after 2008 had anything to say about the efficacy of monetary policy or monetary offset.

To that end, I thought I'd try to reproduce the IS curve with the information transfer model (ITM). The results were pretty neat, and they follow in a long line of IT model results where everyone (monetarists, Keynesians) is correct, but only at certain times.

I started from this post and looked at the effect of monetary expansion (red arrows in the graph on the left below) on the interest rate (red lines). The model also incorporates the impact of monetary expansion on the price level -- it's a highly nonlinear system. The "information trap" criterion ∂P/∂MB = 0 is represented as a dotted line. This line is where monetary policy has no impact on the price level or nominal output -- effectively the liquidity trap Krugman describes in his 1998 paper. I sampled the points given by the rainbow dots in the graph below on the right (and the colors correspond in the rest of this post as well).

At each of the points in the graph on the right above, I evaluated the interest rate and nominal output (NGDP) for various values of monetary expansion (and contraction) and used those values to trace out an IS curve (interest rate vs output). One difference from Krugman is that I used nominal output instead of real output. I did run the numbers for real output, and the results were basically the same, but the graphs didn't look as nice. Maybe I'll put them in a follow up.

Here was the result:

The black points on the rainbow curves represent the starting value and the curves trace out monetary expansion and contraction. The gray points represent data from the US. Note that the curves are not required to follow the data exactly; one way to think of the curves is as the effect of the central bank steering the economy near the given point, while things like population growth steadily move the economy to the right. However, when the points do follow one of the curves closely, that gives an indication of how much power monetary policy has over the economy. In the graphic below, I show each curve in a normalized window (+/- 30% of the center value) that gives a better view of the changing IS curves over time:

In the graph above, we can clearly see 1) the monetarist view (higher interest rates are a sign monetary policy has been loose) as the purple and violet curves in the beginning, 2) the Keynesian view of a downward sloping IS curve (greens) where lowering interest rates boosts the economy and 3) the liquidity trap (orange and red), where the IS curves become almost vertical -- output becomes almost independent of interest rates and raising or lowering interest rates has no impact. Since this effect enters via the impact of monetary policy on the price level, it means that monetary policy is ineffective in the AD/AS framework and there is no monetary offset.

This answers two questions. First, the non-zero ECB interest rates (and raising or lowering them) probably had no monetary impact (they might have had a fiscal impact on the member states). This is how I interpreted the liquidity trap in the earlier post. Second, the economy doesn't need to be at the zero lower bound (ZLB) to experience a liquidity trap -- but it does have to have "low" interest rates close to the information trap criterion line in the graphs at the top of this post. This is the connection with Keynes -- the original liquidity trap he describes does not have to occur at the ZLB.

But what actually happens at the ZLB? We can use the last two red/orange graphs in the series if I show them with a linear scale:

It looks like pushing a piece of uncooked spaghetti against the countertop with the spaghetti bending away. Pushing down towards the ZLB produces a negative impact on NGDP i.e. "deflationary monetary expansion". Notice that there is actually a peak output for the curve on the left -- I imagine this is Krugman's point 2 in Figure 2 in his paper. In fact, here is one of the IS curves derived here (blue) alongside a schematic of the "traditional" IS curve (dashed) from e.g. Krugman's 1998 paper (along with the labels from that paper):

This is a possible resolution of the discussion I was having with Mark Sadowski in comments on the earlier post.