Monday, April 20, 2015

Do macro models need a financial sector?

Dan Davies had this analogy on twitter for macro without a financial sector:
@dsquareddigest: @Frances_Coppola @ericlonners it's as if there was an epidemic of hepatitis and half the doctors had to look up what the liver was for.
And if we look at e.g. the data presented in this blog post, we can see that the financial sector is indeed a sizable chunk of NGDP at 19.6% in 2013.


Let's try to build a picture of what such a "financialized" economy looks like starting from the maximum entropy view of the information equilibrium model. Borrowing pictures from that post, a snapshot of an ordinary economy looks something like this:


Each box represents a business or industry, and at any time they might find themselves on the decline or growing, with most growing near the average rate of economic growth for the whole economy.

As we can see in the pie chart, though, there should be at least one very large box that moves in concert: the government. Generally, due to coordination by the legislative branch, government spending can move as a single unit -- e.g. via across-the-board spending or tax cuts.

Now inside a single industry, the individual units won't necessarily be coordinated -- in fact, Ford and GM might be anti-correlated, with one surging in profits while the other loses market share. Domestic manufacturing in general can rise or decline (crowded out by imports), but overall the result will be a lot less coordinated than government spending (except e.g. in a recession, when all growth slows).

But what if the other big slice in the pie chart is coordinated? The financial sector could be as coordinated as government spending with markets effectively acting as the legislative branch. If we put government and the financial sector (to scale) in our snapshot, we get something that looks like this:


I've put government (in blue) growing at roughly the modal rate and the financial sector (in gray) outperforming it.

Now I've already looked at what happens when the government sector moves around; what we're concerned with today is the financial sector. If the financial sector is coordinated (through market exchanges or inter-dependencies), a big financial crisis can make the entire sector enter a low growth (or declining) state like this:


This is a far more serious loss of entropy than an uncoordinated sector of the same size with 50% (coordinated fraction = 0.5) of the states going from growth states to declining states, pictured here:


A calculation using the Kullback-Leibler divergence has the former version resulting in a loss of entropy of 4%, while the latter loses only 1%. In general, it looks like this:


One way to visualize the uncoordinated case is the dot-com bust, where there were many different actors in the sector, as opposed to the relatively small number of financial companies (q.v. "contagion") in the highly coordinated case.
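For concreteness, here is a minimal sketch of the kind of Kullback-Leibler calculation described above. The two-state space, sector share, and shock sizes are illustrative stand-ins (they won't reproduce the 4% and 1% figures, which depend on the actual growth-state distribution behind the plot):

```python
import numpy as np

def kl_divergence(q, p):
    """Relative entropy D_KL(q || p) for discrete distributions, in nats."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

# Baseline: a toy two-state economy (declining vs. growing boxes).
p = np.array([0.10, 0.90])
f = 0.196  # financial sector share of NGDP (the 2013 figure from the intro)

# Coordinated crisis: the whole financial slice jumps to the declining state.
q_coordinated = np.array([0.10 + f, 0.90 - f])

# Uncoordinated shock: only half the slice moves (coordinated fraction = 0.5).
q_uncoordinated = np.array([0.10 + 0.5 * f, 0.90 - 0.5 * f])

for label, q in [("coordinated", q_coordinated), ("uncoordinated", q_uncoordinated)]:
    print(f"{label}: D_KL = {kl_divergence(q, p):.3f} nats")
```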

Because the financial sector represents a large fraction of the US economy and is highly coordinated by exchanges (a significantly bad day on the S&P 500 is usually a significantly bad day on other exchanges, even around the world), it is plausible to posit that it can move as a single unit, much like the government sector (which is coordinated instead by political parties).

We can think of the financial sector F as analogous to a second "government" sector and write:

NGDP = C + I + F + G + NX

(This is a heuristic designation -- F would specifically be carved out of C, I, G and NX as appropriate, and the exact definition would take some econometrics work.)

Financial crises would be much like government austerity, except that they would be procyclical by definition -- being more likely when a recession happens, being the cause of a recession [1], or even being synonymous with a recession. A surging market is not very different from a surge in government spending; a collapsing market is not very different from a fall in government spending. That is to say, a good model of the financial sector would simply be to dust off those old models of the government sector.

Footnotes:

[1] I still think of recessions as avalanche events, but the financial sector can be the large rock that precipitates the cascade.

Sunday, April 19, 2015

Diamond-Dybvig as a maximum entropy model


I'm pretty sure this is not the standard way to present Diamond-Dybvig (it seems more commonly to be set up as a game theory problem). However, this presentation will allow me to leverage some of the machinery of this post on utility and information equilibrium. I'm also hoping I haven't completely misunderstood the model.

Diamond-Dybvig is originally a model of consumption in 3 time periods, but we will take that to be a large number of time periods (for reasons that will become clear later). Time $t$ will run between 0 and 1.

Let's define a utility function $U(c_{1}, c_{2}, ...)$ to be the information source in the markets

$$
MU_{c_{i}} : U \rightarrow c_{i}
$$

for $i = 1 ... n$ where $MU_{c_{i}}$ is the marginal utility (a detector) for the consumption $c_{i}$ in the $i^{th}$ period (information destination). We can immediately write down the main information transfer model equation:

$$
MU_{c_{i}} = \frac{\partial U}{\partial c_{i}} = k_{i} \; \frac{U}{c_{i}}
$$

Solving the differential equations (each one separates as $dU/U = k_{i} \; dc_{i}/c_{i}$), our utility function $U(c_{1}, c_{2}, ...)$ is

$$
U(c_{1}, c_{2}, ...) = a \prod_{i} \left( \frac{c_{i}}{C_{i}} \right)^{k_{i}}
$$

where the $C_{i}$ and $a$ are constants. The basic timeline we will consider is here:


Periods $i$ and $k$ are some "early" time periods near $t = 0$ with consumption $c_{i}$ and $c_{k}$, while period $j$ is a "late" time period near $t = 1$ with consumption $c_{j}$. We introduce a "budget constraint" that basically says that if you take your money out of a bank early, you don't get any interest. This is roughly the same as in the normal model, except now period 1 is the early period $i$ and period 2 is the late period $j$. We define $t$ to be $t_{j} - t_{i}$ with $t_{j} \approx 1$, so the bank's budget constraint is

$$
\text{(1) }\;\; t c_{i} + \frac{(1-t) c_{j}}{1+r} = 1
$$
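As a numerical sanity check on the two ingredients so far, here is a short sketch. The $k_{i}$, $t$, and $r$ values are made up purely for illustration; it verifies the information equilibrium condition for the Cobb-Douglas solution by finite differences and solves Eq. (1) for late-period consumption:

```python
import numpy as np

k = np.array([0.4, 0.6])   # made-up information transfer indices k_i
C = np.array([1.0, 1.0])   # normalization constants C_i
a = 1.0

def utility(c):
    """Cobb-Douglas solution U = a * prod_i (c_i / C_i)^k_i."""
    return a * np.prod((c / C) ** k)

# Check dU/dc_i = k_i * U / c_i by central finite differences
c, eps = np.array([0.8, 1.3]), 1e-6
for i in range(len(c)):
    dc = np.zeros_like(c)
    dc[i] = eps
    numeric = (utility(c + dc) - utility(c - dc)) / (2 * eps)
    print(numeric, k[i] * utility(c) / c[i])  # the two columns should agree

# Eq. (1): t*c_i + (1 - t)*c_j/(1 + r) = 1, solved for c_j
t, r, c_i = 0.25, 0.05, 0.9   # illustrative values
c_j = (1 - t * c_i) * (1 + r) / (1 - t)
print(c_j)
```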

The total available state space is therefore an $n$-dimensional polytope with vertices along axes $c_{1}$, $c_{2}$, ... $c_{n}$. For example, in three dimensions (periods) we have something that looks like this:


Visualizing this in higher dimensions is harder. Each point inside this region is taken to be equally likely (equipartition, or maximum information entropy). Since we are looking at a higher-dimensional space, we can take advantage of the fact that nearly all of the points are near the surface ... here, for example, is the probability density of the location of the points in a 50-dimensional polytope (where 1 indicates saturation of the budget constraint):


Therefore the most likely point will be just inside the center of that surface (e.g. the center of the triangle in the 3D model above). If we just look at our two important dimensions -- an early and late period -- we have the following picture:


The green line is Eq. (1), the bank's budget constraint (all green shaded points are equally likely, and the intercepts are given by the constraint equation above), and the blue dashed line is the maximum density of states just inside the surface defined by the budget constraint. The blue 45-degree line is the case where consumption is perfectly smoothed over every period -- which is assumed to be the desired social optimum [0]. The most likely state with equal consumption in every period is given by E in the diagram.
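The earlier claim that nearly all of the state space volume sits just inside the budget surface is easy to check with a Monte Carlo sketch. For simplicity, this assumes the constraint coefficients have been absorbed into rescaled consumption variables, so the region is a standard solid simplex:

```python
import numpy as np

rng = np.random.default_rng(42)
n, samples = 50, 100_000   # 50 consumption periods

# Uniform points in the solid simplex {x >= 0, sum(x) <= 1}: a
# Dirichlet(1,...,1) draw lies on the face sum(x) = 1, and scaling by
# U^(1/n) makes the point uniform over the interior volume.
face = rng.dirichlet(np.ones(n), size=samples)
radius = rng.uniform(size=(samples, 1)) ** (1.0 / n)
points = face * radius

# Saturation of the budget constraint: 1.0 means on the surface.
saturation = points.sum(axis=1)
print(np.percentile(saturation, [5, 50, 95]))
# roughly [0.94, 0.99, 1.00]: nearly all the volume hugs the surface
```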

The "no bank" solution is labeled NB where consumption in the early period is $c_{i} \approx 1$. The maximum entropy solution where all consumption smoothing (and even consumption "roughening") states are possible because of the existence of banks is labeled B.

The utility level curves are derived from the Cobb-Douglas utility function at the top of this post. You can see that in this case B is at higher utility than E or NB, and that having banks allows us to reach closer to E than NB.

If people move their consumption forward in time (looking at a time $t_{k} < t_{i}$), we can get a bank run as the solution utility (red, below) passes beyond the utility curve that goes through the NB solution. Here are the two cases where there isn't a run (labeled NR) and there is a run (labeled R):


Of course, the utility curves are unnecessary for the information equilibrium/maximum entropy model, and we can get essentially the same results without referencing them [1]. The one difference is that in the maximum entropy case we can only say a run happens when R reaches $c_{i} \approx 1$: the condition dividing the two solutions becomes "consumption in the early period equals consumption in the case of no banks" rather than "utility of consumption in the early period equals utility of consumption in the case of no banks".

I got into looking at Diamond-Dybvig earlier today because of this post by Frances Coppola, who wanted to add in a bunch of dynamics of money and lending with a central bank. The thing is that the maximum entropy approach is agnostic about how consumption is mediated or about the source of the interest rate, so it is actually a pretty general mechanism that should be valid across a wide array of models. In fact, we see here that the Diamond-Dybvig mechanism derives mostly from the idea of the bank budget constraint (see footnote [1], too): in any model where banks have a budget constraint of the form Eq. (1) above, you can achieve bank runs. Therefore deposit insurance generally works by alleviating the budget constraint. No amount of bells and whistles can help you understand this basic message better.

It would be easy to add this model of the interest rate, so that (allowing the possibility of non-ideal information transfer) we take

$$
r \leq \left( \frac{1}{k_{p}} \; \frac{NGDP}{MB} \right)^{1/k_{r}}
$$

This would be an equality in the ideal information transfer (information equilibrium) case. Adding in the price level model, we'd have two regimes: high and low inflation. In the high inflation scenario, monetary expansion raises interest rates (and contraction lowers them); in the low inflation scenario, monetary expansion lowers interest rates (and contraction raises them). See e.g. here. I'll try to work through the consequences of that in a later post ... it mostly moves the bank budget constraint Eq. (1).
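For reference, here is the equality (ideal transfer) case as a function. The inputs below are purely hypothetical placeholders; in practice $k_{p}$ and $k_{r}$ have to be fit to data, so the printed number is not a realistic interest rate:

```python
def interest_rate(ngdp, mb, k_p, k_r):
    """Ideal information transfer case: r = ((1/k_p) * NGDP / MB)**(1/k_r)."""
    return ((ngdp / mb) / k_p) ** (1.0 / k_r)

# Hypothetical placeholder inputs (billions of dollars, unfit constants)
print(interest_rate(ngdp=17_000.0, mb=4_000.0, k_p=1.0, k_r=3.0))
```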

Footnotes:

[0] Why? I'm not sure. It makes more sense to me that people would want to spend more when they take in more ... I guess it is just one of those times in economics where this applies: ¯\_(ツ)_/¯

[1] In that case the diagrams are much less cluttered and look like this:




Saturday, April 18, 2015

Micro stickiness versus macro stickiness

The hot topic in the econoblogosphere appears to be nominal rigidity. Here's Scott Sumner. Here's David Glasner. Here's my take on Glasner. Here are some other bits from me.

Anyway, I think it would be worthwhile to discuss what is meant by "sticky wages".

One way to see sticky wages is as individual wages that don't change: total microeconomic stickiness. This position is approximately represented by the paper from the SF Fed that Sumner discusses at the link above. However, the model they present not only doesn't look like the data at all, it is more representative of completely sticky wages than of wages that are merely sticky downward. It's so bad that I actually made a mistake looking at the model -- I didn't read the axes correctly, as the central spike is plotted against the right axis. Here is a cartoon of what that graph should look like if the spike and the rest of the distribution were plotted on the same axes:


The light gray bars represent the distribution of wage changes at a time before the recession, and the dark bars represent the same thing after a recession. Basically, a big spike at zero wage change in both cases.

Another way to see sticky wages is as being sticky downward. This is how I originally looked at the model from the SF Fed. The picture you have is very few wage decreases -- mostly wage increases and zero changes -- and it represents individual sticky-downward wages (at the micro level):


These are the two sticky microeconomic cases.

Now what would sticky macroeconomic wages look like? There are two possibilities here: 1) wages are individually sticky and 2) wages are collectively sticky, but individually flexible. Case 1 looks like the SF model above -- a spike at zero -- or the downward rigidity in the second graph. 

Case 2 looks like a distribution with constant mean -- total nominal wages keep the same average growth before and after the recession. Individual wage changes fluctuate around from positive to negative. Case 2 is a bit harder to visualize with a single graph, so here is an animation:


The mean I am showing is the mean of the flexible individual wages, not the ones dumped into the zero wage change bin at the onset of the recession (I also exaggerated the change in the normalization at the onset of the recession so it is more obvious what is happening).

Here is what that case looks like in the same style as the previous graphs:


You may be curious as to why, even with the spike at zero wage change, I still consider wages to be "flexible" individually. In the case of the SF model, ~60-90% of wages are in the zero change bin; that's sticky. In all of the others, only ~10% of wages are in the zero change bin -- ~90% of wages are changing by amounts up to 20% or more. I wouldn't call that individually sticky at all. Additionally, before and after a recession, the fraction in the zero bin only goes up by a few percentage points.
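A toy generator makes the sticky macro, flexible micro picture concrete. The spike fractions and the flexible (normal) piece below are made-up stand-ins, not the SF Fed numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers = 100_000

def wage_changes(zero_frac, mean_growth, sd=0.05):
    """Sample of wage changes: a spike at zero plus a flexible piece."""
    changes = rng.normal(mean_growth, sd, size=n_workers)
    changes[rng.uniform(size=n_workers) < zero_frac] = 0.0
    return changes

# Recession: the zero spike grows a few percentage points, but the mean
# of the flexible piece stays put (sticky macro, flexible micro).
before = wage_changes(zero_frac=0.10, mean_growth=0.03)
after = wage_changes(zero_frac=0.13, mean_growth=0.03)
print(before.mean(), after.mean())                # aggregate growth barely moves
print((before == 0).mean(), (after == 0).mean())  # zero-bin fractions
```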

And that is really what is happening! Here is the data from the SF Fed paper:


That looks like sticky macro, flexible micro wages (no change in the mean, individual changes of up to 20%).

Note also that this data looks nothing like the model presented in the paper (the first graph from the top above) or sticky downward individual wages (second graph from the top above).

There remains the question of whether there is any macro wage flexibility -- let's look at the case of flexible macro, flexible micro wages, again best seen as an animation. In this case the mean of the flexible piece of the distribution goes up and down:


How does this look if there's a recession and wage growth slows in the style of the graphs above?


This actually looks qualitatively a bit more like the data than the sticky macro, flexible micro case -- there are some light gray bars sticking out above the distribution on the right side, as they do in the data. However, that effect is pretty small; to a good approximation we have sticky macro, flexible micro wages.


The animation of the flexible macro, flexible micro case illustrates the theoretical problem brought up in Glasner's post:
This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists.
In a sense, he is saying we have no idea how the wages collectively move to restore equilibrium through individual changes. Nothing is guiding the changing location of the mean in the animation -- there is no Walrasian auctioneer steering the economy.

The sticky macro, flexible micro case solves this problem -- but only if the equilibrium price vector is an entropy maximizing state and not e.g. a utility maximizing state (see here for a comparison). Since the distribution doesn't change, there is no change requiring coordination of a Walrasian auctioneer. The process of returning to an equilibrium from disequilibrium is simply the process of going from an unlikely state to a more likely state.

Let me use an analogy from physics. Consider a box of gas molecules, initially in equilibrium at constant density across the box (figure on the left). If we give the box a jolt, we can set up a density oscillation such that more molecules (higher pressure) are on one side than the other (figure on the right):


Eventually the molecules return to the equilibrium (maximum entropy) state on the left, guided only by the macro properties (temperature, volume, number of molecules). The velocity distribution doesn't change very much (i.e. the temperature doesn't change very much). We simply lose the information imparted by the shock as entropy increases.

The disequilibrium state with higher pressure on one side of the box is analogous to the disequilibrium price vector described by Glasner. The macro properties are NGDP and its growth rate. The velocity distribution is analogous to the wage change distribution. And the process of entropy increasing to its maximum is the process of tâtonnement.

The key idea to remember here is that there is nothing that violates the microscopic laws of physics in the box on the right -- that state can be achieved by chance alone! It's just very very unlikely and you need the coordination of the jolt to the box to induce it.
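A toy version of this relaxation is below, with a random walk standing in for the molecular dynamics and a periodic box for simplicity (all parameters are arbitrary). The coarse-grained entropy starts low because of the "jolt" and relaxes to its maximum, log(bins):

```python
import numpy as np

rng = np.random.default_rng(1)
n_mol, n_steps, bins = 10_000, 500, 10

# "Jolted" initial state: 70% of the molecules start in the right half.
right = rng.uniform(size=n_mol) < 0.7
x = np.where(right, rng.uniform(0.5, 1.0, n_mol), rng.uniform(0.0, 0.5, n_mol))

def density_entropy(x):
    """Shannon entropy (nats) of the coarse-grained density."""
    counts, _ = np.histogram(x, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return -float(np.sum(p * np.log(p)))

print("initial entropy:", density_entropy(x))
for _ in range(n_steps):
    x = (x + rng.normal(0.0, 0.05, n_mol)) % 1.0  # diffusive step, periodic box
print("final entropy:  ", density_entropy(x), "max =", np.log(bins))
```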

You may have noticed that I didn't discuss the spike at zero wage change very much [1]. I think it is something of a red herring and the description of wage stickiness would be qualitatively the same without it. In this old blog post of mine, I argue that the spike at zero (micro wage stickiness) and involuntary unemployment are two of the most efficient ways for an economy to shed entropy (i.e. NGDP) during an adverse shock/recession.

In the end, the process looks like this:

  1. An economic shock hits, reducing NGDP
  2. The economy must shed this 'excess' NGDP through the options open to it
  3. There are sticky macro prices, so the shock can't manifest as a significant change in the distribution of wage changes
  4. Therefore some of the NGDP is shed through microeconomic stickiness (spike at zero) and involuntary unemployment (effectively reducing the normalization of the distribution of wage changes)
  5. As the economy grows (entropy increases), the information in the economic shock fades away until the maximum entropy state consistent with NGDP and other macro parameters is restored
Footnotes:

[1] The spike at zero makes me think of a Bose-Einstein condensate ...



Friday, April 17, 2015

Macro prices are sticky, not micro prices

Two not very sticky prices ...

David Glasner, great as always:
While I am not hostile to the idea of price stickiness — one of the most popular posts I have written being an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input, and most output, prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness ... 
The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. ... 
Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, that a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. ... 
This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists.

Calvo pricing is an ad hoc attempt to model an entropic force with a microeconomic effect (see here and here). As I commented below his post, assuming ignorance of this process is actually the first step ... if equilibrium is the most likely state, then it can be achieved by random processes:
Another way out of requiring sticky micro prices is that if there are millions of prices, it is simply unlikely that the millions of (non-sticky) adjustments will happen in a way that brings aggregate demand into equilibrium with aggregate supply. 
Imagine that each price is a stochastic process, moving up or down +/- 1 unit per time interval according to the forces in that specific market. If you have two markets and assume ignorance of the specific market forces, there are 2^n possibilities with n = 2, i.e. 4 in total: 
{+1, +1}, {+1, -1}, {-1, +1}, {-1, -1} 
The most likely possibility is no net total movement (the “price level” stays the same) — present in 2 of those choices: {+1, -1} and {-1, +1}. However, with two markets, the error is ~1/sqrt(n) = 0.7 or 70%. 
Now if you have 1000 prices, you have 2^1000 possibilities. The most common possibility is still no net movement, but in this case the error (assuming all possibilities are equal) is ~1/sqrt(n) = 0.03 or 3%. In a real market with millions of prices, this is ~ 0.1% or smaller.
In this model, there are no sticky individual prices — every price moves up or down in every time step. However, the aggregate price p = Σ p_i moves a fraction of a percent. 
Now the process is not necessarily stochastic — humans are making decisions in their markets, but those decisions are likely so complicated (and dependent e.g. on their expectations of others' expectations) that they could appear stochastic at the macro level. 
This also gives us a mechanism to find the equilibrium price vector — if the price is the most likely (maximum entropy) price through “dither” — individuals feeling around for local entropy gradients (i.e. “unlikely conditions” … you see a price that is out of the ordinary on the low side, you buy). 
This process only works if the equilibrium price vector is the maximum entropy (most likely) price vector consistent with macro observations like nominal output or employment. 
http://informationtransfereconomics.blogspot.com/2015/03/entropy-and-walrasian-auctioneer.html
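That back-of-the-envelope scaling is easy to verify with a quick simulation (a sketch; the trial count is arbitrary). The sum of n independent +/-1 moves is generated with a binomial draw to keep it cheap at large n:

```python
import numpy as np

rng = np.random.default_rng(123)
trials = 2_000

for n in (2, 1_000, 1_000_000):
    # net movement of n prices, each moving +/-1 per step (none sticky)
    total = 2 * rng.binomial(n, 0.5, size=trials) - n
    rms = np.sqrt(np.mean(total.astype(float) ** 2)) / n
    print(f"n = {n}: aggregate moves ~{rms:.4f}, 1/sqrt(n) = {n ** -0.5:.4f}")
```

With n = 2 the aggregate moves about 70% per step, with n = 1,000 about 3%, and with a million prices about 0.1% -- the numbers quoted in the comment.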

The foundation


In writing the previous post, I looked up an old blog post I'd read by Cosma Shalizi about econophysics that I found very influential (listed below). That got me to thinking about the foundation of this blog along with what inspired my thinking and approach.

Aside from the thermodynamics I learned in school (from Reif and Landau and Lifshitz), my economics and information theory mostly come from the internet (and some work-related stuff ... Terry Tao is pretty awesome). The Feynman Lectures on Computation are good too. This article [pdf] and the related paper cited in it are swimming around in the background, too. When I was in graduate school I considered going into finance, as did many physicists in the late 90s and early 2000s, and this book was my reference before my emails and interviews.

These links alone don't necessarily cover all of the technical details, but they do at least point to (or give some important search terms for) the resources and therefore were my starting points.

Here is the list:

Claude Shannon

You really don't need much more than this in terms of information theory to understand the next paper or this blog ...

Peter Fielitz and Guenter Borchardt

This paper is the basis of the information equilibrium model; it was the version available at the time. The latest version (with a different title) is here.

Noah Smith

I started my blog a week after that post.

Noah Smith

This let me know the list of things I needed to learn before making a fool of myself, but presented in Noah's snarky style.

Cosma Shalizi

This covers the history of physicists attempting to point out how economists are wrong, and largely being either incorrect or ignored.

Cosma Shalizi

The greatest blog post ever written; also an excellent way to think about markets as human-created algorithms solving an optimization problem.

Scott Sumner

This came out two weeks before I started my blog. See also here (especially the footnote).

Paul Krugman

The macro of Paul Krugman and a good history lesson. The following few links as well ...

Paul Krugman

Paul Krugman

Paul Krugman

Brad DeLong

I don't link to this very much, but it is behind much of the presentation of the information equilibrium model in terms of changing curves (e.g. here and here).

There's no natural constituency for information equilibrium

One obstacle to the uptake of the information equilibrium framework is that it has no natural constituency. I allude to this in this Socratic dialog (and present a list of things that go against the grain here, as well as what the approach says about common topics in the econoblogosphere here), but I thought I'd talk more about it, as I said in the previous post.

• It is a new approach

This would upset the "macro works fine" people like Paul Krugman.

• It gives credence to a lot of economic orthodoxy

This would upset the so-called heterodox people such as MMT and post-Keynesians, as well as macro reform people who thought the 2008 financial crisis should up-end economists' apple cart. These pieces by Münchau and Coppola are in the latter vein.

• It is a very simple theoretical framework

This would upset anyone who assumes macroeconomies are complex (pretty much everyone).

• It says that the quantity theory of money (and 'monetarist' economics) is a good approximation in particular limits

This would upset the people who aren't monetarists.

• It says the IS-LM model is a good approximation in particular limits (along with 'Keynesian' economics)

This would upset most economists who aren't Paul Krugman, Brad DeLong or Mark Thoma. Even they think of the IS-LM model as an aid to explanation rather than a real model. Here's Simon Wren-Lewis extolling the virtues of a new macro textbook that gets rid of the LM curve. (Not that the information transfer model couldn't reconstruct the newer diagram-based model in the text.)

• There is no specific role for expectations

This would upset pretty much any economist, but particularly market monetarists like Scott Sumner and Nick Rowe. You can construct expectations in the framework (here, here); however, they seem to be the same as other market forces.

• There is no specific need for microfoundations until you see market failures

This would upset both the "even wrong microfoundations are useful" people like Stephen Williamson and the agent-based model people, which unfortunately includes most econo-physicists (see also here for a great round-up).

• There is no representative agent

This would upset the people who use representative agents to get around the SMD theorem, i.e. everyone not named Alan Kirman [pdf].

• There is no micro reason for some macro effects

This would upset the "story" people who need to hear a plausible story to believe in a particular model -- something said by both Paul Krugman and Scott Sumner.

• It is a mathematical, axiomatic approach (in the style of Newtonian mechanics)

This would upset the people who refer back to the old writings to figure out what Keynes (or Hayek or Hume or Ricardo or ...) "really meant" (Krugman's 'Talmudic scholars') as well as the people who think there's too much math (or the wrong kind of math) in economics.

...

Basically, there is something for everyone to dislike! ... blog posts taken individually can alienate left and right, reform and status quo.

Of course, if you're doing something different you're going to ruffle at least a few feathers. And having pieces that different sides dislike also means you have pieces that different sides like ... at least one commenter (Ben Kloester) referred to this as allowing "your model to be all things to all people".

I do hope that the multi-faceted nature gives some assurance that this approach isn't ideological.

Thursday, April 16, 2015

What did I miss?

I'm back from my (too short) vacation.

I. More links!

Tom Hickey discusses one of my posts and has a good analogy:
The analogy of the "scissors" of supply and demand can be called upon to summarize [markets] albeit simplistically. As long as each blade is functioning as it should, e.g., sharp enough to do the work, and the scissors is working correctly as a system, with the blades operating in alignment, the mechanism cuts as it should. However, if one of the blades is not functioning as it should, e.g., is dull, or the scissors is out of alignment, e.g., the fulcrum screw loosens, then the scissors no longer operates correctly and the cuts are either off or don't happen at all, that is, the system fails.
The discussion in the comments is also very worthwhile (I added to it a bit), and illustrates the fact that there is no natural constituency to embrace (or in some cases even look at) the information transfer/information equilibrium framework in economics ... I'll go into more detail in a new post.

II. Linear models!


Paul Krugman has a post about linear models:
nonlinear modeling all too easily turns into a game with no rules

I'd add that a lot of things turn macro into a game with no rules, including expectations and ad hoc microfoundations with representative agents. Part of the reason I started this blog was to create a framework to help eliminate possibilities!

One of the articles that sparked Krugman's post is this one by Wolfgang Münchau, which I commented on, though I am having trouble seeing the replies that I received on FT ... anyway, Münchau says:

The second is linearity — the idea of a straight-line relationship between events. Standard macroeconomic models are complex, and their system of equations is linear. But if you want to understand why the economy did well before 2007, why there was a break in 2008 and why the path of economic output never returned to its previous trajectory, one would require models that incorporate the notion of non-linearity, and even chaos.

The complexity of macroeconomics is frequently asserted without proof. The fact that an economy is made up of millions of complicated pieces (humans, firms, etc.) and that no linear model has been unequivocally declared successful doesn't mean that macroeconomics is complex.

Here, for example, I've put together a linear model that describes this lack of return to the pre-2007 path without chaos. Just because you can't think of a linear model doesn't mean one doesn't exist!

In general, though, the problem with nonlinear models is that there isn't enough data to eliminate them. Macro data is only a little bit informative, even for simple linear models -- anything more is like string theory: models without data to prove or disprove them.


I did like this follow-up letter from Michael Kuczynski:
Macroeconomics is about economic systems moving around to live within their relatively fluid accounting constraints: the physics of gases may be a better starting point than rarefied requirements in mathematical proof.
III. Groupthink!


This is terrible news for dither.