## Wednesday, July 1, 2015

### The Sadowski Theory of Money

Tom Brown mentioned that Mark Sadowski has another post over at Marcus Nunes' blog; it includes what might be the most hilarious monetarist volley yet. But, intriguingly, it points to an opening for full-scale acceptance of the information transfer model. I'll call it the Sadowski Theory of Money (STM). Mark first shows us a plot of the monetary base (SBASENS) and the CPI. I have no idea what happens next, or in the follow-up post, because this is effectively what he shows:

When I was back in college in the 1990s, I was once watching some local cable access program late at night. In it, a suited presenter -- likely at one of the helium wells out near Amarillo -- went through a quite detailed model of how helium escapes from underground traps. In the end, he came out with the result that the measured levels of helium meant the Earth couldn't be more than 6,000 years old. I started weeping.

Regardless of how adept people are at mathematics or statistics, it does not indicate how good they are at the pursuit of knowledge. The reasons range from being blinded by ideology or religion to a lack of curiosity about the results they produce. I think Mark is an example of the former.

I'm not quite as emo as I was back in college, so the reason I couldn't get past the first graph in Mark's post was that I busted out laughing. If you zoom out from the graph, you can see why:

Over the course of the history of economic thought (starting from Hume and continuing through Milton Friedman and beyond), there was a theory that was called the Quantity Theory of Money. In its most rudimentary form, it said that increases in the amount of money (say, the monetary base MB) led to an increase in the price level (P),

P ~ MB

or, taking the log of both sides (to compare growth rates):

log P ~ log MB

This is actually somewhat successful as an initial stab at a macroeconomic model that persists to this day as at least a starting point. Mark says, "Balderdash! The QTM is so Eighteenth century." He says we really need a new model. His model is this:

log P ~ k log MB

And we are living in a new modern era of monetary policy effectiveness, so only data since 2009 is relevant! So Mark studies the correlation between log P and log MB, scaling the variables and the axes in order to derive a value for k. An excellent fit is given by [1]

log P ~ 0.125 log MB
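To see what a fit like this involves, here's a minimal sketch using synthetic data standing in for the post-2009 SBASENS and CPI series (the actual data comes from FRED and isn't reproduced here; the intercept and base levels are made up for illustration):

```python
import numpy as np

# Illustrative sketch only: synthetic stand-ins for the post-2009
# log monetary base and log price level, with an arbitrary intercept.
rng = np.random.default_rng(42)

k_true = 0.125   # the slope claimed by the fit in the text
c_true = 2.5     # arbitrary intercept, purely for illustration

log_mb = np.linspace(np.log(1.7e6), np.log(4.0e6), 72)  # hypothetical monthly base levels
log_p = c_true + k_true * log_mb + rng.normal(0, 0.002, log_mb.size)

# Least-squares fit of log P = k log MB + c
k_fit, c_fit = np.polyfit(log_mb, log_p, 1)
print(f"fitted k = {k_fit:.3f}")  # recovers k close to 0.125
```

The point is only that a one-parameter log-log regression is all the "theory" amounts to here.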

"Monetary policy is (a tiny bit) effective!" Mark shouts from the hilltops (after doing some rather unnecessary math I guess so he doesn't have to come out and state the equation above), "We are governed by the Sadowski Theory of Money!"  We can see how Mark has thrown hundreds of years of economic thought out the window by putting the STM on this graph of the QTM from David Romer's Advanced Macroeconomics (along with a point representing the US from 2009 to 2015):

Ok, enough with the yuks. Because in truth Mark Sadowski might be my first monetarist convert. That's because the model

log P ~ k log MB

is effectively an information transfer model (I had the codes ready to fit that data above) ... but just locally fit to a single region of data. You could even find support to change the value of k, allowing k to change from about 0.763 to about 0.125 [2] going through the financial crisis. Here is the fit to 1960-2008:

You're allowed to do what Mark did in his graph in the information transfer model. But then you have to ride the trolley all the way to the end. That change in k from 0.763 to 0.125 over the course of the financial crisis would be interpreted as a monetary regime change. Let's explore what happened to the relationship between a variation in P and MB:

δ (log P) ~ δ (k log MB)

δP/P ~ k δMB/MB

So the fractional change in P (inflation) is k times the fractional change in MB (the monetary base growth rate). Between 2008 and 2010, according to the STM, k dropped by a factor of about 0.763/0.125 ~ 6. That is to say, monetary policy suddenly became six times less effective than it was before the financial crisis.

Before 2008, a 100% increase (a doubling) of the monetary base would have led to a 70% increase in the price level. After 2008, it leads to only a 9% increase in the price level.
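Those percentages follow directly from the exponents: a doubling of MB raises P by a factor of $2^{k}$. A quick check of the arithmetic:

```python
# A doubling of the monetary base raises the price level by a factor 2**k.
k_before, k_after = 0.763, 0.125

rise_before = 2**k_before - 1   # fractional rise in P before the crisis
rise_after = 2**k_after - 1     # fractional rise in P after the crisis

print(f"{rise_before:.0%}, {rise_after:.0%}")  # prints "70%, 9%"
```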

Monetary policy suddenly becoming far less effective ... sounds exactly like a liquidity trap to me.

The Sadowski Theory of Money is an information transfer model with a sudden onset of a liquidity trap during the financial crisis [3].

Footnotes:

[1] There is a constant scale factor for the price level of about 84.1 for the CPI given in units of 1984 = 100; I will call it a "Sadowski" in honor of the new theory and its discoverer. The equation is shown in the graphs.

[2] Note that k = (1/κ - 1) so k = 0.763 is κ = 0.57 (near the QTM limit of 1/2) and k = 0.125 is κ = 0.89 (near the IS-LM limit of 1).

[3] Of course, 'M0' (MB minus reserves) works best empirically and has no sudden onset of the liquidity trap, but rather a gradual change from the 1960s to today.

### Paul Krugman's definition of a liquidity trap

In the discussion in this post, I mentioned how I viewed the liquidity trap and zero lower bound:
[the phrase "zero lower bound"] is used to mean the appropriate target nominal interest rate (from e.g. a Taylor rule for nominal rates, or estimate of the real rate of interest after accounting for inflation) is less than zero (i.e. because of the ZLB, you can't get interest rates low enough). I've usually stuck to [that] definition ...
I am happy to report that it is essentially the same as Paul Krugman's definition [pdf] of a liquidity trap:
Can the central bank do this? Take the level of government purchases G as given; from (2) and (5) this will tell you the level of C needed to achieve full employment; (4) will tell you the real interest rate needed to get that level of C; and since we already have both the current and future price levels tied down, this implies a necessary level of the nominal interest rate. So all the central bank has to do is increase the money supply until the rate is at the desired level. But what if the required nominal rate is negative? In that case monetary policy can’t get you there: once the interest rate hits zero, people will just hoard any additional cash – we’re in the liquidity trap.
Bold emphasis was italic emphasis in the original. As Paul Krugman says: the required (target) nominal rate is negative. The observed nominal rate is unimportant except as an indicator of the point where people hoard cash.

H/T to Robert Waldmann for pointing me to the reference.

## Tuesday, June 30, 2015

### The Euler equation as a maximum entropy condition

In the discussion of the RCK model on these two posts I realized the Euler equation could be written as a maximum entropy condition. It's actually a fairly trivial application of the entropy maximizing version of the asset pricing equation:

$$p_{i} = \frac{\alpha_{i}}{\alpha_{j}} \frac{\partial U/\partial c_{j}}{\partial U/\partial c_{i}} p_{j}$$

To get to the typical macroeconomic Euler equation, define $\alpha_{i}/\alpha_{j} \equiv \beta$ and re-arrange:

$$\frac{\partial U}{\partial c_{i}} = \beta \; \frac{p_{j}}{p_{i}} \; \frac{\partial U}{\partial c_{j}}$$

The price at time $t_{j}$ divided by the price at time $t_{i}$ is just (one plus) the interest rate $R$ (for the time $t_{j} - t_{i}$), so:

$$\frac{\partial U}{\partial c_{i}} = \beta (1 + R) \; \frac{\partial U}{\partial c_{j}}$$

And we're done.

The intuition behind the traditional economic Euler equation is (borrowed from these lecture notes [pdf])
The Euler equation essentially says that [an agent] must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other [if utility is maximized].
The intuition for the maximum entropy version is different. It does involve the assumption of a large number of consumption periods (otherwise the intertemporal budget constraint wouldn't be saturated), but that isn't terribly important. The entropy maximum is actually given by (Eq. 4 at the link, re-arranged and using $p_{j}/p_{i} = 1 + R$):

$$c_{j} = c_{i} (1 + R)$$

The form of the utility function $U$ allows us to transform it into the equation above, but this is the more fundamental version from the information equilibrium standpoint. This equation says that since you could be anywhere along the blue line between $c_{j}$ maximized and $c_{i}$ maximized on this graph:

the typical location for an economic agent is in the middle of that blue line [1]. Agents themselves might not be indifferent to their location on the blue line (or even the interior of the triangle), but a maximum entropy ensemble of agents is. Another way to put it is that the maximum entropy ensemble doesn't break the underlying symmetry of the system -- the interest rate does. If the interest rate were zero, all consumption periods would be the same and consumption would be equal. A finite interest rate transforms both the coordinate system and the location of the maximum entropy point. You'd imagine deforming the n-dimensional simplex so that each axis is scaled by $(1 + r)$, where $r$ is the interest rate between $t_{i}$ and $t_{i + 1}$.
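To make the "middle of the blue line" claim concrete, here is a sketch of my own (not from the original derivation) for the two-period case: uniformly sample the budget set $c_{i} + c_{j}/(1+R) \leq W$ and check that the average (maximum entropy) point satisfies $c_{j} = c_{i}(1+R)$:

```python
import numpy as np

# Uniformly sample the two-period budget set c_i + c_j/(1+R) <= W
# (a triangle) and check the average point satisfies c_j = c_i (1 + R).
# W and R are illustrative values.
rng = np.random.default_rng(0)
W, R = 1.0, 0.05
N = 200_000

# Rejection sampling from the bounding box of the triangle
ci = rng.uniform(0, W, N)
cj = rng.uniform(0, W * (1 + R), N)
inside = ci + cj / (1 + R) <= W

ratio = cj[inside].mean() / ci[inside].mean()
print(ratio)  # close to 1 + R = 1.05
```

The centroid of the budget triangle sits at $(W/3, \; W(1+R)/3)$, so the ratio of average consumptions is exactly $1 + R$, reproducing the maximum entropy condition without any utility function.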

Footnotes:

[1] The graph shown is actually for a large finite dimensional system (a large, but finite number of consumption periods); the true entropy maximum would fall just inside the blue line/intertemporal budget constraint.

## Monday, June 29, 2015

### The importance of transversality conditions (more on the Ramsey model)

There has been some fun and interesting discussion of the Ramsey-Cass-Koopmans [RCK] model on this post of mine. Sorry to those in the discussion that I haven't gotten back to the comments yet -- I've been taking my time to think about what's been brought up. I noted at the end of my post that:
Now there is some jiggery-pokery in the [RCK] model -- economists include "transversality conditions" that effectively eliminate all other possible paths [in the phase diagram].
This was not some throw-away line in the conclusion; it was the key point of the post. I think LAL's comment is a really useful way to understand how Nick Rowe and I ended up talking past each other:
I think you should pay more attention to the transversality conditions...there is a lot more economic content to them than you are realizing...
I completely agree that there is a lot of economic content! I think this is what Nick Rowe thought I kept missing when he metaphorically threw the eraser at me sitting in the back of the class ("What's the difference between pendulums and people? People have plans and expectations about the future, that affect their current actions."). However, my main point was that the transversality conditions (enforcing those plans and expectations about the future) are practically all of the economic content of the RCK model -- the RCK equations are somewhat superfluous.

This system of differential equations has a "saddle path" solution that runs from a pair of initial conditions for capital and consumption to the equilibrium point. In the next graph I show the saddle path (black), the (approximate) equilibrium (black point), along with 2000 paths randomly distributed within 1% of the initial conditions that lead to the saddle path:

As you can see, most of these paths diverge from the saddle path -- and that's just for being 1% off. So given measurement error and random events, you are unlikely to find yourself exactly on the saddle path.
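The divergence can be illustrated with a toy linearization (a sketch of my own, not the actual RCK equations): near a saddle point the dynamics separate into a stable direction and an unstable direction, and any small offset along the unstable direction grows exponentially.

```python
import numpy as np

# Toy linearized saddle (illustrative, not the actual RCK equations).
# x is the offset along the unstable direction, with dx/dt = lam * x.
# The saddle path itself is x = 0; a ~1% offset blows up exponentially.
rng = np.random.default_rng(1)
lam, T = 1.0, 8.0                  # illustrative growth rate and time horizon

x0 = rng.normal(0.0, 0.01, 2000)   # 2000 paths starting within ~1% of the saddle path
x_final = x0 * np.exp(lam * T)     # closed-form solution of dx/dt = lam * x

frac_diverged = np.mean(np.abs(x_final) > 0.5)
print(frac_diverged)  # nearly all paths end far from the saddle path
```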

One of the main purposes of the transversality conditions is to say nearly all of those 2000 paths don't make economic sense. I borrowed this particular description of the argument/intuition from these lecture notes [pdf], but in general this is what Nick was getting at:
Imagine a path along which consumption is falling and k is therefore growing very large. Along such a path the product u'(c) k would grow rapidly, probably causing the limit [to violate the transversality conditions]. Such a path could not be optimal, however, because the economy is accumulating excessive hoards of capital, the output of which never gets consumed because it is reinvested instead. It would pay for the economic planner to slightly and permanently increase consumption, an option that is perfectly feasible given the rapid growth in k.
What this means in the context of the RCK model is that if the economy finds itself on one of those 2000 paths that aren't the saddle path, the economic agents realize a better deal can be had by reducing capital (or reducing consumption) in order to bring the economy back to the optimal saddle path. The result of that (exaggerated for clarity) is a set of path segments of the RCK model (blue) along with corrections (orange dashed) intended to bring the economy back to the saddle path (black line):

You can think of the blue segments as the times when the economy is obeying the RCK model and the orange loops as the times when the economics enforcing the transversality conditions is driving the economy. And this is where we come to my point: most paths would consist of mostly those loops since nearly all paths in the neighborhood of the saddle path diverge from the equilibrium point.

That is to say the typical path would be incredibly jagged [1]. Most of the time it would not be following the RCK saddle path -- or even obeying the RCK model equations -- but instead would be on some correction jog taking the economy back to the saddle solution because of the transversality conditions [2]. A typical path would look like this:

It would be entirely orange corrections (due to transversality conditions), rather than blue RCK solution paths. An ensemble (or path integral) of such paths would average (integrate) to the RCK saddle solution (which I mentioned in my reply to Nick Rowe). But an ensemble would also do that without the transversality conditions. If we just average all the 2000 paths [3] in the graph at the top of this post, we get the result we want (the saddle path, approximately) without assuming the transversality conditions or the economics they entail:

That means the transversality conditions -- which, understood in the neoclassical sense, end up representing most of the economics of the RCK model (i.e. the reason only 1 of those 2000 paths turns out to be valid) -- are actually unnecessary. The RCK model equations (at the top of the post) should be understood as establishing all possible ways to consistently divvy up capital and consumption over time. The transversality conditions say that only one of those ways is valid by fiat. An ensemble approach says that all ways are valid, but observations should be consistent with the most likely path.

Update 6/30/2015

If I understand Nick Rowe's comment below ("If rational, [Robinson Crusoe] would jump [to the Saddle path] immediately, and stay on it forever.") the picture looks more like this:

I agree that is the rational (model-consistent) expectations view, but in that view the transversality conditions do next to nothing. They apply once when Robinson Crusoe is first stranded and calculates how much rum he has. After that, Crusoe lives a 'feet up, mind in neutral' beach bum life -- not having to worry about having drunk too much of the rum or too little. He's no longer an optimizing agent, but passively obeying the differential equations at the top of this post.

There are also the questions of a) how do you know what the saddle path is? and b) how do you determine if you're on it? Robinson Crusoe has to spend a finite amount of time figuring out the path and his location relative to it to a given (finite) accuracy ... something that gets infinitely harder as you approach the equilibrium. And valuable drinking time to boot!

Footnotes:

[1] This is actually what happens in Feynman path integrals -- the set of smooth paths has measure zero, so a typical path contributing to the path integral is a noisy path "near" the classical solution.

Graph borrowed from here.

[2] These corrections would be the inputs to control the inverted triple pendulum in the example Tom Brown mentioned. In that example, the controller spends most of its time making tiny corrections (analogous to the orange paths in the graphs above), not letting the pendulum follow the laws of physics (analogous to the blue paths).

[3] The brute-force way I've implemented the model becomes numerically unstable, and the actual solution wasn't exact, so I was unable to show the whole path -- on account of an outbreak of laziness in finding the exact solution and in making sure the random path initial conditions weren't biased (as they seem to be ... towards the all-consumption solutions).

## Sunday, June 28, 2015

### The Keynesian monetarists

From my post Keynesian economics in three graphs

There seem to be a lot of closet Keynesian monetarists out there. I noticed that a large fraction of the massive deficit reduction in the U.S. from 2012 to 2013 was due to dividends paid by Fannie Mae and Freddie Mac, and the big question that's come out of it is whether it should count as a negative ∆G. That is to say we've apparently figured out it's relevant and now we're just arguing over the multiplier. And now the monetarists are the ones arguing that its multiplier should be large (in order to show that the Keynesian view is wrong, of course, but still).

Now I'm not really an economist; I just play one on the internet. I'm a physicist. But I have constructed an economic framework that allows you to build some basic monetarist and Keynesian models from information theory (the main purpose of this blog). In the most empirically accurate versions of the Keynesian model, all that really matters in a recession are strongly coordinated movements of government output. The negative impacts of tax increases would be diffused. Effectively, the multipliers are large for spending and small for taxes. As a result you get the "paleo-Keynesian" finding that even deficit-neutral expansions of government spending are expansionary in a recession. But the framework also allows you to construct a monetarist model (it's not as empirically accurate over the past few decades, though [1]), so that puts me right where ... well, where many modern Keynesians are, as they believe in monetary policy when outside a liquidity trap and fiscal policy when in one.

The more traditional analysis of multipliers looks at their effect on people's behavior -- in particular, current consumption. For example, government spending on building a bridge employs someone who might not otherwise be employed (in a recession), financing their consumption, which is spent at stores, which then don't have to cut back the hours of their employees, which finances their consumption, and so on. Integrating this effect results in a multiplier greater than 1 ... a bigger bang for your government spending buck.
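The textbook version of that "and so on" is a geometric series: each round, a fraction (the marginal propensity to consume) of the previous round gets re-spent. A quick sketch with an illustrative MPC value:

```python
# Textbook spending-multiplier arithmetic (illustrative numbers only):
# each round, a fraction mpc of the previous round's spending is re-spent.
mpc = 0.6          # hypothetical marginal propensity to consume
delta_g = 1.0      # initial government spending

total = sum(delta_g * mpc**n for n in range(200))  # geometric series, effectively converged
closed_form = delta_g / (1 - mpc)                  # = 2.5 for mpc = 0.6

print(total, closed_form)  # both ~ 2.5: a multiplier greater than 1
```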

Tax cuts have less of an impact in most (Keynesian) analyses. They apply to people already with jobs, and since the problem of a recession (in the Keynesian view) is an outbreak of frugality, they tend to get saved. At least if you're already saving money; tax cuts for people living paycheck to paycheck are more likely to get spent.

Roughly the opposite reactions with the same multipliers should occur for government "austerity" (increasing taxes and cutting spending).

So how do we treat the dividend payout from Fannie and Freddie?

Matt Yglesias viewed it at the time as being wasted -- it could have been better paid out as government spending, tax cuts or a "helicopter drop" of cash. And that's true.

Sumner says Matt said the GSE dividends were a contractionary "disaster," but I think Matt's view is better characterized as a comparison between a positive and a zero multiplier use of the dividends -- between high multiplier spending and zero multiplier deficit reduction (he said the word disaster, but not the word contractionary):
The only problem is that this gusher of federal revenue is actually an economic disaster. ...
... the profits aren’t letting us spend more, they aren’t letting us tax less, and they aren’t freeing up private investment capital either. They’re doing nothing. It’s as if the money were sitting around as cash in a storage locker somewhere.
Some commenters and even Sumner suggest that somehow the dividends should be treated as contractionary with a multiplier greater than zero.

If we view the dividends as a confiscatory tax on those who would have done something with the money if they had held the stock, the multiplier would be small. It's a progressive tax increase applied to people already "saving" (holding the stock). It is unlikely to have financed current consumption. In that case it would be very slightly contractionary.

It could also be viewed as a tax on people paying their home loans, but unlike income taxes in the U.S. the people paying these taxes are getting something directly (housing) in return. It is not a tax that requires people to cut back their consumption. The contractionary effect would be very small indeed -- approximately zero.

Commenter LAL puts forward the idea that the money is leaving the flow of money around the economy, and something like this is probably the best argument that it is contractionary. The effect would come through the "shortage of safe assets" view of a liquidity trap -- the dividends mean that additional US treasury bonds (safe assets) aren't issued. Every treasury bond gets us that much closer to a sufficient stock of safe assets to exit the liquidity trap, so by not issuing bonds we're deeper in the trap than we would be if we had.

However, all of these "it's a tax" views are based on a counterfactual world in which the dividends exist and end up in the hands of the private economy and we're looking at the contractionary effect of the opportunity cost.

My original take in a comment on Sumner's post was that the money could be treated as simply a 20 dollar bill on the sidewalk snatched up by the government. Fannie and Freddie were bankrupt and were bailed out. The money otherwise wouldn't have existed ... it would not otherwise be financing consumption. In that sense, it almost seems like seigniorage: the money booked by the Treasury when it prints physical currency. Before the currency existed, there was no money. Before the US government took over Fannie and Freddie completely, there was no (actually negative) company value or stream of profits.

As I said I am not an economist, so I'm not sure what the correct treatment should be. I will ask Robert Waldmann, the only Keynesian who'll (potentially) take my questions at this point. From my rudimentary knowledge (and the workings of the information transfer model) it seems that the multiplier should be in the small to zero range.

Overall, there is a strange methodology at work here, though. The monetarist premise seems to be that the Keynesian theory multiplier must be large (m ~ 1) in order for the Keynesian theory to be wrong. That is just odd to me. The evidence is that if the Keynesian theory (K) is correct, the multiplier is small (m ~ 0):

P(m ~ 0 | K) > P(m ~ 1 | K) ≈ 0

So that the Bayesian view with Keynesian prior is:

P(K | m ~ 0) = P(m ~ 0 | K) P(K) / P(m ~ 0)

which is greater than zero here. But the monetarist premise contains the factor:

P(m ~ 1 | K) P(K)

where they are trying to show both probabilities are near zero. But P(m ~ 1 | K) is small (approximately zero), so P(m ~ 1 | K) P(K) ≈ 0 even without P(K) ≈ 0.

Basically, the result you should get out of this analysis is that the multiplier isn't large, not that the Keynesian view is wrong.
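A numeric sketch of the Bayesian point, with probabilities I've made up for illustration (none of these numbers appear above):

```python
# Illustrative probabilities only -- all three inputs are made up.
p_m0_given_K = 0.9      # if Keynesian theory is right, a small multiplier is likely
p_m0_given_notK = 0.5   # absent the theory, no strong expectation either way
p_K = 0.5               # agnostic prior on the Keynesian theory

# Total probability of observing a small multiplier
p_m0 = p_m0_given_K * p_K + p_m0_given_notK * (1 - p_K)

# Bayes: observing m ~ 0 *raises* the probability of K rather than refuting it
p_K_given_m0 = p_m0_given_K * p_K / p_m0
print(p_K_given_m0)  # ~ 0.64 > 0.5
```

Under any assignment where the Keynesian theory predicts a small multiplier, observing a small multiplier is evidence for the theory, not against it.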

Footnotes:
[1] Actually, it can be written as a single model that has two limits: an ISLM-like model and a QTM-like model. The ISLM limit is more accurate today, while the QTM limit was more accurate in the 1970s. At least for the U.S. -- other countries like Russia and China are in the QTM limit, Japan is in the ISLM limit, and Canada is at the transition.

## Friday, June 26, 2015

### Always look at the data (Keynesian economics edition)

Scott Sumner says that we had a half a trillion dollars in deficit reduction from 2012 to 2013:
At the time, I was under the misapprehension that many Keynesians thought a massive and sudden reduction in the federal budget deficit would constitute “austerity” and hence would slow growth. Now that the dust has settled, we can calmly look at the data
Calendar 2012: Budget deficit = $1061 billion
Calendar 2013: Budget deficit = $561 billion
A reduction of $500 billion in one year. I used to be under the impression that Keynesians thought this would be a disastrous policy that sharply slowed growth.
It's true! That's a huge amount of deficit reduction all at one time. It's sharply visible on this graph of Federal receipts and expenditures:

It's nearly as big as the ARRA -- almost a 20% rise in Federal receipts. It definitely should give Keynesians pause. Or at least a reason to dig into the data ... I mean how do you get nearly a 20% rise in Federal receipts with the economy essentially giving a collective "meh" ... ? Monetary offset is one way. However, it turns out it is nearly entirely due to a single source: dividends from Fannie Mae and Freddie Mac [pdf from BEA]:

That accounts for over half of the deficit reduction between 2012 and 2013. You can see things in perspective if we subtract this piece from the graph above (the old receipts line in gray now):

The other half primarily comes from (from the BEA [pdf]):
Contributions for government social insurance accelerated as the result of an acceleration in social security contributions that reflected the expiration of the “payroll tax holiday” at the end of 2012 and to a lesser extent, the introduction of a hospital insurance tax surcharge of 0.9 percent for certain taxpayers.
Without those two pieces, there would be effectively zero deficit reduction -- there don't seem to have been any significant spending cuts, only a tax increase. And the tax side has a lower multiplier than the spending side (see e.g. here [pdf]) in the typical Keynesian analysis.

The Keynesian effect of the so-called "austerity" in 2013 would have been relatively small (this is just the deficit reduction divided by NGDP compared with NGDP growth):

It would have been completely lost in the noise.

### Ramsey model and the unstable equilibrium of a pendulum

Also in Romer's Advanced Macroeconomics is the Ramsey-Cass-Koopmans model (here is Wikipedia's version). It has some of the same flavor as the Solow model, but it has a rather silly (from this physicist's perspective) equilibrium growth path:

We are expected to believe that an economy not only will start out (luckily) somewhere on the path from F to E in the diagram above (you can extend F back towards the origin), but will in fact stay on that (lucky) path until reaching E at which point it will stay there (with a bit of luck).

This is a bit like believing a damped pendulum, given just the right swing from just the right height, will go all the way around and come to a stop so that it is "bob-up" like in this picture from Wikipedia:

My first reaction to seeing that growth phase diagram was to laugh out loud. Economists couldn't be serious ... could they? Now it isn't strictly impossible, but the likelihood is so small that the tiniest air current will cause it to fall back to one of the more normal equilibria:

But the phase diagram from the Ramsey-Cass-Koopmans model is basically equivalent to the phase diagram of a damped pendulum near one of its unstable equilibria:
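The instability is easy to demonstrate numerically. Here's a sketch of my own (semi-implicit Euler, illustrative damping and time step, units where g/L = 1): start the pendulum a hair's breadth from bob-up and it falls away from the unstable equilibrium and settles at the stable one.

```python
import numpy as np

# Damped pendulum: theta'' = -gamma * theta' - sin(theta).
# Start just off the inverted ("bob-up") position theta = pi, at rest.
gamma, dt, steps = 0.5, 0.001, 80_000   # damping, time step, 80 s of simulation

theta, omega = np.pi - 0.01, 0.0        # 0.01 rad from bob-up, zero velocity
for _ in range(steps):
    omega += (-gamma * omega - np.sin(theta)) * dt   # semi-implicit Euler
    theta += omega * dt

print(theta)  # ends near 0 (the stable, bob-down equilibrium), not near pi
```

The tiniest offset plays the role of the "air current" above: the inverted equilibrium is measure-zero in practice.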

So basically, according to the Ramsey-Cass-Koopmans model, all economies head towards being all capital or all consumption. Who thought this was a good model?

Now there is some jiggery-pokery in the model -- economists include "transversality conditions" that effectively eliminate all other possible paths. If I eliminate all other paths besides the ones that lead to the unstable equilibria in the pendulum case, I get a magic pendulum that stands on its head too!