Thursday, February 27, 2014

Monetary Abenomics has not generated inflation

Scott Sumner makes the following claim about Abenomics (linking to Marcus Nunes):
The data suggests that the new BOJ policy has raised inflation, and inflation expectations.  There is a mountain of evidence on that point.
I am skeptical that the "evidence" shows what Sumner and Nunes claim it shows. Why? Because my model shows the exact same rise in inflation (the red line is the core inflation cited by Sumner and graphed by Nunes): 


Wait, because I get the same inflation out of the model I say that their claim is unfounded? You heard me right. You see, the model I am using assumes no impact from monetary policy under Abenomics. Basically, you'd see almost the same rise in inflation without the QE under Abenomics. The graph also suggests that the inflation was a result of the fiscal expansion under Abenomics -- it is important to note that the model result (which is a function of NGDP and the monetary base) doesn't depend on the fiscal counterfactual (which involves a guess about NGDP without the fiscal expansion). But if you naively assume that NGDP' = NGDP + ΔG from Abenomics, then you can account for almost the entire rise in inflation.

Now that I've destroyed the market monetarists' claim that Abenomics is a natural experiment that proves them right, let me go on to argue that any claims about the effect of Abenomics are still mired in uncertainty. You shouldn't trust anyone who makes strong claims about the effect of monetary or fiscal policy.

Let's start with the model: it's pretty good (better than anything I've ever seen), but not perfect. Here is the price level:


Those seasonal fluctuations wreak havoc on the derivative of the price level (inflation). There are a couple of ways economists deal with this. One is to subtract out the seasonal fluctuations -- this involves a model and requires an essentially stable background, which makes looking at changes in the background difficult, especially over a short time scale. Another is to look at the year-over-year (YOY) inflation rate. It assumes that monthly measurements fall at the same point in the annual cycle each year, and via the mean value theorem, YOY represents an annual average of the inflation rate. This is the inflation rate used in the graph by Nunes as well as in the graph at the top of this post (it is derived from the same 'core-core' CPI data set).
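As a concrete illustration of the difference between the two measures (this is my own sketch with made-up numbers, not anything from the data above):

```python
# A minimal sketch (mine, not the post's code) of the two inflation measures
# discussed above, for a hypothetical monthly CPI series.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(120)
# Hypothetical price level: mild trend plus a seasonal cycle plus measurement noise
cpi = 100.0 * np.exp(0.001 * months) * (1.0 + 0.003 * np.sin(2 * np.pi * months / 12))
cpi = cpi + rng.normal(0.0, 0.05, size=months.size)

# The "derivative": month-over-month log change, annualized (noisy; two adjacent points)
mom_annualized = 12.0 * np.diff(np.log(cpi))

# Year-over-year: 12-month log change; by the mean value theorem this is an
# average of the previous twelve monthly inflation rates
yoy = np.log(cpi[12:]) - np.log(cpi[:-12])

print(np.std(mom_annualized), np.std(yoy))  # YOY is much smoother than the raw derivative
```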

Using YOY already produces some issues. For example, the first points after Abenomics supposedly begins (at the start of 2013, where inflation starts to take off) still represent an average over several months before Abenomics begins! Another issue is that YOY is very sensitive to data error, especially when your derivative is small (and the result you're after is down in the noise), because it depends on only two data points. In this graph, I show both the derivative (dashed) and the YOY average (solid):


You can see that the derivative is much noisier, but you can also see that the YOY average is way down in the noise and is equivalent to a smoothing of the derivative data. How much would you trust someone who said they smoothed the dashed red curve to the solid red curve and claimed the small rise (on this scale) is significant? This is not to say it is wrong! It's just way too early to tell.

The noisy data, model dependence and general uncertainty all become more apparent when you look at the longer run:


The graph at the top of this post is inside the box. You can see that both the model error and the measurement error are significant compared to the size of the effect we're supposed to be seeing. It is even more problematic when you consider what else we're supposed to be swallowing. The market monetarist view requires fiscal policy to have negligible impact, and has no numerical estimate for the actual rise. Not only does the rather simplistic model I'm using give you a numerical estimate consistent with the observed rise in inflation without changes in monetary policy, but the addition of a naive model of the impact of fiscal policy is sufficient to account for the entire recent rise in inflation over the 20-year deflationary trend. The model I'm using was also created before we had data from 2013, and it works for other countries.

And if that wasn't enough, the recent rise over the low at the start of 2013 is perfectly in line with a linear trend that begins in 2010 (shown in the graph), two full years before Abe took office.

Tuesday, February 25, 2014

Did the Fed offset the ARRA?

In the previous post, I looked at the effect of the ARRA. Now there exist claims of "monetary offset", i.e. that the Fed didn't do as much monetary expansion as it would have if there had been no stimulus, negating all or part of the effect of the stimulus.

This claim is strongly dependent on the counterfactual path of monetary policy, but I plan to address it indirectly by asking how much the Fed would have had to do to either negate (offset) the ARRA completely or supplant it (i.e. produce an effect equal to the ARRA in its absence). This analysis boils down to: No, the Fed didn't offset the ARRA -- the ARRA had the effect I show in my previous post (reducing unemployment by about 2.5 percentage points, and otherwise having an effect comparable to other estimates of the impact), and Ramesh Ponnuru is incorrect.

Update 9/30/2014: In the following I refer to the "monetary base", but I am ignoring central bank reserves (i.e. this is the currency component of the monetary base).

First I will look at the "money channel", which is the impact of monetary policy on the price level. This is based on the Fed's implicit inflation target. In the graph I show the paths of the monetary policy that would offset the ARRA (dotted) or supplant it (dashed):


The actual path taken appears as a solid red curve. Now, how do we interpret this graph? Since we don't really know what the counterfactual would have been without the ARRA, all we can do is look at the size of the adjustments required and realize they are enormous. It's a reductio ad absurdum argument: the Fed would have had to keep the base constant at $800 billion or raise it by 20% relative to that constant. In reality, the Fed raised the base by about 10% relative to the $800 billion, but it is impossible to say what the Fed would have done without the ARRA. It would be some path between the dotted and dashed lines, but we can't know where exactly.

This argument might be a little clearer when we look at the "interest rate channel" (i.e. the Fed raising interest rates to offset the ARRA or lowering them to achieve the same stimulus as the ARRA). This is based on the Fed's explicit interest rate targets. I previously used the IS-LM model to estimate this effect here using an IS-curve slope ~1. In the graph I show the paths of the monetary policy that would offset the ARRA (dotted) or supplant it (dashed):


The actual path taken again appears as a solid red curve. Here, the Fed doesn't have to make as heroic an effort. In fact, it appears likely that the Fed moved enough (i.e. refrained from lowering rates enough) in the interest rate channel to offset the effect of the ARRA -- but in that channel only. Essentially, in the IS-LM market, the ΔG was offset by a ΔM. In the counterfactual universe without the ARRA, we might have expected the Fed to have done more [1].

The Fed likely "offset" the entire IS-LM market effect, which makes the analysis in the previous post more accurate since it ignored the interest rate channel [2], showing results for the price level only. Since the monetary policy and fiscal policy were both attempting to improve the economy, the Fed would have done more in the absence of the ARRA, but the effect in this channel would likely have been the same in the two cases (the Fed acts in a universe with the ARRA vs the Fed acts in a universe without the ARRA). The Fed might have lowered interest rates more, but that would have produced the same results as the Fed lowering interest rates less coupled with the ARRA.

[1] The short term rates were driven to zero by rounds of QE, but the long term rates remained above the zero bound. The Fed would have printed more currency, lowering long term rates.

[2] I looked at how the ARRA increased interest rates (crowding out) in the previous post, but did not translate that increase into a negative effect on output.

The effect of the ARRA

I had previously looked into the effect of the ARRA ('the stimulus'), but I forgot to include the effect of the tax cut component (which I was reminded of by this Krugman lecture). So here's an update; I've also updated these with the more accurate monetary model. Here are the results as a rather sparse gallery of graphs.

Here is the effect on the price level (pretty small):


Here is the effect on inflation (it generates a spike of inflation at the onset and takes it back as the stimulus fades):


Here is the effect on interest rates ("crowding out" of about 20 basis points on long term rates, about 5 bp on short term rates):


Here is the effect on unemployment:


Here is the change in unemployment rate (it shaved about 2.5 percentage points off the peak):


Here is the effect on RGDP:


And finally, here is an "apples to apples" comparison with the other estimates from the table in Krugman's lecture linked above (the information transfer model is basically in line with these, the shaded region is the CBO estimate which was given as upper and lower bounds):


This model ignores the "interest rate channel" in the sense of the IS-LM model (see e.g. here). This turns out to be a pretty good assumption as I discuss in the next post!

Friday, February 21, 2014

The Fed caused the Great Recession

With the release of the Fed transcripts from the September 16th, 2008 meeting, a narrative is forming that the Fed's worry about commodity inflation distracted it from the worsening economic situation. See, for example, Matthew Yglesias and David Glasner. In general this is part of a larger monetarist narrative that the Fed caused the recession and the financial crisis with tight monetary policy prior to September 2008. Here, for example, is Scott Sumner.

In this post I will analyze this scenario with the information transfer model. First, I will look at the direct effect of monetary policy on NGDP and the price level. One issue is determining what the counterfactual monetary policy would have been; I chose a linear extrapolation from 2006 as this counterfactual. A second issue is the counterfactual for NGDP: was the shock an exogenous shock (i.e. independent of monetary policy) or not (i.e. potentially caused by monetary policy)? Due to this ambiguity, I decided to do the calculation two ways: NGDP follows its empirical path (scenario 1: the NGDP shock was exogenous -- not due to monetary policy) and NGDP follows a counterfactual path (scenario 2: no exogenous NGDP shock) [2].


Turns out both gave me the same answer, so we can be fairly confident about the effect of monetary policy. I used this procedure to extract NGDP shocks. Here is the effect of monetary policy relative to the counterfactual in scenario 1 (the effect of monetary policy is dashed blue relative to the counterfactual solid blue line, the gray shaded area is the actual shock):


Here is the effect of monetary policy relative to the counterfactual in scenario 2 (same key to the graph as above):


Both of these result in the same impact on NGDP (this graph shows the difference between the dashed curves and the solid curves in the previous two graphs in black and the actual shock in blue):


From this analysis, base adjustments resulted in a peak shock of -3% of GDP, but that is only about 10% of the required shock (integrated) or 23% of the required shock (amplitude). Therefore the direct impact of monetary policy through the price level (the quantity theory of money) is insufficient to account for the entire shock. There is another potential source of a shock from monetary policy: interest rates. Here I show the effect of scenario 1 (dashed black) and scenario 2 (solid black) on the long term interest rates and the short term interest rates (green, which is shown relative to the long term rate):


The Fed was effectively raising interest rates gradually from well before the onset of the financial crisis by having the base grow more slowly than NGDP. Short run rates followed long run rates up until the first rounds of QE. If we use the IS-LM model, we can get an estimate of the impact of this interest rate increase. Begin with the IS market equation: 

$$
\log r = \log \frac{Y^{0}}{\kappa_{IS} IS_{ref}} - \kappa_{IS}\frac{\Delta Y}{Y^{0}}
$$

If we have a small change in $r = r_{0} + \delta r$, then we have

$$
\log r_{0} + \frac{\delta r}{r_{0}} + \cdots = \log \frac{Y^{0}}{\kappa_{IS} IS_{ref}} - \kappa_{IS}\frac{\Delta Y}{Y^{0}}
$$

So that

$$
\frac{\delta r}{r_{0}} \simeq - \kappa_{IS}\frac{\Delta Y}{Y^{0}}
$$

Essentially the percent increase in the interest rate $r$ is equal to $\kappa_{IS}$ times the percent decrease in output. Now we don't know what $\kappa_{IS}$ is (it is effectively the slope of the IS curve, and estimates tend to cluster around 1, but can be as high as 5, see e.g. here [1]), but if we assume $\kappa_{IS} \sim 1$ then our approximate 10% change (about 50 basis points) in the 10-year interest rate (which was averaging 4.7% for the two years prior to the financial crisis) would result in an output shock of about 10%. Coupled with the 3% shock due to the base adjustment above, this would account for all of the shock that caused the Great Recession. (The 25 basis point adjustment referred to as Alternative A at the last meeting would have reduced the impact by about half.)
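Writing the arithmetic out explicitly (a back-of-the-envelope check using the numbers just quoted: $\kappa_{IS} \approx 1$, $\delta r \approx 50$ basis points, $r_{0} \approx 4.7\%$):

$$
\frac{\Delta Y}{Y^{0}} \simeq - \frac{1}{\kappa_{IS}} \; \frac{\delta r}{r_{0}} \approx - \frac{0.5\%}{4.7\%} \approx -0.1
$$

i.e. an output shock of roughly 10%.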

So according to this analysis the Fed is to blame, acting through the interest rate channel. An exogenous shock from the financial crisis is not necessary to account for any additional shock to NGDP. This is effectively Sumner's view above, though I don't think he'd agree with my use of the IS-LM model! The unfortunate thing is that after the shock occurred, we ended up mired in a liquidity trap.

[1] Comparative Performance of U.S. Econometric Models (1991) Edited by Lawrence R. Klein

[2] PS: Here is another representation of the counterfactuals (scenario 1 dotted and scenario 2 solid black) and the actual path (blue):


Wednesday, February 19, 2014

Phlogiston economics is information economics

First, I must say that Matthew Yglesias's description of the "technology" in models, i.e. total factor productivity (TFP), as phlogiston economics is, in a word, choice. He mentioned it today as well as at his previous blogging job (Noah Smith responded to it back then, too). There's a bonus in that it is tied to "The Great Stagnation" as defined by Tyler Cowen (see for example this David Beckworth piece).

I am going to put forward a model of TFP (using data from John Fernald at the SF Fed along with FRED data) and then I'm going to use that model to identify what TFP actually is.

Going back to this post where I extracted "nominal shocks" (i.e. what change in NGDP is left over after accounting for a change in the monetary base), we can turn that question around and see how much of NGDP growth is due to monetary expansion. I've plotted NGDP growth in blue and the monetary component of it as a dashed blue line:


We can see that NGDP growth was predominantly monetary before the 1990s and that the monetary component is falling. Now this monetary component isn't just inflation -- the easiest way to see that is that the monetary component is larger than the total NGDP growth for several years in the 1960s and 1970s. In fact, the dashed blue curve is on average higher than inflation during the 1960s and 1970s. During this period increasing the monetary base increases real GDP. After the 1990s, the dashed blue curve is lower than inflation. During this period, increasing the monetary base has little effect on real GDP. What if we turn the growth rate of the monetary component into a level? We get this:


The model (blue) lines up pretty well with total factor productivity (red). In fact, we see "the great stagnation" as a gradual flattening of the blue curve.
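For concreteness, here is a minimal sketch (my own, with made-up numbers, not the post's code) of the level construction just described: cumulatively integrate the growth-rate series and exponentiate.

```python
# A minimal sketch (not the post's code): convert a growth-rate series into a level
# by cumulatively integrating it. `monetary_growth` stands in for the dashed blue curve.
import numpy as np

years = np.arange(1960, 2014)
# Hypothetical annual growth rates (fraction per year) of the monetary component of NGDP
monetary_growth = np.linspace(0.08, 0.02, years.size)

# Level index normalized to 1.0 in the first year
level = np.exp(np.concatenate(([0.0], np.cumsum(monetary_growth[:-1]))))

print(level[0], level[-1])  # slowing growth rates show up as a gradual flattening of `level`
```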

So, according to this model, what is total factor productivity? Well, it increases when monetary expansion produces NGDP growth that exceeds inflation. It is real (not nominal) economic growth that is "unlocked" by adding money (medium of exchange) to an economy. There was information in the aggregate demand (and therefore output) that wasn't being captured by the market because there wasn't enough money in it. Money allows information to flow. I once referred to this as the low hanging fruit of a growing economy.

This low hanging fruit runs out eventually (adding money doesn't allow a lot more information to flow -- this is because money is also a unit of account, so printing dollars means each new dollar is worth less). What you are left with is due to basic things like population growth. This is "the great stagnation".

Technically, the fall in TFP growth is just another manifestation of a larger information transfer index and has to do with the changing relationship between money as a unit of account and as a medium of exchange.

Tuesday, February 18, 2014

Tight monetary policy is an artifact of being in a liquidity trap

Nick Rowe discusses Chris House and comes to the conclusion (surprise!) that monetary policy is too tight: "What this means is that the so-called ZLB 'liquidity trap' is merely an artefact of tight monetary policy."

There is some theory at the link behind this statement, but essentially it says that monetary policy isn't ineffective, it's just too tight. It seems to me that monetary policy is, at least according to monetarists, universally too tight: the EU, Japan, the US, etc all have monetary policy that is too tight. This is a strange coincidence.

Imagine an off-road car race. A bunch of cars at one location on the track don't seem to be going anywhere. Is the explanation that they aren't pushing on their gas pedals hard enough? Or should we look and see if they are stuck in mud?

There is an alternative explanation: monetary policy is ineffective because of a liquidity trap. Interestingly, a monetary policy that is ineffective will look too tight from the perspective of someone who thinks it is effective. This explains that strange coincidence: all these economies are stuck in mud. A monetary policy that seems too tight is an artifact of being in a liquidity trap.
 
The information transfer model shows that monetary policy is ineffective when the monetary base is large compared to NGDP (the picture is on the left below; note the bending of the curve as the monetary base MB increases). There's our mud. Specifically, changes in the monetary base (δMB) do not lead to as large a change in the price level (δP) as they do when the monetary base is small compared to NGDP. I called this situation, δP/δMB ~ 0, an information trap; it has a lot of similarity to Krugman's (and Keynes') liquidity trap. Additionally, ineffective monetary policy (a liquidity trap) is associated with low interest rates. This picture is on the right below:


The line where δP/δMB ~ 0 is given by the black line -- this is the liquidity trap. Interest rates are the red lines and the path of long term interest rates (and the economy) in NGDP, MB space is the dark blue line. Short term interest rates are light blue. The gray lines are the contours of the price level. (The 3D version of this picture is actually on the right side of this page with the red surface describing the interest rates and the white surface, the price level.)

A two-way Lucas critique

Since the most frequent comment on this blog is something akin to "I have no idea what you are talking about", I thought I'd take some advice from Paul Krugman and write something less abstruse.

I would say that the big idea in the last couple of posts (see the links here), from an economics perspective, is a two-way Lucas critique. Lucas asked if macro observations were consistent with microeconomics. I'm asking what microeconomics are consistent with macro observations (and vice versa). In particular, I found that long run neutrality of money (and other observations that fall under the "homogeneity postulate") and the efficient markets hypothesis (prices are maximally uninformative) lead to the same equation. This is interesting because not only is the former a macro observation while the latter is a micro observation, but the equation itself leads to supply and demand diagrams. This could explain why supply and demand reasoning works well for a market for a single good as well as for macroeconomics (AD-AS, IS-LM). It is actually pretty magical that these diagrams can be used for individual markets and entire economies! It didn't have to turn out that way (it's a bit like a biologist using Newton's laws to describe an ecosystem and finding it works out).

This has import in the microfoundations debate: no assumptions about the motivations of economic agents are being made at all. No rational expectations. No utility functions. This is not to say microfoundations aren't correct or useful, only that the EMH and long run neutrality give us enough information to write down e.g. the IS-LM model without specifying the behavior of economic agents.

Sunday, February 16, 2014

A physicist reads the economics blogs

I just put up a two part series that motivates the equation:

$$
\text{(1) }p = \frac{dD}{dS} = \frac{1}{\kappa} \; \frac{D}{S}
$$

in two different ways.

I. Quantity theory and effective field theory
II. Entropy and microfoundations

The first (I) is a top down approach; take a "symmetry" of macroeconomics like the long run neutrality of money and use it to narrow down the form of macroeconomic relationships. Of course, long run neutrality is just one instance of homogeneity of degree zero and according to Leontief [1] this represents one of the fundamental assumptions of general equilibrium:
One of these fundamental assumptions - that which Mr. Keynes is ready to repudiate - defines an important universal property of all supply and demand functions by stating that the quantity of any service or any commodity demanded or supplied by a firm or an individual remains unchanged if all the prices upon which it (directly) depends increase or decrease exactly in the same proportion. In mathematical terms, this means that all supply and demand functions, with prices taken as independent variables and quantity as a dependent one, are homogeneous functions of the zero degree. In course of the following discussion, this theorem will be referred to as the "homogeneity postulate".
Equation (1) is the leading order differential equation consistent with homogeneity of degree zero. Additionally, to leading order, supply and demand systems described by equation (1) are those described by supply and demand diagrams. Diagram-based analysis advocated by e.g. Paul Krugman is exactly this leading order analysis.

The second (II) is a bottom up approach; take the efficient markets hypothesis (EMH) as a statement about information flows and use it to build a canonical microeconomic model of supply and demand.

It is nice that these two approaches result in the same form (1). However, we can ask: is it consistent to say that the price level turns out to be some predictable function of aggregate supply and aggregate demand while the EMH tells us that prices can't be predicted? Ah, but the EMH doesn't contradict supply and demand ... increasing supply, ceteris paribus, should predictably lead to lower prices on average. The EMH states that you can't beat the market in the long run, i.e. past prices are maximally uninformative, not completely uninformative. The price level and a growing company's stock both follow a long run path. There is additional uncertainty about the latter relative to the former due to the much larger number of equally probable microstates consistent with the macrostate. Statistical fluctuations should be suppressed by factors $\sim 1/\sqrt{N}$. 

Systematic deviations due to bubbles, herding, or other behavioral effects don't have to obey this. In fact, things like money illusion and the involuntary unemployment Keynes was attempting to describe (that was being discussed by Leontief) are considered violations of homogeneity of degree zero -- and I would expect a program looking at homogeneity-violating terms might be a fruitful line of research [2].

[1] Leontief, W.W. The fundamental assumption of Mr. Keynes' monetary theory of unemployment Quarterly Journal of Economics 192-197, 1937.

[2] In fact, I specifically add the homogeneity-violating $\kappa = \log S / \log D$ term in Part I in order to describe the price level.


Friday, February 14, 2014

II. Entropy and microfoundations


In Part I, we started with an empirical macro observation: the long run neutrality of money. There are many macro relationships that have followed from empirical study like the Phillips curve and Okun's law. But, asked e.g. Lucas, were these relationships consistent with microeconomics?

Lucas suggested, based on shifts in the Phillips curve, that previously observed macro relationships could change whenever policy changed [1]. A microfounded theory purportedly avoids this by determining how people respond to changes in policy (hence the reason that the Lucas critique tends to be interpreted as saying you must have microfoundations). That is a potential solution.

There is another way: assume ignorance. That is, assume the principle of indifference: given the macrostate information you know (NGDP, price level, MB, unemployment, etc.), assume the system could be in any microstate consistent with that information with equal probability [2]. In Bayesian language, this is the simplest non-informative prior. This way lies statistical mechanics, thermodynamics and information theory.

We can see these two paths: assume ignorance about the microfoundations and derive the conclusions that will hold under most microfoundations, or assume particular microfoundations and see what macrostates result.

However, we can also see the Sisyphean aspect of the microfoundations program. Since the macrostate represents a loss of information relative to the microstates, many different microfoundations will lead to the same macrostate ... or, put another way, the details of even the correct microfoundations are lost.

How do we know the details of the microfoundations are lost? The efficient markets hypothesis. At least in the sense that I rationalize both Fama and Shiller winning Nobel prizes in economics. The EMH (put one way) is the idea that price data is maximally uninformative. Note: that is maximally uninformative, not completely uninformative. How else could there be things Shiller found like long run trends, mean reversion and momentum? 

Equilibrium in thermodynamics represents a state of maximum entropy (ignorance) about initial conditions of the system: all we know are "conserved quantities" i.e. properties of the macrostate. Well, we could know more ... and knowing more would theoretically allow you to extract useful work from that knowledge. Consider Bennett's information powered engine [3].
Imagine a tape of double-sided pistons with a separating wall and a single atom (green) on one side (see top of the diagram). One side of the wall is labelled 0 and the other 1. Knowledge of which side the atom is on can be converted into work by performing the compression cycle given in the bottom of the diagram. This work could theoretically be used to power a vehicle (I changed the vehicle from Sethna's [3] train in the picture below):
How does an information powered warp drive engine relate to the EMH? Supposedly if you knew the series of price movements, you could beat the market by using that information (which becomes useless after you used it). However, this is about as likely as turning knowledge of all the positions of all the gas molecules in a room into useful work.

One more analogy before we get back to microfoundations. One way to see biology is as a process by which living things intercept entropy (free energy) flows. An autotroph converts low entropy high energy photons into high entropy waste (heat, low energy photons); a heterotroph converts low entropy organisms into high entropy waste (heat, poop). Economic agents intercept entropy (information) flows: they convert low entropy money into high entropy goods and services.

In living things, the free energy in the photons or sugars is converted into lower free energy products and the information in their original structure is lost. The information in the market (e.g. prices of goods) is converted into quantities of goods where the prices they were bought at no longer matter (according to the EMH). This information is consumed by the market in the same way free energy is consumed by organisms.

So take the principle of indifference and posit that information flows from the demand to the supply the way the entropy flows through the conversion of high energy photons into low energy photons that is transferred to the environment.

We have the information source (demand) $I_{D} = K_{D} n_{D}$ and information destination (supply) $I_{S} = K_{S} n_{S}$ (here the Hartley information $I = n \log k \equiv K n$ corresponds to the Shannon information when all $k$ states are equally probable, i.e. indifference). Define an information flow detector (a price) that measures the transfer of information from D to S

$$
\text{(1) } \frac{dD}{dS} \equiv p
$$

Take a small demand signal $dD \ll 1$ and a small supply signal $dS \ll 1$ so that $D/dD = n_{D}$ and $S/dS = n_{S}$ with $n \gg 1$, and assume $I_{D} = I_{S}$ (define $\kappa \equiv K_{S}/K_{D}$). Equating the information gives $K_{D} \, D/dD = K_{S} \, S/dS$, so that now we have

$$
\text{(2) } p = \frac{dD}{dS} = \frac{1}{\kappa}\; \frac{D}{S}
$$

This is the minimal equation we came up with in Part I that satisfies homogeneity of degree zero, which guarantees the long run neutrality of money. This derivation is basically a simplified presentation of one of the first posts on this blog [3].

Returning to the microfoundations of macroeconomics, we can say that observed relationships that follow from equations of the form (2) represent microfoundation-independent macroeconomic results. They represent what we know given the greatest amount of ignorance about the microfoundations. Additionally, these macroeconomic relationships will be consistent with microeconomics.

I've used the notation p:D→S as a shorthand for these relationships. There are a few of these we've mentioned on the blog (of varying degrees of accuracy):

Price level (quantity theory of money, liquidity traps and hyperinflation)
Interest rates (part of the IS-LM model)
Labor market (leads to Okun's law)
Unemployment (less accurate than the labor market version, but gives an interesting interpretation of the natural rate of unemployment)
Note that the Phillips curve follows from looking at the price level market and the labor market, and you can see that the change over time (the gradual flattening) is predicted by the theory.

[1] We found a way to predict these changes based on the unemployment market mentioned at the end of the post.

[2] Under e.g. different assignments of initial endowments. In this earlier post I discuss the idea from Foley and Smith that economics grew up with special consideration for initial endowments and thus a predilection for studying irreversible processes rather than reversible ones that dominated physics.

[3] Citations: I borrowed the nice pictures from Entropy, Order Parameters, and Complexity by James P. Sethna (2006), which are based on pictures in the Feynman Lectures on Computation (which is based on work by Charles Bennett). Some of the discussion is based on those works as well. The derivation of the equations follows from work by Fielitz and Borchardt, who developed the original information transfer model of physical processes.

I. Quantity theory and effective field theory


According to Bennett McCallum [1], the quantity theory of money (QTM) is the macroeconomic observation that the economy obeys long run neutrality of money (it's not just MV = PY). This implies that supply and demand functions will be homogeneous of degree zero, i.e. functions of the ratio of $D$ to $S$, such that if $D \rightarrow \alpha D$ and $S \rightarrow \alpha S$ then $g(D,S) \rightarrow \alpha^{0} g(D,S) = g(D,S)$. The simplest differential equation consistent with this observation is

$$
\text{(1) } \frac{dD}{dS} = \frac{1}{\kappa}\; \frac{D}{S}
$$

We can identify the RHS with the price level $P$: the exchange rate for the marginal unit of AD in terms of the marginal unit of AS, a ratio of NGDP to the money supply, should be proportional to the price level. The solution (for varying D and S) is $D \sim S^{1/\kappa}$, or

$$
\text{(2) } P = \frac{1}{\kappa} S^{1/\kappa - 1}
$$

If we take D = NGDP and S = MB, this equation does well over segments of the price level, with different values of κ (which I'll just call the IT index for now): 

If we let $\kappa$ become $\log S / \log D$ [2], then with $D, S \gg 1$ and $\alpha$ small (i.e. small changes in NGDP or MB), there is still approximate short run homogeneity of degree zero. Additionally, as S, D → ∞ with S/D finite, we have "long run" homogeneity of degree zero in a growing economy [3]. And it turns out the best fit to the local values of κ is approximately log MB/log NGDP (measured in billions of dollars):
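To make the formula concrete, here is a minimal sketch (my own, not the fit above; the published fits presumably also include normalization/reference constants that I'm omitting, so the numbers are schematic) of evaluating equation (2) with the floating IT index:

```python
# A minimal sketch (not the author's code) of equation (2) with kappa = log MB / log NGDP,
# for hypothetical NGDP and MB values in billions of dollars. Schematic only: the
# published fits presumably include fitted reference constants omitted here.
import numpy as np

def it_price_level(ngdp, mb):
    """Equation (2): P = (1/kappa) * MB^(1/kappa - 1) with kappa = log MB / log NGDP."""
    kappa = np.log(mb) / np.log(ngdp)
    return (1.0 / kappa) * mb ** (1.0 / kappa - 1.0)

ngdp = np.array([500.0, 3000.0, 15000.0])   # hypothetical aggregate demand (NGDP)
mb = np.array([35.0, 150.0, 1200.0])        # hypothetical currency base (MB)
print(it_price_level(ngdp, mb))
```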

But why should we be satisfied with (1)? In Part II, we'll motivate the equation via information theory. In this Part I, we'll resort to one of my favorite topics from physics: effective field theory.

A general homogeneous differential equation (of first order) is given by

$$
\frac{dD}{dS} = g(D/S)
$$

Where $g$ is an arbitrary function. This would capture any possible (first order) theory of supply and demand for money with long run neutrality. A Taylor expansion of $g$ around $D = S$ results in an equation of the form

$$
\text{(3) } \frac{dD}{dS} = c_0 + c_1 \frac{D}{S} + c_2 \left( \frac{D}{S} \right)^{2} + c_3 \left( \frac{D}{S} \right)^{3} + \cdots
$$

Note that higher order derivatives by themselves are not consistent with homogeneity: taking $D \rightarrow \alpha D$ and $S \rightarrow \alpha S$ means that $d^{2}D/dS^{2} \rightarrow (1/\alpha) \, d^{2}D/dS^{2}$. Terms like $D \, d^{2}D/dS^{2}$ would be necessary, which we'll subsume into "generalized" second order terms.
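Explicitly, under the rescaling the relevant terms transform as

$$
D \rightarrow \alpha D, \; S \rightarrow \alpha S: \qquad \frac{dD}{dS} \rightarrow \frac{dD}{dS}, \qquad \frac{d^{2}D}{dS^{2}} \rightarrow \frac{1}{\alpha}\,\frac{d^{2}D}{dS^{2}}, \qquad D\,\frac{d^{2}D}{dS^{2}} \rightarrow D\,\frac{d^{2}D}{dS^{2}}
$$

so the bare second derivative breaks homogeneity while the "generalized" combination does not.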

So what's this about effective field theory? In physics we write down a Lagrangian of the form

$$
\text{(4) }\mathcal{L} = \partial_{\mu} \phi \partial^{\mu} \phi + m^{2} \phi^2 + g_1 \phi^{4} + g_2 \phi^{6} + \cdots
$$

where all terms we consider must be consistent with Lorentz symmetry (i.e. special relativity; this theory is also symmetric under charge symmetry); the resulting theory is guaranteed to be consistent with Lorentz invariance. This way of coming up with particle theories is called an effective field theory. Generally, one writes down every possible term consistent with the symmetries under consideration. Our process with equation (3) was analogous to writing down every term consistent with long run neutrality of money (analogous to a symmetry).

The higher order terms in field theory (4) represent higher order interactions (4-particle, 6-particle, etc. interactions with coupling constants $g_1$, $g_2$). They tend to be "suppressed" (in physics) because their coefficients involve inverse powers of a mass scale that is considered "heavy". The higher terms (degree > 2) in the Lagrangian represent vertices (interactions) in Feynman diagrams.
It is possible that the analogous terms in our long run neutrality of money (money invariance) theory (3) represent three or more party transactions which would be heavily suppressed by the existence of money (you'd likely trade apples for money and then get oranges with some of that money rather than work out some complicated contract between the three parties allocating money, oranges and apples). I.e. the higher order coefficients $c_2$, $c_3$, ... might be suppressed by factors of 1/MB (!) where MB is the size of the monetary base (how much money is out there). 

The previous paragraph is just reasoning to justify taking  $c_2$, $c_3$, ... = 0 [4]. But the best reason to do so is that the model fits the empirical data! If we use κ = log S/log D, then equation (2) does an excellent job of describing  the price level:

In Part II, we'll motivate (1) with information theory.

[1] Long-Run Monetary Neutrality and Contemporary Policy Analysis Bennett T. McCallum (2004)

[2] This prescription of κ = log S/log D is reminiscent of the beta function in quantum field theory; it is motivated by empirical evidence and the information theory in Part II because κ is the ratio of the number of symbols used to describe S to the number of symbols used to describe D. In an older post, I refer to it as the unit of account effect when used to describe the price level: the size of the money supply defines the unit of money in which aggregate demand is measured.

[3] Series expansions around α ~ 0 have a small coefficient for the linear term (if S, D >> 1) and the limit as S, D → ∞ is independent of  α.

[4] We'll take $c_0$ to be zero, too, although we should be careful. Einstein famously took the equivalent of $c_0$ (the cosmological constant) in general relativity to be non-zero in order to allow a steady-state universe. He later regretted that action, but more recent results show that it is not actually zero, but very very close.

Thursday, February 13, 2014

Micro-deflationary monetary micro-expansion?

One thing I noticed in the graph I put up the other day is that the CPI less food and energy for Japan seasonally falls when the currency component of the monetary base seasonally rises (monetary base in blue, IT model result in brown, and CPI data as a dotted black line):


The information transfer model gets this effect about right in order of magnitude and direction. Note that for the US, the effect is in the other direction (i.e. monetary expansion increases the price level, although it's less pronounced).

The same caveats apply as in the linked post: I don't know if this correlation occurs because CPI data is estimated in part based on the currency in circulation (or they are both based on some other data set) or because it is a real effect. There could also be some other economic effect going on.

There is still the overall trend deflation happening alongside expansion of the base to contend with.

Wednesday, February 12, 2014

Extracting shocks (again)

This is essentially this post, redux; the difference is the use of the currency component of the monetary base and NGDP rather than the full adjusted monetary base and GNP from FRED. There aren't many significant changes, but I thought I'd rather have this post to link to than the other one. Effectively the blue arrows represent the direction an NGDP boost would push the economy and the red arrows represent the direction a monetary boost would push the economy. The latter raises the price level, causing inflation that is seen in NGDP -- hence the red arrows in general have a projection along the blue arrows. Here is the plot (the path of the actual economy is in blue):


If we zoom in on a few years (1971, 1981, 1991 and 2001) we can see how well the "purely monetary" effect (red arrow) approximates the actual path:


Effectively the economy of the 70s and 80s was purely monetary, while the economy since then has gradually seen the dominance of monetary policy fall off. If we take these red arrows as a measure of where we would expect the economy to be without external NGDP shocks (economic/population growth, fiscal policy, technology development, supply shocks, etc), we can extract those shocks:


For the most part, these shocks line up with recessions (which is a metric I learned from Noah Smith). The data before the 80s seems a bit noisy, but it is possible to see some hints at negative shocks lining up with the recessions through the noise. One interesting thing to note is that the shock that corresponds to the 1990 recession was approximately the same size as the one that corresponds to the 2008 recession, but the depths of the recessions were completely different. That gives some perspective on the difference between effective and ineffective monetary policy.

Tuesday, February 11, 2014

Sticky wages, information transfer and piece work

David Glasner intrigued me with this thought:

My own view is actually a bit more guarded. I think that “sticky wages” is simply a name that we apply to a problematic phenomenon for which we still haven’t found a really satisfactory explanation for.

I'd like to take a stab at an explanation after first discussing sticky wages in the information transfer model (based on this post), especially in the context of Roger Farmer's post mentioned in Glasner's. Here is the picture of the average wage during the depression based on Farmer's data (dotted red), FRED data (red), CPI data from FRED (blue) and the ratio of NGDP/L (black, where L is the labor supply):


Purportedly, the fact that NW/L follows the price level P implies that wages in the Depression were flexible (NW = total nominal wages and L = total employees, or the labor supply). The use of an average wage brings up distributional issues. The average wage could be falling due to falling inequality, nominal wage cuts, or some combination of these. Wiping out Wall Street and flexible nominal wages are two completely different mechanisms that both show up as a falling average nominal wage.

But I agree with Farmer that the average nominal wage should fall along with the price level! It just doesn't necessarily mean that wages are flexible. Total nominal wages are proportional to NGDP, so if NW/L ~ P then NGDP/L ~ P. Can we use this to say employees are taking a nominal wage cut? The fact that NW/L ~ P more likely derives from NGDP/NW ~ constant [1] than from simple nominal wage flexibility.

This analysis in the information transfer framework looks like this. If NGDP/L ~ P in the labor market and NGDP/NW ~ c in the wage market, then NW/L = (NGDP/c)/(NGDP/P) = P/c. [2] A market with a constant price isn't really transferring dynamic information, so information from aggregate demand is being registered in the number of employees, not their wages. Additionally, NGDP/L ~ P leads to (a form of) Okun's law relating the change in RGDP to the change in employment. Here are those two markets (the constant red line indicates that wages are sticky, while the number of employees is not, the black line following the blue price level curve):


However, there is some nominal wage flexibility (or other effects), as indicated by the fact that the labor market does not capture all of the information; if we look at the difference between the model and the price level, we can see that there was in fact a lot more nominal wage flexibility (or other effects) during the Depression (right graph below) than in, say, the 1980s (the center of the left graph below):


Now Glasner and Farmer are economists, and I am not. I'm not even using "economics" as such to analyze sticky wages. But maybe information theory can help find that explanation of sticky wages. It appears that the market for the number of employees is transferring dynamic information -- that the economy responds to NGDP shocks by laying off employees rather than reducing their wages. This means that a market has been set up to transfer information from aggregate demand to the labor supply, but not wages. How did this market get set up?

Potentially it is self-organized around the principles Glasner lays out here. But it seems that given the history of unemployment -- where the unemployed were hanged -- there would be a strong incentive for individuals being laid off to negotiate a lower wage. There could be a coordination failure in this scenario since each person designated to be let go would have to negotiate a lower wage such that the total of the wage cuts equalled the amount needed to keep the factory profitable.

How does the morale picture fit historically? When your job could save you from the gallows, employers had significant power and little reason to worry about morale (especially in bad economic times when other jobs were scarce). If the boss wanted wage cuts, the boss got wage cuts. If you had a problem with that, you could be summarily fired. Remember, unemployment began in a time when there weren't any labor laws and little concern for the well-being of employees. Their morale would be low down on the list of concerns when unemployment first became a social problem.

My attempt at an explanation derives from the fact that early wage labor was piece work. You sewed a shirt and you got paid for each shirt you made. This not only provided an excellent metric for determining who was performing worst, hence who to lay off in a downturn, but forced an employer to lay off people, not reduce their wages. If you were still an employee, you could still make shirts and expect to be paid. As piece-work wage labor expanded, social systems started to form that catered to laid-off employees, beginning with workhouses -- effectively accepting a 100% nominal wage cut for room and board. As this custom of layoffs became entrenched, even salaried employees would be shown the door rather than given the opportunity to renegotiate their salary (especially given the coordination problem above). Today we are stuck with an information transfer system where aggregate demand sends signals that are captured by the labor supply, not nominal wages.

Wages are sticky because no market or tradition has been set up for flexible wages. Concern for morale may be a post hoc rationalization for seemingly irrational market behavior, or may actually be a concern about the morale problem of violating an accepted social norm rather than the morale problem of cutting wages as such.

[1] It is constant relative to a slowly varying function of the economy like the price level. In fact, it is more constant than the employment-population ratio.

[2] Note that if things were the other way around where NGDP/L ~ c in the labor market and NGDP/NW ~ P in the wage market, then NW/L = (NGDP/P)/(NGDP/c) = c/P and the average nominal wage would rise with a falling price level.

Monday, February 10, 2014

This model is sufficiently awesome to see seasonal effects

One thing I noticed in the past couple of posts (especially the one on Abenomics) is that you can see the information transfer model produces seasonal fluctuations of approximately the right size in the right place. Here for example are the price levels for the US and Japan:



Now I don't know enough about how the CPI and currency in circulation are calculated to say this is definitely due to the model being totally awesome (rather than, say, the data being calculated from the same underlying numbers and hence already correlated -- e.g. what I think is going on with the unemployment rate here).

So I will just tentatively say the model is totally awesome.

Models and metrics

I realized I hadn't given a lot of time to the metrics for the model fits. It turns out that they aren't really very informative -- errors are correlated, so metrics like R squared tend not to show which model is worse. In fact, the adjusted R squared for the information transfer model (ITM) is 0.999 for both the monetary base including reserves (MB) and currency component (M0) versions. True, it is "only" 0.9 for the quantity theory of money (QTM), where P = k M0, and 0.75 for a constant (P = k), but therein lies the problem. However, residuals tend to be a better metric and in this case are a little more informative, so I'll present those.

First, here are the model fits to the price level (clockwise from top left: QTM, ITM constant κ [1], ITM M0 and ITM MB):


The residuals are given here (color coded to the graphs above):


The mean residual fractional error is 0.03 for the ITM M0 model, 0.04 for the ITM MB model, 0.10 for the ITM constant κ model and 0.37 for the QTM (horizontal lines). Basically, anyone who takes the quantity theory of money seriously should take the ITM even more seriously, but that isn't saying much except to hard core monetarists.
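For reference, here is a minimal sketch (my own reading of the metric, with hypothetical numbers, not the post's code) of how such a mean fractional residual error could be computed:

```python
# A minimal sketch (not the post's code) of a mean fractional residual error metric:
# the average absolute residual relative to the data. Arrays are hypothetical.
import numpy as np

p_data = np.array([30.0, 45.0, 80.0, 130.0, 180.0, 230.0])    # hypothetical price level data
p_model = np.array([31.0, 44.0, 83.0, 125.0, 176.0, 238.0])   # hypothetical model output

fractional_residuals = (p_model - p_data) / p_data
print(np.mean(np.abs(fractional_residuals)))   # ~0.03 for a fit this close
```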

Here are the errors for the ITM MB and ITM M0 models blown up and put on a linear scale:


You can see that the recent QE is the only obvious reason to reject the ITM MB model -- otherwise the errors are really close to each other [2]. But then the recent QE is the major reason MB differs from M0! Fitting the recent years makes the years in the 70s and 80s worse in the MB version of the ITM than in the M0 version.

Still, the errors are correlated. They don't seem to have any particular relation to recessions. There are two periods of especially low error that are associated with economic booms (the late 60s and late 90s), and the period of high inflation in the 70s and 80s is associated with the largest persistent error. However, the ITM M0 version doesn't sustain 5% errors for very long.

[1] Constant κ uses the same function as the M0 and MB models P ~ (1/κ) M0^(1/κ - 1), but keeps κ constant.

[2] Trying to match the derivative (inflation rate) will give an additional reason to accept the ITM M0 version over the MB version.