Wednesday, April 9, 2014

Light to no blogging starting a few days ago

I'm on a business trip to LA for the real job all this week, and I'm going on vacation next week; blogging will be light to non-existent. I'll be back at it April 22nd or thereabouts (thenabouts?). I'm working on a post about how expectations that deviate from what actually happens represent destruction of information.

Update: I was in a rather serious accident while driving in LA; my wife and I are shaken but doing all right. My car was totaled. There has been and will continue to be some additional time off.

Saturday, April 5, 2014

Inflation predictions are hard, especially about future inflation

Currently reading through Simon Wren-Lewis's post on microfoundations and the Phillips curve. I'm going to have an empirical look at this quote:
First, although various microfounded models suggest inflation should depend on expected inflation next period, the empirical evidence is more equivocal. A number of studies have found that inflation today depends on both expected inflation next period, but also on actual inflation last period (‘inflation inertia’).
Now I don't think very highly of expectations as a long-run driver of macro variables (or of microfoundations in general). In the short run, any influence of expectations is likely to reduce information flow because: 1) who really knows anything about the future, 2) the EMH, and 3) in financial markets, any sort of stock-picking behavior tends to under-perform the market. (I guess all three of those are related.)

Anyway, here's a plot of year-out inflation expectations from the Michigan survey (red) alongside previous year-over-year inflation (i.e. prior period, blue) and instantaneous inflation (i.e. d/dt log P, or current period inflation, in gray):


So let's check out a simple model where instantaneous inflation (current period) is the average of expected inflation and past inflation (prior period). The result is in purple in the next graph:


How does this model compare with the simple martingale that uses only the prior period's inflation? It actually makes things a bit worse. In the graph below, you can see the model error for the martingale (MTG) in blue and the averaging model (AVG) in purple; the latter has a slight linear trend, running low in the 1980s and high in the 2000s (i.e. the trend is non-stationary/has a unit root):


In the next graph, I show the error histograms for AVG (purple), MTG (blue) and expectations-only (EXP, red). The positive skew of the averaging model derives almost entirely from the expectations component -- therefore any more complicated linear combination, say of the form (1-λ) MTG + λ EXP with 0 < λ ≤ 1, will also have a larger positive skew than MTG by itself.


Overall, inflation expectations are biased toward more inflation than actually appears, and they throw off the simple prior-period inflation heuristic (which is itself slightly biased toward higher inflation than occurs). There are (at least) two scenarios that could account for this:
  1. People are still psychologically affected by the high inflation of the 70s and 80s. This would explain not only the expectations (obviously), but also the trend towards lower inflation due to a central bank staffed with psychologically affected people.
  2. There is an underlying trend towards lower inflation due to e.g. the diminishing impact of monetary expansion (see also here). This would cause even people who used the simple (and fairly accurate) martingale to overshoot inflation, and the Fed to undershoot its de facto if not de jure inflation targets because what worked to achieve 2% inflation in the prior period wouldn't work in the future period.
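As a toy illustration of the comparison above, here's a sketch with a made-up AR(1) inflation series and a hypothetical upward bias in the survey expectation -- none of these numbers are fit to the actual Michigan survey data:

```python
# Toy comparison of the martingale (MTG: next inflation = last inflation)
# and the averaging model (AVG: mean of lagged inflation and a survey
# expectation). The AR(1) parameters and the 0.4 pp expectation bias are
# illustrative assumptions, not fits to the data in this post.
import math
import random

random.seed(0)

T = 200
pi = [3.0]  # inflation path (percent): AR(1) reverting to 2%
for _ in range(T - 1):
    pi.append(2.0 + 0.8 * (pi[-1] - 2.0) + random.gauss(0.0, 0.5))

# Survey expectations: lagged inflation plus an assumed upward bias
exp_pi = [p + 0.4 + random.gauss(0.0, 0.3) for p in pi]

mtg_err = [pi[t] - pi[t - 1] for t in range(1, T)]
avg_err = [pi[t] - 0.5 * (pi[t - 1] + exp_pi[t - 1]) for t in range(1, T)]

def rms(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"MTG RMS error: {rms(mtg_err):.3f}")
print(f"AVG RMS error: {rms(avg_err):.3f}")
```

With a biased expectation in the mix, the averaged forecast inherits half the bias, so its RMS error comes out larger than the plain martingale's -- the same qualitative effect as in the graphs above.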

What's up with M1?

A quick addendum to the previous post: I tried using M1 in the interest rate model, and it didn't work. However, it failed in an interesting way:


The vertical line represents the point where NOW accounts were introduced nationally in 1981 (here is the NY Fed and here is the NY Times), which should represent a big influx of money (and thus lower interest rates in the model, since it depends on NGDP/M1). Starting in the 1990s, M1 is significantly lower than interest rates would suggest (if the model were correct). Maybe this has something to do with the reserve requirements on M2 components dropping to zero (which I learned from Tom Brown in comments), making banking products that are part of M2 more attractive than those that are part of M1 -- hence moving money out of the M1 aggregate and into the M2 aggregate.
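For concreteness, here's the kind of dependence I mean -- assuming, purely for illustration, a log-linear form log r = a log(NGDP/M) + b. The coefficients below are made-up placeholders, not the fitted values from the model:

```python
# Hedged sketch: an interest rate that rises with NGDP/M, in an assumed
# log-linear form. Coefficients a, b are illustrative placeholders.
import math

def model_rate(ngdp, m, a=2.8, b=-10.0):
    """Model interest rate (as a fraction) from NGDP and a money aggregate."""
    return math.exp(a * math.log(ngdp / m) + b)

# A big influx of money (larger M at fixed NGDP) lowers the model rate:
print(model_rate(17000.0, 2800.0) < model_rate(17000.0, 2500.0))  # True
```

In this form, an M1 that is "too low" pushes the model rate above the observed one, which is the sign of the deviation described above.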

Friday, April 4, 2014

Broad money, narrow money and interest rates

I was inspired by Tom Brown (in comments on the last post) to attempt to understand broader measures of money with the information transfer model. The first baby steps had me starting from this post on Nick Rowe's model of the money stock. I graphed the price level using the monetary base "implied" by the Fed's interest rate target (i.e. using r = f(NGDP, MB) to solve for MB) in blue, alongside the best price level fit to MZM (money with zero maturity, which the Fed tends to regard as the money stock) in red:


It seems to match up well enough that there might be something to it, but I quickly realized that this must be due to MZM being a better model of interest rates than the currency component of the base ("M0") ... here it is (the old fit in terms of M0 is light blue, while MZM is dark blue):


It's not immediately obvious this is a better model if we just look at the model errors (again M0 is light blue, while MZM is dark blue):


However, if we look at the histogram of the errors, we can see the significant skew in the M0 model [1] vanishes in the MZM model.


Overall, it seems MZM is more strongly linked to interest rates than M0. I'm still working on the link between MZM and M0 themselves. One idea is that the difference between MZM and M0 fills the gaps below the bound -- see footnote [1] below. This might tie in to the law of reflux being discussed by Nick Rowe and David Glasner (we'd think of MZM as caulk filling the gaps between the empirical interest rate and the rate given by the bound).
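The "implied" money stock construction above -- using r = f(NGDP, MB) to solve for MB -- is just an inversion of the rate model. Keeping the same illustrative log-linear form (made-up coefficients, not the model's fitted values):

```python
# Invert an assumed log-linear rate model, log r = a*log(NGDP/M) + b,
# to back out the money stock consistent with an observed rate.
# Coefficients a, b are illustrative placeholders.
import math

def model_rate(ngdp, m, a=2.8, b=-10.0):
    return math.exp(a * math.log(ngdp / m) + b)

def implied_money(ngdp, r, a=2.8, b=-10.0):
    """Solve log r = a*log(NGDP/M) + b for M."""
    return ngdp / math.exp((math.log(r) - b) / a)

# Round trip: the implied M reproduces the observed rate
m = implied_money(17000.0, 0.03)
print(abs(model_rate(17000.0, m) - 0.03) < 1e-9)  # True
```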

[1] This skewing is potentially due to the fact that M0 makes a good bound on the 10-year interest rate (in log space). There would be a tendency for the actual interest rate to fall below the model result (you can at most get all of the information from the source -- communication breakdowns always mean less information):


Thursday, April 3, 2014

The downward trend in real interest rates

I saw this [1] today (on global interest rates) and decided to look at what the old model shows for real interest rates in the US where we take 1 + i = (1 + r)(1 + π) where r is the real interest rate, i is the nominal interest rate and π is the inflation rate. Here's the fit to the price level and the 10-year interest rate:


And here is the resulting real interest rate:


The downward trend since the 1980s appears here as well. If this model is successfully describing that trend, then we can trace it to the diminishing impact of changes in the monetary base on the price level. In fact the global data in the link [1] above seems to imply that this diminishing impact is widespread, implying many developed countries are moving towards the right on this diagram:


The diminishing impact of monetary expansion shows up as a negative (downward) curvature as you move to the right on the graph.
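For reference, the real-rate arithmetic used above reduces to a one-liner (the sample inputs are illustrative):

```python
# Fisher relation: 1 + i = (1 + r)(1 + pi)  =>  r = (1 + i)/(1 + pi) - 1.
# Rates are decimal fractions, e.g. 0.05 for 5%.
def real_rate(i, pi):
    """Real interest rate from nominal rate i and inflation pi."""
    return (1.0 + i) / (1.0 + pi) - 1.0

print(real_rate(0.05, 0.02))  # ~0.0294, slightly below the i - pi shortcut
```

For small rates this is approximately i - π, but the exact form matters when inflation is high.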


Monte Carlo economy

I've still been looking into the deviations (for the 1960s and the 1990s) of the actual path of the economy from the Monte Carlo average (see here and here). Removing the data from the 1960 recession, right at the start of the data set, fixes the problem: a downward jog there starts the integration off in the wrong direction in the simulations at the links.


It would make sense not to set the initial conditions for a differential equation in the middle of a shock. However, there is still the issue of the large deviation in the 1990s.
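A minimal sketch of the initial-condition issue, using a generic exponential-growth toy model rather than the actual information transfer equations (all parameters made up):

```python
# Starting an integration of dy/dt = g*y from a point taken mid-shock
# carries the initial error through the entire simulated path.
def integrate(y0, growth=0.05, steps=40, dt=1.0):
    """Forward-Euler integration of dy/dt = growth * y."""
    y, path = y0, [y0]
    for _ in range(steps):
        y += growth * y * dt
        path.append(y)
    return path

clean = integrate(100.0)    # initialized on trend
shocked = integrate(97.0)   # initialized 3% below trend (mid-recession)

# In a multiplicative growth model, the initial 3% gap never closes:
print(shocked[-1] / clean[-1])  # ~0.97
```

That persistent offset is exactly the kind of "wrong direction" drift that dropping the 1960 recession data removes.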


Wednesday, April 2, 2014

Economics is neither physics nor computer science

I was a bystander in a recent crash of paradigms -- Eric Weinstein joined a discussion prompted by Chris House when House asked: why are physicists drawn to economics? House seemed to think it was mathematical hubris: physicists felt they could just jump right in. Weinstein, in comments at Orderstatistic (House's blog) and Noahpinion, pushed for an interpretation of economics in terms of gauge theory.

Now there is nothing incorrect about Weinstein's reformulation of economics in the language of fiber bundles with ordinal utility behaving like a connection (gauge field).  As a physicist, I actually enjoyed the mathematics involved in reformulating gauge theory in the language of fiber bundles (and differential forms). I gave seminars on both as a grad student. Similarly there is nothing incorrect about Eric Smith and Duncan Foley's reformulation of economics as thermodynamics. Actually, the existence of both reformulations is a little unsurprising -- the partition functions found in quantum field theory and thermodynamics are closely related.

However, Steve Ellis, one of the professors on my thesis committee, asked me a question at one of those seminars that has stuck with me: what is it good for? (He would disparagingly pronounce the word JAR-GON as if it were the name of a villain from Krypton.) In the case of thermodynamics, Paul Samuelson answered in 1960 with, essentially, absolutely nothing ...
The formal mathematical analogy between classical thermodynamics and mathematical economic systems has now been explored. This does not warrant the commonly met attempt to find more exact analogies of physical magnitudes -- such as entropy or energy -- in the economic realm. Why should there be laws like the first or second laws of thermodynamics holding in the economic realm? Why should 'utility' be literally identified with entropy, energy, or anything else? Why should a failure to make such a successful identification lead anyone to overlook or deny the mathematical isomorphism that does exist between minimum systems that arise in different disciplines?
If he were alive today, Samuelson would probably throw in gauge theory. The thing is, thermodynamics was the big thing in physics in the time of Walras (the 1870s), and statistical mechanics was fully developed by the time of Fisher (late 1800s to early 1900s). Both economists drew analogies with the big new physics of their day. Today theoretical physics is dominated by quantum field theories, so Weinstein is in good company.

Utility as gauge field, utility as thermodynamic potential: what are they good for? If it is nothing more than a re-labeling, a translation into a new language, then I submit it's not really good for anything practical. It is analogous to translating the English "person" into Japanese -- 人 -- look at the beautiful simplicity of the representation! Of course, it doesn't help you do anything that you couldn't do with the English word, besides save space or enable elegant graphic design (think of Maxwell's equations on a t-shirt, which use modern mathematical notation, not Maxwell's own [1], and definitely not the even more elegant version in terms of differential forms). And to use it you need to learn Japanese (to use the gauge representation of utility, you need to learn gauge theory).

Maybe these reformulations help the way most analogies do -- by aiding thinking and intuition?

To that end let me introduce two other (related) reformulations of economics as computer science. Cosma Shalizi wrote what is my favorite blog post ever -- ostensibly a book review, it becomes a coherent framework for thinking about markets. If running an economy is an optimization problem (allocating raw materials, goods and services), then the top-down communist model where every allocation is centrally planned is akin to trying to solve the linear programming problem directly. Shalizi demonstrates that, given the size of modern economies, solving it directly would occupy computers for thousands of years. The market and the price mechanism, by contrast, are remarkably effective for such an easy-to-use system.
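A back-of-the-envelope version of Shalizi's scale argument: interior-point methods for linear programming cost roughly n^3.5 arithmetic operations for n decision variables, and a modern economy has an enormous n. The product count and machine speed below are illustrative assumptions, not Shalizi's exact figures:

```python
# Rough cost of solving the central planner's linear program directly.
n_products = 10 ** 7      # assumed number of distinct goods to allocate
flops = 10 ** 15          # assumed machine speed: one petaflop

operations = n_products ** 3.5            # interior-point cost estimate
years = operations / flops / (3600 * 24 * 365)
print(f"roughly {years:.0f} years per planning cycle")
```

And this is before adding constraints, nonlinearities, or the problem of gathering the data in the first place -- which is part of Shalizi's point.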

Nick Hanauer and Eric Beinhocker seemingly independently took this idea up earlier this year, saying that capitalism is an algorithm that solves problems. Economics must therefore be the study of that algorithm. Rather than treating the market as a force of nature, they argue that the market-as-algorithm is supposed to function in the service of humanity. These top-level interpretations are the flip side of the reformulations in mathematics. The former gives the machine a purpose, while the latter fiddles with the gears.

We return to Steve Ellis's question: Other than giving a general motivation for capitalism and a framework for interpreting the value of the results of capitalism, what are these reformulations as computer science good for? I mean, as far as the science of economics goes? I can see that if you believe capitalism is an algorithm, it e.g. forestalls moralizing in favor of the market distribution of resources. But it doesn't help figure out how the Phillips curve works. (I'm definitely not saying these ideas are wrong!)

What is the impact on the everyday work of particle physics of seeing quantum field theory in terms of fiber bundles? I will tell you as a particle physicist: very little. Physics at least has topological solutions that are illuminated by the fiber bundle approach (instantons, the Aharonov-Bohm effect, etc.). I will make the bold claim that economics has no such topological solutions.

I'd venture to say that recasting economics in this or that mathematical framework or coming up with a new analogy doesn't really add new capabilities to the science of economics that didn't exist before. This pessimism may seem weird coming from a blog that's all about reformulating economics as information theory.

But!

There are quantitative results that come from this particular reformulation -- a quantitative treatment of money as the unit of account and medium of exchange, an excellent model of the price level (including Japan and the general trend towards disinflation), and a reason for the evolution of the Phillips curve, among others.

At least, that's the answer to Steve Ellis's question [2].

[1] See this link to Einstein's special theory of relativity for a hint of how Maxwell wrote them down.
[2] This post can be seen as a sequel to this older post that references some of the same material.