Tuesday, September 30, 2014

Deriving the "fudge factor" in the interest rate market

Here's a possible solution to one of the unsolved problems of information transfer economics: the fudge factor $c$ in the interest rate market.

In the (long term [2]) interest rate market there was an unexplained constant $c$ so that the market was (in my shorthand Price:Demand → Supply)

r^{1/c}:NGDP \rightarrow M0

So that the final equation relating the interest rate to NGDP and the currency base M0 was

\text{(1) } \log r = c \log \left( \frac{NGDP}{\kappa M0} \right)

And the best fit is shown in the graph above. This is a weird way of using the information transfer model (ITM). We're not describing the interest rate market in terms of the supply and demand for bonds. The interest rate has an inverse relationship with the price of bonds, so the price (the $r^{1/c}$ [1]) in the interest rate market defined by Price:Demand → Supply is actually a price for money (not bonds) and the supply is the money supply (not the supply of bonds).
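Since equation (1) is linear in log space, fitting $c$ and $\kappa$ amounts to an ordinary least squares regression of $\log r$ on $\log NGDP/M0$. Here is a sketch on synthetic data -- the "true" parameter values below are arbitrary assumptions for illustration, not the actual fit from the post:

```python
import math
import random

random.seed(0)

# Equation (1): log r = c log(NGDP/(kappa M0)) = c x - c log(kappa),
# with x = log(NGDP/M0) -- linear in log space, so c and kappa can be
# recovered by ordinary least squares.

# Synthetic "data" generated from assumed true values (illustrative only)
c_true, kappa_true = 0.3, 10.0
x = [random.uniform(2.0, 4.0) for _ in range(200)]            # log(NGDP/M0)
y = [c_true * (xi - math.log(kappa_true)) + random.gauss(0, 0.02)
     for xi in x]                                             # log r

# Ordinary least squares for y = c * x + b
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
c_fit = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
b_fit = ybar - c_fit * xbar

# The intercept is b = -c log(kappa), so kappa = exp(-b/c)
kappa_fit = math.exp(-b_fit / c_fit)

print(c_fit, kappa_fit)  # should come out near 0.3 and 10.0
```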

The key to understanding why we ended up with this backwards way to go about making an information transfer model for the interest rate lies in the ISLM model. And that also gives us the solution for whence the constant $c$.

In the ISLM model, we have the LM market where there is a function $L(i, Y)$ that represents the demand for money [3]. Here's a diagram from Wikipedia:

Well, basically, the slope of $L(i, Y)$ (in log space) is $c$. If we write down the ITM version of the function $L$ we get:

\text{(2) } r = \frac{Y_{0}}{k M_{ref}} \exp \left( -k \frac{M0 - M_{ref}}{Y_{0}} \right)

In equation (1) above, we have the relationship

\text{(3) } r \sim 1/M0^{c} = \exp \left(-c \log M0 \right)

If we do a Taylor series of equation (3) around $M_{ref}$, keeping only the linear terms, we get

\exp \left(-c \log M0 \right) \simeq \exp \left(  -c \frac{M0 - M_{ref}}{M_{ref}} - c \log M_{ref} \right)

= {M_{ref}}^{-c} \exp \left(  - c \frac{M0 - M_{ref}}{M_{ref}} \right)

Which is essentially the form of equation (2) above with

\frac{c}{M_{ref}} \rightarrow \frac{k}{Y_{0}}

and a normalization factor out front (that will depend on NGDP). So now we can see what $c$ is

c \sim \frac{k M_{ref}}{Y_{0}}
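The Taylor series step can be sanity-checked numerically: at $M_{ref}$ the exact power law and the linearized exponential agree exactly, and nearby they differ only at second order in $(M0 - M_{ref})/M_{ref}$ (the values of $c$ and $M_{ref}$ below are arbitrary illustrative choices):

```python
import math

c, M_ref = 0.3, 100.0  # illustrative assumptions

def exact(M0):
    # r ~ 1/M0^c, equation (3)
    return M0 ** (-c)

def approx(M0):
    # M_ref^{-c} exp(-c (M0 - M_ref)/M_ref), the linearized form
    return M_ref ** (-c) * math.exp(-c * (M0 - M_ref) / M_ref)

for M0 in (90.0, 100.0, 110.0):
    print(M0, exact(M0), approx(M0))
# At M0 = M_ref the two agree exactly; 10% away they differ by
# well under a percent.
```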

This makes me feel much more comfortable with the interest rate model since the constant $c$ isn't so much of a "fudge factor" anymore.

[1] Note that whether you have c or 1/c depends on which side you put the constant on in equation (1), and I haven't necessarily been consistent throughout the blog.

[2] It's in the short term interest rate market as well; I am just using the long term market as an example in this post -- for the short term market, replace M0 with MB.

[3] Added 10/3/2014: I am using r = i here and the generic ISLM model doesn't make a distinction between real and nominal rates -- I am referring to nominal rates.

Inflation in Japan (update/correction)

In this update, I'm also correcting the graph from this post. The error was mostly due to the kludgey way I put the new data points in (itself due to Mathematica's automatic date recognition choking on a Japanese Excel file): instead of adding the new dates to the end of the time series, it started replacing earlier data points (which cascaded into other problems with e.g. the error bands).

Anyway, here are the corrected graphs for the normal model and the smoothed version (with a new data point for August on the end and extrapolation in the smoothed version):

Here's the reference for interpreting the graph. The orange line is the segment of the model that was fit to the data and the red line is the results using those same parameters (the light red line takes out the estimated fiscal component of Abenomics).

Monday, September 29, 2014

Jason versus the New York Fed DSGE model

A research group at the NY Fed has released some forecasting results [1] with their DSGE model, so here's a comparison with the information transfer model (ITM). The Fed results are in red (and observations in black), and the ITM is blue with gray error bands (in both cases the errors are the 70 and 90% confidence limits). These are both quarterly models/error estimates, but the data (shown in green) is monthly, so it should show a bit more spread. Here is the graph:

As a side note, the ITM is essentially linear extrapolation at this short scale. Well, short for the ITM. The prediction does turn into a shallower curve as it nears zero inflation rate (about 0.3%) as you head out towards 2050 (think Japan). However, the assumptions of log-linear extrapolation of NGDP and currency base will probably fail before then.

As for the relative simplicities of the two models, well there's this (on the left is the FRBNY DSGE model and on the right is the ITM):

Sorry, I couldn't help myself. But really ... 29 parameters [pdf] versus 2.

Update (2:45pm PDT): forgot a H/T to Mark Thoma.

[1] They are careful to say that these are not official Fed forecasts, but rather for research purposes.

Saturday, September 27, 2014

The economic combinatorial problem

I mentioned in this post that an economy is a combinatorial problem (along with some hints at entropy); let me sketch out the mathematical side of the argument.

I connected $\log M$ (where $M$ is the currency supply) with $\beta = 1 / T$ in the partition function a while ago (setting $k_{B} = 1$). If we take the thermodynamic definition of temperature:

\frac{1}{T} = \frac{dS}{dE}

as an analogy (where $S$ is entropy and $E$ is energy), we can write (putting in a constant $c_{0}$ that corresponds to the constant $\gamma M_{0}$ in the information transfer model):

\log M/c_{0} = \frac{dS}{dN}

where we've used the correspondence of the demand (NGDP, or $N$ -- i.e. aggregate demand AD) with the energy of the system. We don't know what the economic entropy is at this point. However, if we take (using the solution shown here):

N = M^{1/k}

Then, absorbing constant factors into $c_{0}$, we can write down

\frac{k}{c_{0}} \log (N/c_{0}) = \frac{dS}{dN}

So that, integrating both sides,

S = k \; \frac{N}{c_{0}} \left( \log \frac{N}{c_{0}} - 1 \right) + C

Which, via Stirling's approximation, gives us (dropping the integration constant $C$)

S \simeq k \log \; (N/c_{0})!

If we compare this equation with Boltzmann's grave:

S = k \log W

We can identify $(N/c_{0})!$ with the number of microstates in the economy. The factorial $N!$ counts the number of permutations of $N$ objects, and we can see that $c_{0}$ corrects for the indistinguishability of permutations -- all the permutations where dollars are moved around within the same company or industry are likely indistinguishable. This could lend itself to an interpretation of the constant $\gamma$ mentioned above: large economies are diverse and likely have similar relative sizes of, for example, their manufacturing and service sectors -- once you set the scale of the money supply $M_{0}$, the relative industry sizes (approximately the same in advanced economies) are set by $\gamma$.

This picture provides the analogy that a larger economy ($N$) has larger entropy (economic growth produces entropy) and lower temperature ($1/\log M$).
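The Stirling step in the derivation above can be checked numerically, using the log-gamma function for the log-factorial ($\log n! = \log \Gamma(n+1)$); the values of $k$, $c_{0}$ and $N$ are arbitrary illustrative choices:

```python
import math

k, c0 = 1.0, 10.0  # illustrative assumptions

def entropy_stirling(N):
    # S = k (N/c0)(log(N/c0) - 1), the integrated form
    n = N / c0
    return k * n * (math.log(n) - 1.0)

def entropy_factorial(N):
    # S = k log((N/c0)!), computed via log Gamma(n + 1) = log n!
    n = N / c0
    return k * math.lgamma(n + 1.0)

for N in (1e3, 1e5, 1e7):
    s1, s2 = entropy_stirling(N), entropy_factorial(N)
    print(N, s1, s2, (s2 - s1) / s2)
# The relative difference shrinks as N grows -- Stirling's
# approximation improves for larger "economies".
```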

The Great Stagnation: the information transfer non-story

I think one of the issues I have with the limited interest in the information transfer model among professional economists is a language barrier. I'm not fully versed in the language of economics and most economists aren't versed in the language of physics. In the post below I make references to "degrees of freedom" and "strongly coupled" mostly out of habit where an economist would say "agents" and "not in partial equilibrium". I probably need to shift a bit more toward the economists -- especially since I'm having a go at reinventing their entire field. However, in the long run, if this information transfer model (ITM) is correct (a big if), economists will have to learn some statistical mechanics.

That's because there's another issue: the idea of a "story". I think this is intimately linked with the degrees of freedom in the theory being humans as opposed to particles. Scott Sumner didn't see the story [1] in my attempt to explain how a falling exchange rate isn't necessarily a sign of inflation. Paul Krugman didn't see the story in Stephen Williamson's deflationary monetary expansion. In statistical mechanics, I don't try to come up with a story for why a molecule in an ideal gas decides to occupy a given energy state -- it occupies a given energy state because that is the most likely thing for it to do given the infinite number of possible energy states that are consistent with the macroscopic information I know (like pressure and temperature). The main insight of the information transfer model is that it doesn't really matter what people think (see e.g. here or here) ... there is no story.

With that throat-clearing out of the way, let me set about writing the information transfer "non-story" of the Great Stagnation.

I wrote a comment on Scott Sumner's post on the mysteriously low long run interest rates in the US, Canada, the EU and Japan (and earlier on Nick Rowe's Canadian-centric post on the same subject). I made the claim that maybe markets were coming to grips with a world of chronic under-shooting of inflation targets (Canada isn't doing this yet, but should soon if the model is correct). The picture you should have in your mind is this one:

The upper left graph shows that as economies grow (under a given definition of money), inflation slows down. The bottom left shows the same for NGDP: stagnation. On the right side are simulations based on 100 markets with random growth rates. That is the source of the story. However, this is not a story of technological stagnation, per Nick's comment on Scott's post. It's (an absence of) a story about the most likely transaction facilitated by a given dollar ending up in a low growth market.

Let's tally up a set of random markets by growth rate at one instant in time. Each box represents one market [2]:

High growth markets (industries) are on the right and low growth (or declining) markets are on the left. Now any given market might move from where it is in this distribution from year to year -- a typical story would be an industry that starts up in a high growth state, moves to lower and lower growth and might eventually collapse. The distribution doesn't change, though. When that industry moves from the high side to the low side, its position on the high side is replaced by some other industry. If it collapses completely, it falls off the diagram and is replaced by some new industry. In the picture above, when growth in the market represented by the box with the "X" slows down, it moves to some new location, shown in the picture below:

The two pictures are drawn from the same distribution (a normal distribution with the same mean and variance) -- industry "X" just went from high to low growth and some other industry took its place (although you can see it doesn't have to in order to keep the distribution the same).
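A toy version of this box-shuffling can be simulated, assuming growth states are redrawn from a fixed normal distribution (the number of markets, mean and spread below are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(42)

N_MARKETS, MEAN, SD = 1000, 0.02, 0.05  # illustrative assumptions

# Initial growth-rate state of each market (the "boxes")
growth = [random.gauss(MEAN, SD) for _ in range(N_MARKETS)]

for year in range(10):
    # Each year a random fifth of the markets gets a new growth state
    # (industry "X" slows down, something else takes its place);
    # replacements are drawn from the same distribution.
    for i in random.sample(range(N_MARKETS), N_MARKETS // 5):
        growth[i] = random.gauss(MEAN, SD)
    print(year, statistics.mean(growth), statistics.stdev(growth))
# Individual markets wander around the distribution, but the mean
# and spread stay roughly constant from year to year.
```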

This is where the key insight of the information transfer model comes in: that replacement happens for some random reason -- invention of the computer, a war causes oil prices to go up and oil companies make big profits, everyone starts a yoga class, everyone buys an iPhone and stops buying Nokia phones. Some companies are mismanaged. Some are well-managed [3]. Borrowing a picture from David Glasner, some plans are thwarted, others work out better than expected [4]. There are thousands of such stories in an economy and they all tend to cancel out (we muddle through) most of the time, leaving the distribution unchanged.

Well, almost unchanged. Sometimes the changes in the locations of the boxes become correlated and you get a recession (plans that depend on each other get thwarted [4]). Over time the economy grows and the distribution shifts. How does it shift during growth? Like this schematic:

The smaller economy is the blue curve and the larger one is the purple curve. A larger economy is more likely to have its low growth rate states filled simply because there are more ways that an economy can be organized where this is true (given the details of the macro state -- e.g. NGDP, monetary base, price level). This is analogous to the molecule in the ideal gas. It is unlikely to find all of the high growth states occupied just like how it is unlikely to find an ideal gas where all of the energy is in a few molecules [5]. It's also unlikely to find all of the markets in the average growth state -- just like an ideal gas doesn't allocate an equal amount of energy to each molecule. 

In physics, we'd say a bigger economy has higher entropy: there are more possible states for each of the constituent markets to be in consistent with the macro information we know (like NGDP). We are missing more information about the exact microstate given the macrostate when the economy is larger (another way of saying it has higher entropy).

There isn't a reason or a story behind this. By random chance you are more likely to find an economy with markets occupying a distribution of growth states with an average that gets smaller as the economy gets larger. If you follow a dollar in the economy, as the economy grows larger, you are more and more likely to find that dollar being used in a low growth industry.

Maybe a better way to put it is this: because there isn't a reason for the markets in an economy to be in any particular growth state (no one is coordinating a market economy), you treat all possible states as equally likely and the result is a distribution where the average growth rate decreases with the size of the economy.

This is the "Great Stagnation" (the supply-side version) or "secular stagnation" (the demand-side version). Supply and demand are strongly coupled in the ITM (i.e. not in partial equilibrium) so reduced demand growth is reduced supply growth and vice versa. It's not because all the easy things have been invented or the easy gains from the inventions that happened during WWII have been realized. It's not slowing growth of the working age population. It is quite literally a combinatorial problem with Dollars (or Euros or Yen, etc) and units of NGDP. And it happens because there is no story. It happens because the economy isn't being coordinated by anyone -- we just find it in its most likely state. That most likely state is one that grows more slowly as the economy expands.

How do we solve this problem? One way is to coordinate the economy, like in WWII (or communist economies) -- but the coordination problem is hard to solve [6] and the economy would probably collapse eventually. Another way is to change the combinatorial problem by redefining money through monetary regime change or hyperinflation. A third way is to leave it alone and provide better welfare programs to handle economic shocks [7]. Secular stagnation essentially renders the central bank impotent to help against the shocks. This third option seems preferable to me: it reduces the influence of an un-elected group on the economy (e.g. the ECB or FRB). The lack of inflation will be harder on people who borrow money, but hey, interest rates fall!


[1] In the original comment, Sumner was saying that he didn't see the story where inflation doesn't lead to currency depreciation. However, in that case, the story is a traditional economics story -- in exchange rates, ceteris doesn't seem to be paribus (supply and demand shifts are strongly coupled) and an expansion of the currency supply is always accompanied by an increase in demand to grab it (at least empirically).

It's actually a similar story to "loose" money leading to high interest rates -- there is a short run drop due to the liquidity effect, but inflation and income effects cause rates to rise in the longer run. In fact, it is governed by the same equation (except that in the case of interest rates the information transfer index varies causing the relationship to change slowly over time).

[2] I'm assuming all the markets are the same size right now, but that is not a big deal. Fast growing markets will get big with slow growing (or shrinking) markets getting smaller relative to the other markets. As these markets move around the distribution, their average growth rate will be the average of the distribution.

[3] Note that there is little evidence that a CEO has a significant effect on the company's stock price, which tends to follow the industry average (or the S&P 500).

[4] I borrowed this picture from David Glasner who describes an economy in terms of the coordination of plans:
The Stockholm method seems to me exactly the right way to explain business-cycle downturns. In normal times, there is a rough – certainly not perfect, but good enough — correspondence of expectations among agents. That correspondence of expectations implies that the individual plans contingent on those expectations will be more or less compatible with one another. Surprises happen; here and there people are disappointed and regret past decisions, but, on the whole, they are able to adjust as needed to muddle through. There is usually enough flexibility in a system to allow most people to adjust their plans in response to unforeseen circumstances, so that the disappointment of some expectations doesn't become contagious, causing a systemic crisis. 
But when there is some sort of major shock – and it can only be a shock if it is unforeseen – the system may not be able to adjust. Instead, the disappointment of expectations becomes contagious. If my customers aren't able to sell their products, I may not be able to sell mine. Expectations are like networks. If there is a breakdown at some point in the network, the whole network may collapse or malfunction. Because expectations and plans fit together in interlocking networks, it is possible that even a disturbance at one point in the network can cascade over an increasingly wide group of agents, leading to something like a system-wide breakdown, a financial crisis or a depression.
[5] This isn't always true -- a laser works by creating a population inversion where the high energy states are occupied. 

[6] I'm so glad I get a chance to link to what I consider to be the greatest blog post of all time anywhere.

[7] Shocks -- unmitigated by monetary policy -- are the major drawback of secular stagnation.

Wednesday, September 24, 2014

Does Canada know about the information transfer model?

Nick Rowe put up a post yesterday and to simplify the discussion, let me just quote him:
Either the bond market thinks it will take a very long time for the Bank of Canada to get back to the neutral rate [1-2% real rate], or else the bond market thinks that the neutral rate is lower than the Bank of Canada thinks it is.
First, let me say that it is probably impossible to get anything out of market fluctuations of this size. The graph at the end of Nick's post is inside the green square in this picture:

These are really tiny movements.

Now the information transfer model (ITM) is probably the best model of interest rates in existence [1] (the results for Canada are also shown in the graph above). Again, the movements Nick is talking about are swamped not only by market fluctuations, but by model error.

However, there is another possibility Nick doesn't mention in his either/or quoted above: the bond market could believe Canada will undershoot its inflation target. Why would they believe this? Maybe some people read this post and believed the model. The ITM predicts the average inflation rate going forward 30 years is about 1.6% in Canada, which means that the nominal neutral rate [2], instead of being 3% to 4%, is actually 2.6% to 3.6% -- exactly where the data is. I show both these ranges on the graph above as well (in gray and blue, respectively).
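The arithmetic here is just shifting the assumed 1-2% real neutral rate range by the ITM inflation projection; a trivial check (the 1.6% figure is the model projection quoted above):

```python
# ITM projection: 1.6% average inflation over the next 30 years;
# assumed real neutral rate range: 1% to 2% (per the post)
itm_inflation = 0.016
real_neutral = (0.01, 0.02)

# Nominal neutral rate = real neutral rate + expected inflation
nominal_neutral = tuple(r + itm_inflation for r in real_neutral)
print([f"{r:.1%}" for r in nominal_neutral])  # ['2.6%', '3.6%']
```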

Nick will probably be apoplectic about the Bank of Canada being unable to meet its 2% inflation target (if he even notices). Personally, I'd subscribe to the "it's within the model error" view.


[1] That's a joke. Or hubris. Take your pick :)

[2] Add inflation to the real rate of 1-2%

PS Graphs of the inflation model projection:

Tuesday, September 23, 2014

Information transfer prediction aggregation

Here is a list of the predictions I've made with the information transfer model (ITM). This is mostly to help me keep track of what predictions I've made, but should also help keep me honest.

These are all inflation predictions for the US in a head to head with the Federal Reserve. The ITM says the same thing for all of them -- a slow downward drift in inflation over the medium term:


This has the same inflation prediction as above, but includes a comparison between the ITM and David Beckworth's claim that the Fed is targeting 1-2% PCE inflation:


Here is the most recent update of a more complete version of the model that incorporates a guess for the path of monetary policy and predicts YoY inflation, RGDP growth, interest rates and unemployment:


Here is a prediction for Canada; I am hoping to test the hypothesis that a central bank can always achieve an inflation target it wants to (per e.g. Nick Rowe) versus a hypothesis that Canada will undershoot based on the ITM:


Japan is another interesting venue for testing the ITM as Japan is deep in the liquidity trap and things like deflationary monetary expansion become a possibility. This isn't so much of a prediction (I haven't projected the inflation rate for Japan) but rather following the model as new NGDP and currency data becomes available:

Update/correction 9/30/2014:

I have some older predictions on the site; however, they were made before I figured out the relationship between the monetary base (including reserves) and the currency component. I'd consider them moot at this point.

Jason versus the Fed (update)

I realized I forgot to apply the same adjustment to the error I applied here [1] to the result here [2]. In the graph at the second link [2], I show the expected error in the monthly PCE inflation values -- but the FRB/US model is quarterly. So here is the same graph with the expected quarterly error:

This somewhat decreases the chance we'll fail to reject either model. The overall picture is that the Fed models (and predictions in [1] above) expect that the US economy will return to 2% PCE inflation, while the information transfer model predicts the economy will continue to undershoot that value.

Monday, September 22, 2014

Jason versus the Fed Presidents (addendum)

Unfortunately the RGDP piece of the contest won't be as decisive ... the predictions of the information transfer model and the predictions of the Federal Reserve Board members and bank presidents are essentially the same. Here are the results, for completeness:

Jason versus the Fed Presidents

I came across this [pdf] during the course of a Google search; it shows the predictions of the Federal Reserve Board (FRB) members and the bank presidents. Maybe the predictions are based on models, but for some reason I believe the majority are just some gut opinions from some dudebros (e.g. this guy or this guy). Let's see how the information transfer model stands up:

This is based on the predictions here, except I used a starting date of 2013 instead of 2014. The Fed predictions are given as annual averages and so are shown as a coarse-graining. Note that the gray bands (70% and 90% confidence) are primarily due to irreducible measurement error in the monthly measurements -- the error in the annual averages will be much smaller. Here's a better apples-to-apples comparison in terms of annual averages:

I am at a distinct disadvantage in that unlike the bank presidents I don't have a vote on the Federal Reserve Board and can't influence monetary policy to e.g. make my prediction come true.

Sunday, September 21, 2014

The US economy: 1798 to the present

Tom Brown directed me to David Andolfatto's post about deflation; I am still pondering that post and plan on discussing it in the future. However, one thing that caught my eye is that Andolfatto cites measuringworth.com (MW) which has produced some estimates for the US economy going back to the late 18th century. My question is, as always, how does the information transfer model (ITM) do with this data?

One thing that is missing is currency data going back to the 1790s. That turns out not to be a big problem -- if we use the interest rate data from MW and trust the ITM interest rate model. This interest rate model seems to work right through monetary regime changes (see e.g. here or here), so I will use it to estimate the currency base (M0) based on NGDP and interest rate data from MW. The graph on the left is the interest rate data from MW and the graph on the right is the extrapolated M0 (LOESS smoothed in blue, data from FRED in light blue and the extrapolation in gray):
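The extrapolation step amounts to inverting the equation-(1)-style rate model $\log r = c \log (NGDP/(\kappa M0))$ for M0; here is a sketch with a round-trip check. The parameter values are illustrative assumptions, not the actual fit from the posts linked above:

```python
# Assumed (illustrative) fit parameters for log r = c log(NGDP/(kappa M0))
c, kappa = 0.3, 10.0

def m0_from_rate(ngdp, r):
    # Solve log r = c log(NGDP/(kappa M0)) for M0:
    #   M0 = NGDP / (kappa * r^(1/c))
    return ngdp / (kappa * r ** (1.0 / c))

# Round trip: compute r from a known M0, then recover M0 from r and NGDP
ngdp, m0 = 500.0, 20.0
r = (ngdp / (kappa * m0)) ** c
print(m0_from_rate(ngdp, r))  # recovers M0 ≈ 20
```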

One thing that we can immediately show after extrapolating the currency base is the path of the US economy through NGDP-M0 space:

One thing that immediately jumps out of this graph is a pair of trend paths (derived from the information transfer model for a large number of random markets). One holds while the US is on the gold standard and a second picks up after WWII. The three major upward deviations from the trend preceding major crashes are associated with the War of 1812 (the 1815-1821 depression), the Civil War (the 1865-1867 recession) and the Great Depression. On this scale, most of the post-war recessions of the fiat currency regime look insignificant (even the "Great Recession"). See here for more about how these deviations are associated with recessions.

MW also has price level data, so let's see how the ITM does with the price level and inflation. I broke the data into three "monetary regimes" around the establishment of the Fed (1914) and WWII (note I already dealt with the transitional period around the Great Depression and WWII here). Here are the results for the price level:

Here are the results for inflation:

Overall, the model does a pretty good job -- the ITM has somewhat less dramatic shifts in inflation (which leads to some systematic over- or underestimation of the price level, since the price level depends on the integral of inflation). I dealt with the missing part during the transition off of the gold standard here -- the annual data is too coarse to get that piece to be anything more than a line.

Thursday, September 18, 2014

The liquidity effect and the inflation/income effect

This is an update to this post where I've instead set the change in NGDP based on the increase due to the change in the monetary base (the NGDP shift is described here, the methodology for these shifts in monetary policy matches up with this post). I show the results for both an increase and decrease in the monetary base (I assume M0 follows MB up to the reserve requirement and the shift is 5% of M0):

There is a disinflationary dip (inflationary spike) at the onset of the policy change, followed by inflation (disinflation) in each case. In the 1993 and 2005 cases, nominal interest rates follow what you'd expect from a decrease (increase) in the base: rates rise (fall). What is interesting is that in the 1970s the income/inflation effect (which raises NGDP and tends to raise interest rates) is offset by the liquidity effect (which lowers interest rates). Essentially, in the equation

\log r = c \log \left( \frac{NGDP}{M} \right) + b

NGDP and M go up (or down) by approximately the same amount, leaving NGDP/M unchanged. When inflation is high, the income/inflation effect cancels the liquidity effect; when inflation is low, the liquidity effect dominates.
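The cancellation can be verified directly from the equation: scaling NGDP and M by the same factor leaves r unchanged, while moving M alone does not (the values of $c$ and $b$ below are illustrative assumptions):

```python
import math

c, b = 0.3, -1.0  # illustrative assumptions

def log_r(ngdp, m):
    # log r = c log(NGDP/M) + b
    return c * math.log(ngdp / m) + b

base = log_r(1000.0, 100.0)

# Income/inflation effect raises NGDP 5%; liquidity effect raises M 5%:
both = log_r(1000.0 * 1.05, 100.0 * 1.05)
only_m = log_r(1000.0, 100.0 * 1.05)

print(both - base)    # ~0: the two effects cancel
print(only_m - base)  # negative: the liquidity effect alone lowers rates
```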

In Williamson's post, he gives a model where the Fisher effect causes nominal rate increases to produce inflation. I have used the term "Fisher effect" in different ways on this blog. In this post, I use it to describe the direct correlation of inflation and nominal interest rates. In this post, I attributed the deviation from the model to the "Fisher effect" -- i.e. a higher interest rate than the information transfer model predicted due to some un-modeled force (like expectations). However, the information transfer model seems to confine this "Fisher effect" to long term interest rates and to the 1970s. Williamson's approach models the Fisher effect (expectations of higher inflation). However these two models can be consistent if inflation expectations are viewed in the light of this post -- agents expect future inflation if the monetary base is small relative to the size of the economy [1].

Overall, the information transfer model takes what is closer to an orthodox view -- expansionary monetary policy lowers interest rates and creates a spike in inflation, while contractionary policy raises interest rates and creates a deflationary dip. The dip in inflation is followed by a steadily higher than expected inflation after the onset of the new policy -- this effect is much smaller, but lasts longer (the integrated result is that both of these effects cancel by the time the shock of the change in monetary policy wears off). The information transfer model does indicate that the effect of monetary policy on inflation steadily diminishes from 1973 to 2005, which isn't part of orthodoxy -- except of course, this is how the model describes the liquidity trap.

Essentially, the experiment Williamson describes doesn't have an exact analogy in the information transfer model. The central bank can't choose an interest rate to hold policy at indefinitely (too high or too low would eventually lead to a boom or bust, respectively). What I'm trying to do here (and here and here) is show that the information transfer model can manifest the effects described by Williamson (under certain circumstances) in order to make contact with "real economics" as practiced by real economists.

[I'm not sure if I'm happy with this post -- it seems pretty disjoint. Consider it some "thinking out loud" ... ]

[1] This does seem to be backwards in the sense that if the monetary base is large, people seem to fear inflation is right around the corner. However, an agent who presumes ignorance of the future direction of policy would see that there are more possible states of the economy with a smaller monetary base than with a larger one as the base approaches NGDP.

Wednesday, September 17, 2014

The path of policy is strongly dependent on the path of policy

The redundant title was intended as something of a joke. I was hoping to try and do an apples to apples comparison to Stephen Williamson's last graph at this post:

However, it is impossible to construct an economy that maintains a constant nominal interest rate in the information transfer model, partially for reasons given here (constant inflation requires ever increasing NGDP), and partially for reasons given here (the interest rates need to follow a specific trend to follow the path of NGDP-M0).

Therefore I looked at the difference between the (smoothed) model result and the (smoothed) model counterfactual result. This means that the results below indicate the difference between the given variable and the expected variable path (i.e. a positive nominal interest rate means that the interest rate would be higher than the trend path). The results also strongly depend on how the nominal interest rate increase happens (in particular because inflation is a derivative, so sudden jumps have strange effects) [1]. Additionally, the inflation rate depends on the size of the economy and the monetary base, so I chose three years for the onset of the nominal interest rate increase: 1973, 1993 and 2005.

I tried three different approaches to how the nominal rate changed, and they all end up with slightly different results. The first assumes a sudden jump in the nominal interest rate that goes from being slightly below the NGDP-M0 trend to being slightly above.

The second assumes that there is a smooth (but still narrow) rise in the nominal rate with a long decay back to the trend

The third assumes that there is a smooth rise and shorter decay back to the trend

(Also note that the scales aren't all the same.)
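The three counterfactual shapes can be sketched as deviations of the nominal rate from its trend path; the functional forms and parameters below are hypothetical stand-ins for illustration, not the actual model inputs:

```python
import numpy as np

def rate_bump(t, t0=0.0, height=0.01, rise=0.5, decay=10.0):
    """A smooth rise (time scale `rise`) followed by a decay back to the
    trend (time scale `decay`); all parameters here are made up."""
    x = np.maximum(t - t0, 0.0)
    return height * (1.0 - np.exp(-x / rise)) * np.exp(-x / decay)

t = np.linspace(-5.0, 20.0, 2501)

# 1. Sudden jump from slightly below the trend to slightly above it.
jump = np.where(t >= 0.0, 0.005, -0.005)
# 2. Smooth (but still narrow) rise with a long decay back to the trend.
slow = rate_bump(t, decay=10.0)
# 3. Smooth rise with a shorter decay back to the trend.
fast = rate_bump(t, decay=3.0)
```

Feeding shapes like these into the model as the nominal rate path (and backing out the implied NGDP-M0 path) is what produces the slightly different results in each case.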

The end result is that inflation always seems to increase a bit with a nominal rate increase (the increase gets smaller and smaller over time, both in terms of the onset year -- 1973 to 2005 -- and in terms of inflation eventually returning to the trend set by NGDP-M0), broadly in line with Williamson's graph. The path inflation takes to get there varies from a sudden jump to disinflation (of varying magnitude) followed by increased inflation depending on the precise path.

It appears that nominal rate increases (which involve a reduction in the monetary base -- i.e. contractionary policy) tend to be inflationary (relative to a case where there was no rate increase) in the medium to long run because they move the price level down (P decreases if M0 decreases, ceteris paribus), so inflation must increase in order to return the price level to its expected long run path. It's a bit like how digging a hole in front of a mountain can make the average grade steeper on the far side of your hole. Here is a graph of the price level (with red being the counterfactual contractionary policy):

That dip and the return to the trend is responsible for a nominal interest rate increase leading to inflation.

[1] I also tried to make this as pure of a nominal interest rate increase as possible by making the change in the path of NGDP-M0 perpendicular to the lines of constant interest rate illustrated e.g. here. This differs somewhat from the methodology I used here, where the NGDP increase is dependent on the M0 increase.

Tuesday, September 16, 2014

Another analogy for monetary policy and recessions

The traditional mental model seems to see contractionary monetary policy as adding friction or letting up on the gas pedal in a car, for example, Noah Smith: "When the economy is doing well, raise interest rates to slow things down ..."

This is very different from the information transfer model view; I'd previously likened contractionary monetary policy to piling snow on a mountain until an avalanche occurs.

I'd like to add another mechanical analogy: contractionary monetary policy is like stalling an aircraft to lose altitude. Aircraft can become difficult to control at their stall angle/speed and you can end up losing much more altitude than you intended. Another interesting extension of this analogy is that at "high speed" (i.e. high inflation) a stall is less likely than at "low speed" (low inflation). In a sense, the economies of the US, EU and Japan all seem to be flying near their stall speeds -- contractionary policy could induce a recession.

Distilling Fisher relationship data

I mentioned I was going to say some more about Stephen Williamson's piece on the EU and the Fisher relationship (where higher interest rates are associated with higher inflation and lower rates with lower inflation). The plan of action is to remove the empirical noise from the Fisher relationship Williamson presents by finding the underlying trends beneath the fluctuations in the data (based on smoothing the model inputs).

Here is the model (blue) and the data (green) -- removing the 500 € notes -- for the price level (CPI less food and energy):

Here are the inflation rates (year over year for the data in green, instantaneous derivative of the price level for the model in blue):
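As an aside on those two measures: given any price level series they can be computed as follows (the series here is a toy monthly price level with 2% trend inflation, not the EU data):

```python
import numpy as np

# Toy monthly price level with 2% trend inflation (not the EU data).
months = np.arange(120)
t = months / 12.0                      # time in years
p = 100.0 * np.exp(0.02 * t)

# Year over year: the measure used for the (green) data curves.
yoy = p[12:] / p[:-12] - 1.0

# Instantaneous: d/dt log P, the measure used for the (blue) model curve.
instantaneous = np.gradient(np.log(p), t)
```

The two measures agree for steady inflation but diverge when inflation changes quickly, since the year-over-year number averages over the past twelve months.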

Both the inflation based on CPI and CPI less food and energy are shown. And here are the interest rates (long rate is red, short rate is blue):

So now finally we are equipped to reproduce Williamson's second to last graph:

You can see how much of the scatterplot (green points) is a deviation from the model (blue line). The noise dominates the model; it would be hard to associate any particular movement of interest rates or inflation with this relationship (only the long run trend over several years). Due to some of the data not going back to 1997, only the larger green dots are from the same period as the model result. However those points cover much of the same range as the full data set shown with the smaller green dots (although I couldn't find data on the short term interest rate that goes all the way back to 1994 like Williamson's data -- so it misses the highest interest rate part).

We do recover the Fisher relationship where inflation and interest rates are directly correlated, though. I'm not sure this is a causal relationship -- at least in the way a neo-Fisherite would see it. The neo-Fisherite view is that the central bank could e.g. raise rates and produce inflation (or keep rates low and produce deflation). However, in the information transfer model, high inflation means that the monetary base is small relative to the size of the economy (NGDP), which in turn means expansionary policy raises interest rates through the income/inflation effect. Low inflation means that the base is large compared to NGDP and the liquidity effect dominates interest rates (expansionary policy reduces rates). In this scenario, any attempt to raise interest rates (reducing M0 or MB growth) to try to increase inflation would throw the economy off the long run NGDP-M0 path, likely leading to recession.

Saturday, September 13, 2014

What is inflation?

Everybody's talking about inflation.

Stephen Williamson asks: "How do macroeconomists think about inflation?" and answers with a very long (and very interesting -- I plan on doing a post dedicated to it later) history of thought, ending with an analysis of the situation in the EU. According to Williamson, it's not obviously wage-price spirals (e.g. the Phillips curve) and it's not directly linked to the money supply -- empirically, these relationships seem to be unstable.

Simon Wren-Lewis says: "The idea that we can take one variable, or one equation, and distill from that the future price level is a fantasy. What is surprising is that this fantasy has been, and still remains, so attractive for some economists."

Wren-Lewis cites Frances Coppola, who says: "Empirically, it is abundantly clear that there is no clear relationship between the quantity of monetary base and the price level."

But Nick Rowe disagrees: he believes that printing more money than you otherwise would have leads to higher inflation than there otherwise would have been.

Scott Sumner thinks inflation is irrelevant. He says: "When I say inflation is a meaningless concept I’m suggesting that the concept is not well defined, despite the BLS’s attempts to do so." He also says: "There is no such thing as a “true rate of inflation,” but there’s also no reason to assume that inflation has not averaged 2% in recent decades. It’s just as reasonable as any other number the BLS might pull out of the air." Sumner references the hedonic adjustments made by the BLS (adjusting for the quality of goods) -- which of course makes it seem very arbitrary. I got into a discussion with Sumner and commenter Dustin about it a while back.

Overall, there seem to be a lot of ideas of what inflation is or isn't without any rigorous definitions. I was reading Romer's Advanced Macroeconomics (also on my flight home last night) where he says economists don't even necessarily know whether inflation is good or bad (Nick Rowe has a concise list of the mechanisms that make it bad: "Economists would instead talk about shoe leather costs, menu costs, relative price distortions, difficulties of indexing taxes, confused accountants, etc").

In short, there seems to be a lot of confusion.

In the information transfer model, however, the price level is pretty well defined:

P = \frac{dNGDP^{*}}{dM0}

The price level is the increase in NGDP given an infinitesimal increase in currency M0. These days you get about twice as much NGDP for the same amount of printed currency as you did in the 1980s (in the US) [1]. The inflation rate is just the time rate of change of this (the derivative of the log gives the fractional rate):

i = \frac{d}{dt} \log P = \frac{d}{dt} \log \frac{dNGDP^{*}}{dM0}
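Here's a toy numerical check of these two definitions; the power law for $NGDP^{*}$ and the currency growth rate are made up for illustration, not fitted values:

```python
import numpy as np

def ngdp_star(m0):
    # made-up functional form for the check; not a fitted model
    return m0 ** 2

def price_level(m0, dm=1e-6):
    # P = dNGDP*/dM0 via a central finite difference
    return (ngdp_star(m0 + dm) - ngdp_star(m0 - dm)) / (2 * dm)

g = 0.07                                # made-up currency growth rate
t = np.linspace(0.0, 10.0, 1001)
m0 = np.exp(g * t)                      # exponentially growing currency

p = price_level(m0)
inflation = np.gradient(np.log(p), t)   # i = d/dt log P
# For this toy, P = 2*M0, so log P is linear in t and inflation is constant at g.
```

The point of the toy: the inflation rate comes out of the time dependence of $M0$ fed through the derivative $dNGDP^{*}/dM0$, not from measured NGDP directly.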

Now you may have noticed that I slipped an asterisk on NGDP -- that's because it's not really measured NGDP, but rather the theoretical shock-free NGDP I've shown here. Here are the two compared (the blue curve is the shock-free NGDP and the red curve shows the empirical data that includes shocks):

Now the price level model in terms of the information transfer index $\kappa (NGDP, M0)$ is an approximation, so the model isn't perfect, but it is pretty good (taking the derivative of the blue line above gives the points and the inflation data is the green line):

So inflation is the time rate of change at which the "friction-free" "ideal" $NGDP^{*}$ goes up for a given expansion in currency $M0$. In terms of equations we have [2]:

NGDP^{*} \simeq \beta \left( \frac{M0}{M_{0}} \right)^{1/\kappa (NGDP, M0)}

P = \frac{dNGDP^{*}}{dM0} \simeq \alpha \frac{1}{\kappa (NGDP, M0)} \left( \frac{M0}{M_{0}} \right)^{1/\kappa (NGDP, M0) - 1}
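As a sanity check, the second expression is the $M0$-derivative of the first when $\kappa$ is held constant (only an approximation, since $\kappa$ depends on NGDP and M0), with the identification $\alpha = \beta/M_{0}$. In code, with made-up parameter values:

```python
import numpy as np

# Constant kappa for this check -- the text notes kappa actually depends
# on NGDP and M0, so this is only the leading approximation.
beta, m_ref, kappa = 2.0, 1.0, 0.8      # made-up values; m_ref plays the role of M_0
alpha = beta / m_ref                    # consistency of the two equations

def ngdp_star(m0):
    return beta * (m0 / m_ref) ** (1.0 / kappa)

def price_level(m0):
    return alpha * (1.0 / kappa) * (m0 / m_ref) ** (1.0 / kappa - 1.0)

# Check that P matches dNGDP*/dM0 via a central finite difference.
m0 = np.linspace(0.5, 3.0, 50)
dm = 1e-6
numeric = (ngdp_star(m0 + dm) - ngdp_star(m0 - dm)) / (2 * dm)
```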

Why don't we include the shocks in $NGDP^{*}$? Here's one plausible story. Maybe that derivative is supposed to be a partial derivative -- the price level is the change in NGDP that is due to a rise in the supply of currency only, independent of other factors. If the shocks $\sigma$ are independent of currency M0 (or only weakly dependent, say on $\log M0$), and $NGDP(M0, \sigma) = NGDP^{*}(M0) + \sigma$ then we have:

P = \frac{\partial}{\partial M0} NGDP(M0, \sigma) = \frac{\partial}{\partial M0} NGDP(M0, \sigma = 0)

P \simeq \frac{\partial}{\partial M0} NGDP^{*}(M0) = \frac{dNGDP^{*}}{dM0}
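This cancellation is easy to demonstrate numerically -- any toy functional form for $NGDP^{*}$ and any constant (M0-independent) shock will do:

```python
def ngdp_star(m0):
    # any toy functional form works for this check
    return m0 ** 2

def d_dm0(f, m0, dm=1e-6):
    # central finite difference in M0
    return (f(m0 + dm) - f(m0 - dm)) / (2 * dm)

sigma = 5.0  # a shock that does not depend on M0

# The shock term drops out of the M0-derivative: both values are ~6.0 at M0 = 3.
with_shock = d_dm0(lambda m: ngdp_star(m) + sigma, 3.0)
without_shock = d_dm0(ngdp_star, 3.0)
```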

Now how does the BLS measure CPI if it doesn't know about $NGDP^{*}$? I'm pretty sure they've never heard of the information transfer model. It likely helps that the individual prices the BLS measures won't all be experiencing the same shocks at any given time, so on average the price of a typical good will give insight into $NGDP^{*}$. However, since NGDP (without the asterisk) measures all goods produced, it will be affected by the distribution of prices for a given good. The BLS also drops the highly fluctuating food and energy prices from "core" inflation; there is no "core" NGDP. Both of these effects mean that the BLS CPI statistics may in fact be measuring $NGDP^{*}$ when they measure inflation.

PS I wrote this while I have a pretty serious cold, and am feeling a bit loopy; please excuse me if this doesn't make any sense.


[1] Just because the price level is going up, it doesn't mean that the relative expansion of NGDP is getting bigger for a given expansion in M0 -- for that to happen, the price level would have to go up at the same rate as NGDP, i.e. RGDP growth would be zero.

[2] In the function $\kappa (NGDP, M0)$ we have the actual realized NGDP because that is what a dollar buys a fraction of.

The bitcoin wall

There have been a couple of references to bitcoin on this blog (including a couple comments), but I was reading this on my flight back home last night and something clicked when I got to this:
Most digital currencies incorporate a pre-determined path towards a fixed eventual supply.

Since I've been looking at the path of NGDP-M0 in the past few posts (e.g. here), I realized that a bitcoin economy (where a bitcoin defines the unit of account) would reach some point along the NGDP-M0 path, stop and fluctuate around that point.

The Fed came close to experimenting with this kind of trajectory just before the Great Recession -- currency in circulation started to reach a plateau. Let's take the bitcoin counterfactual where that plateau continued indefinitely (i.e. bitcoin had existed since the 1960s or so and reached its fixed supply in 2008):

Although I guess it is possible to delete bitcoins, I also altered the time series so the bitcoin M0 never decreases. Now, what would happen to the path of NGDP-M0? Well, you hit a wall:
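Those two adjustments (freezing the supply at a plateau, and never letting M0 decrease) can be sketched on a synthetic currency series like so:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic currency series (not the actual data): noisy growth that is
# then frozen at a fixed eventual supply, bitcoin-style. The raw series
# can dip, which is why the non-decreasing adjustment matters.
raw = np.cumsum(rng.normal(1.0, 0.5, size=200))
cap_index = 150

m0 = raw.copy()
m0[cap_index:] = m0[cap_index]      # pre-determined fixed eventual supply
m0 = np.maximum.accumulate(m0)      # altered so M0 never decreases
```

Plugging a path like this into the NGDP-M0 relationship is what produces the "wall": NGDP keeps getting shocked around while M0 stays put.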

I'm not sure what would constrain the size of the shocks -- you'd have massive recessions and large booms (somewhat like the pre-Fed US, except even during those times the gold supply would increase). I would imagine that the series of shocks would eventually cause the bitcoin economy to collapse as people abandoned it for some kind of fiat currency.

Of course, if the bitcoin path were defined by the theoretical curve then it wouldn't be doomed. Maybe someone should make an info-coin that follows the theoretical path of NGDP vs M0? It is unfortunate that the name bitcoin has already taken "bit".

But fiat currency already seems to follow the path, so why introduce something new that does the same thing?

Wednesday, September 10, 2014

Suggestive graph juxtaposition


Speculative, but something to think about ...

Recessions and avalanches

Karthik questions whether my human desire to see patterns and impose order on the universe has gone too far in this post where I speculate that recessions happen when the path of NGDP vs M0 goes above the trend (I also speculate that the timing of the recession is akin to an avalanche -- you can say when the probability is high, but not exactly when the avalanche will occur). That is always a good instinct as people tend to see patterns in randomness and we should be wary of testing hypotheses suggested by the data.

I promised to graph the data in a different way to better show the result (it is somewhat hard to see in the graphs except for the dramatic case of the Great Recession -- or Great Depression e.g. here). Here is the graph of the difference between the empirical path of NGDP-M0 and the theoretical trend:

I also smoothed the data and highlighted where the data go above the zero point. Two issues: it looks like there should be a recession in the mid-to-late 1980s, and the early 2000s recession happens when the data fall below the zero point. The former happened when the Fed was raising interest rates to subdue inflation -- interestingly, this action seems to have pushed NGDP back up, much like how the Fed raising interest rates pushed up NGDP in 2006-2008. The other interesting coincidence is that a massive stock market crash (1987) and a banking crisis (1988) also happened. The difference between 1988 and 2008 was probably that monetary policy was effective during the 80s and 90s (far from a liquidity/information trap).
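For what it's worth, the smoothing-and-highlighting step works like this on a synthetic path (the trend, wiggle and noise below are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the empirical NGDP-M0 path and the theoretical trend.
t = np.arange(300)
trend = np.log1p(0.01 * t)
empirical = trend + 0.05 * np.sin(t / 30.0) + rng.normal(0.0, 0.02, t.size)

deviation = empirical - trend

# Smooth the deviation with a centered moving average, then flag where it
# goes above zero (the candidate "avalanche risk" periods).
window = 12
smoothed = np.convolve(deviation, np.ones(window) / window, mode="same")
above = smoothed > 0
```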

There is some debate as to whether the early 2000s recession was a recession at all (unemployment peaked at 6.3%, only 0.2% above where it is today, and NBER's recession probability maxes out at just above 50%) -- but not only is the recession not well described by the model, the path of NGDP-M0 also sat well below the trend "equilibrium" path for a long time. Did the economy find a different "equilibrium"? Is this one of the other states in a hidden semi-Markov model Noah Smith talks about sometimes? The other time the economy was significantly below the trend was in the 1970s. Interestingly, that coincides with the two lowest "metastable" periods in the unemployment rate I talked about e.g. here ... and that may shed some light on the above-trend piece in the 1980s, which is when the highest "metastable" unemployment periods happen.

Ok, that is too much speculation for now.

Tuesday, September 9, 2014

Jason versus the Fed

A few months ago I learned from Noah Smith's blog that the Fed had released one of its models: the FRB/US model. On the linked Fed webpage, they gave an example analysis done with this model that predicted four macroeconomic variables (clockwise from the top left: the unemployment rate, the Fed funds rate -- wait, don't they set this? [1], RGDP growth and PCE inflation):

How does the (far simpler [2]) information transfer model stack up against this model? I started from the trend inflation modeled here and predicted out to the end of 2018 (the prediction bands are the 70% and 90% bands like in the Fed model, based on a Gaussian error distribution -- which if you check the link is a pretty good model):

I was unable to find numerical data for the Fed's prediction, so I rather cheesily traced out the central value and the error bands in PowerPoint (in red) and placed them on top of a close up of the prediction in the previous graph:

The overall error at the end of the period seems about the same (recall that for short term predictions, the information transfer model is pretty much dominated by measurement error, not model error). It is kind of funny that the first few measurements after the start of the prediction jump right outside the Fed's 90% confidence interval. The interesting thing is that these models predict different things -- although given the overlap of the 70% confidence interval, there is a strong possibility of failing to reject either model (i.e. both are fair descriptions).
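The 70% and 90% bands themselves are simple to construct under the Gaussian error assumption -- the sketch below uses a placeholder central path and error scale, not the model's fitted values:

```python
import numpy as np
from statistics import NormalDist

def gaussian_bands(central, sigma, levels=(0.70, 0.90)):
    """Two-sided Gaussian prediction bands around a central path.
    Returns {level: (lower, upper)} arrays."""
    central = np.asarray(central, dtype=float)
    out = {}
    for level in levels:
        # z ~ 1.04 for the 70% band, z ~ 1.64 for the 90% band
        z = NormalDist().inv_cdf(0.5 + level / 2.0)
        out[level] = (central - z * sigma, central + z * sigma)
    return out

# Example: a flat central prediction of 2% inflation with a 0.5pp error scale
# (both numbers are placeholders).
bands = gaussian_bands(np.full(5, 2.0), 0.5)
```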

I'll keep updating this as inflation data becomes available.

For a bonus graph, here is the model result without the smoothing:


[1] Just kidding.

[2] According to the Fed website, "FRB/US currently contains about 60 stochastic equations, 320 identities, and 125 exogenous variables" ... of course it models quite a bit more (like household consumption and savings).