Monday, July 27, 2015

Kaldor, endogenous money and information transfer

Nick Edmonds read Kaldor's "The New Monetarism" (1970) a month ago and put up a very nice succinct post on endogenous money.
The first point is that, for Kaldor, the question over the exogeneity or endogeneity of money is all about the causal relationship between money and nominal GDP.  The new monetarists ... argued that there was a strong causal direction from changes in the money supply to changes in nominal GDP ... 
Endogenous money in this context is a rejection of that causal direction.  Money being endogenous means that it is changes in nominal GDP that cause changes in money or, alternatively, that changes in both are caused by some other factor.
I've talked about causality in the information transfer framework before, and I won't rehash that discussion except to say causality goes in both directions.

The other interesting item was the way Nick described Kaldor's view of endogenous money:
As long as policy works to accommodate the demand for money, we might expect to see a perpetuation in the use of a particular medium - bank deposits, say - as the primary way of conducting exchange.  ...  But any stress on that relationship [between deposits and money] will simply mean that bank deposits will no longer function as money in the same way. The practice of settling accounts will adapt, so that we may need to revise our view of what money is.
One interpretation of this (I'm not claiming this as original) is that we might have a hierarchy of things that operate as "money":

  • physical currency
  • central bank reserves
  • bank deposits
  • commercial paper
  • ... etc

In times of economic boom, these things are endogenously created (pulled into existence by [an entropic] force of economic necessity). The lower on the list, the more endogenous they are. When we are hit by an economic shock, stress on the system causes these relationships to break, one by one. And one by one they stop being (endogenous) money. In the financial crisis of 2008, commercial paper stopped being endogenous money.

Additionally, a central bank attempting to conduct monetary policy by targeting e.g. M2 can stress the relationship between money and deposits, causing it to behave differently (which, Nick reminds us, is similar to the Lucas critique argument).

This brings us to an interpretation of the NGDP-M0 path as representing a "typical" amount of endogenous money that is best measured relative to M0. Call it α M0 (implicitly defined by the gray path in the graph below). At times, the economy rises above this value (NGDP creating 'money', e.g. as deposits via loans, as well as other instruments being taken as money, like commercial paper). When endogenous money is above the "typical" value α M0, there is a greater chance it will fall (the hierarchy of things that operate as money starts to fall apart when those relationships are stressed).


Another way to put this is that the NGDP-M0 path represents the steady state (or the vacuum solution, in particle physics terms), and fluctuations in endogenous money make up the theory of fluctuations around the NGDP-M0 path. The theory of those endogenous fluctuations isn't necessarily causal from M2 to NGDP; however, the NGDP-M0 relationship is causal in both directions (in the information transfer picture).

At a fundamental level, the theory of endogenous fluctuations is a theory of non-ideal information transfer -- a theory of deviations from the NGDP-M0 path in both directions (see the bottom of this post).
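
As a rough illustration of that last point, here is a minimal sketch in Python: treat the NGDP-M0 path as a log-linear relation and read the "excess" endogenous money off the residuals. The series are made-up placeholders standing in for FRED data, not a fit to the actual model.

import numpy as np

# Illustrative sketch, not the actual model fit: fit the NGDP-M0 path as a
# log-linear relation and treat deviations from it as "excess endogenous money".
# The series below are placeholders standing in for FRED data (nominal GDP and
# currency in circulation).
rng = np.random.default_rng(0)
t = np.arange(200)                                   # months
log_m0 = 6.0 + 0.005 * t                             # smooth, exponential-ish M0
log_ngdp = 1.0 + 1.4 * log_m0 + 0.03 * np.sin(t / 20.0) + 0.01 * rng.standard_normal(200)

# Fit the "gray path": log NGDP = a + b log M0
b, a = np.polyfit(log_m0, log_ngdp, 1)
path = a + b * log_m0

# Deviations above the path ~ more endogenous money than the "typical" alpha M0;
# deviations below ~ endogenous money unwinding (non-ideal information transfer).
excess = log_ngdp - path
print(f"fitted exponent ~ {b:.2f}; largest boom deviation ~ {excess.max():.3f} (log units)")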

19 comments:

  1. Jason, interesting.

    O/T: Is your opinion of Sadowski's conclusion similar to your opinion on his 1st in this series?

    1. Yes. If you look at the graphs he has in logs, the log scale on the side measuring the base goes up by more than 1 -- the axis goes from 7.2 to 8.4. That represents a greater than e-fold increase (actually about a factor of 3.3). On the other axis, the scale usually covers only a fraction of that.

      So what Sadowski has shown is that tripling the monetary base can get you a few percent increase in some economic variables (assuming the monetary model is correct).

      I think when he says a 2.2% increase in the base leads to x% increase in the other variable, he means a 2.2% increase in the log of the base -- i.e.

      exp(1.022*8.0)/exp(8.0) = 1.19 ... i.e. a 20% increase in the base.
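
      Here is that arithmetic as a quick sanity check (the log MB = 8.0 is just a value read off the graph axis):

      import numpy as np

      # A "2.2% increase" applied to log MB (taking log MB = 8.0 from the axis)
      # is a much larger move in MB itself.
      log_mb = 8.0
      print(np.exp(1.022 * log_mb) / np.exp(log_mb))   # ~1.19, i.e. roughly a 20% increase in MB
      print(np.exp(8.4 - 7.2))                         # the axis range itself: an e^1.2 ~ 3.3-fold increase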

      That is of course consistent with my conclusion that doubling (100% increase in) the monetary base would lead to a 9% increase in the price level:

      http://informationtransfereconomics.blogspot.com/2015/07/the-sadowski-theory-of-money.html

      However, I don't think doubling the base (again) and getting a few percent extra output is what people mean by effective monetary policy. That's why I think Sadowski is being disingenuous -- he is obfuscating that conclusion (which you can reach with a simple linear model) with VARs, Granger causality and impulse responses.

    2. Thanks Jason. Did you see the comments? Nick Rowe & I ask him questions and he answers.

    3. Yes, that is basically what he says throughout the posts. In the comment he says a 2.4% increase in MB in month 1 leads to a 0.25% increase by month 10 (2.4% / 0.25% ~ 10) ... roughly equivalent to the 100% increase leading to a 9% increase in the price level (i.e. about a factor of 100% / 9% ~ 10).

      Before the liquidity trap, the factor is closer to 1; in 2008 it suddenly goes up to 10 ... which sounds like a liquidity trap to me.

      I believe the impulse responses are showing a positive shock to MB leads to a delayed (by 10 months) and suppressed (by a factor of 10) positive movement in other variables (in the various posts).

      That is to say if log P ~ (k - 1) log MB we have

      log P ~ log MB

      before the crisis and

      log P ~ 0.1 log MB

      after. Which is roughly the idea behind the liquidity trap model in the information transfer framework ... k goes from being about 2 (kappa ~ 1/2) to being about 1.1 (kappa ~ 0.9).
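
      Here is a rough numerical version of that k shift (round numbers from above, not a fit):

      import numpy as np

      # log P ~ (k - 1) log MB, with the round IT index values from above.
      def dlogP(dlogMB, k):
          # change in log price level for a given change in log MB at IT index k
          return (k - 1.0) * dlogMB

      dlogMB = np.log(2.0)                 # a doubling of the base
      print(dlogP(dlogMB, k=2.0))          # pre-crisis, k ~ 2: log P moves one-for-one with log MB
      print(dlogP(dlogMB, k=1.1))          # post-2008, k ~ 1.1: response suppressed by a factor of ~10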

      It turns out it is more empirically accurate and less dramatic if you use "M0" i.e. the monetary base without reserves.

      But that is why I said Sadowski is my first monetarist convert! It's the information transfer model with varying IT index!

  2. "It turns out it is more empirically accurate and less dramatic if you use "M0" i.e. the monetary base without reserves."

    1) "Accuracy" implies that there is a statistically significant relationship between currency in circulation and prices and/or output. And yet over the period since December 2008 there is no evidence that such a relationship exists. So that's an extremely bold claim with essentially nothing to back it up.
    2) The central bank has no real control over the amount of currency in circulation, so even if there was such a relationship, it would have no useful policy implications.

    1. Hi Mark,

      Regarding 1, accuracy is a measure of 'correctness' that has nothing to do with statistical significance. For example, an accurate measure of inflation would center on the actual values of inflation over a period of time. You could measure relative accuracy between two models by comparing their residuals when compared to existing data or looking at predictions.

      I don't restrict the domain to 2008-present, either. I'm looking at the full range of available data. The most accurate (mainstream) model of the US price level thus far was the P* model from the Fed in the early 1990s. It does outperform the information transfer model (ITM) on the data from 1960-1990, but the ITM does better on out-of-sample data:

      http://informationtransfereconomics.blogspot.com/2014/07/notes-from-ben-bernanke-and-p-model.html

      However, the P* model had about 10 parameters IIRC while the ITM price level model has 3, so even though P* outperformed the ITM on 1960-1990 in terms of accuracy, it doesn't outperform in terms of, e.g., the Akaike information criterion, which penalizes the extra parameters.
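
      To make the parameter-count point concrete, here is a toy AIC comparison -- the residual numbers are made up for illustration; only the parameter counts come from the models above:

      import numpy as np

      # AIC for a least-squares fit: 2k + n ln(RSS/n); lower is better.
      def aic(rss, n, k):
          return 2 * k + n * np.log(rss / n)

      n = 120                               # ~30 years of quarterly data, illustrative
      print(aic(rss=0.010, n=n, k=10))      # a "P*-like" model: slightly tighter fit, 10 parameters
      print(aic(rss=0.011, n=n, k=3))       # an "ITM-like" model: slightly looser fit, 3 parameters
      # With these (made-up) residuals the 3-parameter model wins on AIC despite the larger RSS.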

      Regarding 2, ... what? Are you saying central bank reserves are the only component of the monetary base that has policy implications?

      MB = M0 (Fed has no control) + reserves (Fed controls)

      I'm mostly just confused about that.

      Also, the Fed takes requests for physical cash from banks (converted from their reserves) and passes them to the Treasury for printing. It is true that the Fed typically grants such requests, but it does seem to have at least one knob.

      However in the ITM, M0 is directly related to the long term (10 year) interest rate, so they could potentially adjust it that way.

    2. The residuals are related to the degree to which the model's other variables have a statistically significant relationship with the variable that is being predicted. Since there is no statistically significant relationship between currency in circulation and prices and/or output over the period since late 2008, a model that relies on currency in circulation is almost guaranteed to have larger residuals than a model that relies on the monetary base during that period.

      "Regarding 2, ... what? Are you saying central bank reserves are the only component of the monetary base that has policy implications?"

      No, I'm saying the central bank does not have precise control over the amount of currency in circulation or the amount of reserves. It has precise control over their sum.

      "It is true that the Fed typically grants such requests, but it does seem to have at least one knob."

      True, the Fed could in theory limit the amount of currency in circulation. But it cannot compel the public to hold more currency than it chooses to.

    3. Hi Mark,

      log M0 ~ 1.4 log NGDP

      https://research.stlouisfed.org/fred2/graph/?g=1xgK

      It's highly correlated -- but it really isn't enough data to draw a conclusion.
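
      For what it's worth, here is roughly how that regression could be run in Python. The FRED series IDs (CURRCIR for currency in circulation, GDP for nominal GDP) and the date range are my guesses at what sits behind the linked graph, so the fitted slope need not exactly reproduce the 1.4 above:

      import numpy as np
      from pandas_datareader import data as pdr

      # Pull the (guessed) FRED series, align weekly currency with quarterly NGDP,
      # and fit log M0 = slope * log NGDP + intercept.
      raw = pdr.DataReader(["CURRCIR", "GDP"], "fred", "2008-12-01", "2015-07-01")
      q = np.log(raw.resample("QS").mean().dropna())

      slope, intercept = np.polyfit(q["GDP"], q["CURRCIR"], 1)
      print("slope:", round(slope, 2), "correlation:", round(q["CURRCIR"].corr(q["GDP"]), 3))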

      And no, the ITM model using the full base (MB) has larger residuals than the M0 model, so that argument can't be correct:

      http://informationtransfereconomics.blogspot.com/2014/02/models-and-metrics.html

      Regarding your last statement, I think that is the idea behind the hot potato effect ... if given extra currency via a helicopter drop, for example, people will seek to unload it, raising the price level in the process.

    4. There are 78 months, which is plenty of data. According to my estimates the p-value for the Granger non-causality test is 52.6%, 73.4%, and 77.2% from currency in circulation to CPI, PCEPI and industrial production respectively. A model relying on currency in circulation during the last six and a half years would have relatively large residuals.

      A helicopter drop of cash would essentially be an increase in the monetary base, and much of it would probably be converted into bank reserves.

    5. Hi Mark,

      From your blog posts it looks like you are doing Granger causality tests up to at least 10 lags -- that is at least a 20-variable model (testing in one direction) ... 78 data points aren't enough for that regardless of the inputs.

      My comment about there not being enough data was based on the fact that the economic variables under consideration are generally exponential functions with fairly regular growth rates -- so even though the relationship:

      log P(t) = a log MB(t-q) + b

      is highly statistically significant over any given 5 year stretch, you shouldn't mindlessly follow the results of statistical tests. The data are exponential and any two exponentials will be related by the above relationship. You're not learning that your model or theory is correct. You're learning that economic systems tend to be exponential. Interpreting statistical significance as model correctness is an inference error.
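
      A quick synthetic example of that inference error -- two independent, exponentially growing series still give a "significant" log-log regression:

      import numpy as np
      import statsmodels.api as sm

      # Two series that are unrelated by construction, but both grow exponentially.
      rng = np.random.default_rng(42)
      t = np.arange(60)                                             # five years of monthly data
      log_mb = 7.0 + 0.010 * t + 0.005 * rng.standard_normal(60)
      log_p  = 4.5 + 0.002 * t + 0.001 * rng.standard_normal(60)

      fit = sm.OLS(log_p, sm.add_constant(log_mb)).fit()
      print("slope p-value:", fit.pvalues[1], "R^2:", round(fit.rsquared, 3))
      # Tiny p-value and high R^2 -- yet the series share nothing but an exponential trend.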

    6. I love this thread.

      Jason, can you reproduce the p-values Mark gets above?

      Mark/Jason: what are the p-values for MB instead? Are these p-values meaningful here?

    7. "Mark/Jason: what are the p-values for MB instead?"

      Read my posts Tom and you'll find out.

    8. "The data are exponential and any two exponentials will be related by the above relationship. You're not learning that your model or theory is correct. You're learning that economic systems tend to be exponential. Interpreting statistical significance as model correctness is an inference error."

      Just because two variables both exhibit a similar trend does not mean there is a statistically significant relationship between them. Moreover there are often statistically significant relationships between variables which do not exhibit similar trends. So frankly this statement is weirdly nonsensical.

      One should always check to see if the results of a model are statistically significant. If they are not then that is probably good reason to be skeptical of the model.

      And since one should always be interested in seeing if a model's results are statistically significant, one should go to the trouble of doing extensive statistical diagnostics to ensure that a model is properly specified.

    9. Mark,

      I needed some graphs so I put it all in a blog post:

      http://informationtransfereconomics.blogspot.com/2015/08/statistical-significance-is-not-model.html

      Tom,

      I haven't tried to reproduce Mark's results. I don't think the p-values are wrong or anything -- I think it's the interpretation that is questionable.

  3. "From your blog posts it looks like you are doing Granger causality tests up to at least 10 lags -- that is at least a 20-variable model (testing in one direction) ... 78 data points aren't enough for that regardless of the inputs."

    There's one Granger causality test with six lags (Aaa Corporate Bonds). Since there are two variables, two extra lags as exogenous variables and a constant, that adds up to 2*6+2+1=15 covariates. In that particular test there are 78-6-1=71 included observations. Is that a problem? No, since the p-value of that test is already less than 1%.

    Why does sample size matter, if at all? The Power of a test (i.e. the ability to generate statistically significant results) increases with sample size. More ("moar Power") is always better. But the results of all of the Granger causality results I posted on Historinhas are statistically significant, so sample size isn't an issue in any of them.

    So the real question is whether the sample size is large enough to assert that the Granger causality results concerning currency are meaningful.

    Most information criteria suggested a maximum lag length of one for all three of those tests. Thus the number of covariates is 2*1+2+1=5. The number of included observations is 78-1-1=76.

    Is 76 enough observations to estimate 5 parameters? It depends on how much Power you think is "acceptable".

    Rules of thumb on this issue abound, but 80% Power is almost always the threshold for what is considered "acceptable". With five or fewer parameters, the rule of R. J. Harris (1985) says the number of observations must be at least 50 plus the number of parameters estimated in order for the Power to be 80% or higher (and 76 > 55).

    Would more observations reduce the p-value of those tests? Possibly, but even if you doubled, tripled, or increased the number of observations ten-fold, it's all but certain the p-values would not fall from more than 50% or 70% to less than 10%.
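
    For anyone who wants to replicate the flavor of these tests outside EViews, here is a minimal sketch in Python's statsmodels. The series are synthetic stand-ins for log CPI and log currency, and the lag of 1 just follows the information-criteria choice above -- this is not the actual EViews setup:

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    # Synthetic stand-ins: 78 monthly observations of log CPI and log currency.
    rng = np.random.default_rng(1)
    n = 78
    log_currency = np.cumsum(0.003 + 0.002 * rng.standard_normal(n))
    log_cpi      = np.cumsum(0.001 + 0.001 * rng.standard_normal(n))

    # First column is the effect, second the candidate cause:
    # does currency Granger-cause CPI? Difference first for (rough) stationarity.
    data = np.column_stack([np.diff(log_cpi), np.diff(log_currency)])
    result = grangercausalitytests(data, maxlag=1)
    print("lag-1 F-test p-value:", result[1][0]["ssr_ftest"][1])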

    1. Mark, what does this mean?

      "Most information criteria suggested a maximum lag length of one for all three of those tests."

      I think I understand lag length, but I don't know what the "most information criteria" are or how you would have determined that they suggest that. Thanks.

    2. Tom,
      I'm referring to the lag order selection criteria, specifically the sequential modified Likelihood Ratio test (LR), the Akaike Final Prediction Error (FPE) criterion, the Akaike Information Criterion (AIC), the Schwarz Information Criterion (SIC) and the Hannan-Quinn Information Criterion (HQ). With the exception of the LR test, the lowest criterion value is preferred.

      When estimating a Vector Auto-Regression (VAR) model, EViews enables one to generate all the information criteria up to a specified maximum lag value, or "pmax". I usually start with small values of pmax and increase it incrementally until a majority of the criteria have settled on a particular lag length.

      There are no hard and fast rules about using lag length selection criteria, but if you think models with parsimonious lag lengths are preferred, then the approach I use is probably the best.
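
      (For readers without EViews: here is a rough statsmodels equivalent of that lag-selection step. It reports AIC, BIC/SIC, FPE and HQ, though not the sequential LR test, and the data below are synthetic placeholders.)

      import numpy as np
      from statsmodels.tsa.api import VAR

      # Two synthetic (stationary) placeholder series, 78 observations each.
      rng = np.random.default_rng(7)
      data = 0.01 * rng.standard_normal((78, 2))

      # Let the information criteria vote on the lag length up to pmax = 8.
      selection = VAR(data).select_order(maxlags=8)
      print(selection.selected_orders)     # e.g. {'aic': ..., 'bic': ..., 'fpe': ..., 'hqic': ...}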

