Wednesday, July 1, 2015

Paul Krugman's definition of a liquidity trap

In the discussion in this post, I mentioned how I viewed the liquidity trap and zero lower bound:
[the phrase "zero lower bound"] is used to mean the appropriate target nominal interest rate (from e.g. a Taylor rule for nominal rates, or estimate of the real rate of interest after accounting for inflation) is less than zero (i.e. because of the ZLB, you can't get interest rates low enough). I've usually stuck to [that] definition ...
I am happy to report that it is essentially the same as Paul Krugman's definition [pdf] of a liquidity trap:
Can the central bank do this? Take the level of government purchases G as given; from (2) and (5) this will tell you the level of C needed to achieve full employment; (4) will tell you the real interest rate needed to get that level of C; and since we already have both the current and future price levels tied down, this implies a necessary level of the nominal interest rate. So all the central bank has to do is increase the money supply until the rate is at the desired level. But what if the required nominal rate is negative? In that case monetary policy can’t get you there: once the interest rate hits zero, people will just hoard any additional cash – we’re in the liquidity trap.
Bold emphasis was italic emphasis in the original. As Paul Krugman says: the required (target) nominal rate is negative. The observed nominal rate is unimportant except as an indicator of the point where people hoard cash.

H/T to Robert Waldmann for pointing me to the reference.


  1. "The observed nominal rate is unimportant except as an indicator of the point where people hoard cash."

    I guess you mean unimportant to the word game of definitions... if the observed nominal interest rate is significantly above zero, there is basically no way to determine whether the "required nominal rate" needs to be below zero... we do a pretty embarrassingly poor job of measuring the Wicksellian rate, or whatever you want to call it...

    1. Point taken. That is definitely true. I expect that the model is a typical macro model in that if you try to use it to describe empirical data, it'll fail.

    2. Is the Wicksellian rate measurable? It does not have an operational definition, right?

    3. I mean, if it is not measurable, there is no shame in not measuring it. You can't do a poor job of it.

  2. Did Waldmann point you to the reference from his blog?

    O/T: any intention to look at Sadowski's latest?

    I haven't dug in myself yet... but I am happy to report that I do know and use Cholesky decompositions (which makes me feel incrementally better about digesting it later when I have time to take a detailed look... Lol)

    1. Oh dear god no.

      In part 1, if you take the (approximate) scaling required to move the two curves that close to each other, this is what you get outside of the region of the plot he shows:

      On the other hand, he is using local information equilibrium in saying that CPI ~ (SBASENS/m0)^k ... it just doesn't have the same information equilibrium relationship outside of 2009-2015 ... you are eventually led here:

      Also ... k ~ 1/6, so you could multiply the base by a factor of 100 and only about double the price level (if the model above were correct). I'd take that as a sign of monetary policy ineffectiveness.
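
      The arithmetic behind that claim is quick to check (a minimal sketch; the value k ~ 1/6 is taken from the comment above, and the relationship CPI ~ (SBASENS/m0)^k is the one stated there):

```python
# If the price level scales as P ~ (MB/m0)^k with k ~ 1/6,
# then scaling the monetary base MB by a factor of 100
# multiplies the price level by 100^(1/6):
k = 1 / 6
price_level_factor = 100 ** k
print(price_level_factor)  # ~ 2.15, i.e. only about double
```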

    2. "Oh dear god no." ... Lol... OK, now I'm definitely going to go back and actually read his posts. You have me intrigued. Thanks for taking a look.

    3. Tom! Cholesky decompositions are not neutral! They are evil! Develop the entire impulse response space instead.

    4. I did a new post on it. Mark Sadowski effectively proves the liquidity trap for the US by being the first monetarist to build and test an information transfer model. It's quite a victory!

    5. LAL, I have no reason to doubt you... it's just that it's pretty rare that I recognize a name when I encounter mathy econ, so I was happy just to recognize that one. I use Cholesky decompositions for an entirely different purpose: a quick & easy way of generating a set of sample vectors from a zero-mean jointly Gaussian distribution with covariance C (the answer is x = chol(C)'*randn(size(C,1), num_samp)). Perhaps I used it to check for matrix positive definiteness too (for an embedded application?)... the decomposition fails if the matrix is not positive definite, and it's super easy to calculate.

      I will have to educate myself to even understand Sadowski's post. I did a little reading about VARs today, but I'll have to revisit later tonight. I think I get the gist of it so far though.

      Then when I get to Sadowski's "part 2" hopefully his use of Cholesky decompositions and your comment will make sense to me! (c:

      I've also seen them misused in my line of work, but I doubt that has anything to do with it.

    6. Not that I'm saying Mark misused them... but it sounds like that's your implication. I'll check out your link (work keeps getting in the way here (c:)


Comments are welcome. Please see the Moderation and comment policy.

Also, try to avoid the use of dollar signs as they interfere with my setup of mathjax. I left it set up that way because I think this is funny for an economics blog. You can use € or £ instead.
