Monday, February 1, 2016

Model forecast update: core PCE inflation

Core PCE numbers were released today for December 2015 (and Q4 2015 is now available as well). Only 0.51% (!)

Anyway, here are the updated comparisons of the IT model forecast (from 2014!) with the data, as well as with other models. I continue to include the FOMC forecast from 2014: not only has it been completely defeated by the IT model, but that is especially funny because core PCE inflation is a) one of the FOMC's key indicators, and b) ostensibly something under the FOMC's control.

The head-to-head with the FRB NY DSGE model remains essentially a dead heat, with the IT model still a touch better (and starting a quarter earlier).
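A "head-to-head" like this amounts to comparing forecast errors against the realized data. A minimal sketch using RMSE, with made-up numbers (none of the series below are the actual PCE data or model outputs):

```python
# Hypothetical head-to-head: score two forecasts against data by RMSE.
# All numbers are illustrative, not the actual core PCE series.
from math import sqrt

data     = [1.4, 1.3, 1.3, 1.2, 1.3]   # hypothetical quarterly core PCE (%)
model_a  = [1.4, 1.4, 1.3, 1.3, 1.3]   # hypothetical forecast A
model_b  = [1.6, 1.5, 1.4, 1.4, 1.5]   # hypothetical forecast B

def rmse(forecast, actual):
    """Root mean square error of a forecast against realized data."""
    return sqrt(sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(actual))

print(rmse(model_a, data))  # smaller RMSE = closer to the data
print(rmse(model_b, data))
```

The model with the smaller RMSE wins the head-to-head; a "dead heat" means the two errors are nearly equal.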

David Beckworth's "corridor model" seems like it should have predicted QE4 at the beginning of 2015 -- or at least no rate hike at the end of it! But as with all market monetarist models, I can't actually use it to make predictions (see here or here) -- only David Beckworth can interpret it.


  1. Great!

    Sorry Jason, you've probably answered this before... and I did check out several layers of links looking for the answers, but I didn't see them:

    light green dashed = monthly data?
    dark green solid = quarterly data?
    black = ? previous DSGE predictions maybe?
    dark gray zone = +/- 1-sigma on IT model?
    light gray zone = 90% confidence band on IT model?
    dark orange zone = +/- 1-sigma on FOMC or DSGE?
    light orange zone = 90% confidence band on FOMC or DSGE?

    I'm guessing 90% rather than +/- two sigma since it seems less than twice the width of the darker shades in both cases.


      green = monthly data
      dark green = quarterly data
      black = quarterly data available in 2014 (basis for NY Fed DSGE)
      errors are 70% and 90% confidence

      Except FOMC, which is just central tendency and spread of the "dots" from the FOMC.
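      For what it's worth, the band widths are consistent with 70% and 90% confidence: assuming normally distributed errors (an assumption, since the error distribution isn't stated), the 90% band is only about 1.6 times as wide as the 70% band -- "less than twice the width." A quick stdlib sketch:

      ```python
      from statistics import NormalDist

      # Two-sided z-scores for 70% and 90% confidence bands,
      # assuming normally distributed errors (an assumption here).
      z70 = NormalDist().inv_cdf(0.5 + 0.70 / 2)  # ~1.04 sigma
      z90 = NormalDist().inv_cdf(0.5 + 0.90 / 2)  # ~1.64 sigma

      print(round(z70, 2), round(z90, 2), round(z90 / z70, 2))
      ```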

  2. Very interesting- will have to keep following through 2017, when the divergence between the IE model and Fed's estimates will be large.

  3. O/T: Stephen Williamson put up a collection of forecast charts (e.g. from Sweden) like you have above.

    I asked if he had models he could compare them against, and he said:

    "If I couldn't do that, my econometrics teachers would be very disappointed. These models are boilerplate. Everything reverts to trend, with a bit of Phillips curve thrown in (and the Phillips curve is part of what's throwing them off). The confidence intervals you can't take seriously."

    But he didn't actually produce any.

    1. Which made me think: what does he mean by "Everything reverts to trend"? Is he talking about modeling the "wiggles," as Roger Farmer says, and just assuming it reverts to a long-term trend? "Econometrics" makes me think of VAR models (like the ones I'm guessing Allan Gregory builds).

      It occurred to me that your P : N <-> M model (AD-AS), with dynamics for the resultant k parameter, is a model of the trend, isn't it? Isn't that what Farmer is advocating in his critique of RBC that Noah was so impressed with?

    2. I think Williamson meant he could make a VAR model like the ones for Sweden -- of which I have no doubt. I don't think he interpreted your question as "can you compare your favorite model (e.g. some neo-Fisher model) against the results", but rather "could you make some graphs like these". Compare models vs replicate results.

      In those VAR models, you have an AR process shock that decays to zero over time. Once you remove the trend (e.g. via an HP filter), the AR processes represent the wiggles.

      Nearly all of the IT models are models of the trend; deviations from the trend are either the result of actual fluctuations (like the QTL "model") or of non-ideal information transfer (like the short-term interest rate model).
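      The mean reversion in those shocks can be sketched with a bare AR(1) process (the persistence value 0.8 is illustrative, not from any particular VAR):

      ```python
      # Minimal sketch: an AR(1) shock decaying back to trend.
      # rho and the shock size are illustrative choices.
      rho = 0.8          # persistence; |rho| < 1 guarantees mean reversion
      shock = 1.0        # one-time deviation from trend at t = 0

      path = [shock]
      for _ in range(20):
          path.append(rho * path[-1])

      # The deviation decays geometrically back to the trend (zero here):
      print(path[0], path[5], path[-1])
      ```

      This is the sense in which "everything reverts to trend": absent new shocks, the wiggles die out and the forecast collapses onto the trend line.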

    3. BTW, Levine didn't really give me an answer. He did say that "statistical mechanical models" have been used in economics, with the "assumption free" ones producing "empty results" while others which "have assumptions" lead to "substantial results." Then he gave me this link (I guess as an example of one having "substantial results"):

      I read the abstract, and I don't get the connection.

    4. Maybe I'll just thank him, and include a link to your latest response to Avon. ;D

    5. That paper uses random networks, but still is about the details of human actions (they form links through people they "know" and derive some utility from those links).

      Not sure what the substantial results are -- there is no comparison to empirical data. It's a theoretical result about the prisoner's dilemma in a social network.

      I have no doubt that such models could be useful, but they aren't directly related to the info eq approach.


Comments are welcome. Please see the Moderation and comment policy.

Also, try to avoid the use of dollar signs as they interfere with my setup of mathjax. I left it set up that way because I think this is funny for an economics blog. You can use € or £ instead.
