Tuesday, November 15, 2016

The surge in the 10-year rate

That was quite a surge in the 10-year rate after the election, but it's still consistent with the error bands I put forward in 2015:

The Blue Chip Economic Indicators (BCEI) continues to be incorrect.


Update 17 January 2017

That surge is starting to look transient ... stay tuned:


  1. This raises a point which I don’t think you have discussed before: the difference between unconditional and conditional forecasts.

    An example of an unconditional forecast would be that (barring an event which is outside of all human experience), the next eclipse of the sun will be on a specific date and at a specific time. The probability of the event is so close to 100% that we can ignore the difference.

    An example of a conditional forecast would be that, if a business gets a new product to market before its rival, its sales will increase by 20%. The probability of the event is much less than 100% and the probability is, itself, part of the forecast.

    Businesses and governments use mostly conditional forecasts to create plans for various scenarios. They try to imagine alternative futures. For example, the business above would also forecast what would happen to sales if its rival is first to market with the new product. Businesses also plan for disasters e.g. an oil major might ask what happens if a major distribution depot catches fire and is put out of commission for some time. They also plan for predictable fluctuations e.g. it is well known that demand for electricity peaks at the end of major televised sporting events due to people using kettles to make hot drinks. This last example illustrates a herding event which is why it disrupts demand and why it needs to be included in any useful forecast.

    It seems likely that the election result caused the change in your graph.

    A conditional forecaster would point out, before the election, that the election result might impact markets. They would produce conditional forecasts based on the possible results. This requires that their models can accept the different conditions as input parameters and that they contain some logic which is conditional on those parameters.

    The problem for your method is that you appear to be doing unconditional forecasting. Even if your assumption that individual events are random is correct over 99% of the time, it will be wrong when a major disruptive event occurs, such as last week’s election. However, these are the events where we most need an accurate forecast to guide our decisions.

    Your assumption that everything is random seems to preclude conditional forecasting. How do you square this circle in your mind?

1. Hi Jamie, it's been a while!

      Technically, the forecast above is conditional: it assumes NGDP and the monetary base (minus reserves) are log-linear stationary stochastic processes. In fact, it should be thought of more as a relationship among three variables such that

      f(r, NGDP, MB) ≈ 0

      plus some stochastic error (ARIMA process).
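The stated assumptions can be sketched in a few lines. This is only an illustration of the structure described above — log-linear trends for NGDP and the monetary base with a stationary (AR(1), i.e. ARIMA-like) error term, tied to the rate through a stand-in log-log relationship. Every numerical value (growth rates, AR coefficient, the coefficients `c0` and `c1`, the functional form of `f`) is a hypothetical placeholder, not the model's actual calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 120                       # months of simulated data
t = np.arange(n)
a, b = 0.004, 0.003           # illustrative monthly log growth rates (assumptions)

def log_linear_with_ar1(trend, phi=0.8, sigma=0.005):
    """Log-linear path plus a stationary AR(1) error term."""
    eps = np.zeros(n)
    for i in range(1, n):
        eps[i] = phi * eps[i - 1] + rng.normal(0, sigma)
    return trend + eps

log_ngdp = log_linear_with_ar1(np.log(18000) + a * t)
log_mb   = log_linear_with_ar1(np.log(3000) + b * t)

# Stand-in for the relationship f(r, NGDP, MB) ≈ 0: here a simple
# log-log form r ≈ c1 * (log NGDP - log MB) + c0 with made-up coefficients.
c0, c1 = -2.0, 1.5
r = c1 * (log_ngdp - log_mb) + c0
```

The point of the sketch is that the rate inherits the log-linear trend of the NGDP/MB ratio, while the AR(1) terms produce mean-reverting fluctuations around it.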

      It is also based on the monthly average rather than the daily -- although I show the daily data because it is updated daily. For the daily data, the error bands should be wider.

      But the model is based on information equilibrium, so the change due to the election should evaporate over time. If it does not (at least without a strong deviation from the log-linear path of NGDP and MB), then the model is wrong and could be rejected.

Effectively, there was a coordination in the market (by the election result) that led to non-ideal information transfer. If the market stays non-ideal for a long period of time, then information equilibrium (which assumes ideal information transfer) is not useful -- the scope I(NGDP) ≈ I(MB) is not valid.

      So it's a test. I say this fluctuation disappears in the data, much like Draghi's comments about monetary policy impacting exchange rates.

      I have a nice post on this with cool animations here:


    2. Thanks for the reply. However, I think that we are using “conditional” in different ways.

      Take an example. In the UK, the key macro-economic issue of 2016 and beyond is the impact of Brexit on the economy. This is based not only on whether we leave the EU or not, but also on the options for the specific terms on which we leave.

      To make sense of this, we need conditional forecasts to tell us which options have which implications. However, your forecasting method (which you define as a function of r, NGDP and MB) does not appear to help with this type of problem as none of the three parameters have anything to do with the UK’s position in the EU or its options for moving forward.

      This seems to me like a challenge to your method – not a challenge to the mathematics of the method but a challenge to its usefulness. If you can’t help with the biggest macro-economic issue of the day, we need another method, but that also raises the question of what types of issue your method would help us to address?

That’s an open question, not a condemnation, but I’m not clear what your answer would be. However, it is a question about the relevant problem set rather than the technique or solution set, whereas your posts are mostly about technique.

Jason: "it's been a while!"

      Yes, I have cut back on my economics blogs reading and commenting. I am focusing on specific areas of interest to me rather than just reading widely around the subject which I did for six or seven years. If I had more discipline, I would try to write a book on basic macro-economics for the lay person to save others the time and effort. However, I don’t have your admirable level of dedication and focus. Maybe next year!

    3. I think we are sort of using conditional in different ways, but I also think we're defining model usefulness in different ways.

      One of the main motivations for my model was to come up with the simplest possible one. It would be so basic as to ignore nearly everything. Another version of this would be simple log-linear "models". With that, the model would act as the "nothing happens" counterfactual. If it continued to describe interest rates in light of Brexit or Trump, then maybe those things aren't important to economics. Maybe it will fail, and those things are important.

In a sense, I wanted a non-political, simplistic, (mostly) unconditional counterfactual to anchor various claims. And where there are differences, then there are interesting effects (e.g. the impacts of the various conditionals you mention).

      An example of this is here where I looked at WWII price controls in the US:


      There are differences from the model where it looks like price controls did have an impact. I'd call that a success.

      Basically the counterfactual is one of the hardest things to pin down in any economic model or conditional forecast. You never see it. I'd hope that my approach would produce counterfactuals. I'd expect to see deviations from them!

      In a sense, the graph in the post above would raise the question: if interest rates are consistent with where they'd be predicted to be back at the beginning of 2015, then was there an effect, or was it simply market randomness that will fade away?

      Since economic models are so bad, we need "dumb" counterfactuals to see if any model is actually bringing anything to the table.

    4. In the above, I said:

      "maybe those things aren't important to economics."

      I should have said:

      "maybe those things aren't important to interest rates."

    5. Thanks. That's an interesting reply. It raises a number of questions in my mind, but I need to think further before replying more fully.

    6. I agree with some of your points here, and I would like to agree with other points, but they don’t quite add up for me. Here are a few random thoughts.

      You have previously suggested that your method might replace traditional economics (mainstream and heterodox) or at least their forecasting methods. However, if you are now positioning your method as an honest counterfactual, we still need other forecasting methods to answer the types of question I raised earlier.

      I agree that simpler models are better than more complex models. Einstein said this, and he knew what he was talking about. However, that applies only on a like-for-like basis. You can’t compare models designed for different purposes purely on their relative simplicity. The types of model required to answer my questions are inevitably more complex than your model even at a micro level. The first mathematical model I came across in industry was a model used by an oil major to plan its downstream activities in the UK. The model was complex because it needed to consider demand for oil products in different parts of the country; crude oil prices and availability; oil product prices and marketing campaigns; the relative merits of manufacturing finished products versus buying them on the market or from overseas subsidiaries of the same oil major; the logistics of the manufacturing and distribution processes; the potential for swaps of oil products with other oil majors in the UK; the probabilities associated with unexpected events that might impact on demand and other unexpected events which might impact on supply capability etc.

If I have understood correctly, you now seem to be saying that there are two success criteria for your model. First, your model will forecast accurately in the absence of a major shock. Second, your model will not forecast accurately when a major shock occurs, but the divergence will highlight that a major shock is occurring. I would like to believe that. However, it means that your model can be deemed to be successful irrespective of whether it forecasts accurately or not. How do you test a model of this nature?

      I agree that many macro shocks could be short-term effects that quickly die out. However, after a shock has occurred, it becomes part of history, so the shock is available as data to your forecasting method. I presume that your method will respond to that new data by moving the forecast towards the shock. I am struggling to see how you could differentiate between a short-term shock which reverts to your forecast, and a longer-term shock where your forecast moves towards the shock. Both would see the data and the forecast converge. Also, if your method moves the forecast towards the shock after the event, it is not a true counterfactual. Surely a counterfactual would ignore the shock after it happened as well as not predicting it before it happened.

      I don’t work in finance so don’t know much about the forecasting methods used by financiers. However, I do know that some people use moving averages of different durations (e.g. 10 days, 20 days, 200 days) as forecasting tools. This seems akin to your idea of a base-line counterfactual against which to measure major shocks. A moving average is even simpler than your method. Have you thought of comparing your results to moving averages? Alternatively, have you thought of using your method to try to outperform moving averages on stock prices? You could make money if your method is better! Also, there is much more data available on stock prices than macro prices so it might be easier to reach a scientific conclusion.
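The moving-average baseline the commenter describes is simple enough to state in a few lines. This is a generic sketch of the idea, not anything from the post: forecast the next value as the mean of the last k observations, with the price series below purely made up for illustration:

```python
def moving_average_forecast(series, k):
    """Forecast the next value as the mean of the last k observations."""
    if len(series) < k:
        raise ValueError("need at least k observations")
    return sum(series[-k:]) / k

# Hypothetical daily closing prices, just for illustration.
prices = [100, 102, 101, 103, 105, 104, 106, 108, 107, 110]

slow = moving_average_forecast(prices, 10)  # long window: smooth, slow to react
fast = moving_average_forecast(prices, 3)   # short window: reacts quickly
```

The different window lengths (10, 20, 200 days in the comment) trade off smoothness against responsiveness, which is why such averages can serve as a "nothing happens" baseline to compare a model against.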

    7. Regarding your last point first, I have put the model up against "moving averages" (actually various methods), and it does pretty well.

      It is probably true that one could make some money, and I've thought about it. However I think testing over about 3 years isn't enough (I expect the model to do its best over 5-10 years based on some analysis). I could just be more risk averse with money than I am with my reputation.

      I agree with your question about success criteria -- it would be hard to test the model if it truly was "information equilibrium is right, except when it's wrong". The key is that the failure mode of information equilibrium is not open ended: information equilibrium remains a bound on the time series when it is failing (it's called non-ideal information transfer). The relevant math is actually used in estimating stochastic differential equations.

      Regarding your question about forecasts moving towards the shocks, there are ways of handling that (here are the plucking models, and here is one possible way to statistically identify non-ideal shocks compared to "normal" shocks).

  2. "Your assumption that everything is random seems to preclude conditional forecasting. How do you square this circle in your mind? "

With error bands that grow ever more divergent and, in extreme cases, by extending them out to 10 SDs. You can never have enough of a good thing.

1. The error bands are 90% confidence limits (about 1.6 SDs) for a stochastic process and follow the typical ~√t behavior of Brownian motion.

Ten standard deviations would be 99.999999...% confidence and would show error bands a bit more than 6 times as wide.
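The arithmetic here can be checked directly, assuming Gaussian errors (the √t half-width formula is the standard one for a Brownian-motion-like process; σ is a generic scale, not the model's fitted value):

```python
import math
from statistics import NormalDist

nd = NormalDist()

# Two-sided 90% band: 5% in each tail, so the cut is at the 95th percentile.
z90 = nd.inv_cdf(0.95)        # ≈ 1.645 standard deviations

# How much wider a 10-SD band is than the 90% band.
ratio = 10.0 / z90            # ≈ 6.1, i.e. "a bit more than 6 times as wide"

# For a Brownian-motion-like process, the band half-width grows like sqrt(t).
def half_width(t, sigma=1.0, z=z90):
    return z * sigma * math.sqrt(t)
```

Doubling the forecast horizon widens the band by a factor of √2, which is the ~√t behavior mentioned above.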

      If you'd like to hear my answer to Jamie's question, it appears above.