Tuesday, August 4, 2015

Nonlinear Signals of Unusual Size (NSUS's)

So the discussion continues (in comments here, and a post and comments here), but I think the TL;DR version of the Great Smith-Sadowski Row of 2015 is this:
Sadowski: Changes in the monetary base Granger-cause changes in the economy. [So the liquidity trap isn't real.]
Smith: You ignore large nonlinear signals in the monetary base data. 
Sadowski: There were no targets for the monetary base before the ZLB, so I ignore the data before 2009. 
Smith: If you ignore the large nonlinear signals, the data is log-linear and is likely spuriously correlated. 
Sadowski: I know about spurious correlation. You have to de-trend the data. 
Smith: You can't de-trend data with a large nonlinear signal in a model-independent way. 
Sadowski: You are guilty of showing spurious correlation, too. 
Smith: The data contains a large nonlinear signal that I don't ignore, so spurious correlation isn't an issue.
Update 8/10/2015:
Smith: Look, here's Dave Giles: "Standard tests for Granger causality ... are conducted under the assumption that we live in a linear world."
Sadowski: The title of that post references nonlinear Granger causality.
Smith: Huh? I was quoting the introduction that says why nonlinear Granger causality was invented.
Sadowski: You can't interpret a post that's skeptical of nonlinear models to mean you should be skeptical of linear models.
Smith: ???
Sadowski: Here's Chris House: "In Praise of Linear Models …"
Smith: Chris House goes on to say (in that same post): "There are cases like the liquidity trap that clearly entail important aggregate non-linearities and in those instances you are forced to adopt a non-linear approach." This is exactly the case that you are treating as linear when you say that the monetary base Granger-causes changes in the economy in order to say that the liquidity trap isn't real. ... I'm also now under the impression that you only read the titles of blog posts. 
After writing this, I think I've confirmed my feeling that this is just going in circles. You can't just sweep these under the rug:

[Graphs omitted: the monetary base series, showing its large step increases at the QE announcements.]
I mean you can just sweep them under the rug, but the sweeping is strongly model-dependent.
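
To make the model dependence concrete, here is a minimal sketch (synthetic data standing in for the base: three QE-like steps on a shallow log-linear trend; numpy and statsmodels assumed). A log-linear fit leaves structured residuals as large as the steps themselves, and first differencing converts each step into a one-period spike; neither choice is model-independent.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(120.0)  # a synthetic monthly index

# Stand-in for log(MB): shallow log-linear trend + three QE-like steps + small noise
steps = 0.6 * (t >= 30) + 0.4 * (t >= 60) + 0.5 * (t >= 90)
log_mb = 7.0 + 0.002 * t + steps + rng.normal(0, 0.005, t.size)

# Sweeping option 1: treat the series as log-linear and fit a line
fit = sm.OLS(log_mb, sm.add_constant(t)).fit()
print("log-linear fit: residuals span", round(fit.resid.min(), 2), "to", round(fit.resid.max(), 2))

# Sweeping option 2: first differences (the implicit model is a locally linear trend),
# which leaves each announcement step behind as a one-period spike
d = np.diff(log_mb)
print("first differences: median |diff| =", round(np.median(np.abs(d)), 4),
      "max |diff| =", round(np.abs(d).max(), 2))
```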

Update (+ 10 min):

Example of model-dependent sweeping: QE is expected to be temporary, therefore changes in the base will not impact the price level.

Plausible? Sure. But completely dependent on a particular model of expectations.

43 comments:

  1. I for one found the discussion extremely entertaining. IMHO Mark bobbled the ball a bit when he tried to pull math rank on you.

    1. Todd, I will heartily 2nd your opinion on the entertaining part. I will happily accept my role as math (and all around?) nincompoop here, so I won't comment on that bit.

    2. I think I am below you still on the math scale. However, what I meant was that Mark brought up his math credentials in one of his postings, trying to argue from authority. Anyone who has perused this site for any length of time would know that trying to out-math the Jason is a fool's errand.

    3. Lol, well yes, unfortunately for me though, I'd be (an even bigger) fool if I tried to out-math either of them. Although anybody can make a mistake.

      You bring up an interesting conundrum though: how does a non-expert judge what an expert is saying or judge between experts that disagree? Well other than becoming an expert yourself, or simply withholding judgement, your options are limited. Counting logical fallacies is one idea. One I like is asking them "How would you know if you were wrong?" and seeing where that leads. Not so helpful here though.

    4. Mark probably does know a lot more about statistics than I do. But this isn't a discussion that really involves e.g. Fisher information and Cramer-Rao bounds. I think the issue is that Mark is used to seeing roughly log-linear time series, so he immediately brings that context and experience to bear on every time series. In the particular instance of the monetary base, this is not appropriate. As shown above, there are real non-(log-)linear signals ... and so log-linear thinking will lead you astray.

      I also think he is absolutely convinced that changes in the monetary base have an impact on output or inflation (or other variables); I think he's told Tom as much.

      And judging a debate between 'experts' as a 'non-expert' is a difficult proposition. If either of you have seen the original "Connections" series from James Burke -- he addresses this question in the last episode. It's more in the context of making decisions about technological advances, but the idea is similar -- if you don't know e.g. how nuclear power works (more relevant in 1978 UK), how can you make informed decisions about it? He leaves it open ended.

      In any case, I don't think Mark has made any direct mistakes in his analysis. It's just that his analytical approach is the wrong one for the job. Economists are not used to time series that suddenly change by orders of magnitude, but when you spend years learning to use a log-linear hammer everything looks like a log-linear nail.

      Does the graph at the top of this post appear to be a line after 2008? Well, there is a bit of a linear trend. But there is also a series of steps that coincide with exogenous information: the Fed announcements of QE1, QE2 and QE3.

    5. Jason, thanks for the reply. I've been dreaming of a way to indicate a resolution to this that doesn't involve me spending four more years at university. What occurs to me is this: if I were King and both you and Mark were in my court, one thing I could do is have you produce a black box that arbitrarily generates a new model each time the button is pressed and then produces noise corrupted time series data from several causally and non-causally related signals as output. Mark takes the box, and using his best methods, estimates models, model parameters, statistical significance and Granger causation, etc. for a set of button presses. Then we use your methods on those same time series. Perhaps some information about the kind of arbitrary model that's generated has to be available to both of you: I'm not sure. Then you and Mark switch roles: he creates the black box. Then I can compare how each one of you did against the truth (recorded in the black box) in both cases. Using that idea as a basis can you imagine a "challenge" that might actually work and be fair, or is that a hopeless fantasy on my part?

    6. Also, thanks for the "connections" link. I've never seen it or even heard about it before.

    7. Sorry, Jason, I keep thinking about this ... even though you're probably sick of it by now. You write:

      "Does the graph at the top of this post appear to be a line after 2008? Well, there is a bit of a linear trend. But there is also a series of steps that coincide with exogenous information: the Fed announcements of QE1, QE2 and QE3"

      Do the post-2008 steps (non-linearities) matter if you're including pre-2008 data as well? What would change about your above statements if there was just one break in the log-linear trends: right at 2009 (with the post-2009 data not showing steps, but having a trend matching what you describe)? Is your concern about the post-2008 steps mostly related to strictly post-2008 analysis?

    8. ... replace all "2009" occurrences with "2008" in the above.

    9. Hi Tom,

      The steps matter in any subset of the data where they exist. The only case where the steps don't matter is if you only take data before 2008.

  2. This comment has been removed by the author.

  3. This comment has been removed by the author.

  4. This comment has been removed by the author.

  5. BTW, I tried to entice Dave Giles into adjudicating this dispute... and came oh so close, but no cigar. I was on my best behavior because he didn't publish my 1st attempt nor did he answer my email. His blog has a very technical and professional air to it, and I was starting to think that he has little patience for my clownish shenanigans and lack of expertise.

    1. Tom,
      Your comment there is almost certainly what led Dave Giles to weigh in on Jason's blithe assertion that:

      "The H-P filters are precisely "rendering a nonstationary process stationary for time series analysis". The core of real business cycle theory is that the business cycle is a stationary process."

      http://informationtransfereconomics.blogspot.com/2015/08/statistical-significance-is-not-model.html?showComment=1438737574379#c4030383867890188212

      [Note also, after I respond to that comment, Jason continues to furiously dig himself in even deeper]

      Here is Dave Giles' post on the subject (several hours after Tom's first comment at Dave Giles blog):

      "There's a widespread belief that application of the H-P filter will not only isolate the deterministic trend in a series, but it will also remove stochastic trends - i.e., unit roots. For instance, you'll often hear that if the H-P filter is applied to quarterly data, the filtered series will be stationary, even if the original series is integrated of order up to 4.

      Is this really the case?"

      [Explanation]

      "One implication is that when the H-P filter is used to remove deterministic trends, it doesn't remove stochastic trends (unit roots)! This runs contrary to the accepted wisdom, and provides a formal, mathematical, explanation for the folklore (and the evidence provided by Cogley and Nason) that the H-P filter can generate "spurious cycles" in the filtered data."

      http://davegiles.blogspot.com/2015/08/the-h-p-filter-and-unit-roots.html

      That post also ended up in Mark Thoma's links today:

      http://economistsview.typepad.com/economistsview/2015/08/links-for-08-08-15.html

      In Jason's defense, Dave Giles describes this erroneous belief as "widespread".

      However my general impression is that Jason is highly certain about a long list of things that he evidently knows very little about.

    2. And on a related note, check out this partial quote of a critique of Menzie Chinn on a recent Econbrowser comment thread:

      Rick Stryker:
      "... you’ve first got to insure that you have a valid regression. The variables are trending obviously. If there is really no relationship between them but you nonetheless estimate a linear regression, then we know asymptotically that the coefficient on Y goes to the ratio of the drifts of the series, R2 goes to 100%, and the t-statistic goes to infinity. The regression is spurious..."

      http://econbrowser.com/archives/2015/08/to-log-or-not-to-log-part-ii#comment-191345

      In Menzie Chinn's defense, he clearly does know what he is doing. He just wasn't explicit enough while trying to make an unrelated point about the usefulness of logged data.

      In my opinion the whole comment thread is worth reading. 2slugbaits even brings up the subject of Dave Giles' post on spurious regressions.

    3. Mark,

      Phillips and Jin (2015) does not say what you are claiming it says regarding our argument (HP filters are fine at removing stuff unless your smoothing parameter is much larger than your data series [1]) -- in fact, later on in the paper it points to exactly the problems with attempting to de-trend jumps in the data.

      Here's a paper that says Granger causality can't deal with trend breaks:

      https://ideas.repec.org/a/ebl/ecbull/eb-08c20013.html


      Here is Dave Giles on trend breaks:

      http://davegiles.blogspot.com/2011/04/testing-for-granger-causality.html

      "It looks as if there may be a structural break in the form of a shift in the levels of the series in 1975. We know that this will affect our unit root and cointegration tests, and it will also have implications for the specification of our VAR model and causality tests."

      Mark: THERE ARE TREND BREAKS (nonlinear signals) IN YOUR MONETARY BASE DATA. YOU CAN'T IGNORE THEM. IF YOU DO, YOUR ANALYSIS IS FLAWED.

      I don't know how to make this any clearer for you.
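
      In toy form (a sketch with synthetic data and statsmodels' adfuller, not the actual base series): a series that is nothing but noise around a constant, plus one large level shift, will often fail a standard unit root test purely because of the break. That is the sense in which unmodeled breaks bias the order-of-integration step that the causality tests rest on.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
n = 200
noise = rng.normal(0.0, 0.05, n)          # stationary by construction
shift = 0.6 * (np.arange(n) >= n // 2)    # one large level shift ("announcement")

p_no_break = adfuller(noise)[1]           # H0: unit root
p_with_break = adfuller(noise + shift)[1]

# The plain noise is (correctly) found stationary; adding the single break typically
# pushes the ADF statistic toward non-rejection, i.e. toward "unit root" -- the
# classic Perron point that breaks bias unit root tests unless they are modeled.
print("ADF p-value, no break:  ", round(p_no_break, 4))
print("ADF p-value, with break:", round(p_with_break, 4))
```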




      [1] When I read that, I did a spit take. WTF? How can you use a smoothing length greater than your data series???? Economists is weird.

    4. Let's hear it for Tom Brown! See, Mark, you always said I write a great Trog but Tom's skill set is impressive too.

      He gets to people by disarming them. I get to them by 'arming them even more.'

    5. Jason,
      "Phillips and Jin (2015) does not say what you are claiming it says regarding our argument"

      You said:
      "The H-P filters are precisely "rendering a nonstationary process stationary for time series analysis". "

      And yet Phillips and Jin say (in their abstract):
      "When it is used as a trend removal device, the HP filter therefore typically fails to eliminate stochastic trends..."

      In other words, you state that HP filters precisely render nonstationary processes stationary, and Phillips and Jin state that HP filters typically do not.

      It sounds to me like Phillips and Jin disagree with you.

      In fact in the previous set of comments I said:
      "HP filters can render nonstationary processes stationary if you set the smoothing parameter low enough in value, but that is definitely not what they are used for in practice."

      http://informationtransfereconomics.blogspot.com/2015/08/statistical-significance-is-not-model.html?showComment=1438740662798#c8313105924732633778

      Which is more or less the same as what Phillips and Jin are stating.

      Jason:
      "HP filters are fine at removing stuff unless your smoothing parameter is much larger than your data series [1].....When I read that, I did a spit take. WTF? How can you use a smoothing length greater than your data series???? Economists is weird."

      What are the typical values for lambda?

      The usual rules of thumb for lambda are:
      Annual data = 100*1^2 = 100
      Quarterly data = 100*4^2 = 1,600
      Monthly data = 100*12^2 = 14,400
      Weekly data = 100*52^2 = 270,400

      However, Ravn and Uhlig (2002) state that lambda should be:
      Annual data = 6.25*1^4 = 6.25
      Quarterly data = 6.25*4^4 = 1,600
      Monthly data = 6.25*12^4 = 129,600
      Weekly data = 6.25*52^4 = 45,697,600

      I find that when I set lambda to 100 I can render the log of the monetary base stationary over the period of ZIRP (78 months). But the usual rule of thumb is 14,400 for monthly data, and Ravn and Uhlig suggest that it should be 129,600 for monthly data.

      If my monthly data set were to be larger than my lambda value, it would have to include 1,200 years' worth of data under the typical rule of thumb, and 10,800 years' worth of data according to Ravn and Uhlig's rule.

      Try finding that in FRED.
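
      For concreteness, the rule-of-thumb arithmetic and the filter call look roughly like this (a sketch on a simulated 78-month series; statsmodels' hpfilter and adfuller assumed):

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter
from statsmodels.tsa.stattools import adfuller

lam_rule_of_thumb = 100 * 12**2    # 14,400 for monthly data
lam_ravn_uhlig    = 6.25 * 12**4   # 129,600 for monthly data

rng = np.random.default_rng(2)
y = 7.0 + np.cumsum(rng.normal(0.005, 0.01, 78))   # 78 "months" of a random walk with drift

for lam in (100, lam_rule_of_thumb, lam_ravn_uhlig):
    cycle, trend = hpfilter(y, lamb=lam)
    # Smaller lambda -> more flexible trend -> the leftover cycle is closer to
    # stationary noise; at the conventional lambdas the cycle need not be stationary.
    print(f"lambda = {lam:>9,.0f}   ADF p-value on the cycle: {adfuller(cycle)[1]:.3f}")
```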

      By the way, who primarily (if not exclusively) uses HP filters?

      Economists, or more specifically macroeconomists, or more specifically still, applied macroeconomists. What is weird is physicists telling applied macroeconomists that the way they use a tool designed for applied macroeconomics is weird.

      And let's review what you said in the previous thread:
      "Prescott likely didn't design a filter to invalidate his own approach to macroeconomics by leaving a non-stationary time series. I'm not saying that is how they are always used, but making a time series [stationary] is the original purpose."

      Then why does Prescott always set lambda to 1600 for quarterly data?

      HP filters typically will not render nonstationary quarterly data stationary at that value of lambda. If that was the original purpose of HP filters why didn't Prescott simply reduce the lambda to a level that would guarantee that it would be stationary?

      Don't you think that a guy with a Nobel Prize would be smart enough to have figured that out?

    6. Jason,
      "Here's a paper that says Granger causality can't deal with trend breaks:"

      No, that's not what it says. The key point of that paper is that structural breaks will bias unit root tests which will lead to incorrect identification of the order of integration. Determining the correct order of integration is important for valid Granger causality tests (as well as for valid cointegration tests).

      Jason:
      "Here is Dave Giles on trend breaks:"

      You left out the most important part.

      Dave Giles:
      "It looks as if there may be a structural break in the form of a shift in the levels of the series in 1975. We know that this will affect our unit root and cointegration tests, and it will also have implications for the specification of our VAR model and causality tests. *This can all be handled, of course,* but rather than getting side-tracked by these extra details, I'll focus on the main issue here, and we'll shorten the sample as follows"

      And yes, indeed, it can all be handled.

      The incorporation of breakpoint unit root tests is one of the many nice new features in the EViews 9. I've run breakpoint unit root tests on the log of monetary base during my sample period (December 2008 through May 2015) with a variety of trend and break specifications as well as a variety of automatic break selection methods. I still find that the order of integration for the log of the monetary base is *one*.

      I didn't discuss the issue of trend breaks in any of my posts for the same reason Dave Giles didn't get into it. It's a lot of extra details and the posts are already probably too technical. This is much ado about nothing.
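
      For readers following along: "integrated of order one" just means the levels fail a unit root test while the first differences pass it. The EViews breakpoint versions of the tests aren't reproduced here, but the generic check is roughly this (a sketch on a synthetic stand-in for the log base; statsmodels assumed):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
# Stand-in for log(MB) over Dec 2008 - May 2015 (78 months): a random walk with drift
log_mb = 7.0 + np.cumsum(rng.normal(0.01, 0.02, 78))

p_levels = adfuller(log_mb)[1]           # H0: unit root in the levels
p_diffs  = adfuller(np.diff(log_mb))[1]  # H0: unit root in the first differences

# I(1) pattern: fail to reject a unit root in levels, reject it in first differences.
print("levels ADF p-value:", round(p_levels, 3))
print("diffs  ADF p-value:", round(p_diffs, 3))
```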

    7. Jason,
      Let's review the various issues you've raised concerning my Granger causality tests:
      1) Too small of a sample size.
      Sample size is only an issue because more observations increase statistical power, but evidently you didn't know that. The results were already statistically significant, so this was a total red herring.
      2) Nonstationary data
      This was ludicrous because nobody above a college freshman in statistics (that is, other than you) would even attempt to claim that regressing one nonstationary series on another leads to meaningful results. (I'm still stunned you left that post up without a retraction or an update. Evidently you don't realize how bad it really is.) The fact that I've reported the results of every single one of my unit root tests obviously went right over your head.
      3) First differencing is just like using an HP filter, so it's model based.
      I still don't follow this argument. Not only is it not true, but trying to invalidate standard practice in time series analysis on the basis that it is no different from RBC modeling is ludicrous.
      4) Structural breaks
      Structural breaks don't make Granger causality tests impossible; they may just bias the unit root tests, the results of which are important for valid Granger causality tests. But this can all be handled (and now very easily, thanks to EViews 9).

      Now keep in mind that throughout my posts I have assiduously reported the results of unit root tests, information criteria tests, autocorrelation tests, tests of dynamic stability, and cointegration tests. What specification tests have you subjected your "model" to?

      In fact I'll wager that Tom Brown can't even name the statistical method by which you've "fitted" your model.

      Given how little you evidently know about time series analysis I'd be astonished if your model were even remotely well specified.

      But at least it's an 11 on the AWESOME scale.

      P.S. And never mind that your "model" implicitly depends on the central bank using currency in circulation as the instrument of monetary policy, which every monetary economist knows is endogenous. It's even more amusing that you essentially argued for its virtues as an instrument of monetary policy in, of all things, a post on *endogenous money*.

    8. 1) Too small of a sample size.

      You've misunderstood my point. It was actually that you didn't use all of the available data. It is an implicit model to say that "things changed" in 2009. If monetary policy was effective after 2009, then it was effective before 2009.

      And if that extra data will only increase statistical power, why don't you add it in?

      2) Nonstationary data

      Because the data has a nonlinear signal that is *critically* important (i.e. QE), you can't remove the trend. But you did anyway.

      I didn't know how you could compare the data using a linear model without removing the trend that was the entire signal. You graphed log MB next to log CPI, so I assumed you just fit the levels and got a spurious correlation. Which we both agree is garbage. Apparently what you did was remove the QE, an example of what I'd like to call "statisticiness": something that superficially looks like a proper approach, but isn't.


      3) First differencing is just like using an HP filter, so it's model based.

      I explicitly said first differencing introduces model dependence. My point was that all methods of de-trending data are model dependent. First differences, the HP filter, LOESS filters, simple linear fits, ...

      When you use a method to de-trend, you make an assumption about how to de-trend, and therefore you make an assumption about the trend.

      Assumptions = model dependence. This is not very hard.

      4) Structural breaks

      How the structural breaks (i.e. QE1, QE2 and QE3) are handled is model dependent. You make no mention of structural breaks in your analysis and in fact treat MB as a log-linear series. So did you handle the structural breaks in the MB data? How?

    9. Jason:
      "If monetary policy was effective after 2009, then it was effective before 2009."

      The instrument of monetary policy changed in December 2008. Read the introduction to my first post.

      Jason:
      "Apparently what you did was remove the QE, an example of what I'd like to call "statisticiness": something that superficially looks like a proper approach, but isn't."

      If it's not a proper approach, then there's a long list of applied macroeconomists who are not following a proper approach, some of them with Nobel prizes.

      Jason:
      "My point was that all methods of de-trending data are model dependent."

      A stationary series is a stochastic process whose distribution does not change when shifted in time. Thus the mean and variance of the process do not change over time, and they also do not follow any trends. This is a mathematical *definition*.

      Jason:
      "So did you handle the structural breaks in the MB data? How?"

      Why do structural breaks even matter?

      Structural breaks will bias unit root tests which will lead to incorrect identification of the order of integration. Determining the correct order of integration is important for valid Granger causality tests (as well as for valid cointegration tests).

      I've run *breakpoint unit root tests* on the log of monetary base during my sample period (December 2008 through May 2015) with a variety of trend and break specifications as well as a variety of automatic break selection methods. I still find that the order of integration for the log of the monetary base is *one*.

    10. Mark, you write:

      "The instrument of monetary policy changed in December 2008. Read the introduction to my first post."

      Meaning the instrument was the federal funds rate prior and MB after? Assuming that's what you meant, then during the time when the fed funds rate was the instrument (2008 and prior), the Fed still performed OMOs, effectively targeting MB as well, didn't they?

      BTW, I got a bit more out of it this time after reading (but still not fully understanding) what appears to be one of Dave Giles' most popular posts (with 325 comments!).

    11. OK, another dumb question (for anyone): is Mark's analysis closer to "frequentist" or "Bayesian?" I'd guess the former. If I'm correct, is it possible to do a Bayesian analysis?

    12. Mark,

      None of those Nobel Prizes came from keeping terms of order ΔMB/MB but dropping terms of order (ΔMB/MB)² when the latter are bigger than the former. As Chris House says, most of the time, linear models are fine. And can win Nobel prizes. That other people used linear models correctly is not evidence that you have used them correctly.

      ...

      When you make a distribution stationary, you are assuming some kind of model. In your case, you are assuming (ΔMB/MB)² << 1, which is false.

      Sometimes distributions aren't stationary (e.g. Brownian motion in a system with increasing temperature). To make it stationary implies the removal of a particular temperature trend that is either assumed linear (as you do), derived from theory, or comes from some other measurement (or some combination).

      ...

      Saying the Fed changed its policy instrument in 2009 is an implicit model which you do not define. Is it a DSGE model that suddenly changes from having a dependence on interest rates to a dependence on MB? How? Is there some kind of step function θ(ɛ-r)*(Δmb/mb) so that when the interest rate r goes below ɛ, a log-linear term in mb = log MB turns on? Is there a phase transition or something? Where does such a non-perturbative term come from?

      It sounds pretty ad hoc to me.
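
      Spelling out the order counting (a quick back-of-the-envelope; the only input is that the QE1 jump roughly doubled the base):

      $$\log MB_t = \log MB_{t-1} + \frac{\Delta MB}{MB_{t-1}} - \frac{1}{2}\left(\frac{\Delta MB}{MB_{t-1}}\right)^2 + O\!\left(\left(\frac{\Delta MB}{MB_{t-1}}\right)^3\right)$$

      A log-linear treatment keeps the first-order term and drops the rest, which is fine when ΔMB/MB ≪ 1; at a QE-sized step ΔMB/MB ∼ 1, and the dropped quadratic term is the same order as the term kept.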

      Tom,

      Mark's analysis is Bayesian where the prior probability of his model being right is equal to 1.

    13. Well this convo continues to entertain, that's for sure.

    14. Tom,
      When you have a fed funds rate above zero it is essentially impossible to maintain a fed funds target and target the monetary base at the same time. Targeting the monetary base in such a situation would lead to a highly volatile fed funds rate.

      Good time series analysis requires the use of both Bayesian and frequentist ideas.

    15. Jason,
      I cannot think of one VAR estimated by Sims, Christiano, Evans, Edelberg, Marshall, Thorbecke, Bernanke, Kuttner, Romer & Romer, Fatas, Mihov, Blanchard, Perotti, Mountford, Uhlig, Ramey or Shapiro where they had a quadratic term. It simply doesn't happen because there's nothing to be gained by doing it.

      "That other people used linear models correctly is not evidence that you have used them correctly."

      We now have at least three posts where you argue that spurious regressions have significance. So I would say we have definite evidence of your incompetence when it comes to time series analysis.

      Have you looked at any of the four papers I posted links to in post 1? It's not like I'm doing anything that hasn't already been done.

    16. "We now have at least three posts where you argue that spurious regressions have significance."

      That is not my argument. My argument is that standard linear regression measures of goodness of fit are useless for models with large nonlinear signals.

      None of those people tried to regress MB data that includes QE vs the price level or output.

      Here is someone else who did:

      http://www.bis.org/publ/bppdf/bispap19l.pdf

      ... but they found no impact of QE.

      However, I'd say that study has some of the same issues as yours: applying linear models to a system with large nonlinear signals. However, their paper has the redeeming feature of looking at times when QE was and wasn't in place.

    17. Jason:
      "That is not my argument. My argument is that standard linear regression measures of goodness of fit are useless for models with large nonlinear signals."

      Evidently we have to do a thorough review.

      http://informationtransfereconomics.blogspot.com/2014/11/quantitative-easing-cleanest-experiment.html

      Jason:
      "Let's plot the Pearson's correlation coefficient of MB (blue) and M0 (red) with P (as well as the correlation of MB and M0, green):

      [Graph]

      Before QE, all of these are fairly highly correlated -- actually MB and M0 are almost perfectly correlated. This really doesn't tell us very much. NGDP is also highly correlated with the price level. So is population, and in fact any exponentially growing variable.

      With the onset of QE, the correlation between MB and P drops precipitously (as well as the correlation between M0 and MB). We see that the counterfactual path MB without QE would have been more correlated with P (effectively given by the red line)....This means central bank reserves have nothing to do with the price level or inflation."

      You cannot regress nonstationary time series on each other. These are spurious regressions. Spurious regressions result in invalid estimates with high R-squared values, high t-statistics and low p-values.

      Thus the Pearson's r values (the square root of the R-squared values) are subject to extreme statistical bias and so could not have been interpreted as meaning that these series are correlated. Nor could the fact that the Pearson's r values dropped precipitously be interpreted as meaning anything.

      http://informationtransfereconomics.blogspot.com/2015/08/statistical-significance-is-not-model.html

      Jason:
      "I agree that there can be statistically significant relationships between variables with different trends, which is beside the point; my point was that if your data are all roughly samples of exponentially growing functions, you can almost always find a statistically significant relationship between them even if there isn't any relationship at all.

      I decided to demonstrate my assertion with a concrete example (the complete details are at the bottom of this post). Let's generate two randomly fluctuating exponentially growing data series (using normally distributed shocks to the growth rate):...And the result is that the parameter p-values of the model fit are all p < 0.01 ... a statistically significant relationship, worthy of publication in an economics journal for example."

      Except that you cannot regress nonstationary time series on each other. These are spurious regressions. Spurious regressions result in invalid estimates with high R-squared values, high t-statistics and low p-values.

      No economics journal in its right mind would publish these results.
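
      In sketch form, the phenomenon at issue looks like this (synthetic data, statsmodels assumed): two independent exponentially growing series, regressed on each other in levels, report a huge R-squared and tiny p-values even though there is no relationship at all; in first differences the apparent relationship typically disappears.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300

# Two *independent* exponentially growing series (normally distributed growth-rate shocks)
x = np.exp(np.cumsum(rng.normal(0.01, 0.02, n)))
y = np.exp(np.cumsum(rng.normal(0.01, 0.02, n)))

levels = sm.OLS(np.log(y), sm.add_constant(np.log(x))).fit()
diffs  = sm.OLS(np.diff(np.log(y)), sm.add_constant(np.diff(np.log(x)))).fit()

# The levels regression is the textbook spurious regression: impressive-looking
# statistics with no underlying relationship. In differences the "relationship"
# typically disappears.
print(f"levels: R^2 = {levels.rsquared:.2f}, slope p-value = {levels.pvalues[1]:.2g}")
print(f"diffs : R^2 = {diffs.rsquared:.2f}, slope p-value = {diffs.pvalues[1]:.2g}")
```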

      http://informationtransfereconomics.blogspot.com/2015/08/comparing-methodologies-monetary-base.html

      Jason:
      "How about the same thing but looking at levels? Well the traditional stats test is going to be garbage because of spurious correlation ... but the model looks much better than the statistically "proper" version ...
      should we abandon the level approach because it doesn't conform to a narrow view of what is "empirically acceptable"?...Does a slavish devotion to statistical purity lead us away from real understanding?"

      We should avoid estimating spurious regressions because: 1) estimates of the regression coefficients are inefficient, 2) forecasts based on the regression equations are sub-optimal, and 3) the usual significance tests on the coefficients are invalid.

    18. Jason:
      "Here is someone else who did:... but they found no impact of QE.
      ....However, their paper has the redeeming feature of looking at times when QE was and wasn't in place."

      Kimura et al. look at 1980Q1 through 2002Q2.

      But the BOJ didn't significantly expand the monetary base under the original QE until December 2001 and the BOJ didn't finish expanding the monetary base until February 2006.

      In other words there are 85 quarters of data in the paper, of which only 1 quarter reflects a significant expansion in the monetary base.

      Another way of putting it is that there were 51 months of significant expansion in the monetary base, of which only 4 managed to make it into the paper.

      In short, a paper supposedly studying the effects of QE very neatly manages to exclude QE almost entirely from its sample.

      If your purpose is to study the effects of a QE regime, you need to focus on periods of QE, much like the four papers I cited in my first post.

    19. RE: http://informationtransfereconomics.blogspot.com/2014/11/quantitative-easing-cleanest-experiment.html

      If this is spurious correlation, why don't the series stay spuriously correlated? That's because correlation can be used to look for changes. Correlation doesn't tell us anything, but a change from correlated to uncorrelated does.

      RE: http://informationtransfereconomics.blogspot.com/2015/08/statistical-significance-is-not-model.html

      You've misread that post. I was trying to illustrate spurious regression, not say it's awesome. The line you clipped after "worthy of publication in an economics journal for example" was "Except it was random data."

      I thought *you* had done a spurious regression because I couldn't imagine you would have done something as daft as de-trend the QE.

      RE: http://informationtransfereconomics.blogspot.com/2015/08/comparing-methodologies-monetary-base.html

      I wasn't estimating a regression. It was spurious as I *explicitly* said. I fit a theoretical function to data. Evaluating min_p abs(Theory[p] − Data) is a perfectly fine procedure.
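
      In sketch form, "fit a theoretical function to data" means something like the following (synthetic data, scipy assumed; a least-squares version of the min_p abs(Theory[p] − Data) idea, with a logistic/Fermi-Dirac step plus log-linear trend standing in for the theory curve):

```python
import numpy as np
from scipy.optimize import curve_fit

def theory(t, a, b, c, t0, w):
    # log-linear trend plus one Fermi-Dirac (logistic) step of size c centered at t0
    return a + b * t + c / (1.0 + np.exp(-(t - t0) / w))

rng = np.random.default_rng(5)
t = np.arange(120.0)
data = theory(t, 7.0, 0.002, 0.6, 60.0, 1.5) + rng.normal(0, 0.01, t.size)

# Least-squares fit of the theory curve to the data
p_hat, _ = curve_fit(theory, t, data, p0=[7.0, 0.0, 0.5, 55.0, 2.0])
print("fitted (a, b, c, t0, w):", np.round(p_hat, 3))
```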

      ...

      Look Mark, regardless of what I've said, you should really ask around your economics friends and see if what you've done really is reasonable. You're obviously not going to listen to me, but assuming QE is log-linear is wrong. I know you're not going to believe anything that detracts from the idea that monetary policy is effective. So be it.

      But before you fall (further) down the rabbit hole into epistemic closure, you should consider the possibility that you are legalistically taking bits and pieces of what I am saying out of context (as in the above three posts) in order to hold on to your worldview. That, since your livelihood depends on it, you are scrutinizing every piece of evidence I am presenting contrary to your view, but accepting that you've correctly done the analysis that confirms it without much scrutiny.

      My livelihood isn't at stake here. I may well be wrong. I mostly do this stuff because I find it entertaining. If I felt that any of your arguments made sense, I would gladly change my view! But they don't. You can't assume that the monetary base is log-linear from 2009 to 2015 and nothing you have said has even confronted that fact. Your only argument that the base is log-linear from 2009 to 2015 appears to be that the regressions turned out to have low p-values. But those regressions require the data to be log-linear in the first place for the p-values to make any sense.

      So ask some of your economist friends, fellow grad students, or your thesis advisor whether they think the base from 2009 to 2015 is log-linear.

      Obviously you're not going to listen to me.

    20. Jason,
      "If this is spurious correlation, why don't the series stay spuriously correlated? That's because correlation can be used to look for changes. Correlation doesn't tell us anything, but a change from correlated to uncorrelated does."

      If two nonstationary series go from having a high Pearson's r to a low Pearson's r, that only tells us that there is a change in one or both of their trends. It tells us absolutely nothing about the authentic statistically significant relationship, if any.

      Jason:
      "You've misread that post. I was trying to illustrate spurious regression, not say it's awesome. The line you clipped after "worthy of publication in an economics journal for example" was "Except it was random data."

      You evidently thought that it was possible to create an authentically statistically significant relationship from random data (suggesting that the whole concept of statistical significance should be questioned), except that you didn't understand that the regression you were estimating was spurious.

      Jason,
      "I thought *you* had done a spurious regression because I couldn't imagine you would have done something as daft as de-trend the QE."

      The daft thing to do is to *not* render the data stationary and then to estimate a spurious regression. Even now you reveal you don't really understand what a huge mistake that is.

      Jason,
      "I wasn't estimating a regression. It was spurious as I *explicitly* said. I fit a theoretical function to data."

      Yes, but you used the spurious regression to argue that such estimations typically fit the data better than properly specified regressions. The properly specified regression you created was a straw dog comparison (perhaps because it is the best well-specified regression you are able to create). But a high-quality, well-specified regression will always fit the data better than a spurious regression, because it depends on the authentic statistically significant relationships and not on the fact that both series display a trend.

      Jason,
      "Obviously you're not going to listen to me."

      I might if you ever had the decency to own up to your mistakes and started making sense.

    21. "You evidently thought that it was possible to create an authentically statistically significant relationship from random data"

      Nope.

      On purpose trying to show spurious correlation.

      Maybe my writing is bad. But I've said a few times now that I was trying to demonstrate spurious correlation.

    22. "... that only tells us that there is a change in one or both of their trends"

      Gah! That is what I said! If you have A, B and C going along and suddenly there is a drastic change in the correlation (i.e. the inner product in some abstract data space) of A and B and A and C, but not B and C, then something changed in A.

      ...

      While my "mistakes" all seem to be you misreading what I've written, why should I have to admit anything for you to listen to my objections to your analysis? Sure, that's how people "save face" in the face of criticism, but that derives from some human sense of honor, not the pursuit of science. It's not tit for tat: "I'll listen to your criticism if you listen to my criticism".

      Quite plainly:

      1. You can't de-trend the monetary base data without an underlying theory.
      2. If you don't de-trend the monetary base data, you end up with spurious correlation.
      3. If you do de-trend the monetary base data without a theory, you will induce all kinds of artefacts [1] and implicit model dependence (using first differences, you implicitly say the trend of the base is a line).

      [1] It will induce artefacts if your theory is wrong as well.

    23. "artefact?" ... Jason, did you spend some time in the UK?

    24. Jason, you write:

      "1. You can't de-trend the monetary base data without an underlying theory."

      "3. If you do de-trend the monetary base data without a theory, you will induce all kinds of artefacts [1] and implicit model dependence (using first differences, you implicitly say the trend of the base is a line)"

      The 1st part of 3. seems to be a direct contradiction of 1. But I think I see what you're saying. Would it be better to write it as:

      "3. Detrending by use of 1st differences incorporates an implicit theory that the trend of the base is a line. If this theory is wrong, it will induce artefacts."

      I know you said more there, but I'm having a hard time sorting it out. I was trying to rewrite it for my own edification because it bothered me that "If you do de-trend the monetary base data without a theory" seemed to fly in the face of your item 1 (which says you can't do that).

      You use both the words "model" and "theory" in point 3. I'm conflating them in my rewrite and perhaps that's wrong. If by those words you mean two distinct concepts, can you explain?

      It sounds like there are two separate paths that lead to artefacts. What are they? Also, there's a path (e.g. first differences) that leads to "implicit model dependence" which is a bad thing apart from the artefacts. Is that correct, or am I misreading that?

    25. Mark, if we were to use your analysis to estimate the amount of base expansion (say over the next year) that would be required to get NGDP back on trend, can you say what that would be (say in dollars)? Is that possible? Thanks.

    26. Tom,

      I sometimes say and write strange things. I tend to use the spelling artefact to refer to "things that enter analysis due to the process used" -- finite sample spacing introduces artefacts and archaeologists dig up artifacts. But that is totally idiosyncratic. I also pronounce "adjacent" improperly.

      Yes, 1 is in direct contradiction of 3. You've ignored 1 in the case of 3. I probably should have said "you shouldn't detrend ... " in 1. Theory and model are interchangeable. The "artefacts" are the result of implicit model/theory dependence. First differencing leads to artefacts due to the implicit linear model.

    27. Great, helpful, thanks. I just read your latest post demonstrating an example of the above (with Fermi-Dirac step fits to MB)

  6. I love a geekfight, almost as much as a chick fight. Where else in the world can you get such animation out of a discussion of H-P filters, nonstationary series, and unit roots?

    See more at: http://lastmenandovermen.blogspot.com/2015/08/my-new-political-soulmate-rush-limbaugh.html?showComment=1439123953453#c8147806178550175864

    1. Well I'm glad you've found it entertaining.

      Economists seem to get really angry for some reason which just confirms my priors that it's mostly about personal politics. I've seen physicists get animated about various approaches to a problem, but they rarely bring up educational credentials.


Comments are welcome. Please see the Moderation and comment policy.

Also, try to avoid the use of dollar signs as they interfere with my setup of mathjax. I left it set up that way because I think this is funny for an economics blog. You can use € or £ instead.
