Monday, April 30, 2018

The ability to predict

While I might agree with the subtitle of this piece by David Orrell, "The Problem isn’t Bad Economics, It’s Bad Science", the article itself comes up short on Feynman's other test of science:
But there is one feature I notice that is generally missing in Cargo Cult Science. That is the idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.
The first thing to note is that the quote from Feynman at the top of the article is taken out of context, and does not mean what Orrell implies with regard to forecasting in economics. Leaning over backwards means supplying that context, so here's the original quote (emphasis added):
There are those who are going to be disappointed when no life is found on other planets. Not I — I want to be reminded and delighted and surprised once again, through interplanetary exploration, with the infinite variety and novelty of phenomena that can be generated from such simple principles. The test of science is its ability to predict. Had you never visited the earth, could you predict the thunderstorms, the volcanoes, the ocean waves, the auroras, and the colorful sunset? A salutary lesson it will be when we learn of all that goes on on each of those dead planets — those eight or ten balls, each agglomerated from the same dust cloud and each obeying exactly the same laws of physics.
Feynman is saying that it is impossible for humans to predict the "infinite variety ... of phenomena" based only on knowledge of the "simple principles" (i.e. the underlying physics). He is motivating interplanetary exploration; he is talking about collecting data, not validating theory. He thinks physics is already validated and wants to see what other kinds of weird things it explains. There is also a difference here in the meaning of prediction: Feynman is not talking about predicting the future (when a thunderstorm might occur) but rather the existence of thunderstorms. And he is saying that because physics is established, the novel phenomena on other planets (that we can't predict) will nonetheless be explainable by science. No alarms, and no surprises.

Now, Feynman was no fan of the "social sciences" (the following is from a 1981 BBC interview; it is not clear that he is talking about economics in particular, since his example is about whether organic food is better for you):
Because of the success of science, there is a kind of a pseudo-science. Social science is an example of a science which is not a science. They follow the forms. You gather data, you do so and so and so forth, but they don’t get any laws, they haven’t found out anything. They haven’t got anywhere – yet. Maybe someday they will, but it’s not very well developed.
Here, Feynman's distinction between "science" and "not science" is what I refer to as "established" versus "nascent" science in this blog post. Social sciences like economics are nascent sciences: they don't have any "laws" (i.e. frameworks like Newton's laws that capture empirical regularities). Physics was a nascent science as recently as the 1600s. This is fine; we have to start somewhere.

I went to this length to discuss this quote about prediction because there are things that Feynman would concede are science, but where we are unable to predict in the forecasting sense: the weather more than a week or so in the future; the orbits of planets thousands of years in the future; the precise timing, locations, and magnitudes of earthquakes along faults and fatigue fractures. These examples all have at their heart nonlinear and network models that require knowledge of initial conditions at the right scale and at a precision we might never obtain, making them theoretically (or at least practically) unpredictable. You can't predict the path of a single photon through the double slit experiment, but you can predict its distribution. It is a question of scope. That is to say, the real question is succinctly put by Chris Dillow in his excellent recent post: what can people reasonably be expected to foresee and what not? And at the heart of this question is whether we have good enough models of the phenomena, or good enough frameworks for understanding them, to say whether or not some observable can be forecast and under what scope.
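As a toy illustration of that scope distinction (a minimal, self-contained Python sketch, not tied to anything in economics): in a chaotic system an individual trajectory becomes unpredictable after a short horizon, while its long-run distribution stays perfectly predictable.

# Minimal sketch: trajectories of the chaotic logistic map are unpredictable,
# but the long-run distribution of a trajectory is not.
import numpy as np

def logistic_map(x0, n, r=4.0):
    """Iterate x -> r*x*(1 - x) from x0 for n steps and return the trajectory."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

a = logistic_map(0.2, 50)
b = logistic_map(0.2 + 1e-10, 50)  # initial condition perturbed by 1e-10
# Order 1: the two trajectories have fully decorrelated by step ~50.
print("max difference over last 10 steps:", np.max(np.abs(a[-10:] - b[-10:])))

# The distribution is another matter: histograms from either initial condition
# converge to the same invariant density.
hist_a, _ = np.histogram(logistic_map(0.2, 100000), bins=20, range=(0, 1), density=True)
hist_b, _ = np.histogram(logistic_map(0.2 + 1e-10, 100000), bins=20, range=(0, 1), density=True)
print("max histogram difference:", np.abs(hist_a - hist_b).max())  # small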

I also wanted to bring in Feynman's "leaning over backwards" because this paragraph in Orrell's article fails that test utterly:
The usual excuse offered for failing to predict the crisis, as Robert Lucas put it in 2009, is that ‘simulations were not presented as assurance that no crisis would occur, but as a forecast of what could be expected conditional on a crisis not occurring.’ That is like a weather forecaster saying their forecast was explicitly based on no storms. (The claim here reflects the efficient market-related idea that changes are caused by random external shocks.) Another is that no one else predicted it either – though a number of heterodox economists and others would beg to disagree.
First, the Bezemer article cited at the end appears to have fabricated its examples of predictions of the crisis (and Bezemer comes at the subject biased in favor of heterodox economics). A purported prediction (also a fabrication) from Wynne Godley about the housing crisis appears to be both a matter of rhetorical luck and something cancelled by Godley himself [1]. This fails Feynman's "leaning over backwards" test of science. Second, this is not the usual excuse, and it is not "leaning over backwards" to call it that. The usual excuse is given by Diane Coyle in an article Orrell actually cites:
But macroeconomics is inherently hard because there is very little data. There is only a handful of key variables, all linked to each other and changing only slowly, the outcome of multiple possible causes in a complex system, and with little opportunity for doing experiments. It will never be able to predict a crisis with a high degree of confidence.
This is the same "excuse" noted by Mark Thoma and Noah Smith years ago. There are all kinds of models (a couple helpfully pointed out by Chris Dillow in that same post) that would mean recessions are inherently unpredictable. But we don't even know enough to choose between those models.

This brings us back to the question of what we can reasonably be expected to foresee. Getting back to Feynman's thunderstorms, Orrell believes that the story behind weather forecasting is a useful analogy for the transition from thinking we can't predict something to being able to make some reasonable forecasts. Admiral Robert FitzRoy, a pioneer in weather forecasting so seminal that he's responsible for the use of the word "forecast", was not "well received ... by ... the scientific establishment" [2]. Weather forecasts currently predict out about 10 to 16 days at varying degrees of resolution (longer at lower resolution). Macro models like DSGEs and VARs are actually pretty good a quarter or two into the future (see here, here, and here). The VAR from the Minneapolis Fed has done well with unemployment for over a year:


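(For readers unfamiliar with the jargon: a reduced-form VAR just regresses each variable on lags of all the variables and iterates that forward. A minimal Python sketch using statsmodels follows; it is purely illustrative and is not the Minneapolis Fed's model, and the column names are hypothetical.)

# Generic VAR forecast sketch (illustrative only; not the Minneapolis Fed's model).
# Assumes `data` is a pandas DataFrame of quarterly macro series with hypothetical
# columns like 'unemployment', 'inflation', 'fed_funds'.
import pandas as pd
from statsmodels.tsa.api import VAR

def var_forecast(data: pd.DataFrame, lags: int = 4, horizon: int = 4) -> pd.DataFrame:
    """Fit a VAR(lags) on the columns of `data` and forecast `horizon` quarters ahead."""
    results = VAR(data).fit(lags)
    point = results.forecast(data.values[-lags:], steps=horizon)
    # Rows are 1, 2, ..., horizon steps ahead of the last observation.
    return pd.DataFrame(point, columns=data.columns)

The fitted results object can also produce interval forecasts, which is where the error bands in forecast charts like these come from.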
So is economics now good science given this prediction (and ones like it)? Note in particular that the Minneapolis Fed is a bastion of so-called freshwater macro and neoclassical economics decried in Orrell's article via quotes as being an "unquestioned belief system" that students want to be "liberated from". The obvious retort is that we're not talking about predicting the unemployment rate several months out, but rather financial crises and recessions. That means we're no longer saying "the test of science is the ability to predict", but rather "the test of science is the ability to predict things I selected". If we're doing that, let me just put my unemployment rate forecast (detailed in my paper) out there:


...

PS

David Orrell's thing is "quantum economics". He's written several papers with an understanding of quantum mechanics I can only guess was based on reading Michio Kaku's popular science books. There is no math in them despite most of the ideas in quantum physics being mathematical in nature (e.g. commuting and non-commuting observables, which is exactly what I noted here about "quantum psychology"). It's just the word quantum that sounds cool. The quantum finance he mentions (but doesn't engage with) is based on the path integral approach (in e.g. this book I used when I wanted to become a "quant" years ago). But path integrals are related to thermodynamic partition functions, which makes the approach effectively a sum over Brownian motion paths, not entirely dissimilar from the statistical averages in the stochastic calculus behind the Black-Scholes equation.
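To spell out the connection being gestured at (a textbook sketch, not anything from Orrell's papers): under the risk-neutral measure, the Black-Scholes value of a claim with payoff \(\Phi\) is a discounted average over geometric Brownian motion paths,

\[
V(S,t) = e^{-r(T-t)}\,\mathbb{E}\!\left[\Phi(S_T)\,\middle|\,S_t = S\right],
\qquad
dS_\tau = r\,S_\tau\,d\tau + \sigma\,S_\tau\,dW_\tau ,
\]

and in terms of \(x = \log S\) the transition density can be written as a Wiener path integral,

\[
p(x_T, T \mid x_t, t) \;\propto\; \int \mathcal{D}x\; \exp\!\left[-\frac{1}{2\sigma^2}\int_t^T \left(\dot{x}_\tau - r + \tfrac{\sigma^2}{2}\right)^{2} d\tau\right],
\]

whose Boltzmann-like weight over paths is what makes it resemble a thermodynamic partition function. Evaluating it just recovers the usual lognormal Black-Scholes kernel.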

These papers also fail to make any empirical predictions or really engage with data at all. I get the impression that people aren't actually interested in making predictions or in an actual scientific approach to macro- or micro-economics, but rather in simply using science as a rhetorical device. On this blog, my critiques of macro- and micro-economics as poor science get roughly 4 to 10 times the number of views as the predictions or empirical work I've done. The same goes for Twitter. This makes sense of Orrell's papers and articles: the stuff that gets views isn't actual science but rather talk about the metaphysics surrounding it.

The galling thing is that Orrell starts with a quote about making predictions and saying economics is bad science, but then closes promoting a book that (based on the source material in his papers) won't make any predictions or engage with empirical data either. As with Steve Keen, this approach represents exactly what is wrong with parts of macro while claiming to decry it.

PPS


On Twitter, I apparently gave my thoughts on another of Orrell's articles (in Aeon) earlier this year, after being asked by Diane Coyle.


...

Footnotes:

[1] Godley makes a "prediction" of a household debt-led crisis in 1999, but then says in 2003 that the process has been averted, with government deficit spending becoming the bigger issue. The exact "eight years" quote in 1999 (which gives the purported 2007 date of the crisis) is an opening line: "The US economy has now been expanding for nearly eight years ...". This is just the duration of the expansion after the 1991 recession up until the paper's release. It does not appear to be the output of some sort of model.

In 2003, the Bush administration had started military operations, and the post-9/11 boost to military spending, along with major tax cuts, ended the budget surplus of the late 90s. Godley, using his sectoral balance approach, then says that government current account deficits, and not household debt (which had flattened in the aftermath of the dot-com bust and the early 2000s recession), were on an unsustainable path. As the two sectors (government and households) are linked by the sectoral balance identity (when one balance declines, the other increases), this interpretation makes sense. But it also means that Godley does not see a housing crisis again until 2006, by which point it had already partially begun and more mainstream economists (like Paul Krugman) had already said the same thing in 2005. See here.
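For reference, the identity behind the sectoral balance approach is just standard national accounting (nothing specific to Godley's model is assumed here):

\[
\underbrace{(S - I)}_{\text{private balance}} \;=\; \underbrace{(G - T)}_{\text{government balance}} \;+\; \underbrace{(X - M)}_{\text{external balance}},
\]

so with the external balance held roughly fixed, a government balance moving toward surplus has to show up as a deterioration of the private sector's balance (e.g. households saving less or borrowing more), and vice versa.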

[2] FitzRoy apparently faced so much opposition from the scientific establishment that he was a protege of a member of the Royal Astronomical Society, recommended by the President of the Royal Society, and given sufficient budget to support himself and three employees [wikipedia]:
As the protégé of Francis Beaufort [of Beaufort wind scale fame], in 1854, FitzRoy was appointed, on the recommendation of the President of the Royal Society, as chief of a new department to deal with the collection of weather data at sea. His title was Meteorological Statist to the Board of Trade, and he had a staff of three. This was the forerunner of the modern Meteorological Office.

1 comment:

  1. Thanks for your comments on the article. A few replies:

    - “the quote from Feynman at the top of the article is taken out of context”

    There are two contexts, the local one (the specific paragraph) and the larger one. Here I would say Feynman is saying something that he felt to be generally true, namely that prediction serves as a way of testing models.

    - The Bezemer article which I cite “appears to have fabricated its examples of predictions”

    That is a serious charge. Have you checked with the author to get his side? Also, your critique seems to focus on one example of several given, but your statement as written implies they were all fabricated, which is certainly not true.

    - The conditional excuse “is not the usual excuse”

    The excuse you mention that the event was simply unpredictable is just a variation of “no one else predicted it either” as stated. There was enough data to warn of a storm, which is why a number of economists did exactly that. The problem was that most economists were looking in the wrong place – in large part because their models were misleading them.

    - Note [2], “FitzRoy apparently faced so much opposition from the scientific establishment that he was a protege of a member of the Royal Astronomical Society”

    Obviously he started off in good shape because he was allowed to found the Met. Office in 1854! The point is that he lost that prestige.

    - “Weather forecasts currently predict out about 10 to 16 days”

    The standard test of forecasting is skill relative to naive forecasts (such as persistence or climatology). It also matters whether the phenomenon being predicted is important in itself, or is being chosen because it is easy to predict. Weather models are good at predicting things like the 500 mb heights quite far out, but are less good at things like low-level wind, temperature, and precipitation (i.e. what we call the weather), and storms are more difficult still.

    - “decried in Orrell's article via quotes”

    You mean statements in articles that I link to, which is not the same thing and you should make that clear.

    - Papers are “based on reading Michio Kaku's popular science books”

    I have not read his books, though I did study quantum mechanics, and once taught it as part of a university level mathematics course.

    - “There is no math in them”

    That’s a deliberate choice on my part, because I am trying to make the topic accessible to an audience who may not be familiar with the quantum formalism (e.g. most people). In the book I do show worked examples where the math can be communicated graphically. And of course I cite many papers and books which do take a mathematical approach, if that is what you are after. One thing I point out in the book, though, is that mathematics is a double-edged sword, and can be used to obfuscate as well as communicate.

    - “It's just the word quantum that sounds cool.”

    That actually has it the wrong way round. Using the word “quantum” outside its physics context is fraught with difficulty which is why most (not all) scientists still avoid it. One way to use the quantum approach is to say that it is just a more flexible way of describing probabilities. In the book I argue though that it makes sense to treat money as a quantum system in its own right. Just as physicists once adopted the Hilbert space as a way to model the behaviour of subatomic particles, social scientists can usefully adopt the same framework to describe things like the economy. But as I note in the book, it is interesting how much the typical objections to these ideas rest on the use of particular words.

    - “The galling thing is that Orrell starts with a quote about making predictions and saying economics is bad science, but then closes promoting a book that (based on the source material in his papers) won't make any predictions or engage with empirical data either.”

    That sounds like a prediction based on little data, given that you haven’t read the book.

