Falsifiability, simplicity, success. Or death.
Tom Brown had some good things to say about falsifiability in a comment:
Evolutionary biologists can produce a list of falsifying evidence for the theory of evolution, perhaps the most famous of which is the pre-Cambrian rabbit fossil. ... [A question for various economic theories:] what's your pre-Cambrian rabbit?
In your opinion Jason, do you think some of the prominent economist bloggers ask themselves that question very often? I.e. "How would I know if I'm wrong?" It seems like a fundamental question that any honest person purporting to make knowledge claims about reality should ask and discuss publicly.
I then responded with a bit about how market monetarism isn't falsifiable in the Popperian sense:
market monetarism isn't falsifiable in the Popperian sense; (borrowing from Wikipedia) falsifiability says your theory T implies some observation O won't be seen, i.e.
T → ¬O
However, for market monetarism there is no O that can't be observed. All NGDP growth rates, levels, inflation rates, price levels, exchange rates, etc. are in principle observable. Therefore there is no O such that T → ¬O: the set of observations the theory allows is all of U (with U being the universe of observations).
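A toy sketch of this logic (my own illustration, with a made-up discretized observation space; neither the numbers nor the "theories" come from any actual model):

```python
# Toy illustration: Popperian falsifiability as a forbidden-observation set.
# A theory T is falsifiable iff it rules out at least one observation in
# the universe U of possible observations.

# Hypothetical discretized universe of NGDP growth observations (percent)
U = [-5, -2, 0, 2, 5, 10]

def falsifiable(theory, universe):
    """A theory (a predicate 'this observation is allowed') is falsifiable
    iff some observation in the universe is forbidden by it."""
    return any(not theory(o) for o in universe)

# A concrete theory: NGDP growth stays between -2% and 5%
def concrete(o):
    return -2 <= o <= 5

# A theory as caricatured above: every observation is compatible with it,
# i.e. there is no O such that T implies not-O
def vacuous(o):
    return True

print(falsifiable(concrete, U))  # True: observing 10% growth would falsify it
print(falsifiable(vacuous, U))   # False: the allowed set is all of U
```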
The other aspect of it is that market monetarism suffers from "no true Scotsman" disease with its predictions. If the Fed doesn't achieve its inflation target, it must not have wanted to achieve it.
Market monetarism can be proven wrong (a better word is outperformed) by a more concrete, falsifiable model that gives empirically accurate predictions. Basically if there is a theory that works better than handwaving about central bank mindsets and targets, it wins.
Falsifiability is definitely a desirable quality in a theory. But there is more to it than that, and I think Lars Syll's continued jeremiad against simple, unrealistic macroeconomic theories misses the mark (H/T Mike Norman):
Even though — as Pfleiderer [in his paper on "Chameleon models"] acknowledges — all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in any way. The falsehood or unrealisticness has to be qualified. [Emphasis in the original]
This doesn't get at what the "all models are false" statement means. It means that since all models are wrong, there is no reason to make a model that is more complicated than the data it is supposed to explain. There shouldn't be too many parameters -- too many makes eliminating theories impossible! [Edited sentence to make it more clear.]
In the PS at this post, I mention Noah Smith's contention that macro data is uninformative. That's exactly the situation the warning "all models are false" is supposed to prevent! All macro models are way too complex for the data that's available.
That Newton’s theory in most regards is simpler than Einstein’s is of no avail. Today Einstein has replaced Newton. The ultimate arbiter of the scientific value of models cannot be simplicity.
I agree with the idea that simplicity shouldn't be the only consideration, but the specific example is wrong in a way that is useful. Einstein has not replaced Newton: engineers the world over use Newtonian mechanics with little fear of being wrong, because for most situations the speed of light can be treated as infinite. A notable exception is GPS, where general relativity corrections must be applied: the satellite clocks run at a different rate not only because they are moving, but also because they are not as deep in the Earth's gravitational well.
Einstein's theory would have been way too complicated (and the data uninformative) if it had been developed in the 1600s.
I really think economics (and the world) needs the concept from physics of an effective theory (see here for my use of it in economics). You can think of an effective theory as a theory that contains everything that could possibly happen given the limits of observation -- but that construction is only useful if you have a well-defined scale for the limits of your observations. For example, if you say your theory contains everything that can happen unless you look at it with a microscope, your scale is about 1 micron (roughly the limit of observability with visible photons) and your theory is "effective" for things that are bigger than 1 micron. Newtonian mechanics is an effective theory for speeds below the speed of light (the scale of special relativity). It's also an effective theory for quantum mechanics or quantum field theory when ħ can be considered small (more concretely, ħ sets an energy × time, or length × momentum, scale). The standard model is effective for scales bigger than the Planck length (so far as we know).
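The Newtonian example can be made quantitative (my own sketch, not from the post): the relative error of the Newtonian kinetic energy compared to the relativistic one grows like (v/c)², which is exactly the sense in which c is the scale of the effective theory.

```python
import math

# Newtonian mechanics as an effective theory with scale c: the error of
# the Newtonian kinetic energy relative to the relativistic one is small
# when v/c is small, and grows roughly like (v/c)^2.

c = 299_792_458.0  # speed of light, m/s

def ke_newton(m, v):
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m * c**2

for frac in (0.001, 0.01, 0.1):
    v = frac * c
    err = 1.0 - ke_newton(1.0, v) / ke_relativistic(1.0, v)
    print(f"v/c = {frac}: relative error ~ {err:.1e}")
```

Below v/c ~ 0.01 the error is well under a part in ten thousand, which is why engineers can ignore Einstein almost everywhere.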
In economics, there is unfortunately no agreed-upon scale. This blog considers both the monetary base and NGDP to be good scales, and you can think of the information transfer macro model as an effective theory that expands in powers of MB/NGDP (which is ~ 0.1 implying errors on the order of 10%).
The scale of the effective theory tells you the limits of the validity of your model. If the fluctuations in observations due to measurement error are much larger than this scale, then your theory is too complicated for the data. If they are smaller than the scale, then your theory is capable of being falsified by the data. Additionally, each parameter you introduce potentially brings another scale with it, making it more likely your theory can't be falsified by the data.
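The too-many-parameters problem can be seen in a minimal curve-fitting sketch (my own toy example): a model with as many parameters as data points fits perfectly, so the data has no power left to falsify it.

```python
import numpy as np

# Toy illustration: 6 noisy observations of a linear trend.
# A 2-parameter (linear) model leaves residuals on the order of the
# measurement noise, so the data could falsify it. A 6-parameter
# (degree-5 polynomial) model interpolates the data exactly: no
# observation is left over to fail, so the data can't falsify it.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 6)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # "true" trend + noise

for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    residual = np.max(np.abs(np.polyval(coeffs, x) - y))
    print(f"degree {degree}: max residual = {residual:.3g}")
```

The degree-5 residual is zero up to floating-point error, which is the curve-fitting version of "the data is uninformative."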
There is a big theme running through this discussion so far -- that of empirical success. Falsifiability means that empirical success is not trivial. Simplicity depends on your measurements. But you also want your theory to produce the results you actually see! As they say, nothing succeeds like success. This set of heuristics tells us what a theory should look like:
- Falsifiability. There should exist observations that your theory doesn't allow.
- Simplicity. Your theory should not be too complex to be falsifiable.
- Success. Your theory should not be falsified!
In the end, market monetarism fails 1. And according to Noah Smith, most mainstream macroeconomic models fail 2 -- the data is "uninformative," which just means the theory is too complicated to be falsified. The information equilibrium model passes all three, but that just means it is a good theory -- not necessarily the best or the correct theory. As I mentioned to Tom Brown, it's already "wrong":
Actually, the ITM is already wrong -- it doesn't explain the deviations from trend associated with recessions. It allows them, but doesn't explain them. There's no reason why a recession couldn't result in a 100% unemployment rate, for example.
But in the sense of an effective theory this is fine. Consider it an expansion in information content: I(X)/I(Y) ~ 1 + o(X/Y) + ... It is one of the well defined limits of the validity of the theory.
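That expansion shorthand can be spelled out a bit more explicitly (my notation; the coefficients cₙ are schematic placeholders, not values taken from the model):

```latex
\frac{I(X)}{I(Y)} \simeq 1 + c_1 \frac{X}{Y} + c_2 \left(\frac{X}{Y}\right)^2 + \cdots ,
\qquad \frac{X}{Y} \ll 1
```

The theory is under control while the expansion parameter X/Y stays small; a recession large enough to push the higher-order terms to order one is outside the effective theory's domain of validity.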
Imagine that sentence coming from any economist!