Economics is a social science
I've been told that countless times by lots of different people. Recently by John Handley; a while ago by Maiko. Here's another one from Sam Watts.
[I made some updates due to some reasonable objections from John Handley. I marked these with asterisks.]
Here's Daniel Little:
... Does the phenomenon of [social phenomena] admit of a scientific treatment along the lines of Galileo, Newton, or Lavoisier?
The answer is resoundingly no. Such a goal displays a fundamental misunderstanding of the social world. Social things and processes at every level are the contingent and interactive result of the activities of individual actors.
I even (kind of) said it here. Well, I said there are two pieces of economics: working markets and failing markets. The former's properties (in my theoretical view) derive from the properties of the state space. The latter's properties will come from social science.
But what does it mean to say economics (or aspects of it) are social science?
The "Lucas critique"
This idea I wholeheartedly agree with: empirical regularities may suddenly fail to be empirical regularities because humans have both individual and group behaviors. This is the reason behind non-ideal information transfer. Humans can spontaneously coordinate around some event (e.g. a stock market crash) and markets will fail to be ideal.
Economic models are qualitative
I am sympathetic to this one (I love a good zero-order theory as much as the next theorist), but the existence of quantitative models that are quantitatively compared to data invalidates this objection. These range from Mark Sadowski's VARs and the NY Fed's DSGE model to my own recent entry. You can't use "but it's qualitative" as a defense of your empirically flawed model if your model has dozens of parameters and other models have quantitative results. You can't move the goalposts. Other models are quantitative. Your qualitative model had better be for something that's never been quantitatively compared to data before. That immediately excludes anything to do with the business cycle, inflation, or interest rates.
Economics can't be explained by quantitative models
This is a bit different from the previous one. It represents a combination of the Lucas critique, the idea that models are qualitative, and, well, basically assuming you know everything about economics. You know for a fact e.g. that economics is too complex to be represented with any model. You need to show this, not just assume it like Daniel Little does above.
I think this one persists because it sounds serious -- in the "very serious person" (VSP) sense. Just look at the problem: it's made of millions of humans making millions of decisions with millions of dollars every hour! Obviously a tractable quantitative model doesn't exist.
But as I've said before: your failure of imagination is not evidence of anything. Humans in prehistory probably couldn't conceive that lightning could be as well understood as it is today. Imagine if people went around saying: There's no way you can understand electricity ... it's too complex to model quantitatively. It's a social science of little homunculi!
Data can't falsify economic models
This is effectively the definition of derp. Your priors are too strong.
*This specific objection was intended to be addressed to the Lucas and Prescott types whom Sargent recalls in an interview:
But after about five years of doing likelihood ratio tests on rational expectations models, I recall Bob Lucas and Ed Prescott both telling me that those tests were rejecting too many good models.
A less derpy version is that the data is uninformative.
[*] The data is uninformative
This is the weak form of "data can't falsify models". Basically it says that there is insufficient data to reject most models. However, this is a model complexity-dependent statement and if it applies generally, then generally the models are too complex for the available data. Try simpler models. If you can't reject those, try even simpler ones. At some level you'll get to log NGDP = a t + b, which is completely rejected.
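For concreteness, here's roughly what that check looks like in Mathematica (just a sketch; ngdp stands in for whatever NGDP series you pull from FRED):

(* sketch: fit log NGDP to a linear trend a t + b and see how badly it does *)
(* ngdp is assumed to be a list of {year, NGDP} pairs, e.g. quarterly FRED data *)
logData = {First[#], Log[Last[#]]} & /@ ngdp;
trend = LinearModelFit[logData, t, t];
trend["ParameterTable"]  (* the estimated a and b *)
trend["FitResiduals"]    (* large, persistent residuals: the trend is rejected *)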
This relates "economics is a social science because the data is uninformative" to the idea that economics is assumed (as a strong derpy prior) to be complex. It is "very serious" to say such things, but it's not a reason to avoid empirical data and treat economics like philosophy.
[*] You'll never nail down correlation vs causation
Due to external factors (and lack of controlled experiments), this may be true. However, this is a reason not to praise an empirically successful model (e.g. this information equilibrium model) too highly. It is not a reason to accept a model that would otherwise be rejected empirically.
If external factors always seem to confound your model, rendering the originally observed correlation on which it was based moot, then the model is useless even if it is correct. Personally, I'd say the Phillips curve should be thrown out based on this.
Most mainstream economic models simply assume a causal mechanism (utility optimizing agents). Is this better or worse than assuming correlation is causation? At least the latter has some connection to the empirical data.
In any case, this is not a reason to abandon looking at empirical data or treating economics like a "hard science". You can still reject models!
You shouldn't use math
I wrote a whole post against this idea. Anyone who says you shouldn't use math is trying to pull a fast one. N.b. this is not the same thing as saying you don't need math. You don't; you should be able to explain mathematical concepts in human languages as well.
Economics studies the social behavior of humans
Well, this is perfectly true. If you don't mean anything more by it, then I'm fine with it.
Have I missed any?
Economists (and everyone else out there in the econoblogosphere): time to stop hiding behind theory and qualitative analysis and start addressing the empirical data out there.
"Data can't falsify economic models
This is effectively the definition of derp. Your priors are too strong."
I still think this has nothing to do with priors and everything to do with the fact that you can't do proper experiments in a social science. Good luck determining causality without being able to isolate a single independent variable. Aggregate time series simply cannot reject a model that is complicated enough to venture at an explanation of the aggregate time series.
Without a closed system you cannot separate correlation from causation effectively.
But you can reject correlations that fail over time as useless.
And you can accept models that are just correlations over models that aren't compared to empirical data!
Delete"It represents a combination of the Lucas critique, the idea that models are qualitative, and, well, basically assuming you know everything about economics. You know for a fact e.g. that economics is too complex to be represented with any model."
ReplyDeleteShow me a properly closed system in macro and I'll concede that an economics model can be proven.
Are my standards too high? Is demanding a greater degree of certainty somehow a crime? Ironically it seems as if your devotion to empirical evidence leads to epistemologically unsound conclusions from an empiricist perspective. If there's no way to determine cause and effect, then empirical evidence is as good as useless. Or, rather, then the proper empiricist position is one of complete agnosticism.
I made some updates.
However, I think you are taking what I'm saying as "proving" models are correct. But science never really does that ... what it does is reject models that are bad. That's the issue with economics. And that's why there should be more comparison with the data.
And the ones still standing after being subjected to data should be given a greater prior probability of being correct -- but they are in no sense "proven". [That's what I'm asking for, for the IE model -- it works empirically, so it should get a little more weight than e.g. market monetarism and other things that aren't compared to data.]
Even in an open system you can determine that a particular correlation is useless -- the Phillips curve and the naive version of the quantity theory of money (P ~ M) are good examples. The idea of sticky micro prices should be rejected based on data, too.
Jason,
I have a web site where I discuss some of these issues (BondEconomics.com). I just want to make some observations about this and your work more generally. This comment is long, but it is based on reading a fair amount of your articles, and not this one in particular.
Firstly, I have a background in applied mathematics, and no formal economics training. I learned on the job, working on financial/economic models in industry for 15 years. At the same time, I have almost no mathematics on my web site. A blog is an informal setting, and I doubt that there's going to be a lot of math-heavy blogs. Expecting bloggers to delve into mathematics is unreasonable.
Mainstream macro takes data analysis seriously. (I would note that I do not take mainstream macro seriously, but that is neither here nor there.) Central banks routinely produce all kinds of forecasts, and the bulk of mainstream macro is aimed at improving those models. Mainstream bloggers write about political controversies on their blogs; it does not reflect the literature.
For those of us in the post-Keynesian tradition, we have very good reasons to believe that the cycle cannot be forecasted. This is a negative result. The answer why is found in things like SFC models. When you discussed SFC models, you were focussed on the equations, and not the reasoning in the text. The text of Godley and Lavoie is more important than the equations; there's a lot hidden in there.
Finance is very interested in forecasting. If you could reliably forecast the next month's CPI, that alone is interesting. If you reliably forecast six months out, you might even have a veritable money-printing machine. But guess what? The Street is able to do those kinds of forecasts, within certain limits. You would need to replicate what they are trying to do to see if you can do better.
With regards to your work, I do not have time to look at it. One reason is that I have absolutely no interest in your physics analogies; you could say that you developed the theory by talking to angels, and I would view it in the same light. When we face a forecasting problem, we are faced by discrete time historical data, and our objective is to forecast future values of other discrete time series. The only thing of interest to someone like myself is the algorithm you use to convert discrete time back history into forecasts. You have tended to bury that information elsewhere, which is one reason why I would not spend time looking at your work. Of course, you have no reason to care what I think. However, a lot of the people who would be interested in your work in finance are probably thinking the exact same thing.
"Expecting bloggers to delve into mathematics is unreasonable"
I have to disagree with you there, Brian. This blog is explicitly a "working paper" (it says so right there in the title). I actually LIKE the fact that Jason often explicitly delves into the math.
Also, Jason is not the only one. Nick Rowe often delves into the math, as does Dave Giles, David Andolfatto, Nick Edmonds and John Handley. John Cochrane and Stephen Williamson will at least sometimes leave a link to the math. And those aren't the only examples, of course.
Now, regarding you and your peers' reluctance to dig in and the reasons for it, I have to take your word for it, and maybe there's a reasonable suggestion in there for Jason (I don't know). However, Jason also does quite a few posts with no math in them. This post is a prime example. Personally I think it's a good mix, especially when the empirical data is folded into the mix as well.
... plus once in a while one of us actually finds an error (as do Nick Rowe's readers for him). That seems like the kind of thing a "working paper" is good for.
Hi Brian,
"One reason is that I have absolutely no interest in your physics analogies; you could say that you developed the theory by talking to angels, and I would view it in the same light."
The models come from information theory, not physics.
I have no idea what this means:
"The only thing of interest to someone like myself is the algorithm you use to convert discrete time back history into forecasts."
I'm pretty sure there is a typo in there, but I am not sure how to resolve it.
If this is about discrete time series versus continuous forecasts, then it should be obvious that the fluctuations in the time series are too big for this to matter. That makes me think this is about something else.
Is it about how I make the forecasts? Generally, log-linear extrapolation of the independent variables, which are then plugged into the model in question (which is not buried, but occurs everywhere). A couple of times I've used Mathematica's native time series forecast function (e.g. here), which also isn't "buried".
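In pseudo-Mathematica, the procedure is roughly the following (a sketch with placeholder names -- mb, tFuture, and model aren't taken from any particular post):

(* 1. log-linear extrapolation of an independent variable, e.g. the monetary base *)
mbFit = LinearModelFit[{First[#], Log[Last[#]]} & /@ mb, t, t];
mbForecast = Exp[mbFit[tFuture]];
(* 2. plug the extrapolated variable into the model in question to get the forecast *)
forecast = model[mbForecast];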
It also seems that your blog doesn't have any models being compared to data. There is data and there are models, but not on the same graphs. For example ... interest rates:
https://www.google.com/search?q=site:www.bondeconomics.com+interest+rates&tbm=isch
There is one exception ... your 'World's Simplest Bond Valuation Model':
http://www.bondeconomics.com/2013/10/the-worlds-simplest-bond-valuation-model.html
But this has literally no specific description of how the model works. It's a simple martingale based on a correlation (which is not really a model). But what algorithm do you use to convert discrete time back history into forecasts [sic]? Is it a backward looking average? How many terms? Is the input daily, monthly, annual effective fed funds data? How does it do outside the range you've given? There's more to even the simplest martingale than is given in your blog post.
I'm all for criticism. It keeps us honest. But don't hold me to a higher standard than you adhere to yourself.
And if you want the original Mathematica codes, there's a box on the side for requests. I've even posted the code sometimes.
PS In contrast, nearly all of my interest rate graphs have theoretical models compared with data:
https://www.google.com/search?q=site:informationtransfereconomics.blogspot.com+interest+rates&tbm=isch
Jason,
As I said, I do not do a lot of math on my site. I may eventually write a report on SFC models, but that would not be in the same style, and I would not be posting the mathematics on the site. Yes, some bloggers get away with posting equations (including yourself), but that is not the market I am aiming for, nor do I think I would have much success with it.
What I wrote:
[discrete time back history series] -(algorithm)-> [discrete time series forecast]
As for my "World's Simplest Bond Model," I believe that it is a 120-month moving average. I did not describe it, since everyone in finance and economics knows what that is. It could be viewed as sarcasm, although it may have significance to readers who are familiar with other bond valuation models. Eyeballing the chart conveys quite a bit of information, at least if you are familiar with such models. Pretty well everyone with access to time series data can replicate it in seconds, so I saw little need to discuss its time series properties.
If I had highly accurate interest rate forecasting models, there's no way I would be publishing them on the Internet. I know people in fixed income hedge funds. The models I built when I worked in the industry were aimed at making money, and would only be of interest to those doing fixed income relative value.
I have serious doubts about creating accurate economic forecasting models, which is why I do not spend too much time building the things on the blog. I created a few "teaching models" using SFC techniques, which is how Godley & Lavoie use them in their text.
Extrapolating a variable works -- until it doesn't. Other than doing something like estimating the effect of policy changes (for example, see the debate over Sanders' plan), the most interesting part of economic forecasting is calling recessions. By definition, you cannot call a recession based on extrapolating the data from the expansion. That's why post-Keynesians think economic forecasting is doomed; we do not care about extrapolating trends during an expansion, our worry is the recession.
Tired of physics analogies to explain economics? This guy uses economics to explain physics! (He's referring to another physicist, ... it sounds like "Maldecena"... a physicist I heard Leonard Susskind praise a day or two ago... I'd never heard of him before that)
Maldacena has the most cited theory paper in physics -- he came up with the AdS/CFT correspondence conjecture. Supergravity on an anti-de Sitter (AdS) space (a constant-curvature spacetime) is equivalent to a conformal field theory (CFT) on its boundary.
It's a statement of the holographic principle in string theory.
Ah, makes sense. The holographic principle is one of the things Susskind was talking about (a talk geared towards laymen).
Jason,
Hi, I looked back on my comment, and I did not emphasize this enough -- I am not a reader in particular that you should be targeting (as I explain below). However, I wanted to give feedback about the barriers you will face from readers with a similar background. My wording was too harsh, sorry. But if I was too polite, my comments would have been meaningless.
In my case, I am highly skeptical about mathematical methods in economics. The unfamiliarity of your approach would mean that it would take a lot of work for me to understand it before I could endorse it. Conversely, I could easily write up a skeptical article. At the same time, I am under a lot of time pressure. This creates an obviously unfair bias, and so I prefer to not delve into the topic. I am not hugely prominent, so this is not a major loss for yourself. At the same time, other people with similar backgrounds may have a similar barrier to acceptance for your work.
Brian, you write above:
"...the most interesting part of economic forecasting is calling recessions."
Your skepticism about mathematical methods for that purpose may be well grounded. However, perhaps your focus is too narrow, and mathematical methods are useful for other purposes.
As an example in a different field, predicting earthquakes is certainly a desirable goal of the Earth sciences (geology, plate tectonics, etc.), but it may prove to be forever elusive. That doesn't mean that the science as a whole is useless, though, or that math isn't useful in other regards for geologists.
I don't think Jason has any strong claim about his framework being able to forecast recessions. He has addressed the topic, but it's not his focus (from what I can tell). That doesn't mean he's barking up the wrong tree with his math though.
Hi Brian
I am not offended. I just find things like this baffling:
I am highly skeptical about mathematical methods in economics. The unfamiliarity of your approach would mean that it would take a lot of work for me to understand it before I could endorse it. Conversely, I could easily write up a skeptical article.
What would that skeptical article be based on?
You also said:
... the most interesting part of economic forecasting is calling recessions
But previously you said:
For those of us in the post-Keynesian tradition, we have very good reasons to believe that the cycle cannot be forecasted.
So the most interesting part of economics can't actually be done? No wonder it's called the dismal science!
Jason - I have read your work, relatively quickly. It looks similar to things I have seen that did not work. (No formal name, just called "data mining".) It would be easy to say that it is the same thing (as the techniques that did not work); it would be harder to determine that it is not data mining.
Are you sure you took a quick look?
And what are these things you saw? I'm genuinely interested, because I've searched and searched for years and I haven't come up with anything that is related.
Information equilibrium recovers a lot of basic mainstream economics in various limits -- and a lot of Keynesian economics.
You took a cursory glance at the history of economics and concluded it didn't work? Keynes is garbage?
This is one of my least favorite criticisms (of anything). Dismissal by analogy. "I looked at your stuff and it looks the same (in some way) as unnamed/named idea X and X didn't work so your stuff doesn't work."
But the thing is, I did what all good theorists do: ensure your theory reduces in appropriate limits to accepted theory. There isn't much accepted in economics, but there are a couple of things. In order to throw my theory out, you have to throw out supply and demand and marginalism.
No really! The fundamental equation of the information equilibrium framework is
dA/dB = k A/B
which is just a generalization of marginalism (adding the k). Marginal rate of substitution. Marginal cost. From this equation comes supply and demand.
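If you want to check the claim, the general solution is a one-liner (a sketch):

(* solve dA/dB = k A/B; the general solution is A = c B^k *)
DSolve[a'[b] == k a[b]/b, a[b], b]
(* {{a[b] -> b^k C[1]}} -- holding A or B roughly constant gives supply and demand curves *)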
So you threw this out with a quick glance?
Jason, I'm sure you're aware that John Handley has another post on the subject of empirical evidence in economics.
Are there examples from other branches of science which have the problem John sketches regarding functions f() and g(), and yet are still able to consistently use empirical evidence to test hypotheses?
It seems to me there are plenty of examples amongst the natural sciences that don't have the benefit of a closed system to work with, yet are nonetheless capable of having their hypotheses tested against empirical data. At least to some extent. Cosmology comes to mind as an example. Evolution seems like another. Perhaps meteorology, plate tectonics or climate science as well. By a closed system he's really saying a laboratory setting, no? Sure a laboratory can augment some of those fields I mention, but they all suffer at least to some extent from relying on natural experiments, no? You're the scientist, so you'd know better than me.
I wrote a comment earlier (that I erased) with gravity waves as an example. Nobody can create measurable gravity waves in any closed system, or in any way whatsoever, actually. And yet we've now detected them. They were predicted by general relativity and created by natural events that we could never hope to emulate. Their detection is another piece of positive evidence for general relativity. If instead we'd built instruments that according to the theory had been sensitive enough to detect these kinds of natural events (e.g. two black holes colliding), and we had some idea of the frequency of these events (again, according to other aspects of cosmological theory), and yet we never found evidence for gravity waves... perhaps when it seemed highly probable that we should have (perhaps after trying for a century or more?)... then that would be evidence that something is wrong, wouldn't it? Either with GR or with the expected rate of occurrence of detectable gravity-wave-producing events... something in the puzzle would be amiss.
I can't really relate that to John's cause and effect example. But still, I'm just not seeing why social sciences need what appears to me to be special treatment.
"I'm just not seeing why social sciences need what appears to me to be special treatment."
That may not necessarily be it. Maybe empirical evidence just doesn't work in general without closed systems.
When it comes to Evolution, though, there are closed systems (e.g., Lenski et al 1991). If we are talking about testing a model in biology, this is done in the lab, not by, e.g., running around watching E. Coli evolve wherever we find them naturally.
If I were to frame my critique in a more Popperian way, I would say that none of the models are falsifiable, so are useless from that point of view. If every economic model has this problem, then empirical evidence is just a waste of time and we can resign ourselves to using it sparingly and never to completely reject a model.
That, of course, doesn't mean we can't reject an assumption that is wrong, like (at least according to Jason) "micro sticky prices."
"not by, e.g., running around watching E. Coli evolve wherever we find them naturally."
No, but evolution makes certain predictions about the outcome of natural experiments or what will be discovered outside the laboratory. It predicts, for example, that rabbit fossils will not be found in a pre-Cambrian layer (to use a famous example).
Also, I see that Jason has addressed usefulness in a comment on your post.
But evolution was considered accepted theory long before it was ever shown in a lab. Those lab experiments didn't "prove" evolution -- and they wouldn't have rejected evolution if they had negative results.
You don't need a closed system to have experiments.
In macroeconomics, Okun's law has held up for almost 60 years (and across different policy regimes). It has a level of usefulness and validity that I would call proven (to a given level of accuracy/first order theory). So does supply and demand (when you impact the quantities, not the price ... that seems to have different effects).
But as I mentioned on your post, John: Popperian falsifiability as Popper stated it isn't how science works.
The IE model in the head to head with the NY Fed DSGE model has "falsified" the latter in my view. Not because it is inconsistent with the data, but because the IE model is simpler and slightly more consistent with the data and comes from a framework with more empirical successes with simple models. I still need to convince the community of this, and if it proceeded more like a social science instead of a branch of politics or (mathematical) philosophy, this would be easier.
I'm not sure why I said:
Delete"You don't need a closed system to have experiments."
I should have said:
"You don't need a closed system and experiments to prove models useful (or reject them)."
Probably due to losing my train of thought while tapping out letters one at a time on my phone :)
BTW, I'm clearly no evolutionary biologist either, but I thought this series of six short videos (each one addressing a particular point) was pretty convincing regarding why evolution should be considered to be scientific (he's responding to creationist arguments that it's not).
What would the equivalent set of six videos look like for economics? :D
Another strong result in economics:
log r ~ log NGDP/MB
I have a post calling this the cleanest experiment ever.
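It's also a one-line fit to check (a sketch, with r, ngdp, and mb standing in for the relevant FRED series):

(* regress log r on log NGDP/MB *)
rateFit = LinearModelFit[Transpose[{Log[ngdp/mb], Log[r]}], x, x];
Normal[rateFit]  (* slope and intercept of the log-log relationship *)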
Jason & John: you've probably noticed that Noah Smith has another post up today on empiricism in economics.
Delete"But evolution was considered accepted theory long before it was ever shown in a lab. Those lab experiments didn't "prove" evolution -- and they wouldn't have rejected evolution if they had negative results."
DeleteI disagree. No, the theory of Evolution wasn't proven in 1991 by Lenski et al, but the various models in Evolution (e.g., natural selection) were tested by observing different species in closed systems (say, for instance, the Galapagos islands). Does that count as a completely closed system? No, but it is a whole lot more closed than anything that can be achieved in macroeconomics.
The Galapagos didn't "prove" it either. Wallace studied insects all over the world.
The things that "proved" evolution were the predictions of the theory -- and the lack of anything that rejects it. Transitional forms. The discovery of a molecule that transfers information from generation to generation.
In economics, you could consider a theory proven by its policy recommendations working, predicting a new correlation that hadn't been discovered, or accurate forecasts.
Even in social science, theories generally have to lead somewhere new in order to be useful.
Sorry to cut in again, but I'm liking these short "sixty symbols" interviews. This one is about the hypothesized "planet 9" and the multiple pieces of evidence for it. There's some tie-ins to the "cause and effect" discussion in an open system (perhaps), and a bit on the weakness of competing hypotheses to explain the evidence. Though we're a long way from anything conclusive at this point, it sounds like. A direct sighting would be a big win of course.
Delete"The Galapagos didn't "prove" it either. Wallace studied insects all over the world."
DeleteThat wasn't my point. I was simply arguing that research in Biology is done in a (quasi?)-closed system, even if it's not in a laboratory setting.
"In economics, you could consider a theory proven by its policy recommendations working, predicting a new correlation that hadn't been discovered, or accurate forecasts."
I agree, but I don't think you can disprove a model in economics, because of the arguments I've been making the entire time (i.e., empirical evidence on the minimum wage doesn't disprove the idea that minimum wage increases in a closed system result in lower employment).
"empirical evidence on the minimum wage doesn't disprove the idea that minimum wage increases in a closed system result in lower employment"
But since economics is an open system, how is this knowledge useful?
And the original theory for the minimum wage (and other price floors) was that it worked even in an open system! That's why they even bothered. Economics is an open system; if you go about making theories that can only be tested with closed systems, that's totally useless.
Additionally, the minimum wage has been tested in an open system with a natural experiment (looking at e.g. the region near the border of New Jersey and Pennsylvania when one adopted a higher minimum wage and the other didn't) and the econ 101 effect doesn't work. That's convincing enough for a theory of open (i.e. existing) systems. Maybe econ 101 works for non-existent closed economic systems, but who cares?
"but who cares?"
I would if it turned out that the underlying theory was correct, but everyone decided to come up with a wrong, yet empirically accurate model to replace it. I'm worried about Kuhn's paradigm shifts going backwards because no one realizes that empirical evidence can't completely falsify (to the extent that a paradigm shift is necessary) a model in economics.