Blackboard photographed by Spanish artist Alejandro Guijarro at the University of California, Berkeley.

In the aftermath of the Great Recession, there has been much discussion about the use of math in economics. Complaints range from "too much math" to "not rigorous enough math" (Paul Romer) to "using math to obscure" (Paul Pfleiderer). There are even complaints that economics has "physics envy". Ricardo Reis [pdf] and John Cochrane have defended the use of math saying it enforces logic and that complaints come from people who don't understand the math in economics.

As a physicist, I've had no trouble understanding the math in economics. I'm also not averse to using math, but I am averse to using it improperly. In my opinion, both detractors and proponents seem to misunderstand what mathematical theory is for. This is most evident in macroeconomics and growth theory, but some of the issues apply to microeconomics as well.

The primary purpose of mathematical theory is to provide equations that illustrate relationships between sets of numerical data. That's what Galileo was doing when he rolled balls down inclined planes, comparing the distance rolled with the time elapsed (measured by the volume of water flowing from a water clock), and discovering that distance was proportional to the square of the time (i.e. the water volume).
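Galileo's procedure ‒ collect paired measurements, then find the simple relationship between them ‒ can be sketched in a few lines. Everything below is illustrative: the "measurements" are idealized and noise-free, and fitting a power law via linear regression in log-log space is just one convenient choice.

```python
import numpy as np

# Idealized incline "measurements": elapsed time (in water-clock units)
# and distance rolled, following d = (1/2) a t^2 with a = 2.
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
d = 0.5 * 2.0 * t**2

# Fit a power law d = k * t^p by linear regression in log-log space:
# log d = p * log t + log k.
p, log_k = np.polyfit(np.log(t), np.log(d), 1)
print(f"exponent p = {p:.2f}")  # distance grows as the square of time
```

With noisy real measurements the fitted exponent would only be approximately 2, which is exactly the point: the math earns trust by matching the data to within its errors.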

Not all fields deal with numerical data, so math isn't always required. Not a single equation appears in Darwin's *Origin of Species*, for example. And while there are many cases where economics studies unquantifiable human behavior, a large portion of the field is dedicated to understanding numerical quantities like prices, interest rates, and GDP growth.

Once you validate the math with empirical data and observations, you've established "trust" in your equations. Like a scientist's academic credibility letting her make claims about the structure of nature or simplify science to teach it, this trust lets the math itself become a source for new research and pedagogy.

Only after trust is established can you derive new mathematical relationships (using logic, writing proofs of theorems) with those trusted equations as a starting point. This is the forgotten basis of Reis's claims about math enforcing logic. Math does help enforce logic, but it's only meaningful if you start from empirically valid relationships.

This should not be construed to require models to start with "realistic assumptions". As Milton Friedman wrote [1], unrealistic assumptions are fine as long as the math leads to models that get the data right. In fact, models with unrealistic assumptions that explain data would make a good scientist question her thinking about what is "realistic". Are we adding assumptions we feel in our gut are "realistic" that don't improve our description of data simply because we are biased towards them?

Additionally, toy models, "quantitative parables", and models that simplify in order to demonstrate principles or teach theory should either come after empirically successful models have established "trust", or they should themselves be tested against empirical data. Keynes was wrong when, in a letter to Roy Harrod, he said that one shouldn't fill in values in the equations. Pfleiderer's chameleon models are a symptom of ignoring this principle of mathematical theory. Falling back on the claim that a model is a simplified version of reality when it fails against data should immediately prompt the question of why we're considering the model at all. Yet Pfleiderer tells us some people consider this argument a valid defense of their models (and therefore their policy recommendations).

I am not saying that all models have to perform perfectly right out of the gate when you fill in the values. Some will only qualitatively describe the data with large errors. Some might only get the direction of effects right. The reason to compare to data is not just to answer the question "How small are the residuals?", but more generally "What does this math have to do with the real world?" Science at its heart is a process for connecting ideas to reality, and math is a tool that helps us do that when that reality is quantified. If math isn't doing that job, we should question what purpose it is serving. Is it trying to make something look more valid than it is? Is it obscuring political assumptions? Is it just signaling abilities or membership in the "mainstream"? In many cases, it's just tradition. You derive a DSGE model in the theory section of a paper because everyone does.

Beyond just comparing to the data, mathematical models should also be appropriate for the data.

A model's level of complexity and rigor (and use of symbols) should be comparable to the empirical accuracy of the theory and the quantity of data available. The rigor of a DSGE model is comical compared to how poorly such models forecast. Their complexity is equally comical when they are outperformed by simple autoregressive processes. DSGE models frequently have 40 or more parameters. Given only 70 or so years of higher-quality quarterly post-war data (and many macroeconomists only use data after 1984 due to a change in methodology), 40-parameter models should either perform very well empirically or be considered excessively complex. The poor performance ‒ and excessive complexity given that performance ‒ of DSGE models should make us question the assumptions that went into their derivation. The poor performance should also tell us that we shouldn't use them for policy.
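The parameter-counting argument can be made concrete with a toy experiment. The sketch below is not a DSGE model: it just fits autoregressions with 1 and 40 free parameters to a synthetic series of roughly post-war length (~280 "quarterly" observations) and compares one-step-ahead forecasts out of sample. All the numbers here (sample split, AR coefficient, random seed) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "quarterly" series, about the size of the post-war sample.
n = 280
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal()

split = 240  # train on the first 240 points, forecast the last 40

def ar_rmse(p):
    """Fit an AR(p) by least squares on the training span; return the
    one-step-ahead RMSE on the held-out span."""
    # Rows are lag vectors (y[t-1], ..., y[t-p]) for t = p .. n-1.
    X = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    target = y[p:]
    train_rows = split - p
    coef, *_ = np.linalg.lstsq(X[:train_rows], target[:train_rows], rcond=None)
    pred = X[train_rows:] @ coef
    return np.sqrt(np.mean((target[train_rows:] - pred) ** 2))

# The 40-parameter fit typically does no better out of sample than AR(1).
print(f"AR(1) RMSE: {ar_rmse(1):.3f}, AR(40) RMSE: {ar_rmse(40):.3f}")
```

The extra 39 parameters buy a better in-sample fit but no forecasting improvement, which is the sense in which a 40-parameter model on a few hundred data points should be treated as excessively complex unless it performs very well.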

A big step in using math to understand the world is when you've collected several different empirically successful models into a single paradigm or framework. That's what Newton did in the seventeenth century. He collected Kepler's, Galileo's, and others' empirical successes into a framework we call Newtonian mechanics.

When you have a mathematical framework built upon empirical successes, deriving theorems starts to become a sensible thing to do (e.g. Noether's theorem in physics). Sure, it's fine as a matter of pure mathematics to derive theorems, but only after you have an empirically successful framework do those theorems have implications for the real world. You can also begin to understand the scope of the theory by noting where your successful framework breaks down (e.g. near the speed of light for Newtonian mechanics).

A good case study for where this has gone wrong in economics is the famous Arrow-Debreu general equilibrium theorem. The "framework" it was derived from is rational utility maximization. This isn't a real framework because it is not based on empirical success but rather philosophy. The consequence of inappropriately deriving theorems in frameworks without empirical (what economists call external) validity is that we have no clue what the scope of general equilibrium is. Rational utility maximization may only be valid near a macroeconomic equilibrium (i.e. away from financial crises or recessions) rendering Arrow-Debreu general equilibrium moot. What good is a theorem telling you about the existence of an equilibrium price vector when it's only valid if you're in equilibrium? That is to say the microeconomic rational utility maximization framework may require "macrofoundations" ‒ empirically successful macroeconomic models that tell us what a macroeconomic equilibrium is.

From my experience making these points on my blog, I know many readers will say that I am trying to tell economists to be more like physicists, or that social sciences don't have to play by the same rules as the hard sciences. This is not what I'm saying at all. I'm saying economics has unnecessarily wrapped itself in a straitjacket of its own making. Without an empirically validated framework like the one physics has, economics is actually *far more free* to explore a variety of mathematical paradigms and empirical regularities. Physics is severely restricted by the successes of Newton, Einstein, and Heisenberg. Coming up with new mathematical models consistent with those successes is hard (or would be if physicists hadn't developed tools that make the job easier, like Lagrange multipliers and quantum field theory). Would-be economists are literally free to come up with anything that appears useful [2]. Their only constraint on the math they use is showing that their equations are indeed useful ‒ by filling in the values and comparing to data.

**Footnotes:**

[1] Friedman also wrote: "Truly important and significant hypotheses will be found to have 'assumptions' that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense) (p. 14)." This part is garbage. Who knows if the correct description of a system will involve realistic or unrealistic assumptions? Do you? Really? Sure, it can be your personal heuristic, much like many physicists look at the "beauty" of theories as a heuristic, but it ends up being just another constraint you've imposed on yourself like a straitjacket.

[2] To answer Chris House's question, I think this freedom is a key factor for many physicists wanting to try their hand at economics. Physicists also generally play by the rules laid out here, so many don't see the point of learning frameworks or models that haven't shown empirical success.

"Keynes was wrong when he said that one shouldn't fill in values in the equations in a letter to Roy Harrod."

A bit lazy to look at the history of the conversation, but IIRC Keynes wasn't against modeling per se, but against using it inaccurately.

It seems to me that economics is a branch of logic, a way of thinking; and that you do not repel sufficiently firmly attempts à la Schultz to turn it into a pseudo-natural-science. ... But it is of the essence of a model that one does not fill in real values for the variable functions. [4] To do so would make it useless as a model. For as soon as this is done, the model loses its generality and its value as a mode of thought.

And that footnote [4] (per my link to the letter above) is referring to this:

4. Refers to Harrod's claim that the substitution "of real equations for the present empty forms" would bring economics "on its way to looking much more like a mature science" (Harrod 1938:15, p. 401).

It's really hard to read this any other way. But maybe Keynes just meant that Roy Harrod shouldn't fill in the values because he was bad at math or something.

"Their only constraint on the math they use is showing that their equations are indeed useful ‒ by filling in the values and comparing to data."

Here is one economist who tries to follow this approach.

“Neoclassical economics turns out to be the one school of thought within the discipline of economics, indeed one of the very few intellectual disciplines in general, that rejects the inductive approach favoured by scientists, and prefers deductivism. It must be considered a unique phenomenon in the history of thought that the originally marginal and eccentric deductive approach to economics has today become the mainstream school of thought. Unhindered by economic reality, deductive economists can start with their preferred axioms, which do not need to be supported by facts – such as the axiom that individuals only care about the maximization of their own material benefit. Additional unrealistic assumptions produce the theories that are so removed from reality. While this is certainly allowed and may be useful as an exercise in logic, the theories, which are specific to the hypothetical environment created by the assumptions, are then used to advance policy recommendations. By this stage, no further mentioning is made of the assumptions necessary for the validity of the argument. The jump from the theoretical and hypothetical models to actual, supposedly workable policy advice is not usually explained. It is striking how seamlessly neoclassical economists have bridged the gap from their wholly fictional world of unrealistic models to recommendations of policies that actual politicians are supposed to implement in reality.”

R.A. Werner: New Paradigm in Macroeconomics.

The realism of the assumptions is a side question. If unrealistic assumptions produce empirically valid models, then great! If realistic ones produce empirically invalid models, then that realism doesn't matter.

Does anyone think the assumptions of quantum mechanics are "realistic"? Not really. That's why there are constant attempts at new interpretations. It is counter to our intuition about "realism". But they're empirically accurate, so you kind of have to give up on your human gut feelings about "realism". They are biases. Assumptions are either empirically accurate themselves, or effective (i.e. the theory constructed from them is empirically accurate). The "realism" is unnecessary -- unless you just consider "realistic" to be a synonym for "empirically accurate". In which case: just say empirically accurate.

However, in my experience, the purported replacements are themselves as empirically inaccurate as the assumptions they're trying to replace.

I really enjoyed this one Jason!

Thanks Tom!

About 40 years ago I implemented a simple A/D converter to obtain a digital signal suitable for transmitting a speech signal over a digital channel.

This solution was known as "adaptive delta modulation", although at the time I was not aware that it had already been invented some years earlier.

https://en.m.wikipedia.org/wiki/Delta_modulation

In my solution, the number of transitions in the digital signal was counted, averaged over a certain time, and used as the varying step size in a second feedback loop that tried to keep the transition density at about 50% on average. It worked wonderfully.

With no analogue input signal, the result is a highly ordered digital signal with 100% transition density: a simple continuous square wave where the step size is minimal.

With a very large input signal the step size is maximal but insufficient, and the transition density tends toward 0%, essentially also a square wave because it is in fact the clipped input signal.

In these extreme situations the digital signal is highly ordered, has low entropy, and carries no interesting information other than the presence of an input signal. Whether the higher entropy at 50% transition density represents interesting information is quite another matter.
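For what it's worth, the scheme described above can be sketched in a few lines. This is a guess at the structure from the description, not the original design: a 1-bit delta modulator whose step size is nudged up or down based on the transition density of the recent output bits, aiming for 50%. All parameter values (window length, step limits, adaptation factors) are made up.

```python
import numpy as np

def adaptive_delta_modulate(x, step_min=0.01, step_max=1.0, window=16):
    """1-bit delta modulation with the step size driven by the recent
    transition density of the output bits (a sketch, not the original)."""
    approx = 0.0   # tracked estimate of the input signal
    step = step_min
    bits = []
    for sample in x:
        bit = 1 if sample > approx else 0
        bits.append(bit)
        approx += step if bit else -step
        # Transition density over the last `window` bits: near 100% when the
        # tracker chatters around a slow input (step too big), near 0% when
        # it slews after a fast one (step too small). Nudge toward 50%.
        recent = bits[-window:]
        transitions = sum(b0 != b1 for b0, b1 in zip(recent, recent[1:]))
        density = transitions / max(len(recent) - 1, 1)
        if density > 0.5:
            step = max(step * 0.9, step_min)   # chattering: shrink the step
        else:
            step = min(step * 1.1, step_max)   # slewing: grow the step
    return bits

# With a slow sine input, the loop should settle near 50% transition density.
bits = adaptive_delta_modulate(np.sin(np.linspace(0, 4 * np.pi, 2000)))
```

The two extreme cases in the comment fall out of the same loop: a constant input drives the density toward 100% and the step to its minimum, while a clipped fast input drives the density toward 0% and the step to its maximum.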