Thursday, August 31, 2017

Who has two thumbs and a really great long term interest rate model?

This guy:

This is an update of the forecast model described here.

Forecast updates and more: IE versus Fed DSGE model

We're coming up to the end of a forecast I made almost three years ago. The previous update is here and everything is at the aggregated forecast post. It's a forecast I made in comparison with a NY Fed DSGE model, and it appears to be coming down to a tie. However, that's a win for the five-parameter monetary information equilibrium model (which also works for the entire post-war period) versus the 40+ parameter DSGE model.

Here is the updated forecast graph for the recently released PCE inflation data:

The performance is approximately the same (the IE model (blue) is slightly better, but biased low):

What is interesting is that the constant inflation model (green) does better than both. It's interesting because it's the same result as a dynamic equilibrium model of PCE inflation:

The dynamic equilibrium says that without shocks, PCE inflation is approximately constant (1.7%) and there was only a tiny shock in mid-2013 (before either of the forecasts above were made). That means that over the forecast window, the dynamic equilibrium model is equivalent to constant PCE inflation -- the model that does better than the IE monetary model and the NY Fed DSGE model. And aside from the two shocks (three parameters each), it only has one parameter (the dynamic equilibrium of 1.7% inflation). And it's really only that single parameter that is in effect (isn't exponentially suppressed) over the forecast period.
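The dynamic equilibrium model's functional form is simple enough to sketch in a few lines: constant growth of the log price level plus logistic shocks. The shock parameters below are illustrative stand-ins (only the 1.7% rate comes from the text), not the fitted values:

```python
import numpy as np

def log_price_level(t, alpha=0.017, shocks=()):
    """Dynamic equilibrium: log price level grows at constant rate alpha,
    plus logistic (sigmoid) shocks. Each shock is (a, b, t0): size a,
    width b in years, center t0. Shock parameters here are illustrative."""
    log_p = alpha * t
    for a, b, t0 in shocks:
        log_p = log_p + a / (1.0 + np.exp(-(t - t0) / b))
    return log_p

t = np.linspace(2010.0, 2018.0, 97)
# One small hypothetical shock centered in mid-2013
log_p = log_price_level(t, shocks=[(-0.01, 0.5, 2013.5)])
# Away from the shock, instantaneous inflation returns to alpha = 1.7%
inflation = np.diff(log_p) / np.diff(t)
```

The key property is visible in the last line: shock contributions are exponentially suppressed away from their centers, so over the forecast window the model is effectively just constant inflation.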


As a side note, over the course of working with the information equilibrium model I've come to a better understanding of how it all fits together. This is a good sign as it implies I'm learning something! The monetary IE model used above (and documented in detail in my paper) is probably best understood as an ensemble model with money as a factor of production, together with a particular model for the changing IT index:

\frac{d \langle N \rangle}{dM0} = \langle k \rangle \; \frac{\langle N \rangle}{M0}

\langle k \rangle \sim \frac{\log \langle N \rangle}{\log M0}

As such, its scope is defined in terms of how well the model for the IT index matches the empirical data.
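As a heavily simplified numerical sketch, the ensemble equation can be integrated directly. The initial condition below is arbitrary, and with this simplest (unnormalized) ansatz for the IT index the solution reduces exactly to a power law ⟨N⟩ = M0^k with k fixed by the initial condition; the version in the paper includes reference scales that make k drift slowly:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(m0, n):
    # d<N>/dM0 = <k> <N>/M0 with the ansatz <k> = log<N>/log M0
    k = np.log(n[0]) / np.log(m0)
    return [k * n[0] / m0]

# Arbitrary illustrative initial condition: <N> = 100 at M0 = 10, so k = 2
sol = solve_ivp(rhs, (10.0, 1000.0), [100.0], rtol=1e-8)
n_end = sol.y[0, -1]
# With this ansatz log<N>/log M0 stays constant along the flow,
# so <N> = M0**2 and n_end should be close to 10**6
```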

Update: forgot equations. Added them.

Tuesday, August 29, 2017

Lazy econ critique critiques

The title is probably more inflammatory than is necessary, but I thought it was a fun play on Noah Smith's very good "Lazy econ critiques" (as well as the following Bloomberg View article).

In essence, Noah says that econ critiques have become predictable and make predictable over-generalizations which could fall under the general hashtag of #NotAllEconomists. However, there's also been a similar predictable over-generalization of econ criticism that would fall under the hashtag of #NotAllEconCritics.

In a recent twitter exchange between @UnlearningEcon and @Britonomist (I usually learn something in these exchanges), the latter brought up a blog post from academic economist Chris Auld (who happens to work a relatively short hydrofoil ride away from me in Victoria, BC).

This made me laugh (having seen the movie):
Criticism of economics in the public sphere often echoes [movie character Derek] Zoolander’s confusion.  Critics take common modeling assumptions in theoretical economics and claim economists believe that these assumptions are literally true, akin to claiming architects don’t understand people aren’t actually ants. 
Auld continues:
... Economists do not believe people are always rational, etc, we rather believe that such assumptions are often useful abstractions to further the goal of understanding the incredible complexity of human societies.
I agree that "unrealistic assumptions" has to be just about the laziest econ critique in existence. I wrote a post I was particularly proud of about how a lot of econ criticism is starting to look like vacuous art criticism. Auld also sarcastically points out that many econ critics who think econ should be more like science don't really understand science:
Real Scientists™ “require assumptions to be tested empirically before a theory can be built out of them,” whereas economists use models with assumptions which are empirically false.
I agree with Auld's snark here. Real scientists sometimes use assumptions that haven't been tested empirically directly [1]. I wrote an extended defense of economics from a specific instance of these criticisms about false assumptions and "as if" modeling. However, let me turn around and defend the econ critics here.

While "real scientists" make unrealistic assumptions all the time, "real scientists" also 1) translate those assumptions into scope conditions, and 2) usually stop making them once they have been directly refuted.

Regarding 1), even economist Noah Smith pointed out that economists don't do this. I have never seen it in any paper. In fact, in one of Auld's own papers [pdf] we see him applying a rational utility model out of scope [2]. Despite the fact that rational utility-based explanations have been shown to be empirically inaccurate for individual people, in his conclusion Auld uses his possibly accurate effective theory at the population (macro) level to suggest mechanisms of individual behavior:
This paper demonstrates that rational response to an infectious disease can lead to perverse and counterintuitive effects for both the individual and the population. An individual's response to increased risk of infection may be to undertake even more risk.
This is a subtle point (and one that is lost in both lazy econ critiques and lazy econ defenses). It is perfectly fine to make an unrealistic assumption in constructing a model (rational utility maximization). The model constructed with that unrealistic assumption may be accurate under scope conditions where that unrealistic assumption is either not unrealistic or not empirically invalidated. In our example, rational utility maximization has been shown to be false for individual humans in experiments. Therefore regardless of whether the population theory is empirically accurate or not, its conclusions cannot be extended to individuals.  

In my link above about scope conditions [linked again], I told a story about a specific instance of this and where I think economists (like in Auld's paper) go wrong. I put together a model that had the unrealistic assumption of no quark confinement. The result was a model of deep inelastic scattering that worked empirically. But because of the scope limitations due to my unrealistic assumption, I could not then use my model to make claims about the interactions of individual quarks. This is what Auld does in his paper: he uses his aggregate model to make claims about individual behavior even though the rational agent model is not empirically accurate.

Regarding 2), I did a search at NBER on "rational utility", and sure enough recent papers use rational utility maximizing models. The econ critique in my book offers a possible solution to this via Gary Becker's 1962 paper "Irrational behavior and economic theory". If we think of rational utility maximization as emergent from random agents, then a) the experiments on individual agents don't actually test the emergent rationality, and b) it means we have an explicit reason not to apply the lessons from the aggregate (rational) behavior to the (random) micro behavior.
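To illustrate the Becker point, here is a minimal toy sketch (my own construction, not Becker's original): agents choose completely at random on their budget set, yet the aggregate demand curve slopes downward anyway.

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_demand(price, budget=100.0, n_agents=100_000):
    """Each agent spends a uniformly random fraction of the budget on
    good 1 (no utility maximization at all); return average demand."""
    x1 = rng.uniform(0.0, budget / price, size=n_agents)
    return x1.mean()

demands = [mean_demand(p) for p in (1.0, 2.0, 4.0)]
# Aggregate demand falls as the price rises (~50, ~25, ~12.5) even
# though no individual agent is rational
```

This is the sense in which experiments on individuals don't test the emergent "rationality": the downward slope here comes entirely from the shrinking budget set, not from any agent's reasoning.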

And this lets us see where Auld goes wrong in his counter-criticism. He cites a recent paper in the econophysics tradition that makes many of the same (or even more) unrealistic assumptions economists are criticized for. But these unrealistic assumptions actually do limit the scope of the model. The authors do not apply their conclusions to individual behavior, but instead keep the discussion at the model scale (i.e. talking about aggregates and large classes of agents, not individual agents).

Additionally, while the assumptions made are simplifications they are not actually empirically refuted at the scale of the individual in the same way rational utility maximization is. In fact, most of the assumptions are about the aggregate economy. The agents are random, which may or may not be realistic but this represents an assumption from ignorance rather than an assumption of particular knowledge (of rational utility maximization). In a sense, the modelers "give up" attempting to understand human behavior as inscrutable algorithmic randomness instead of providing (in a classic example of "male answer syndrome") a specific agent model or mechanism. This latter point is more a methodology choice than a real definition of "science".

However, if the econophysicists were to use the random agent model to say "wealth in cases of high inequality is due to random luck", this would be an inappropriate violation of model scope — and that would be unscientific.

So I think "scientists use unrealistic assumptions too" is, per the title, a "lazy econ critique critique". I agree that "unrealistic assumptions" is a lazy econ critique, but the defense is better motivated in terms of how useful the resulting model is. A good example where the lazy econ defense is really wrong-headed is the Euler equation in DSGE models. Yes, scientists sometimes use unrealistic assumptions, but the result is usually something useful, which DSGE models using Euler equations have rarely (never?) shown themselves to be.



[1] An extremely good example of this is the wildly successful theory called quantum chromodynamics, the theory of quarks. The existence of quarks has never been proven empirically because you cannot observe a free quark. Depending on how strict you are with the definition of "proven", atoms were just an incredibly successful modeling assumption until the time of Einstein (who provided us with a way to measure Avogadro's number in Brownian motion) or until the 80s with the advent of scanning tunneling microscopes and the first visualization of atoms in a crystal.

[2] There are also myopic and adaptive expectations in the paper.

A great review!

A great review from economist Dr. Cameron Murray (co-author of Game of Mates) — the intro:
Jason Smith, a random physicist, has a new book out where he takes aim at some of the core foundations of microeconomics. I encourage every economist out there to open their mind, read it, and genuinely consider the implications of this new approach.
Also via tweet:
Extremely provocative and insightful new take on economics
This is probably the best review I could have ever hoped for!

And now Amazon seems to recognize the paperback and Kindle e-book are the same book, so they appear together on the same page (click the image):

Update: Sorry, this post was intended to go on the book site and not clutter up ITE with book-related material. I'll leave it as it's the first review (and a really good one!), but follow me on Twitter @infotranecon or at 

for future book updates and book-related posts.

Monday, August 28, 2017

The replication argument

Increasing or decreasing returns to scale?

Now that the book is out and I'm back from vacation, I can start up the regular blogging again. While I was on vacation, I read Miles Kimball's post on decreasing returns to scale (via unlearningecon). It opens:
There is no such thing as decreasing returns to scale. Everything people are tempted to call decreasing returns to scale has a more accurate name. The reason there is no such thing as decreasing returns to scale was explained well by Tjalling Koopmans in his 1957 book Three Essays on the State of Economic Science. The argument is the replication argument: if all factors are duplicated, then an identical copy of the production process can be set up and output will be doubled. It might be possible to do better than simply duplicating the original production process, but there is no reason to do worse. In any case, doing worse is better described as stupidity, using an inappropriate organizational structure, or X-inefficiency rather than as decreasing returns to scale.
I think this is an excellent example of a case where economics takes a cold logical argument, and attempts to apply it to real world data. Essentially, Kimball is saying there is no such thing as decreasing returns to scale because of logic (the replication argument [1]) therefore everything that appears as decreasing returns to scale must be something else. That something else, however, relies on a particular model of the underlying microeconomics that we don't necessarily understand with a good empirical model (organizational structure, "stupidity").
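For concreteness, the replication argument is a claim about the production function (a standard textbook formalization, not a quote from Koopmans or Kimball):

```latex
% Replication: duplicating every input reproduces the process twice,
% so for any production function f and scale factor \lambda \geq 1:
f(\lambda K, \lambda L) \;\geq\; \lambda \, f(K, L)
% Apparent "decreasing returns", f(\lambda K, \lambda L) < \lambda f(K, L),
% must then be attributed to some fixed, unreplicated factor
% or to inefficiency.
```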

In physics, sometimes we don't understand the particular micro theory well enough but are able to make an effective macro theory by using micro theory to narrow down a form and fit parameters. The classic example is chiral perturbation theory (which is so classic that when you say Effective Field Theory without specifying any more detail, the assumption is that you're talking about chiral perturbation theory). In that case we don't understand quark physics enough to describe nuclei and hadron interactions in terms of the quark theory (QCD).

In another (possible) example, Einstein's gravity may actually not be a real force but rather an entropic force (here, here) and therefore Einstein's description is an effective theory where we don't understand the real micro theory (e.g. is there quantum gravity?).

In the economics example, we don't necessarily understand the underlying microeconomics that yield decreasing returns to scale, but we can begin to understand them as an effective theory of decreasing returns to scale. Kimball's claim translated into physics would have him saying there is no such thing as a gravitational field; it's all gravitons. Not only would physicists continue to use Einstein's equations without knowing the quantum theory of gravity, but Kimball the physicist could be completely wrong: it may turn out there is no such thing as a graviton if gravity is an emergent effective theory.

I think the point I am trying to make here is that the underlying micro theory of production by humans organized in firms is not some well-established empirically accurate theory. But Kimball is making assumptions about it that may turn out to be incorrect. I can illustrate the converse with what turns out to be a formally equivalent argument in physics: the Gibbs paradox.

The Gibbs paradox is about the entropy of an ideal gas: the first formula derived was a decent effective description but had issues with a theoretical replication argument. If you doubled the amount of an ideal gas, you more than doubled the entropy using Gibbs' formula. It was a problem, but in this case it was a problem that involved an otherwise empirically successful micro theory (statistical mechanics of atoms). Because physicists had been successful with the micro theory, you could take the replication argument seriously. If physicists had been ignorant of the underlying micro theory, there would have been no reason to think this was a problem (maybe entropy simply wasn't extensive, i.e. didn't have "constant returns to scale"). Maybe whatever matter was made of had this property in terms of Boltzmann's definition of entropy? With 20/20 hindsight, we know it wasn't correct [2] but was statistical mechanics a foregone conclusion? If it started disagreeing with empirical data, like many other ideas in physics, it would've been thrown out. How does Kimball know X-efficiency isn't going to be thrown out by empirical studies?
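Written out in modern textbook notation (a reconstruction, not Gibbs' original formulas), the paradox and its resolution are:

```latex
% Ideal-gas entropy without the indistinguishability (1/N!) correction:
S(N, V) = N k_B \left[\ln V + \tfrac{3}{2}\ln T + c\right]
% Doubling the gas more than doubles the entropy:
S(2N, 2V) = 2\, S(N, V) + 2 N k_B \ln 2 \;>\; 2\, S(N, V)
% Dividing the phase-space count by N! (identical atoms) restores
% extensivity -- i.e. "constant returns to scale":
S(N, V) = N k_B \left[\ln \tfrac{V}{N} + \tfrac{3}{2}\ln T + c'\right],
\qquad S(2N, 2V) = 2\, S(N, V)
```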

The thing is that organizations and economic forces are at their heart social systems. There is no particular reason that doubling all the means of production should yield at least double the output, especially if we're including things like money. Two people working on a project don't necessarily double the output or halve the time it takes. Why? I don't know. It's probably complicated.

However, let me close with an explicit example of a plausible social model that could manifest decreasing returns to scale: trust. Trust has decreasing returns to scale. The bigger a group of humans, the less trust there is among them. As trust decreases, contracts inside a firm need to be more explicitly specified resulting in additional costs (Coase and the theory of the firm). It's true I may never be able to build a microfounded model of agent trust, but I could build an effective theory based on sociology studies.

In the end, it doesn't make sense to say: "There is no such thing as decreasing returns to scale, you should call it lack of trust, which always happens in human systems and always leads to decreasing returns to scale." Decreasing returns to scale (if that model is true) is as fundamental to production as the production inputs if it always happens. It is true that maybe we could discover an alien species where trust doesn't decrease as you increase the size of the social group (e.g. the Borg). At that point we might have to rewrite the theories. But much like we can't logically deduce the existence of aliens, we can't use logic to say there's no such thing as decreasing returns to scale without an empirically accurate theory that tells us the underlying micro theory assumptions are sensible.



[1] The replication argument is used to argue that there can't be increasing returns to scale either.

[2] The solution was found in recognizing atoms of the same type are actually indistinguishable, so the many identical states where you exchange one atom for another are over-counted in Gibbs' entropy formula.

Friday, August 25, 2017

A random physicist takes on economics: Out now!

My Kindle e-book is out now!

Feel free to leave your commentary at:

The first open thread on the book is available here.

Update: Now available in print-on-demand paperback as well:

Update: The "out now" is such an obscure reference, I think I have to just put it up here ...

Thursday, August 24, 2017

Latest book news

I'm in the final stages of e-publishing my book, including getting the book's website set up:

It'll be a place for book-focused discussions and supplemental materials, as well as a place where I'll keep the more commercial side of things (advertising and reviewing economics books from a physicist's perspective), moving them off this site. Clicking through the Amazon links can also help me defray site maintenance costs!

I am hoping to finish up the Kindle formatting and publish by the end of September.

Update: Formatting has turned out to be easier than I thought. I expect the book to be out in the next few days depending on the Amazon process.

Update #2: This went way faster than I thought. The book is out now!

Saturday, August 5, 2017

Dynamic equilibrium in average hourly wages

Another piece of data FRED updated on Friday was average hourly earnings. Since earnings are a price (and a ratio), the dynamic equilibrium model should be applicable. Sure enough it is (and it works really well):

What is interesting to me is the "Phillips curve" behavior ‒ the bursts of wage increases prior to recessions (and reductions in the civilian labor force):

The large, broad increase in wages is associated with the broad increase in the labor force. The Great Recession is associated with a decrease in wages and a negative shock to the labor force. The other four smaller wage increases occur just before recessions in a similar fashion to the smaller shocks impacting PCE inflation:

Essentially, this creates a picture where there are two kinds of shocks to wages: demographic and post-recovery/pre-recession shocks that occur just before recessions. There was a broad demographic increase in wages associated with women entering the workforce, and a smaller one associated with the Great Recession (I imagine early/forced retirements of some "Baby Boomers" [1]). The smaller and narrower positive shocks occur between recessions (centered in 1980 ‒ i.e. between the 1974 and 1981 recessions ‒ as well as 1989, 1997, and 2007). These match up with shocks to PCE inflation. This is not to say the post-recovery/pre-recession shocks cause recessions. They likely don't; what happens is that wages start to rise and a recession intervenes cutting the improvement short.

This would tell a story of why wages are stagnant: wages haven't increased because there haven't been any demographic increases in the labor force and because too many recessions have cut wage growth short. According to the model, wages grow at an average rate of 2.3%. However, rising wages between recessions are a significant component of higher wages.



[1] This creates an interesting hypothesis: was the Great Recession bad simply because it occurred when Baby Boomers started to reach retirement age? The baby boom is generally associated with beginning in the 1940s, and the 2008 recession was the first one after 1940 + 65 years = 2005. Forget the Fed's missteps or over-leveraged banks ‒ was the Great Recession inevitable after the post WWII baby boom?

Friday, August 4, 2017

Labor market model updates

There were some labor market data releases today, including the civilian labor force ("prime age") and the unemployment rate. So I've updated the graphs (including the one in this discussion of forecast stability). The black points are the new data. I added the RMS error to the unemployment rate change graph as well.

Thursday, August 3, 2017

More updates to the python IEtools

I added some more functionality to the package (on GitHub) for working with information equilibrium. There's now a fitting function for the parameters in an information equilibrium relationship as well as better file readers (FRED xls and csv), and the imported data structures now include growth rates and interpolating functions.

There's also a little demo of Okun's law.
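As an illustration of what such a fitting function does (my own minimal sketch, not the actual IEtools API): the general information equilibrium condition dA/dB = k A/B integrates to a power law, so the IT index k can be fit by least squares in log-log space.

```python
import numpy as np

def fit_it_index(a, b):
    """Fit the IT index k in the information equilibrium condition
    dA/dB = k A/B, whose general solution is A = A0 * (B/B0)**k,
    via ordinary least squares on the logs. (A stand-in sketch,
    not the IEtools interface.)"""
    k, log_a0 = np.polyfit(np.log(b), np.log(a), 1)
    return k, np.exp(log_a0)

# Synthetic data generated to obey an IE relationship with k = 2
rng = np.random.default_rng(0)
b = np.linspace(1.0, 10.0, 50)
a = 3.0 * b**2 * np.exp(rng.normal(0.0, 0.01, b.size))
k_hat, a0_hat = fit_it_index(a, b)
```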

Comparing recoveries

One of the points I try to stress on this blog is that models can be used to frame data. The tweet above shows one particular framing of investment data that shows the post-Great Recession recovery has been lackluster compared to other recoveries.

I used the same framing for nominal GDP data (which is roughly proportional to investment):

Included alongside the data is the dynamic equilibrium model of GDP (that I previously used in two discussions of data framing) shown with dotted lines. They follow the data fairly closely. In fact using this framing, the issue with the present recovery is that it's normalized to the housing bubble peak, but is otherwise right on track:

However, the dynamic equilibrium model essentially says there are few long-run features of the NGDP data. In a sense there are only three between 1948 and the present:

  • The Korean war and the permanent build-up of the Department of Defense (the military-industrial complex)
  • Women entering the workforce
  • The housing bubble

This means that, using this frame, the gradual fall in recovery performance is really about the gradual fading of the burst of nominal growth that came with women entering the workforce and has little to do with policy choices of the present compared to policy choices of the past [1]. Other recoveries were stronger because they were riding a wave where half the population was moving into GDP-measured work.



[1] I want to stress that this doesn't mean policy couldn't change the situation. They would just have to be policy choices on the scale of the social change involved when women entered the workforce, or of building the military-industrial complex. For example, there is some evidence that the Affordable Care Act (Obamacare) may have generated a small economic boom in terms of employment. It does not appear to be large enough to show up in the NGDP data, though.

Wednesday, August 2, 2017

Great Recession timing by state

Using the dynamic equilibrium model with state-level (not seasonally adjusted) unemployment data, I put together a collection of the centroids of the Great Recession shock by state. Here are all the fits on a single graph:

And here is a histogram of the start dates:

I was testing to see if any pattern could be discerned (e.g. did the Great Recession start somewhere specific and spread?), but the result looks pretty random (blue late, red early):


Update 8 November 2017

Here is a histogram of the dynamic equilibrium (log) slopes by state (US average of 0.09 marked):

The unchallenged assumption of human agency in economics

One of the things I read in multiple places is that economics is different because its subject is human beings who can reflect on the findings of economics and change their behavior. And while I do think this is possible for some economic observables, I do not think it is inevitable for every economic observable. Part of the reason is that I do not think humans are "really thinking" about many of their economic choices. Additionally, there's some evidence that humans aren't really thinking about interest rates, for example.

Despite the sometimes convoluted rationales one hears about investment decisions from co-workers, I am under the impression that many of those decisions are made for reasons not known to the decider.

This is one step beyond what I usually assume as a basis for the information equilibrium approach — instead of the rationales being inscrutable to the economic theorist, they are in addition inscrutable to the economic agent.

Today, Sean Carroll linked to a piece about a philosophical discussion of comprehension and consciousness, and Daniel Dennett's ideas could form a philosophical basis for the even stronger assumption. The introduction to the debate states:
On comprehension, [Daniel] Dennett maintains that much animal and indeed human behaviour displays “competence without comprehension”, achieving ends without the subject’s understanding why. In a similar vein, he holds that human cultures can develop blindly, due to the natural selection of the “informational viruses” that Richard Dawkins has labelled “memes”, including some of the greatest products of human culture ... 
When we get down to the debate itself, Dennett says:
I am claiming – and it seems quite obvious to me, not in the least “peculiar” – that we must break our habit of assuming “thinking” whenever we see cleverness.

As Dennett points out, we are uncomfortable with this idea:
People are generally comfortable with the discovery that they have no direct knowledge “from the inside” of the properties of the blood-purifying events in their kidneys, or of the properties of the peripheral events in the eyeball and optic nerve that subserve vision, but the idea that this ignorance of internal properties of the relevant events in the brain carries “all the way up” is deeply counterintuitive. 
Of course it's not an absolute, but as Dennett says human ingenuity may have less of a role than we think:
Yes, some of the marvels of culture can be attributed to the genius of their inventors, but much less than is commonly imagined, and all of it rests on the foundations of good design built up over millennia by uncomprehending hosts of memes competing with each other for rehearsal time in brains ...
So when economists start with rational agents, or even agents that have "agency", we have to understand this is actually a philosophical assumption turned modeling assumption and not necessarily obvious. It may in fact be behind some of the problems of economic theory.

If correct, this will probably be the hardest idea to convince people of ...


Update 7 August 2017

Thanks to Chris Dillow for linking to this post. You should read his — he gives more practical reasons instead of "high-falutin’ grand theory".

One thing I did want to emphasize is that regardless of whether humans really are thinking about economic choices, it is effectively a model assumption to start your economic model with agents thinking about their choices. Like all assumptions, this may or may not be a good model choice. You can start modeling an ideal gas by first figuring out the quark states in atomic nuclei, then building up atoms and molecules, and finally running a vast simulation of individual atoms to find that the pressure is inversely proportional to the volume. An easier way is to start from simply the idea that an ideal gas is made of a lot of something.

The issue is that no one has yet built up an empirically accurate model of the macroeconomy starting from the idea that humans are thinking about their decisions (or from agents at all). For specific assumptions about how humans think, there are even theoretical results that show those assumptions have no consequences (the famous SMD theorem). And for microeconomics, it's actually not necessary for agents to think about their decisions to obtain some standard economic results (for example, there is Gary Becker's paper about "irrational" agents, or see here where I reproduced the results of John List's experiments using random agents).

It seems like the natural starting point to think about economics is human decisions — and even our own experience, attempting to probe our own minds for insight. But maybe this is just a bias because we ourselves are humans. When dealing with animal ecosystems, biologists don't usually start from their intuition of what the animals are thinking. In fact, there exist examples where the animals are treated pretty much like molecules in a chemical reaction.

In fact, one of the best economic models treats humans as not fully thinking about their decisions (random utility discrete choice models). Maybe we should open up the field to a less human decision based, more pluralistic approach where the primary metric is empirical (external) validity — not whether the model satisfies our biases about how an economic model should work.
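As a sketch of the random utility idea (a textbook multinomial logit with made-up utilities, not any specific applied model): if each agent's utility is a systematic part plus i.i.d. Gumbel noise, choice probabilities take the logit form, which a quick simulation confirms.

```python
import numpy as np

def logit_choice_probs(v):
    """Multinomial logit: if utility u_i = v_i + eps_i with i.i.d.
    Gumbel noise eps_i, then P(choose i) = exp(v_i) / sum_j exp(v_j)."""
    z = np.exp(v - np.max(v))  # subtract max for numerical stability
    return z / z.sum()

v = np.array([1.0, 0.5, 0.0])  # made-up systematic utilities
p = logit_choice_probs(v)

# Brute-force check: simulate agents drawing their own Gumbel noise
rng = np.random.default_rng(1)
draws = v + rng.gumbel(size=(200_000, 3))
empirical = np.bincount(draws.argmax(axis=1), minlength=3) / 200_000
```

The agent here isn't "fully thinking": the modeler only specifies the systematic part of utility and treats the rest as noise, yet the aggregate choice shares are perfectly predictable.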

Tuesday, August 1, 2017

Economics criticism as art criticism

“To justify its existence, criticism must be partial, passionate, and political, that is to say, written from an exclusive view that opens the widest horizons.”
In reference to an article about economics education, Steve Keen said [1]:
... they use the word "complex" while clearly not understanding its modern meaning
The word "complex" does not have a specific modern meaning. Saying an economist misunderstands the modern meaning of complexity is about as meaningful as writing an art review stating an artist misunderstands the modern meaning of complexity, which is to say: not.

In reference to Tony Yates saying Keen was policing heterodox semantics in the use of "complex", Keen goes on to say:
 ... And it's maths semantics by the way, not economics. Look it up.
I have no idea what Keen means here, because while there are different definitions of "complex" used in mathematics they are either nonsensical in this context or undermine Keen's own work ...

Complex numbers? Obviously not.

Group complexity? Ha ha. No.

Computational complexity? Pretty sure Keen doesn't mean the economy is NP-hard.

Complex dynamics? Does he mean this? Because this is the mathematics definition. What Keen does is actually just dynamical systems (which includes Lorenz attractors) as opposed to complex dynamical systems. Basically, if this is what he means, Keen should leave off the "complex" adjective. However, that makes his comment about "complex" self-refuting, and its inclusion a meaningless affectation. Yes, I include complexity. How? By calling them complex dynamical systems instead of just dynamical systems.

Kolmogorov complexity? The definition here is the length of the shortest computer program that reproduces the output. In the context of economics this means that a Kolmogorov-complex economy would essentially require simulating the entire economy down to every agent and every firm. In contrast, Keen's approach, which says a system of dozens of nonlinear differential equations that can be written down on a few pages can capture the main behaviors of an economy, unequivocally demonstrates that by this definition economies are not complex. [As an aside, this is what I mean when I say economic agents are complex.]

Complex adaptive systems? This is not really mathematics, but rather a general collection of ideas. One of those ideas is this:
Complex systems consist of a large number of elements. When the number is relatively small, the behaviour of the elements can often be given a formal description in conventional terms. However, when the number becomes sufficiently large, conventional means (e.g. a system of differential equations) not only become impractical, they also cease to assist in any understanding of the system.
Emphasis in the original. This is a quote from leading complexity theorist Paul Cilliers' Complexity and Postmodernism: Understanding Complex Systems [pdf], and it basically refutes Keen's approach to economics with his Minsky software, which consists of differential equations.

*  *  *

Maybe by the modern definition of complex, Keen just means something like the Facebook relationship status "It's Complicated" (which I used as a title for my piece on exactly this problem, and see also here). Keen also appears in the documentary Boom Bust Boom for a few seconds, mentioning money and debt. Later in the film, however, someone else says that economics shouldn't be approached like a branch of theoretical physics. If I had to pick an economist who used the most inappropriate physics models, it would be Keen, who treats the economy like a nonlinear electronic circuit with his Minsky software. It's a very odd juxtaposition. Additionally, like asserting complexity, decrying economics as too physics-like [2] is another buzzphrase in economics, similar to juxtaposition in art (at least in the 90s and 00s).

However with the last complexity definition, you may have noticed the rather jarring appearance of the word "postmodernism" ‒ or at least it might have been more jarring if I hadn't sprinkled this post with references to art criticism, Baudelaire, and juxtaposition.

Reading recent criticism of economics feels more and more like reading art criticism ‒ art criticism of a movement falling out of fashion. Noah Smith writes that even the art criticism has become predictable:
At this point, blanket critiques of the economics discipline have been standardized to the point where it’s pretty easy to predict how they’ll proceed. Economists will be castigated for their failure to foresee the Great Recession. Some unrealistic assumptions in mainstream macroeconomic models will be mentioned. Economists will be cast as priests of free-market ideology, whose shortcomings will be vigorously asserted. We will be told that economics moves in cycles of fad and fashion. Readers will be reminded that economics deals with humans instead of atoms, making scientific certainty impossible. The piece will end with a call for humility on the part of economists, a more serious consideration of unconventional ideas and reduced prestige for the economics profession.
I had fun reconstructing a loose facsimile of this generic critique out of Baudelaire quotes about art and other things:
In order for the artist to have a world to express he must first be situated in this world ... In art, there is one thing which does not receive sufficient attention. The element which is left to the human will is not nearly so large as people think. ... The priest is an immense being because he makes the crowd believe astonishing things. ... That which is not slightly distorted lacks sensible appeal; from which it follows that irregularity – that is to say, the unexpected, surprise and astonishment, are an essential part and characteristic of beauty. ... An artist is only an artist on condition that he neglects no aspect of his dual nature. ... What is art? Prostitution.
You really only have to read "human will" as "rational agents" and "astonishing" with a negative connotation to pretty much capture it. But the real point here is that these critiques are not scientific ones based on technical arguments and empirical testing. They are critiques of "realism" or the "use of math". Realism is subjective -- it depends on the scope and scale of the theory. Math is a tool. The critiques of mathematics in economics play out like critiques of a photographer's use of light, or an artist's use of mixed media. There's something wrong with it in an aesthetic sense, not a technical one (see more here and here).

The terms used in economics critiques take on lives of their own much like how the terms in art criticism take on lives of their own. "Complexity" means something different in economic criticism than it does in science (or even in economic theory). Phrases like "physics envy" and calling for "pluralism" just mean "wrong using math" and "listen to me".

And like many art critics, economics critics don't produce a lot of successful results themselves.



[1] Each of these quotes can be obtained from Twitter at this link.

[2] Ironically, the foremost research institute for complex adaptive systems (Santa Fe) was founded by (and is staffed by) several physicists, so the complaint that economics is too physics-like yet doesn't properly understand complexity is ... an odd juxtaposition.

Bootstrapping measurements

First, this isn't about bootstrapping in stats. It's about something more fundamental to the scientific method.

I've been seeing an idea crop up again ‒ in a recent comment that I can't seem to find right now, on Twitter, and in a blog post from Roger Farmer:
My talk was predicated on the fact that there can be no measurement without theory ...
This is one of those ideas that seems to have morphed from something insightful into something that very serious people say [1].

Yes, in some sense any "measurement" is going to be made inside some paradigm that is going to influence the measurement process (what to measure, how to measure, or whether a measurement is showing a "change"). It's the subject of James Burke's great documentary series The Day the Universe Changed where he explores how conceptual frameworks (theory) influence how the universe is perceived (measurement).

What is forgotten is that there are different degrees of influence. Sure, using an HP filter with GDP data leaves the idea of a recession defined by GDP shocks entirely up to the theorist. Your "theory" of how smooth GDP "should be" determines whether or not you see recessions in the GDP data. Even more theory goes into whether or not you think GDP is above or below potential.
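The theory dependence here is literal: the HP filter's smoothing parameter λ encodes your prior about how smooth GDP "should be". A minimal sketch of the standard penalized least-squares form (the λ values and synthetic series are arbitrary illustrations, not real data):

```python
import numpy as np

def hp_filter(y, lamb):
    """Hodrick-Prescott filter: the trend tau minimizes
    sum (y - tau)^2 + lamb * sum (second differences of tau)^2,
    which is solved by (I + lamb * D'D) tau = y."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)   # (n-2) x n second-difference operator
    trend = np.linalg.solve(np.eye(n) + lamb * D.T @ D, y)
    return trend, y - trend             # trend and "cycle"

t = np.arange(100, dtype=float)

# A perfectly linear log-GDP path passes through untouched (zero cycle) ...
log_gdp = 0.005 * t
_, cycle = hp_filter(log_gdp, 1600.0)   # 1600 is the conventional quarterly value

# ... but for wiggly data, the size of the extracted "cycle" -- your
# candidate recessions -- depends on the lambda you chose.
wiggly = 0.005 * t + 0.01 * np.sin(t / 4.0)
_, cycle_smooth = hp_filter(wiggly, 1600.0)  # stiff trend: big "cycle"
_, cycle_loose = hp_filter(wiggly, 1.0)      # flexible trend: tiny "cycle"
```

Same data, different λ, different "recessions": the measurement is downstream of the theory.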

In contrast, unemployment rates are pretty straightforward measurements that don't involve a lot of macroeconomic theory to interpret. Sure, there are theories involved in turning responses to the surveys (you might use an optical transfer function of a telescope as an analogy for the corrections to survey data) into unemployment rate data and different definitions of "unemployed" (for which data are also available). However, none of those caveats depend strongly on your macroeconomic theory. Whether unemployment is "high" or "low" depends on your theory (i.e. the counterfactual), but the time series is just an (imperfect) empirical measurement of the number of people without jobs who want one.

In macroeconomics (or economics in general) there exists a hierarchy of empirical measurements that depend more or less strongly on your theoretical framework. Here's a heuristic hierarchy starting from least theory dependent to most theory dependent with some examples:

S&P 500 (least)
Unemployment rate
Non-accelerating inflation rate of unemployment (NAIRU)
Natural rate of interest (most)

It is important to note that the S&P 500 measurement itself is not theory dependent. It is the weighted sum of some stock values. The S&P 500 is not "really" some other value in the sense that GDP could "really" be much higher because of stuff that isn't included. Whether or not this measurement is important to the economy is theory dependent, however.

The same kind of thing exists in physics. The QCD scale is theory dependent (and even renormalization scheme dependent). The temperature outside is less theory dependent. The mass of an object is even less theory dependent.

And it's a good thing this hierarchy exists! Because otherwise science would never work. If all observations were strongly theory dependent, you'd never have a set of observations you could use to get started theorizing. Any observation would've come via some other implicit theory, so you couldn't use it to motivate your own theory. (I guess there's the outside chance you just happened to stumble upon the theory that explains everything at once right out of the gate.) You need a set of measurements that enables you to bootstrap into the push and pull of theory and evidence that we call science.

Physics started with falling objects. Evolutionary biology started with counting different creatures. Prices and counting seem like a good start for economics [2]. In any case, the idea that there can be no measurement without theory should be a qualified statement.



[1] I think another case of very serious people discussing economic methodology comes in the form of "complexity".

[2] I mentioned this before here, but counting and motion represent the two big paradigms of mathematics: algebra and calculus. And instead of the diagram from XKCD, I actually see two pyramids with physics at the top of one and economics at the top of the other with mathematics in the intersection. Money and physical reality (geometry, motion) are the two major drivers of mathematics, and the mathematics that is actually developed tends to be constrained by this. In the 1200s Fibonacci introduced algebra and "Arabic" (Indian) numerals to Italy for merchants' accounting and interest computations. Adelard was involved in the reintroduction of geometry and astronomy to Western Europe a bit earlier. You can also see the difference in that algebra was for business, but geometry and motion were more heavily involved with astronomy and time (and hence religion). Consider this a more social view of mathematics as a human institution instead of a philosophical view from Plato's cave.

The unrealistic assumptions of information equilibrium

The title is a bit of a joke as information equilibrium basically assumes humans are pretty ignorant about the details of economic processes ‒ a manifestly realistic assumption in my opinion. Anyway, I was reading Brad DeLong's blog post of contrition; it includes a lot of assumptions that he came to re-think after the financial crisis. This inspired me to write down the assumptions that go into the information equilibrium approach generally, but specifically because of a particular point DeLong makes that I will use below.

Information equilibrium

First, information equilibrium between observable process variables A and B  (e.g. GDP and total employment, supply of toilet paper and demand for toilet paper) is not just assumed. It is first shown to be an empirically accurate description in the past and present, and assumed to hold in the future based on what is essentially Hume's uniformity of nature assumption [1]. The Lucas critique is frequently brought up in this context, essentially asserting the opposite. However, the uniformity of nature assumption is bolstered by the generality of the assumptions underlying an information equilibrium relationship.

These assumptions are:

  1. We are generally ignorant of the micro processes behind the two observable process variables A and B that are in information equilibrium. We only assume that the micro processes fully explore their respective micro process state spaces, are uncorrelated, and represent a large number of selections from those state spaces.
  2. We are completely ignorant of the micro processes behind the transfer system of information from one process variable to the other.

The first assumption says that if rolling 6-sided dice generates the process variable A, then those 6-sided dice land on each side (the state space is fully explored) and there are lots of dice.

The first assumption is also what I mean when I sometimes say I assume humans are so complex, they can be treated as random. Randomly selecting states in a state space is not functionally different from complex agents that fully explore the state space through some unknown algorithm or algorithms that are algorithmically complex.
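The dice analogy can be made concrete with a quick simulation (the number of dice is an arbitrary choice): when many dice fully explore the state space, the empirical distribution over states approaches uniform, i.e. maximum entropy.

```python
import math
import random
from collections import Counter

random.seed(0)

N_DICE = 60000   # "a large number of selections" (arbitrary choice)
rolls = [random.randint(1, 6) for _ in range(N_DICE)]

counts = Counter(rolls)
probs = [counts[face] / N_DICE for face in range(1, 7)]

# Fully exploring the state space means every face appears, and the
# empirical entropy is close to the maximum log(6) for six states.
entropy = -sum(p * math.log(p) for p in probs)
max_entropy = math.log(6)
```

Whether the micro process is literal randomness or complex agents following some unknown algorithm doesn't matter here: anything that fully explores the state space produces the same near-maximum entropy.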

The piece about being uncorrelated will raise some eyebrows for lots of reasons, but it isn't as unrealistic an assumption as it seems. First, because it isn't really assumed -- it just separates information equilibrium from non-ideal information transfer (below). And second, because what really matters is correlation in timing. A large fraction of people in the US go through a generally correlated life cycle: they are born, go to school, get a job, and work for some period of time. Many schools in the US are on a schedule with summers off, which correlates some of our graduation dates (high school in May or June of year x, college in May or June of year x + 4). This is not the important correlation in the assumption above. The state space for the micro process is in a sense inaccessible.

The important (and regime-switching between ideal and non-ideal information transfer) uncorrelated behavior is that you and I don't buy toilet paper or sell a stock at the same time. If we do, that's important -- in the case of a stock, it can trigger a sell-off. Generally in equilibrium there are buyers and sellers of both toilet paper and stocks.

When I say uncorrelated, I mean agents (micro processes) are going about their business without regard to what a majority of other agents are doing. If they correlate, then we're in "information disequilibrium" (discussed below).

Dynamic equilibrium

This is a special case of information equilibrium where the number of micro processes and selections from the state space grows exponentially in the long run, except for a finite number of disequilibrium shocks. It is not so much assumed as empirically tested. The same assumptions as information equilibrium apply; their generality lends weight to the uniformity of nature assumption underlying the approach to forecasts.
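Schematically, dynamic equilibrium says the log of the observable is a straight line of slope α plus a finite number of logistic shocks, so away from a shock the measured growth rate is just α again. A sketch with one shock (all the numbers here are hypothetical, not fits to data):

```python
import numpy as np

t = np.linspace(0.0, 50.0, 501)

alpha = 0.02                  # dynamic-equilibrium growth rate (hypothetical)
a, b, t0 = 0.5, 1.0, 25.0     # shock amplitude, steepness, center (hypothetical)

# log of the process variable: constant-slope line plus one logistic shock
log_y = alpha * t + a / (1.0 + np.exp(-b * (t - t0)))

# During the shock the measured growth rate deviates from alpha ...
slope_mid = (log_y[260] - log_y[240]) / (t[260] - t[240])
# ... but away from it the shock term is exponentially suppressed and
# the growth rate returns to the dynamic equilibrium alpha.
slope_late = (log_y[-1] - log_y[-21]) / (t[-1] - t[-21])
```

This is also why, over a forecast window with no shocks, the model is effectively a single parameter: the exponentially suppressed shock terms contribute nothing measurable.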

Information disequilibrium (non-ideal information transfer)

Regarding that DeLong post, here is his point that inspired me to write down these assumptions:
... the discovery that the rating agencies had failed in their assessment of lower-tail risk to make the standard analytical judgment: that when things get really bad all correlations go to one.
In a sense, the information transfer framework operationalizes this assumption into a "founding principle": when things get "bad", those micro processes correlate and information equilibrium fails. Generally speaking, there should be only a finite number of discrete "bad" periods. The "bad" periods will show up as deviations from information equilibrium [2].

Again, it is not so much an assumption as a usefulness criterion: if things are persistently "bad", then information equilibrium isn't a useful framework.
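The "correlations go to one" failure mode can be illustrated with a dice toy (the herding probability and counts are arbitrary choices): when a large fraction of agents pile into one state, the empirical entropy collapses well below the ideal maximum, which is the signature of non-ideal information transfer.

```python
import math
import random
from collections import Counter

random.seed(1)

def state_entropy(n_agents, herd_prob):
    """Entropy of agents' states when each agent independently picks one of
    six states, except that with probability herd_prob it "panics" into
    state 1 -- a toy version of all correlations going to one."""
    states = [1 if random.random() < herd_prob else random.randint(1, 6)
              for _ in range(n_agents)]
    n = len(states)
    return -sum((c / n) * math.log(c / n) for c in Counter(states).values())

h_equilibrium = state_entropy(60000, 0.0)  # uncorrelated agents: near log(6)
h_crisis = state_entropy(60000, 0.9)       # herding: entropy collapses
```

In the framework's terms, the equilibrium case saturates the information bound while the herding case falls far short of it, which is what a "bad" period looks like.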



[1] I would like to point out that the general uniformity of nature assumption is essentially empty. If nature fails to be uniform in the particular way you thought it was uniform (i.e. your model fails), this does not in any sense disprove that some uniformity no one has thought of exists. That is to say the lack of observation of a particular uniformity does not prove some uniformity does not exist. Therefore assuming uniformity exists cannot be disproved except via an exhaustive search over all possible uniformities.

[2] Non-ideal information transfer lets us say a few things about these "bad" periods ‒ and they will be bounded by the equilibrium solution.