Wednesday, May 31, 2017

Civilian labor force participation and inflation

Carola Binder discussed Brad DeLong's discussion of Charles Evans's statement that the US has "essentially returned to full employment". I'm not going to add much to the discussion of full employment. However, Binder shows us a graph of the civilian labor force (CLF) participation rate for prime-age workers (25-54). It's a pretty pristine example of a dynamic equilibrium subjected to shocks:


Actually, I accidentally smoothed the data too much (with LOESS) when taking the derivative, but it resulted in a happy accident. The dynamic equilibrium model almost perfectly matches the smoothed data:


Another way to put this: the model is an almost perfect smoothing of the data. That's pretty astonishing.
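For readers who want to see what the dynamic equilibrium model involves operationally, here is a minimal sketch of the fit in Python. It assumes the prime-age participation rate has already been pulled from FRED into a pandas Series `clf` indexed by decimal year; the number of shocks and the initial guesses are illustrative assumptions, not the fitted values behind the plots above.

```python
# A minimal sketch of the dynamic equilibrium model: the log of the series is a
# constant slope plus a sum of logistic shocks. Assumes `clf` is a pandas Series
# of the prime-age (25-54) participation rate indexed by decimal year.
import numpy as np
from scipy.optimize import curve_fit

def dyn_eq(t, slope, const, *shocks):
    """log(participation) = slope*t + const + sum of logistic shocks,
    where each shock is an (amplitude, center, width) triple."""
    y = slope * t + const
    for amp, t0, w in zip(shocks[0::3], shocks[1::3], shocks[2::3]):
        y += amp / (1.0 + np.exp(-(t - t0) / w))
    return y

t = clf.index.to_numpy(dtype=float)
y = np.log(clf.to_numpy())

# Illustrative guess: one positive shock (women entering the workforce) and
# one negative shock (the Great Recession)
p0 = [0.0, y[0], 0.2, 1985.0, 5.0, -0.05, 2009.0, 1.0]
params, _ = curve_fit(dyn_eq, t, y, p0=p0, maxfev=10000)
```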

Binder also says:
"It is not totally obvious why prime-age employment-to-population should drive inflation distinctly from unemployment--that is, why Delong's λ should not be zero, as in the standard Phillips Curve."
However, this seems to have the situation entirely backwards. There might well be a second-order effect relating unemployment and inflation:



But the dominant (and indeed, across several countries, principal) component of inflation over the post-war period seems to be driven by demographic factors involving labor force participation (women entering the workforce).


...

Update: Here is Fed Chair Janet Yellen on women in the workforce.

Tuesday, May 30, 2017

Dynamic equilibrium and the bitcoin exchange rate

I saw JP Koning's post on bitcoin today, and the graph of the dollar/bitcoin exchange rate shows the tell-tale signs of a dynamic equilibrium.

After cobbling together a time series from this source of data, the model shows nine major shocks. Eight of the centroids are (the one at the beginning didn't have enough data to estimate): 2012.5, 2013.2, 2013.8, 2014.4, 2015.8, 2016.4, 2017.0, and 2017.4 (the current massive rise). Graphically, this is what it looks like (in linear and log scale):



I wouldn't put too much stock in the exact timing of the collapse of the latest shock; it's fairly uncertain (as we can see from the graphs of the unemployment rate forecast). But if it's correct, then I'll totally take credit. Just kidding.

The interesting thing is that the dynamic equilibrium itself is a fractional decrease of −2.6/y (i.e. bitcoin loses more than half its value over the course of a year). This makes it similar to gold, but on a much faster time scale (gold is −0.027/y). That is to say, you can think of bitcoin as a time lapse picture of gold; what happens to gold over 100 years happens to bitcoin in a year.
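As a quick back-of-the-envelope check of that analogy (a sketch using only the two rates quoted above):

```python
# Compare the bitcoin and gold dynamic equilibrium rates quoted above: half-life
# of each, and the ratio of time scales behind the "time lapse" analogy.
import math

bitcoin_rate = 2.6     # fractional decrease per year
gold_rate = 0.027      # fractional decrease per year

print(math.log(2) / bitcoin_rate)   # half-life ~ 0.27 years
print(math.log(2) / gold_rate)      # half-life ~ 26 years
print(bitcoin_rate / gold_rate)     # time-scale ratio ~ 96, i.e. roughly 100x
```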

Another consequence of that is that bitcoin is just as stable for a transaction that takes place over a day (trading in the 21st century) as gold is for a transaction that takes place over 100 days (trading in the 18th century).

I found that absolutely fascinating. 

What does this mean for the future? Well, as long as positive shocks keep coming that are big enough and/or frequent enough, the value of bitcoin never has to go to zero. Estimating a Poisson process from the current shocks gives a time scale of 1.4 years (the inter-shock period), which may well be sufficient given the size of the shocks already visible in the data (I want to analyze this further).

Checking in on some forecasts of core PCE inflation

The US core PCE inflation numbers for April were released this morning. The major deviation for March ended up being revised a bit (from −1.7% to −1.6% for the continuously compounded annual rate of change) and the April number is more in line with the past data (+1.8%). All of these are basically in line with the dynamic equilibrium model (for NGDP/L):



The old forecast (shown against the FRBNY DSGE model over the past three years) that I promised to keep updating still looks like it is undershooting on average:


These two different models (the former a dynamic equilibrium model, the latter a monetary model) are actually incompatible with each other [1], but in a way that is interesting. The monetary model sees the fall of inflation over the past 30 or so years in the US as part of a long term trend towards zero. The dynamic equilibrium model sees that same fall as the receding demographic shock of the 1970s and 80s, recently returning to the "normal" inflation rate of about 1.7%.

In the monetary model, the ansatz for the changing information transfer index is seen as only an approximation in terms of the partition function approach, which itself (given its definition in terms of well-defined growth rates) is more compatible with the dynamic equilibrium model.

In fact, the poor performance of the monetary model roughly since 2016 was in the back of my mind when I wrote my post from yesterday where I made the bold claim that "money is unimportant".

As is typical for macro models, it hasn't been rejected yet at any respectable p-value. But it probably isn't useful. Per my back and forth with Narayana Kocherlakota, it is close to being out-performed by a constant model (although it only has three parameters, so according to various information criteria it still out-performs the FRBNY DSGE model, which has at least 10 parameters, despite the latter's lower RMS error).
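To make the parameter-counting point concrete, here is a sketch with purely illustrative numbers; the RMS errors, sample size, and parameter counts below are assumptions for the example, not the actual forecast errors.

```python
# Gaussian AIC (up to an additive constant): 2k + n*ln(RMS^2). A model with
# more parameters has to earn them with a substantially better fit.
import numpy as np

def gaussian_aic(rms_error, n_obs, n_params):
    return 2 * n_params + n_obs * np.log(rms_error ** 2)

n = 36  # three years of monthly observations (illustrative)

aic_small = gaussian_aic(rms_error=0.30, n_obs=n, n_params=3)    # ~ -81
aic_dsge = gaussian_aic(rms_error=0.28, n_obs=n, n_params=10)    # ~ -72

# Lower AIC is better: the 3-parameter model wins even though its RMS error
# is slightly larger.
print(aic_small, aic_dsge)
```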

Unless core PCE inflation takes a dive over the next several months, I'll probably have to add a sad face to the forecast archive.

...

Footnotes:

[1] They can be made compatible, but in light of this post I may want to give up on this inflation model.

Monday, May 29, 2017

Success?

A few days ago, I had a back and forth with Narayana Kocherlakota on Twitter where he called economic forecasting of inflation a "success":


AIC refers to the Akaike information criterion (standing in here for information criteria generally), a measure based on the maximum likelihood that penalizes models with more parameters. EPOP refers to the employment-population ratio Kocherlakota discusses in his blog post (and for which there is a pretty good dynamic equilibrium model available, by the way). In my last tweet I also used π to refer to inflation, as is common in DSGE models.

The data Kocherlakota compares to the Fed inflation forecast is much better described by a constant 1.6% inflation. Not only would that model have a better AIC (since the Fed forecasts undoubtedly contain more than one parameter), but a forecast of 1.6% wouldn't have the negative bias of the Fed forecast. This is basically saying that an AR process would outperform the Fed model (as we've seen before).

My main point was that on its own, the inflation forecast is not evidence of success because it is beaten by a constant. In physics, if a model were beaten by a constant, that model would be rejected and the entire approach would likely be abandoned unless some extremely compelling reasons to keep going were found. But Kocherlakota's post was part of a short series wherein forecasts of other variables from the same framework were completely wrong. If the IT framework got GDP and EPOP completely wrong and then underperformed a constant for inflation, that would be more than sufficient evidence to reject it. One's Bayesian prior probability for the entire framework should be lowered because it gets two variables wrong and doesn't do as well as a constant. This is to say a scientist would abandon any pretense of knowledge of EPOP and GDP (and the framework that produced the forecasts) and resort to a constant model of inflation. Kocherlakota not only doesn't reject the framework but calls the inflation forecast a success!

The next line was Kocherlakota saying "This the point - [inflation] evolves almost completely separately." which I thought was a good enough stopping point because I am not sure Kocherlakota understood what I was saying. And on its own, that is an interesting point coming from a former Fed official ‒ that inflation seems independent of other macroeconomic variables. However, my point was that despite the fact that Kocherlakota was staring at evidence that he should probably be starting over from scratch, he wasn't seeing it.

Saturday, May 27, 2017

Money is unimportant



I have a novel theory for why all the discussions of "money" in macroeconomics don't seem to go anywhere. Aside from cases of really high inflation (the only cases with any empirical support of money having a macroeconomic effect), money doesn't matter. It doesn't matter what it is. It doesn't matter what it does. It doesn't matter if it's base money or MZM. It doesn't matter how it's created. It doesn't matter how it's destroyed.

It simply doesn't matter.

Money is a proxy for our human behaviors in the economic sphere. It's like the iron filings conforming to the magnetic fields, or the smoke in a wind tunnel test. It's not doing anything; we're doing things.

Let me back this up with a few aspects of the information transfer model.

First, it is basically a mathematical identity to insert money that mediates transactions into an information equilibrium (definition) condition. If you have A ⇄ B, then A ⇄ M ⇄ B is just a chain rule and a use of M/M = 1 away.
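To sketch that identity explicitly (in the notation of the information equilibrium condition, where A ⇄ B means dA/dB = k A/B):

$$\frac{dA}{dB} = k\,\frac{A}{B} \quad\Rightarrow\quad \frac{dA}{dM}\,\frac{dM}{dB} = k\,\frac{A}{M}\,\frac{M}{B}$$

which is satisfied by the pair of information equilibrium conditions A ⇄ M and M ⇄ B:

$$\frac{dA}{dM} = k_1\,\frac{A}{M}, \qquad \frac{dM}{dB} = k_2\,\frac{M}{B}, \qquad k_1 k_2 = k$$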

Second, most recessions and other shocks involve non-ideal information transfer (definition), which is caused by correlation of agents in state space (if agents were uncorrelated and fully exploring the state space, you'd have ideal information transfer). Money wouldn't correlate in state space without agents (in fact, if we had just mindless sources and sinks of money, macroeconomics would just be thermodynamics). In cases of non-ideal information transfer, money is just an indicator dye along for the ride, carried about by the non-equilibrium dynamics of human behavior.

And finally, what about those high inflation cases? In those cases we have empirical evidence that money is tied to inflation, so how can you say it doesn't matter? Well if we think of money M as a factor of production (along with labor L and other factors) we have 

(1) log P ~ ⟨α − 1⟩ log L + ... + ⟨β − 1⟩ log M

where P is the price level. If money grows at a rate μ, and labor at a rate λ, then we have

(2) π ~ ⟨α − 1⟩ λ + ... + ⟨β − 1⟩ μ

If μ is large and π is large [1], we can approximate the equation with just

(3) π ~ ⟨β − 1⟩ μ

which is basically the quantity theory of money. That's a relatively trivial role for money, however. And empirically, a trivial relationship is what we see for high inflation (over 10%). 

For most modern economies, inflation dynamics are more likely demographic (see here or here) or due to other shocks (e.g. oil). Basically, that means the other terms in Eq. (2) are more important than the money term.
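As a toy illustration of when Eq. (2) reduces to Eq. (3) (the index values and growth rates below are made-up numbers, not estimates):

```python
# Toy evaluation of Eq. (2): pi ~ (alpha - 1)*lambda + (beta - 1)*mu.
# The factor values and growth rates are illustrative assumptions only.
def inflation(lam, mu, alpha=1.5, beta=1.8):
    return (alpha - 1) * lam + (beta - 1) * mu

# Moderate-inflation economy: the money and labor terms are comparable,
# so you can't drop the other factors.
print(inflation(lam=0.01, mu=0.05))   # ~ 0.045

# High-inflation economy: money growth dominates and Eq. (3) is a good
# approximation (the quantity theory limit).
print(inflation(lam=0.01, mu=0.50))   # ~ 0.905
```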

Overall, we have a series of trivial (math identity, quantity theory) or non-causal ("iron filings") relationships between money and macroeconomics. In most of the policy-relevant scenarios (recessions, modern moderate inflation economies), money doesn't really matter [2].

...

Update 29 May 2017

This is mostly for commenter Shocker below, but I think this post has been generally misinterpreted to mean we don't need money at all. My thesis is that we don't need money to explain modern moderate inflation economies or to implement economic policy. Only in trivial scenarios (e.g. high inflation, or no money at all) does it have an impact. Graphically:


This is to say that for policy relevant scenarios in modern moderate inflation economies, for random macroeconomic variable R and money supply M:

∂R/∂M ≈ 0


...

[1] If π is small, then we must have a large cancellation.

Thursday, May 25, 2017

Scale invariance and wealth distributions

In a conversation with Steve Roth, I recalled a paper I'd read a long time ago about wealth distribution:
We introduce a simple model of economy, where the time evolution is described by an equation capturing both exchange between individuals and random speculative trading, in such a way that the fundamental symmetry of the economy under an arbitrary change of monetary units is insured.
That's how econophysicists Bouchaud and Mézard open their abstract. Their approach is a good example of an effective field theory approach (write down the simplest equation that obeys the symmetries of the system). But interestingly, the symmetry they chose is exactly the same scale invariance that leads to the information equilibrium condition (see here or here). I hadn't paid much attention to this line before, but now it has more significance for me. The scale invariance is also related to money: money is anything that helps the scale invariance hold.


The equation Bouchaud and Mézard write down simply couples (creates a nexus between) the wealth of each agent and a field that exhibits Brownian motion with drift (i.e. a stock market). It also couples the wealth of agents to each other (i.e. exchange):
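In case the image of the equation doesn't come through, this is the form of the Bouchaud–Mézard evolution equation as I recall it from the paper (the indexing convention is mine):

$$\frac{dW_i}{dt} = \eta_i(t)\,W_i + \sum_{j \neq i} J_{ij} W_j - \sum_{j \neq i} J_{ji} W_i$$

where η_i(t) is the stochastic term (the Brownian motion with drift, i.e. the speculative/stock market piece) and the J's are the exchange couplings between agents.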



As you can see, taking W → α W leaves this equation unchanged.

This scale invariance is probably what allows their model to generate wealth distributions with Pareto (power law) tails.

Wednesday, May 24, 2017

More on Hayek and information theory


My piece at Evonomics was largely well-received in the econoblogosphere. The exception should be obvious: fans of Hayek. Actually, my editor and I discussed the likely backlash before publication.

The most common complaint was the knee-jerk response fans of Hayek seem to have: "you haven't read the vast literature of Hayek". It was pretty strange to me because I've actually read a bunch of Hayek's writing. Only a limited part of it is relevant to the microeconomics in my Evonomics piece.

Hayek wrote about (among other things, so do not consider this list exhaustive):

  1. The price mechanism (e.g. The Use of Knowledge in Society)
  2. Intertemporal equilibrium (e.g. Economics and Knowledge)
  3. Business cycles (includes his arguments with Keynes)
  4. The central planning calculation problem (his expansions on Mises, including The Use of Knowledge in Society)
  5. The political effects of central planning (e.g. The Road to Serfdom)

As no one really understands business cycles (the identity and cause of recessions represent an open question in macroeconomics), any contribution to item 3 is only meaningful if it represents a useful description of empirical data. Hayek is a bit on the wordy side, and doesn't really engage with data.

Item 4 was generally true in Hayek's time and will probably remain true for hundreds of years, as Cosma Shalizi shows in his excellent book review of Red Plenty:
There are many, many things to be said against the market system, but it is a mechanism for providing feedback from users to producers, and for propagating that feedback through the whole economy, without anyone having to explicitly track that information. This is a point which both Hayek, and Lange (before the war) got very much right. The feedback needn’t be just or even mainly through prices; quantities (especially inventories) can sometimes work just as well. But what sells and what doesn’t is the essential feedback. 
It’s worth mentioning that this is a point which Trotsky got right.
However, while Hayek might have intuitively understood the issue, you can't demonstrate it in any convincing way without understanding computational complexity (as Shalizi also shows). Just asserting the calculation problem is too hard to solve isn't quite the same as showing that the linear programming problem would take a massive amount of computational resources. And truthfully, the linear programming problem is actually solvable, so some future society could eventually implement it (meaning Mises and Hayek were correct, but only for a period of time).

The point Shalizi also makes is that because the problem is too complex to solve without a heroic dose of computational resources, you can't actually know if the market's "heuristic" solution is optimal. It's just "a" solution.


Item 5 is a political treatise, and, empirically speaking, a largely false one, as e.g. the United Kingdom hasn't devolved into totalitarianism in the intervening decades despite running a mixed economy. I recently had a discussion about this with some colleagues who were fans of Hayek and who backtracked to the position that Hayek was only talking about something that "could happen". However, even that is incorrect, as Hayek said that "tyranny ... inevitably results from government control of economic decision-making" (emphasis mine).


This leaves items 1 and 2.

Coincidentally, David Glasner posted a pretty good rundown of item 2 earlier this week. I also discuss the concept of intertemporal equilibrium (and its potential failure) using information equilibrium in many contexts on this blog (including framing the problem as information transfer from the future to the present, statistical equilibrium, and dynamic equilibrium). The information equilibrium approach is fully consistent with Hayek in the sense that, as Glasner put it:
[Hayek believed] he had in fact grasped the general outlines of a solution when in fact he had only perceived some aspects of the solution and offering seriously inappropriate policy recommendations based on that seriously incomplete understanding.
In that sense, information equilibrium can be seen as offering a potential framework for addressing the intertemporal equilibrium problem Hayek identified. But I didn't discuss this in the article.

While none of items 2-5 are really discussed in the article (item 2 is alluded to with a comment about future and past distributions, and I actually agree with Hayek on item 4 but only allude to it via the link to Shalizi's blog post above), many of the Hayek fans brought them up in comments at Evonomics or on Twitter, saying that I misunderstood them or failed to talk about them.

I'll freely admit I failed to talk about them (and it's hard to claim someone misunderstands things they don't talk about). My Evonomics article concentrated on item 1: the price mechanism. I tried to explain what Hayek got wrong about it, what he got right, and how we might understand it in terms of information theory. Information theory naturally leads to serious arguments against assuming the efficacy of the market mechanism, so that where Hayek is enthralled with how well it works, we should instead be surprised -- and on the look-out for some non-market mechanism propping it up.

I actually don't claim Hayek got many things wrong. The title is "Hayek Meets Information Theory. And Fails." Generally titles for pieces are created by the editors, and this case was no different. However, I did approve it so I am at least partially responsible for it. And given the arguments in the article, this title is not far off the mark: Hayek's description of the price mechanism as a communication system is not consistent with information theory.

The only claims I make about Hayek in the article are:
1. Friedrich Hayek did have some insight into prices having something to do with information, but he got the details wrong and vastly understated the complexity of the system.
[Several readers took this to mean that I said Hayek said markets weren't complex. If you read that carefully, you'll notice that I only said Hayek understated the complexity.]
2. Hayek thought a large amount of knowledge about biological or ecological systems, population, and social systems could be communicated by a single number: a price.
[This is the statement behind the title that I go into more detail about below.]
3. Ideas that were posited as articles of faith or created through incomplete arguments by Hayek are not even close to the whole story, and leave you with no knowledge of the ways the price mechanism, marginalism, or supply and demand can go wrong.
[No one seems to be arguing that Hayek had a complete understanding of the price mechanism. However I will discuss the part about how markets "go wrong" in more detail below.]
4. But [Hayek] didn’t have the conceptual or mathematical tools of information theory to understand the mechanisms of that relationship
[This isn't even debatable. Hayek never used information theory to understand the price mechanism.]
There is also another thread about how I am supposedly claiming to be designing a machine learning algorithm that will work better than markets. However, this is just a reading comprehension failure as I claim the exact opposite:
The thing is that with the wrong settings, [machine learning] algorithms fail and you get garbage. I know this from experience in my regular job researching ... algorithms. Therefore depending on the input data (especially data resulting from human behavior), we shouldn’t expect to get good results all of the time. These failures are exactly the failure of information to flow from the real data to the generator through the detector – the failure of information from the demand to reach the supply via the price mechanism.
I was actually making an analogy that the failure of machine learning algorithms might be similar to the failure of markets. I do claim "The understanding of prices and supply and demand provided by information theory and machine learning algorithms is better equipped to explain markets", but again that doesn't mean machine learning is better than markets, only that it is a potential model of markets.


Now on to the more substantive complaints above ...


One of the main things Hayek got wrong was his "metaphor" (that he says is "more than a metaphor") of price as a communication system, from "The Use of Knowledge in Society" (1945):
We must look at the price system as such a mechanism for communicating information if we want to understand its real function—a function which, of course, it fulfills less perfectly as prices grow more rigid. (Even when quoted prices have become quite rigid, however, the forces which would operate through changes in price still operate to a considerable extent through changes in the other terms of the contract.) The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on and passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement.
The main point in my Evonomics article is that information is not passed through prices, and that markets are not transmitting information like a telecommunications system. His words are fairly straightforward. Hayek makes these claims about the price mechanism (emphasis mine in the quote above) despite the fact that they are inconsistent with information theory.

A more subtle and interesting point raised by Hayek fans was that Hayek never claimed the system was perfect or free from error or failures (that markets never "go wrong"). Again, from "The Use of Knowledge in Society":
Of course, these [price] adjustments are probably never "perfect" in the sense in which the economist conceives of them in his equilibrium analysis. But I fear that our theoretical habits of approaching the problem with the assumption of more or less perfect knowledge on the part of almost everyone has made us somewhat blind to the true function of the price mechanism and led us to apply rather misleading standards in judging its efficiency. The marvel is that in a case like that of a scarcity of one raw material, without an order being issued, without more than perhaps a handful of people knowing the cause, tens of thousands of people whose identity could not be ascertained by months of investigation, are made to use the material or its products more sparingly; i.e., they move in the right direction. This is enough of a marvel even if, in a constantly changing world, not all will hit it off so perfectly that their profit rates will always be maintained at the same constant or "normal" level.
I was well aware that Hayek did say the price mechanism could fail (famously in the case of government interference such as taxes or subsidies). However, my claim was that Hayek doesn't tell you "the ways the price mechanism ... can go wrong" -- not that he doesn't tell you "that the price mechanism ... can go wrong". In my description of non-ideal information transfer, I show mathematically that market failures lead to lower prices. That's a way markets fail. Although I didn't go into it in the article, correlations among agents are one way to get non-ideal information transfer (essentially a failure of the maximum entropy assumptions). Markets can also fail if you don't have enough transactions. Where Hayek says airplanes can crash, I claim Hayek doesn't tell us how airplanes crash -- but information theory does.

Strictly speaking, this is not entirely true. Hayek does claim that price controls and other government interventions will cause the price mechanism to fail. However, the failure mode I talk about in my article does not require government intervention, and the implication when I say that "the price mechanism, marginalism, or supply and demand can go wrong" is that we are talking about the possibility of failure even in the case of free markets. Hayek also says that the market is self-correcting (the emphasized text in the previous quote), but this is only true in the case of nearly ideal information transfer.
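For reference, here is a sketch of that failure mode in the notation of the information equilibrium condition (price p detecting demand A with supply B); the inequality form is how non-ideal information transfer shows up:

$$p \equiv \frac{dA}{dB} \leq k\,\frac{A}{B}$$

so when information transfer is non-ideal the observed price falls below the ideal (information equilibrium) price, and the self-correction Hayek describes only works to the extent that the transfer is nearly ideal.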

*  *  *

As I don't make that many claims about Hayek, the corpus of material required to understand those claims is actually relatively small. It's also not hard to understand what Hayek was saying in general about the price mechanism: prices are a way to get information about a drought in one region to markets in another. But while his intuition was useful, you have to be consistent with information theory, which leads to a better understanding of the possible failure modes.

I wasn't talking about the "economic calculation problem" (about central planning), the business cycle, or any of the politics in e.g. The Road to Serfdom so references to those topics aren't germane to the discussion of Hayek and the price mechanism. Therefore a lot of the criticism of my Evonomics article misses the point.


PS For those interested, I have a more detailed argument about how markets can fail to aggregate information (they represent a heuristic algorithm solution to the allocation problem, but not the information aggregation problem). 


PPS I have an animation I started to put together about this subject several years ago that was never completed:


The animation first describes Hayek's information aggregation picture (where the all-knowing market spits out a price after aggregating all the information). The second part shows the information equilibrium picture where the price is just "listening in" (using a particularly 2013-relevant metaphor).









Friday, May 19, 2017

Principal component analysis of state unemployment rates

One of my hobbies on this blog is to apply various principal component analyses (PCA) to economic data. For example, here's some jobs data by industry (more here). I am not saying this is original research (many economics papers have used PCA, but a quick Googling did not turn up this particular version).

Anyway, this is based on seasonally adjusted FRED data (e.g. here for WA) and I put the code up in the dynamic equilibrium repository. Here is all of the data along with the US unemployment rate (gray):


It's a basic Karhunen–Loève decomposition (Mathematica function here). Blue is the first principal component, and the rest of the components aren't as relevant. To a pretty good approximation, the business cycle in employment is a national phenomenon:


There's an overall normalization factor based on the fact that we have 50 states. We can see the first (blue) and second (yellow) components alongside the national unemployment rate (gray, right scale): 


Basically, the first principal component is the national business cycle. The second component is interesting: it captures differences between states in the two big recessions of the past 40 years (the 1980s and the Great Recession), which enter with opposite signs. The best description of this component is that some states did much worse in the 1980s and some states did a bit better in the 2000s (see the first graph of this post).
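For anyone who wants to reproduce the decomposition without Mathematica, here is a minimal sketch in Python. It assumes the 50 state unemployment rate series have already been pulled from FRED into a DataFrame `states` (one column per state, monthly index); the normalization convention is mine.

```python
# Karhunen-Loeve / PCA decomposition of the state unemployment rate panel via
# SVD. Assumes `states` is a (months x 50) DataFrame of unemployment rates.
import numpy as np
import pandas as pd

def principal_components(states: pd.DataFrame, n: int = 2) -> pd.DataFrame:
    X = states.dropna().to_numpy()
    X = X - X.mean(axis=0)                    # remove each state's mean level
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    comps = U[:, :n] * S[:n]                  # component time series
    comps /= np.sqrt(X.shape[1])              # normalize by sqrt(# of states)
    return pd.DataFrame(comps, index=states.dropna().index,
                        columns=[f"PC{i+1}" for i in range(n)])

# The first column (PC1) should track the national unemployment rate; PC2
# separates the 1980s recessions from the Great Recession.
```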

As happened before, the principal component is pretty well modeled by the dynamic equilibrium model (just like the national data):


The transitions (recession centers) are at 1981.0, 1991.0, 2001.7, 2008.8 and a positive shock at 2014.2. These are consistent with the national data transitions (1981.1, 1991.1, 2001.7, 2008.8 and 2014.4).

Wednesday, May 17, 2017

My article at Evonomics

I have an article up at Evonomics about the basics of information equilibrium looking at it from the perspective of Hayek's price mechanism and the potential for market failure. Consider this post a forum for discussion or critiques. I earlier put up a post with further reading and some slides linked here.

I also made up a couple of diagrams that I didn't end up using illustrating price changes:




Tuesday, May 16, 2017

Explore more about information equilibrium

Originally formulated by physicists Peter Fielitz and Guenter Borchardt for natural complex systems, information equilibrium [arXiv:physics.gen-ph] is a potentially useful framework for understanding many economic phenomena. Here are some additional resources:


A tour of information equilibrium
Slide presentation (51 slides)


Dynamic equilibrium and information equilibrium
Slide presentation (19 slides)


Maximum entropy and information theory approaches to economics
Slide presentation (27 slides)


Information equilibrium as an economic principle
Pre-print/working paper (44 pages)
[arXiv:q-fin.EC], [SSRN], [EconPapers:RePEc]

...

Update 27 July 2017


Macro ensembles and information equilibrium
Simple macroeconomics: AD-AS, IS-LM
Slide presentation (29 slides)
[download pdf], [slide images], ["Twitter talk"]

...

Update 28 December 2017


Forecasting macroeconomic observables using (dynamic) information equilibrium
Slide presentation (36 slides)
[download pdf], [slide images], ["Twitter talk"]

...

Update 01 January 2018


Maximum entropy and information theory approaches to economics
Pre-print/working paper (21 pages)
[SSRN]

...

Update 2 April 2018


Trends in macroeconomic observables and (dynamic) information equilibrium
Slide presentation (22 slides)
[download pdf], ["Twitter talk"], [page link]

...

Update 6 October 2018


Information equilibrium and economics: Applications of maximum entropy information theory
Presented at the "Outside the Box" workshop
at the UW Economics department 5 October 2018
Slide presentation (67 slides)

For a general audience:



A random physicist takes on economics
2017 e-Book and Paperback (133 pages)
[book website]




A Workers' History of the United States 1948-2020
2019 e-Book and Paperback (138 pages)
[book website]



Hayek Meets Information Theory. And Fails.
Evonomics article (3500 words)
[evonomics.com]


Supply and demand and GANs
Slide presentation/twitter talk (9 "slides")
["Twitter talk"]


Information equilibrium
Slide presentation/twitter talk (16 "slides")
["Twitter talk"], [slide images]

Saturday, May 13, 2017

Theory and evidence in science versus economics


Noah Smith has a fine post on theory and evidence in economics so I suggest you read it. It is very true that there should be a combined approach:
In other words, econ seems too focused on "theory vs. evidence" instead of using the two in conjunction. And when they do get used in conjunction, it's often in a tacked-on, pro-forma sort of way, without a real meaningful interplay between the two. ... I see very few economists explicitly calling for the kind of "combined approach" to modeling that exists in other sciences - i.e., using evidence to continuously restrict the set of usable models.

This does assume the same definition of theory in economics and science, though. However, there is a massive difference between "theory" in economics and "theory" in the sciences.

"Theory" in science

In science, "theory" generally speaking is the amalgamation of successful descriptions of empirical regularities in nature concisely packaged into a set of general principles that is sometimes called a framework. Theory for biology tends to stem from the theory of evolution which was empirically successful at explaining a large amount of the variation in species that had been documented by many people for decades. There is also the cell model. In geology you have plate tectonics that captures a lot of empirical evidence about earthquakes and volcanoes. Plate tectonics explains some of the fossil record as well (South America and Africa have some of the same fossils up to a point at which point they diverge because the continents split apart). In medicine, you have the germ theory of disease.

The quantum field theory framework is the most numerically precise amalgamation of empirical successes known to exist. But physics has been working with this kind of theory since the 1600s when Newton first came up with a concise set of principles that captured nearly all of the astronomical data about planets that had been recorded up to that point (along with Galileo's work on projectile motion).

But it is important to understand that the general usage of the word "theory" in the sciences is just shorthand for being consistent with past empirical successes. That's why string theory can be theory: it appears to be consistent with general relativity and quantum field theory and therefore can function as a kind of shorthand for the empirical successes of those theories ... at least in certain limits. This is not to say your new theoretical model will automatically be correct, but at least it doesn't obviously contradict Einstein's E = mc² or Newton's F = ma in the respective limits.

Theoretical biology (say, determining the effect of a change in habitat on a species) or theoretical geology (say, computing how the Earth's magnetic field changes) is similarly based on the empirical successes of biology and geology. These theories are then used to understand data and evidence and can be rejected if evidence contradicting them arises.

As an aside, experimental sciences (physics) have an advantage over observational ones (astronomy) in that the former can conduct experiments in order to extract the empirical regularities used to build theoretical frameworks. But even in experimental sciences, experiments might be harder to do in some fields than others. Everyone seems to consider physics the epitome of science, but in reality the only reason physics probably had a leg up in developing the first real scientific framework is that the necessary experiments required to observe the empirical regularities are incredibly easy to set up: a pendulum, some rocks, and some rolling balls and you're pretty much ready to experimentally confirm everything necessary to posit Newton's laws. In order to confirm the theory of evolution, you needed to collect species from around the world, breed some pigeons, and look at fossil evidence. That's a bit more of a chore than rolling a ball down a ramp.

"Theory" in economics

Theory in economics primarily appears to be solving utility maximization problems, but unlike science there does not appear to be any empirical regularity that is motivating that framework. Instead there are a couple of stylized facts that can be represented with the framework: marginalism and demand curves. However these stylized facts can also be represented with ... supply and demand curves. The question becomes what empirical regularity is described by utility maximization problems but not by supply and demand curves. Even the empirical work of Vernon Smith and John List can be described by supply and demand curves (in fact, at the link they can also be described by information equilibrium relationships).

Now there is nothing wrong with using utility maximization as a proposed framework. That is to say there's nothing wrong with positing any bit of mathematics as a potential framework for understanding and organizing empirical data. I've done as much with information equilibrium.

However the utility maximization "theory" in economics is not the same as "theory" in science. It isn't a shorthand for a bunch of empirical regularities that have been successfully described. It's just a proposed framework; it's mathematical philosophy.

The method of nascent science

This isn't necessarily bad, but it does mean that the interplay between theory and evidence reinforcing or refuting each other isn't the iterative process we need to be thinking about. I think a good analogy is an iterative algorithm: it produces a result that changes some parameters or the initial guess, which is then fed back into the same algorithm. This can converge to a final result if you start off close to it; in other words, it requires your initial guess to be good. That is the case for science: the current state of knowledge is probably decent enough that the iterative process of theory and evidence will converge. You can think of this as the scientific method ... for established science.
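To make the analogy concrete, here is a sketch of a generic fixed-point iteration (the function and starting guesses are made up): it converges when the initial guess is close to a solution and wanders off when it isn't.

```python
# Illustrative only: an iteration x -> f(x) with an attracting fixed point near
# sqrt(2). A nearby starting guess converges; a distant one diverges.
def iterate(f, x0, steps=50):
    x = x0
    for _ in range(steps):
        x = f(x)
        if abs(x) > 1e6:       # diverged; bail out before overflow
            return float("inf")
    return x

f = lambda x: x - 0.1 * (x**3 - 2 * x)   # fixed points at 0 and +/- sqrt(2)

print(iterate(f, 1.3))   # starts near sqrt(2) ~ 1.414 and converges to it
print(iterate(f, 6.0))   # starts far away and diverges (returns inf)
```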

For economics, it does not appear that the utility maximization framework is close enough to the "true theory" of economics for the method of established science to converge. What's needed is the scientific method that was used back when science first got its start. In a post from about a year ago, I called this the method of nascent science. That method is based on usefulness as the metric, rather than the model rejection of established science. Here's a quote from that post:
Awhile ago, Noah Smith brought up the issue in economics that there are millions of theories and no way to reject them scientifically. And that's true! But I'm fairly sure we can reject most of them for being useless.


"Useless" is a much less rigorous and much broader category than "rejected". It also isn't necessarily a property of a single model on its own. If two independently useful models are completely different but are both consistent with the empirical data, then both models are useless. Because both models exist, they are useless. If one didn't [exist], the other would be useful.
Noah Smith (in the post linked at the beginning of this post) put forward three scenarios of theory and evidence in economics:
1. Some papers make structural models, observe that these models can fit (or sort-of fit) a couple of stylized facts, and call it a day. Economists who like these theories (based on intuition, plausibility, or the fact that their dissertation adviser made the model) then use them for policy predictions forever after, without ever checking them rigorously against empirical evidence. 
2. Other papers do purely empirical work, using simple linear models. Economists then use these linear models to make policy predictions ("Minimum wages don't have significant disemployment effects"). 
3. A third group of papers do empirical work, observe the results, and then make one structural model per paper to "explain" the empirical result they just found. These models are generally never used or seen again.
Using these categories, we can immediately say 1 & 3 are useless. If a model is never checked rigorously against data, or if a model is never seen again, it can't possibly be useful.

In this case, the theories represent at best mathematical philosophy (as I mentioned at the end of the previous section). It's not really theory in the (established) scientific sense.

But!

Mathematical Principles of Natural Philosophy

Sometimes a little bit of mathematical philosophy will have legs. Isaac Newton's work, when it was proposed, was mathematical philosophy. It says so right in the title. So there's nothing wrong with the proliferation of "theory" (by which we mean mathematical philosophy) in economics. But it shouldn't be treated as "theory" in the same sense as in science. Most of it will turn out to be useless, which is fine if you don't take it seriously in the first place. And using economic "theory" for policy would be like using Descartes to build a mag-lev train ...



...

Update 15 May 2017: Nascent versus "soft" science

I made a couple of grammatical corrections and added a "does" and a "though" to the sentence after the first Noah Smith quote in my post above.

But I did also want to add the point that by "established science" vs "nascent science" I don't mean the same thing as many people mean when they say "hard science" vs "soft science". So-called "soft" sciences can be established or nascent. I think of economics as a nascent science (economies and many of the questions about them barely existed until modern nation states came into being). I also think that some portions will eventually become a "hard" science (e.g. questions about the dynamics of the unemployment rate), while other pieces might remain a "soft" science and eventually be absorbed by sociology (e.g. questions about what makes a group of people panic or behave as they do in a financial crisis).

I wrote up a post that goes into this in more detail about a year ago. However, the main idea is that economics might be explicable -- even as a hard science -- in cases where the law of large numbers kicks in and agents are not highly correlated (where economics becomes more about the state space itself than the actions of agents in that state space ... Lee Smolin called this "statistical economics" in an analogy with statistical mechanics).

I think, for example, that psychology is an established soft science. Its theoretical underpinnings are in medicine and neuroscience. That's what makes the replication crisis in psychology a pretty big problem for the field. In economics, it's actually less of a problem (the real problem is not the replication issue, but that we should all be taking econ studies less seriously than we take psychology studies).

Exobiology or exogeology could be considered nascent hard sciences. Another nascent hard science might be so-called "data science": we don't quite know how to deal with the huge amounts of data that are only recently available to us and the traditional ways we treat data in science may not be optimal.

Monday, May 8, 2017

Government spending and receipts: a dynamic equilibrium?

I was messing around with FRED data and noticed that the ratio of government expenditures to government receipts seems to show a dynamic equilibrium that matches up with the unemployment rate. Note this is government spending and income at all levels (federal + state + local). So I ran it through the model [1] and sure enough it works out:


Basically, the ratio of expenditures to receipts goes up during a recession (i.e. deficits increase at a faster rate) and down in the dynamic equilibrium outside of recessions (i.e. deficits increase at a slower rate or even fall). The dates of the shocks to this dynamic equilibrium match pretty closely with the dates for the shocks to unemployment (arrows).

This isn't saying anything ground-breaking: recessions lower receipts and increase use of social services (so expenditures over receipts will go up). It is interesting, however, that the (relative) rate of improvement towards budget balance is fairly constant from the 1960s to the present ... independent of major fiscal policy changes. You might think that all the disparate changes in state and local spending are washing out the big federal spending changes, but in fact the federal component is the larger component, so it dominates the graph above. In fact, the data looks almost the same with just the federal component (see result below). So we can strengthen the conclusion: the (relative) rate of improvement towards federal budget balance is fairly constant from the 1960s to the present ... independent of major federal fiscal policy changes.
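As an aside, here is a sketch of where that constant (relative) rate comes from, assuming the information equilibrium relationship in footnote [1] with information transfer index k and roughly exponential growth of receipts between shocks:

$$\frac{d\,GE}{d\,GR} = k\,\frac{GE}{GR} \;\Rightarrow\; GE \sim GR^{\,k} \;\Rightarrow\; \frac{d}{dt}\log\frac{GE}{GR} \approx (k-1)\,\gamma$$

where γ is the growth rate of receipts, so the log of the ratio changes at an approximately constant rate outside of the shocks.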


...

Footnotes:

[1] The underlying information equilibrium model is GE ⇄ GR (expenditures are in information equilibrium with receipts, except during shocks).

Friday, May 5, 2017

Dynamic equilibrium in employment-population ratio in OECD countries

John Handley asked on Twitter whether the dynamic equilibrium model works for the employment-population ratio in countries besides the US. So I re-ran the model on some of the shorter OECD time series available on FRED (most of them were short, and I could easily automate the procedure for time series of approximately the same length).

As with the US, some countries seem to be undergoing a "demographic transition" with women entering the workforce, so most of the data sets are for men only (I just realized that I actually have both for Greece). These are all for 15-64 year olds, in cases where there was data for at least 2000-2017. Some of the series only go back to 2004 or 2005, which is really too short to be conclusive. I also left off the longer time series (they'll come in an update later), since it was easiest to automate the model for time series of approximately the same length.

Anyway, the men-only country list is: Denmark, Estonia, Greece, Ireland, Iceland, Italy, New Zealand, Portugal, Slovenia, Turkey, and South Africa. Both men and women are included for: France, Greece (again), Poland, and Sweden. I searched FRED manually, so these are just the countries that came up.

Here are the results (some have 1 shock, some have 2):


What is interesting is that while the global financial crisis often seems to be conflated with the Greek debt crisis, the Greek debt crisis appears to have hit much later (centered at 2011.2). For example, the recession in Iceland is centered at 2008.7 (about 2.5 years earlier, closer to the recession center for the US).

...

Update:

Here are the results for Australia, Canada, and Japan which have longer time series: