Wednesday, May 31, 2017

Civilian labor force participation and inflation

Carola Binder discussed Brad DeLong's discussion of Charles Evans' statement that the US has "essentially returned to full employment". I'm not going to add much to the discussion of full employment. However, Binder shows us a graph of the civilian labor force (CLF) participation rate for prime age workers (25-54). It's a pretty pristine example of a dynamic equilibrium subjected to shocks:


Actually, I accidentally over-smoothed the data (with LOESS) when I set out to look at the derivative, but it turned out to be a happy accident. The dynamic equilibrium model almost perfectly matches the smoothed data:


Another way to put this: the model is an almost perfect smoothing of the data. That's pretty astonishing.

Binder also says:
"It is not totally obvious why prime-age employment-to-population should drive inflation distinctly from unemployment--that is, why Delong's λ should not be zero, as in the standard Phillips Curve."
However, this seems to have the situation entirely backwards. There may well be a second-order effect relating unemployment and inflation:



But the dominant (and indeed, across several countries, principal) component of inflation over the post-war period seems to be driven by demographic factors involving labor force participation (women entering the workforce).


...

Update: Here is Fed Chair Janet Yellen on women in the workforce.

Tuesday, May 30, 2017

Dynamic equilibrium and the bitcoin exchange rate

I saw JP Koning's post on bitcoin today, and the graph of the dollar/bitcoin exchange rate shows the tell-tale signs of a dynamic equilibrium.

After cobbling together a time series from this source of data, the model shows that there are nine major shocks. Eight of the centroids are (the one at the beginning didn't have enough data to estimate): 2012.5, 2013.2, 2013.8, 2014.4, 2015.8, 2016.4, 2017.0, and 2017.4 (the current massive rise). Graphically, this is what it looks like (in linear and log scale):



I wouldn't put too much stock in the exact timing of the collapse of the latest shock; it's fairly uncertain (as we can see from the graphs of the unemployment rate forecast). But if it's correct, then I'll totally take credit. Just kidding.

The interesting thing is that the dynamic equilibrium itself is a fractional decrease of −2.6/y (i.e. bitcoin loses more than half its value over the course of a year). This makes it similar to gold, but on a much faster time scale (gold is −0.027/y). That is to say, you can think of bitcoin as a time lapse picture of gold; what happens to gold over 100 years happens to bitcoin in a year.
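For concreteness, here is a minimal sketch of the functional form being fit (this is not the actual fitting code, and the shock parameters below are made up for illustration): the log of the exchange rate is a straight line with slope equal to the dynamic equilibrium, plus a sum of logistic-shaped shocks.

```python
import numpy as np

def log_dynamic_equilibrium(t, alpha, shocks, const=0.0, t_ref=2012.0):
    """Log exchange rate: a line with slope `alpha` (the dynamic equilibrium,
    roughly -2.6/year for bitcoin) plus a sum of logistic 'shocks'.
    Each shock is a tuple (amplitude a, center t0, width w)."""
    out = const + alpha * (t - t_ref)
    for a, t0, w in shocks:
        out += a / (1.0 + np.exp(-(t - t0) / w))
    return out

# Illustrative (made-up) values: two positive shocks fighting the -2.6/y decay
t = np.linspace(2012.0, 2017.5, 500)
shocks = [(3.0, 2013.2, 0.1), (4.0, 2017.0, 0.2)]
log_price = log_dynamic_equilibrium(t, alpha=-2.6, shocks=shocks, const=5.0)
price = np.exp(log_price)  # dollars per bitcoin
```

The actual estimates come from fitting the shock amplitudes, centers, and widths (along with the slope) to the data.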

Another consequence of that rate is that bitcoin is just as stable for a transaction that takes place over a day (trading in the 21st century) as gold is for a transaction that takes place over 100 days (trading in the 18th century).

I found that absolutely fascinating. 

What does this mean for the future? Well, as long as positive shocks keep coming that are big enough and/or frequent enough, the value of bitcoin never has to go to zero. Estimating using a Poisson process with the current shocks, we have a time scale of 1.4 years (inter-shock period) which may well be sufficient given the size of the shocks already visible in the data (I want to analyze this further). 
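As a back-of-the-envelope check on that last statement: for a Poisson process with a mean inter-shock period of 1.4 years, the probability of going a time τ without a new positive shock is

$$
P(\text{no new shock in } \tau) = e^{-\tau/(1.4\,\text{y})}
$$

so there's roughly a 50% chance of at least one new shock in any given year (and roughly 75% over two years). Whether that is frequent enough depends on the typical shock size, which is the part I want to analyze further.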

Checking in on some forecasts of core PCE inflation

The US core PCE inflation numbers for April were released this morning. The major deviation for March ended up being revised a bit (from −1.7% to −1.6% for the continuously compounded annual rate of change) and the April number is more in line with the past data (+1.8%). All of these are basically in line with the dynamic equilibrium model:



The older forecast (the one I promised to keep updating against the FRBNY DSGE model over the past three years) still looks like it is undershooting on average:


These two different models (the former dynamic equilibrium and the latter monetary model) are actually incompatible with each other [1] but in a way that is interesting. The monetary model sees the fall of inflation over the past 30 or so years in the US as part of a long term trend towards zero. The dynamic equilibrium model sees that same fall as the receding demographic shock of the 1970s and 80s, recently returning to the "normal" inflation rate of about 1.7%.

In the monetary model, the ansatz for the changing information transfer index is only an approximation to the partition function approach, which itself (given its definition in terms of well-defined growth rates) is more compatible with the dynamic equilibrium model.

In fact, the poor performance of the monetary model roughly since 2016 was in the back of my mind when I wrote my post from yesterday where I made the bold claim that "money is unimportant".

As is typical for macro models, it hasn't been rejected yet at any respectable p-value. But it probably isn't useful. Per my back and forth with Narayana Kocherlakota, it is close to being out-performed by a constant model (although the monetary model only has three parameters, so according to various information criteria it still out-performs the FRBNY DSGE model, which has at least 10 parameters, despite the DSGE model having a lower RMS error).

Unless core PCE inflation takes a dive over the next several months, I'll probably have to add a sad face to the forecast archive.

...

Footnotes:

[1] They can be made to be compatible, but in the light of this post I may want to give up on this inflation model. 

Monday, May 29, 2017

Success?

A few days ago, I had a back and forth with Narayana Kocherlakota on Twitter where he called economic forecasting of inflation a "success":


AIC refers to the Akaike information criterion (as a proxy for various information criteria), a maximum likelihood metric that takes into account the number of parameters in the model (penalizing a model with more parameters). EPOP refers to the employment-population ratio Kocherlakota refers to in his blog post (and for which there is a pretty good dynamic equilibrium model available, by the way). I also used π to refer to inflation in my last tweet, as is common in DSGE models.

The data Kocherlakota compares to the Fed inflation forecast is much better described by a constant 1.6% inflation. Not only would that model have a better AIC (since the Fed forecasts undoubtedly involve more than one parameter), but a constant 1.6% forecast wouldn't have the negative bias of the Fed forecast. This is basically saying that an AR process would outperform the Fed model (as we've seen before).
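To illustrate the AIC argument (a sketch with made-up numbers and a Gaussian-error assumption, not the actual Fed forecast data):

```python
import numpy as np

def aic(residuals, n_params):
    """AIC for a least-squares fit with Gaussian errors: 2k + n*log(RSS/n)."""
    residuals = np.asarray(residuals)
    n = len(residuals)
    rss = np.sum(residuals ** 2)
    return 2 * n_params + n * np.log(rss / n)

# Made-up core PCE inflation data (%) and a hypothetical forecast biased high
inflation = np.array([1.5, 1.7, 1.6, 1.4, 1.6, 1.7, 1.5, 1.6])
forecast  = np.array([1.9, 1.9, 1.8, 1.8, 1.7, 1.7, 1.6, 1.6])

aic_constant = aic(inflation - 1.6, n_params=1)       # constant 1.6% model
aic_forecast = aic(inflation - forecast, n_params=5)  # assume ~5 parameters

print(aic_constant, aic_forecast)  # lower AIC wins; here the constant wins
```

A model with more parameters has to fit the data substantially better in order to overcome the 2k penalty.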

My main point was that on its own, the inflation forecast is not evidence of success because it is beaten by a constant. In physics, if a model were beaten by a constant, that model would be rejected, and it's likely the entire approach would be abandoned unless some extremely compelling reasons to keep going were found. But Kocherlakota's post was part of a short series wherein forecasts of other variables from the same framework were completely wrong. If a framework got GDP and EPOP completely wrong and then underperformed a constant for inflation, that would be more than sufficient evidence to reject it. One's Bayesian prior for the entire framework should be given a lower probability because it gets two variables wrong and doesn't do as well as a constant. This is to say a scientist would abandon any pretense of knowledge of EPOP and GDP (and the framework that produced the forecasts) and resort to a constant model of inflation. Kocherlakota not only doesn't reject the framework but calls the inflation forecast a success!

The next line was Kocherlakota saying "This the point - [inflation] evolves almost completely separately." which I thought was a good enough stopping point because I am not sure Kocherlakota understood what I was saying. And on its own, that is an interesting point coming from a former Fed official ‒ that inflation seems independent of other macroeconomic variables. However, my point was that despite the fact that Kocherlakota was staring at evidence that he should probably be starting over from scratch, he wasn't seeing it.

Saturday, May 27, 2017

Money is unimportant



I have a novel theory for why all the discussions of "money" in macroeconomics don't seem to go anywhere. Aside from cases of really high inflation (the only cases with any empirical support of money having a macroeconomic effect), money doesn't matter. It doesn't matter what it is. It doesn't matter what it does. It doesn't matter if it's base money or MZM. It doesn't matter how it's created. It doesn't matter how it's destroyed.

It simply doesn't matter.

Money is a proxy for our human behaviors in the economic sphere. It's like the iron filings conforming to the magnetic fields, or the smoke in a wind tunnel test. It's not doing anything; we're doing things.

Let me back this up with a few aspects of the information transfer model.

First, it is basically a mathematical identity to insert money that mediates transactions into an information equilibrium (definition) condition. If you have A ⇄ B, then A ⇄ M ⇄ B is just a chain rule and a use of M/M = 1 away.
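Spelled out, using the information equilibrium condition dA/dB = k A/B for A ⇄ B:

$$
\frac{dA}{dB} = \frac{dA}{dM}\,\frac{dM}{dB} \qquad \text{and} \qquad k\,\frac{A}{B} = k\,\frac{A}{M}\,\frac{M}{B}
$$

so the single condition splits into $dA/dM = k_{1}\, A/M$ and $dM/dB = k_{2}\, M/B$ with $k_{1} k_{2} = k$, and $M$ drops out of the combined relationship.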

Second, most recessions and other shocks involve non-ideal information transfer (definition). It is caused by correlation of agents in state space (if agents were uncorrelated and fully exploring the state space, you'd have ideal information transfer). Money wouldn't correlate in state space without agents (in fact, if we had just mindless sources and sinks of money, macroeconomics would just be thermodynamics). Money in cases of non-ideal information transfer is just an indicator dye along for the ride reeling about with the non-equilibrium dynamics of human behavior.

And finally, what about those high inflation cases? In those cases we have empirical evidence that money is tied to inflation, so how can you say it doesn't matter? Well if we think of money M as a factor of production (along with labor L and other factors) we have 

(1) log P ~ ⟨α − 1⟩ log L + ... + ⟨β − 1⟩ log M

where P is the price level. If money grows at a rate μ and labor at a rate λ, then taking the time derivative of (1) gives

(2) π ~ ⟨α − 1⟩ λ + ... + ⟨β − 1⟩ μ

If μ is large and π is large [1], we can approximate the equation with just 

(3) π ~ ⟨β − 1⟩ μ

which is basically the quantity theory of money. That's a relatively trivial role for money, however. And empirically, a trivial relationship is what we see for high inflation (over 10%). 

For most modern economies, inflation dynamics are more likely demographic (see here or here) or due to other shocks (e.g. oil). Basically, that means the other terms in Eq. (2) are more important than the money term.

Overall, we have a series of trivial (math identity, quantity theory) or non-causal ("iron filings") relationships between money and macroeconomics. In most of the policy-relevant scenarios (recessions, modern moderate inflation economies), money doesn't really matter [2].

...

Update 29 May 2017

This is mostly for commenter Shocker below, but I think this post has been generally misinterpreted to mean we don't need money at all. My thesis is that we don't need money to explain modern moderate inflation economies or to implement economic policy. Only in trivial scenarios (e.g. high inflation, or no money at all) does it have an impact. Graphically:


This is to say that for policy relevant scenarios in modern moderate inflation economies, for random macroeconomic variable R and money supply M:

∂R/∂M ≈ 0


...

[1] If π is small, then we must have a large cancellation.

Thursday, May 25, 2017

Scale invariance and wealth distributions

In a conversation with Steve Roth, I recalled a paper I'd read a long time ago about wealth distribution:
We introduce a simple model of economy, where the time evolution is described by an equation capturing both exchange between individuals and random speculative trading, in such a way that the fundamental symmetry of the economy under an arbitrary change of monetary units is insured.
That's how econophysicists Bouchaud and Mezard open their abstract. Their model is a good example of an effective field theory approach (write down the simplest equation that obeys the symmetries of the system). But interestingly, the symmetry they chose is exactly the same scale invariance that leads to the information equilibrium condition (see here or here). I hadn't paid much attention to this line before, but now it has more significance for me. The scale invariance is also related to money: money is anything that helps the scale invariance hold.


The equation Bouchaud and Mezard write down simply couples (creates a nexus between) wealth of each agent and some field that exhibits Brownian motion with drift (i.e. a stock market). It also couples the wealth of agents to each other (i.e. exchange):
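I don't have the paper open as I write this, so take the following as a sketch of the form of their equation rather than a quote of it:

$$
\frac{dW_{i}}{dt} = \eta_{i}(t)\, W_{i} + \sum_{j \neq i} J_{ij} W_{j} - \sum_{j \neq i} J_{ji} W_{i}
$$

where $\eta_{i}(t)$ is the random (multiplicative) return and the $J_{ij}$ terms represent exchange between agents.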



As you can see, taking W → α W leaves this equation unchanged.

This scale invariance is probably what allows their model to generate wealth distributions with Pareto (power law) tails.

Wednesday, May 24, 2017

More on Hayek and information theory


My piece at Evonomics was largely well-received in the econoblogosphere. The exception should be obvious: fans of Hayek. Actually, my editor and I discussed the likely backlash before publication.

The most common complaint was a knee-jerk response fans of Hayek seem to have: "you haven't read the vast literature of Hayek". It was pretty strange to me because I've actually read a bunch of Hayek's writing. Only a limited part of it is relevant to the microeconomics in my Evonomics piece.

Hayek wrote about (among other things, so do not consider this list exhaustive):

  1. The price mechanism (e.g. The Use of Knowledge in Society)
  2. Intertemporal equilibrium (e.g. Economics and Knowledge)
  3. Business cycles (includes his arguments with Keynes)
  4. The central planning calculation problem (his expansions on Mises, including The Use of Knowledge in Society)
  5. The political effects of central planning (e.g. The Road to Serfdom)

As no one really understands business cycles (the identity and cause of recessions represent an open question in macroeconomics), any contribution to item 3 is only meaningful if it represents a useful description of empirical data. Hayek is a bit on the wordy side, and doesn't really engage with data.

Item 4 was generally true in Hayek's time and probably will be for hundreds of years into the future, as Cosma Shalizi shows in his excellent book review of Red Plenty:
There are many, many things to be said against the market system, but it is a mechanism for providing feedback from users to producers, and for propagating that feedback through the whole economy, without anyone having to explicitly track that information. This is a point which both Hayek, and Lange (before the war) got very much right. The feedback needn’t be just or even mainly through prices; quantities (especially inventories) can sometimes work just as well. But what sells and what doesn’t is the essential feedback. 
It’s worth mentioning that this is a point which Trotsky got right.
However, while Hayek might have intuitively understood the issue, you can't demonstrate it in any convincing way without understanding computational complexity (as Shalizi also shows). Just asserting the calculation problem is too hard to solve isn't quite the same as showing that the linear programming problem would take a massive amount of computational resources. And truthfully, the linear programming problem is actually solvable, so some future society could eventually implement it (meaning Mises and Hayek were correct, but only for a period of time).

The point Shalizi also makes is that because the problem is too complex to solve without a heroic dose of computational resources, you can't actually know if the market's "heuristic" solution is optimal. It's just "a" solution.


Item 5 is a political treatise, and, empirically speaking, a largely false one as e.g. the United Kingdom hasn't devolved into totalitarianism in the intervening decades despite running a mixed economy. I recently had a discussion about this with some colleagues that were fans of Hayek who backtracked to the position that Hayek was only talking about something that "could happen". However even that is incorrect as Hayek said that "tyranny ... inevitably results from government control of economic decision-making" (emphasis mine).


This leaves items 1 and 2.

Coincidentally, David Glasner posted a pretty good rundown of item 2 earlier this week. I also discuss the concept of intertemporal equilibrium (and its potential failure) using information equilibrium in many contexts on this blog (including framing the problem as information transfer from the future to the present, statistical equilibrium, and dynamic equilibrium). The information equilibrium approach is fully consistent with Hayek in the sense that, as Glasner put it:
[Hayek believed] he had in fact grasped the general outlines of a solution when in fact he had only perceived some aspects of the solution and offering seriously inappropriate policy recommendations based on that seriously incomplete understanding.
In that sense, information equilibrium can be seen as offering a potential framework for addressing the intertemporal equilibrium problem Hayek identified. But I didn't discuss this in the article.

While none of items 2-5 are really discussed in the article (item 2 is alluded to with a comment about future and past distributions, and I actually agree with Hayek on item 4 but only allude to it with a link to Shalizi's blog post linked above), many of the Hayek fans brought them up in comments at Evonomics or on Twitter saying that I misunderstood them or failed to talk about them. 

I'll freely admit I failed to talk about them (and it's hard to say if someone misunderstands things they don't talk about). My Evonomics article concentrated on item 1: the price mechanism. I tried to explain what Hayek got wrong about it, what he got right, and how we might understand it in terms of information theory. Information theory naturally leads to serious arguments against assuming the efficacy of the market mechanism, so that where Hayek is enthralled with how well it works, we should instead be surprised, and on the look-out for some non-market mechanism propping it up.

I actually don't claim Hayek got many things wrong. The title is "Hayek Meets Information Theory. And Fails." Generally titles for pieces are created by the editors, and this case was no different. However, I did approve it so I am at least partially responsible for it. And given the arguments in the article, this title is not far off the mark: Hayek's description of the price mechanism as a communication system is not consistent with information theory.

The only claims I make about Hayek in the article are:
1. Friedrich Hayek did have some insight into prices having something to do with information, but he got the details wrong and vastly understated the complexity of the system.
[Several readers took this to mean that I said Hayek said markets weren't complex. If you read that carefully, you'll notice that I only said Hayek understated the complexity.]
2. Hayek thought a large amount of knowledge about biological or ecological systems, population, and social systems could be communicated by a single number: a price.
[This is the statement behind the title that I go into more detail about below.]
3. Ideas that were posited as articles of faith or created through incomplete arguments by Hayek are not even close to the whole story, and leave you with no knowledge of the ways the price mechanism, marginalism, or supply and demand can go wrong.
[No one seems to be arguing that Hayek had a complete understanding of the price mechanism. However I will discuss the part about how markets "go wrong" in more detail below.]
4. But [Hayek] didn’t have the conceptual or mathematical tools of information theory to understand the mechanisms of that relationship
[This isn't even debatable. Hayek never used information theory to understand the price mechanism.]
There is also another thread about how I am supposedly claiming to be designing a machine learning algorithm that will work better than markets. However, this is just a reading comprehension failure as I claim the exact opposite:
The thing is that with the wrong settings, [machine learning] algorithms fail and you get garbage. I know this from experience in my regular job researching ... algorithms. Therefore depending on the input data (especially data resulting from human behavior), we shouldn’t expect to get good results all of the time. These failures are exactly the failure of information to flow from the real data to the generator through the detector – the failure of information from the demand to reach the supply via the price mechanism.
I was actually making an analogy that the failure of machine learning algorithms might be similar to the failure of markets. I do claim "The understanding of prices and supply and demand provided by information theory and machine learning algorithms is better equipped to explain markets", but again that doesn't mean machine learning is better than markets but rather a potential model of markets.


Now on to the more substantive complaints above ...


One of the main things Hayek got wrong was his "metaphor" (that he says is "more than a metaphor") of price as a communication system, from "The Use of Knowledge in Society" (1945):
We must look at the price system as such a mechanism for communicating information if we want to understand its real function—a function which, of course, it fulfills less perfectly as prices grow more rigid. (Even when quoted prices have become quite rigid, however, the forces which would operate through changes in price still operate to a considerable extent through changes in the other terms of the contract.) The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on and passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement.
The main point in my Evonomics article is that information is not passed through prices, and the markets are not transmitting information as a telecommunications system. His words are fairly straightforward. Hayek makes these claims about the price mechanism (emphasis mine in the quote above) despite the fact that they are inconsistent with information theory.

A more subtle and interesting point raised by Hayek fans was that Hayek never claimed the system was perfect or free from error or failures (that markets never "go wrong"). Again, from "The Use of Knowledge in Society":
Of course, these [price] adjustments are probably never "perfect" in the sense in which the economist conceives of them in his equilibrium analysis. But I fear that our theoretical habits of approaching the problem with the assumption of more or less perfect knowledge on the part of almost everyone has made us somewhat blind to the true function of the price mechanism and led us to apply rather misleading standards in judging its efficiency. The marvel is that in a case like that of a scarcity of one raw material, without an order being issued, without more than perhaps a handful of people knowing the cause, tens of thousands of people whose identity could not be ascertained by months of investigation, are made to use the material or its products more sparingly; i.e., they move in the right direction. This is enough of a marvel even if, in a constantly changing world, not all will hit it off so perfectly that their profit rates will always be maintained at the same constant or "normal" level.
I was well aware that Hayek did say the price mechanism could fail (famously in the case of government interference such as taxes or subsidies). However my claim was that Hayek doesn't tell you "the ways the price mechanism ... can go wrong" -- not that he doesn't tell you "that the price mechanism ... can go wrong". In my description of non-ideal information transfer, I show mathematically that market failures lead to lower prices. That's a way markets fail. Although I didn't go into it in the article, correlations among agents are one way to get non-ideal information transfer (essentially a failure of the maximum entropy assumptions). Markets also can fail if you don't have enough transactions. Where Hayek says airplanes can crash, I claim Hayek doesn't tell us how airplanes crash but information theory does.

Strictly speaking, this is not entirely true. Hayek does claim that price controls and other government interventions will cause the price mechanism to fail. However, the failure mode I talk about in my article does not require government intervention, and the implication when I say that "the price mechanism, marginalism, or supply and demand can go wrong" is that we are talking about the possibility of failure even in the case of free markets. Hayek also says that the market is self-correcting (emphasis in the previous quote), but this is only true in the case of nearly ideal information transfer.
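Schematically, the non-ideal result I'm referring to is a bound (with $D$ demand, $S$ supply, and $k$ the information transfer index):

$$
p \equiv \frac{dD}{dS} \leq k\,\frac{D}{S}
$$

with equality only in the ideal case, so the observed price falls below the ideal (information equilibrium) price when information transfer fails.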

*  *  *

As I don't make that many claims about Hayek, the corpus of material required to understand those claims about Hayek is actually relatively small. It's also not hard to understand what Hayek was saying in general about the price mechanism: prices are a way to get information about a drought in one region to markets in another. But while his intuition was useful, you have to be consistent with information theory which leads to a better understanding of the possible failure modes.

I wasn't talking about the "economic calculation problem" (about central planning), the business cycle, or any of the politics in e.g. The Road to Serfdom so references to those topics aren't germane to the discussion of Hayek and the price mechanism. Therefore a lot of the criticism of my Evonomics article misses the point.


PS For those interested, I have a more detailed argument about how markets can fail to aggregate information (they represent a heuristic algorithm solution to the allocation problem, but not the information aggregation problem). 


PPS I have an animation I started to put together about this subject several years ago that was never completed:


The animation first describes Hayek's information aggregation function (where the all-knowing market spits out a price after aggregating all the information). The second part shows the information equilibrium picture where the price is just "listening in" (using a particularly 2013-relevant metaphor).









Friday, May 19, 2017

Principal component analysis of state unemployment rates

One of my hobbies on this blog is to apply various principal component analyses (PCA) to economic data. For example, here's some jobs data by industry (more here). I am not claiming this is original research (many economics papers have used PCA), though a quick Googling did not turn up this particular application.

Anyway, this is based on seasonally adjusted FRED data (e.g. here for WA) and I put the code up in the dynamic equilibrium repository. Here is all of the data along with the US unemployment rate (gray):


It's a basic Karhunen–Loève decomposition (Mathematica function here). Blue is the first principal component, and the rest of the components aren't as relevant. To a pretty good approximation, the business cycle in employment is a national phenomenon:


There's an overall normalization factor based on the fact that we have 50 states. We can see the first (blue) and second (yellow) components alongside the national unemployment rate (gray, right scale): 


Basically, the principal component is the national business cycle. The second component is interesting as it suggests differences among states based on the two big recessions of the past 40 years (the 1980s and the Great Recession) that go in opposite directions. The best description of this component is that some states did much worse in the 1980s while some did a bit better in the 2000s (see the first graph of this post).
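For anyone who wants to reproduce the gist of this without Mathematica, here's a rough Python sketch (the file name and layout of the input CSV are hypothetical; it assumes you've already pulled the seasonally adjusted state unemployment rate series from FRED):

```python
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical CSV: rows are months, one column per state's unemployment rate
df = pd.read_csv("state_unemployment_rates.csv", index_col=0, parse_dates=True)

pca = PCA(n_components=3)              # PCA centers the data internally
components = pca.fit_transform(df.values)

print(pca.explained_variance_ratio_)   # the first component should dominate

# The first principal component ~ the national business cycle (up to sign/scale)
first = pd.Series(components[:, 0], index=df.index)
```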

As happened before, the principal component is pretty well modeled by the dynamic equilibrium model (just like the national data):


The transitions (recession centers) are at 1981.0, 1991.0, 2001.7, 2008.8 and a positive shock at 2014.2. These are consistent with the national data transitions (1981.1, 1991.1, 2001.7, 2008.8 and 2014.4).

Wednesday, May 17, 2017

My article at Evonomics

I have an article up at Evonomics about the basics of information equilibrium looking at it from the perspective of Hayek's price mechanism and the potential for market failure. Consider this post a forum for discussion or critiques. I earlier put up a post with further reading and some slides linked here.

I also made up a couple of diagrams that I didn't end up using illustrating price changes:




Tuesday, May 16, 2017

Explore more about information equilibrium

Originally formulated by physicists Peter Fielitz and Guenter Borchardt for natural complex systems, information equilibrium [arXiv:physics.gen-ph] is a potentially useful framework for understanding many economic phenomena. Here are some additional resources:


A tour of information equilibrium
Slide presentation (51 slides)


Dynamic equilibrium and information equilibrium
Slide presentation (19 slides)


Maximum entropy and information theory approaches to economics
Slide presentation (27 slides)


Information equilibrium as an economic principle
Pre-print/working paper (44 pages)

Saturday, May 13, 2017

Theory and evidence in science versus economics


Noah Smith has a fine post on theory and evidence in economics so I suggest you read it. It is very true that there should be a combined approach:
In other words, econ seems too focused on "theory vs. evidence" instead of using the two in conjunction. And when they do get used in conjunction, it's often in a tacked-on, pro-forma sort of way, without a real meaningful interplay between the two. ... I see very few economists explicitly calling for the kind of "combined approach" to modeling that exists in other sciences - i.e., using evidence to continuously restrict the set of usable models.

This does assume the same definition of "theory" in economics and science, though. However, there is a massive difference between "theory" in economics and "theory" in the sciences. 

"Theory" in science

In science, "theory" generally speaking is the amalgamation of successful descriptions of empirical regularities in nature concisely packaged into a set of general principles that is sometimes called a framework. Theory for biology tends to stem from the theory of evolution which was empirically successful at explaining a large amount of the variation in species that had been documented by many people for decades. There is also the cell model. In geology you have plate tectonics that captures a lot of empirical evidence about earthquakes and volcanoes. Plate tectonics explains some of the fossil record as well (South America and Africa have some of the same fossils up to a point at which point they diverge because the continents split apart). In medicine, you have the germ theory of disease.

The quantum field theory framework is the most numerically precise amalgamation of empirical successes known to exist. But physics has been working with this kind of theory since the 1600s when Newton first came up with a concise set of principles that captured nearly all of the astronomical data about planets that had been recorded up to that point (along with Galileo's work on projectile motion).

But it is important to understand that the general usage of the word "theory" in the sciences is just shorthand for being consistent with past empirical successes. That's why string theory can be theory: it appears to be consistent with general relativity and quantum field theory and therefore can function as a kind of shorthand for the empirical successes of those theories ... at least in certain limits. This is not to say your new theoretical model will automatically be correct, but at least it doesn't obviously contradict Einstein's E = mc² or Newton's F = ma in the respective limits.

Theoretical biology (say, determining the effect of a change in habitat on a species) or theoretical geology (say, computing how the Earth's magnetic field changes) is similarly based on the empirical successes of biology and geology. These theories are then used to understand data and evidence and can be rejected if evidence contradicting them arises.

As an aside, experimental sciences (physics) have an advantage over observational ones (astronomy) in that the former can conduct experiments in order to extract the empirical regularities used to build theoretical frameworks. But even in experimental sciences, experiments might be harder to do in some fields than others. Everyone seems to consider physics the epitome of science, but in reality the only reason physics probably had a leg up in developing the first real scientific framework is that the necessary experiments required to observe the empirical regularities are incredibly easy to set up: a pendulum, some rocks, and some rolling balls and you're pretty much ready to experimentally confirm everything necessary to posit Newton's laws. In order to confirm the theory of evolution, you needed to collect species from around the world, breed some pigeons, and look at fossil evidence. That's a bit more of a chore than rolling a ball down a ramp.

"Theory" in economics

Theory in economics primarily appears to be solving utility maximization problems, but unlike science there does not appear to be any empirical regularity that is motivating that framework. Instead there are a couple of stylized facts that can be represented with the framework: marginalism and demand curves. However these stylized facts can also be represented with ... supply and demand curves. The question becomes what empirical regularity is described by utility maximization problems but not by supply and demand curves. Even the empirical work of Vernon Smith and John List can be described by supply and demand curves (in fact, at the link they can also be described by information equilibrium relationships).

Now there is nothing wrong with using utility maximization as a proposed framework. That is to say there's nothing wrong with positing any bit of mathematics as a potential framework for understanding and organizing empirical data. I've done as much with information equilibrium.

However the utility maximization "theory" in economics is not the same as "theory" in science. It isn't a shorthand for a bunch of empirical regularities that have been successfully described. It's just a proposed framework; it's mathematical philosophy.

The method of nascent science

This isn't necessarily bad, but it does mean that the interplay between theory and evidence reinforcing or refuting each other isn't the iterative process we need to be thinking about. I think a good analogy is an iterative algorithm. This algorithm produces a result that causes it to change some parameters or initial guess that is fed back into the same algorithm. This can converge to a final result if you start off close to it, but it requires your initial guess to be good. This is the case of science: the current state of knowledge is probably decent enough that the iterative process of theory and evidence will converge. You can think of this as the scientific method ... for established science.
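As a toy illustration of what I mean by an iterative algorithm that needs a good initial guess (this has nothing to do with any particular economic model), here's Newton's method converging from a nearby starting point and cycling forever from a bad one:

```python
def newton(f, df, x0, steps=20):
    """Bare-bones Newton iteration: feed each result back in as the next guess."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

f  = lambda x: x**3 - 2*x + 2
df = lambda x: 3*x**2 - 2

print(newton(f, df, -2.0))  # good guess: converges to the real root near -1.77
print(newton(f, df, 0.0))   # bad guess: bounces between 0 and 1, never converges
```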

For economics, it does not appear that the utility maximization framework is close enough to the "true theory" of economics for the method of established science to converge. What's needed is the scientific method that was used back when science first got its start. In a post from about a year ago, I called this the method of nascent science. That method was based around the different metric of usefulness rather than model rejection in established science. Here's a quote from that post:
Awhile ago, Noah Smith brought up the issue in economics that there are millions of theories and no way to reject them scientifically. And that's true! But I'm fairly sure we can reject most of them for being useless.


"Useless" is a much less rigorous and much broader category than "rejected". It also isn't necessarily a property of a single model on its own. If two independently useful models are completely different but are both consistent with the empirical data, then both models are useless. Because both models exist, they are useless. If one didn't [exist], the other would be useful.
Noah Smith (in the post linked at the beginning of this post) put forward three scenarios of theory and evidence in economics:
1. Some papers make structural models, observe that these models can fit (or sort-of fit) a couple of stylized facts, and call it a day. Economists who like these theories (based on intuition, plausibility, or the fact that their dissertation adviser made the model) then use them for policy predictions forever after, without ever checking them rigorously against empirical evidence. 
2. Other papers do purely empirical work, using simple linear models. Economists then use these linear models to make policy predictions ("Minimum wages don't have significant disemployment effects"). 
3. A third group of papers do empirical work, observe the results, and then make one structural model per paper to "explain" the empirical result they just found. These models are generally never used or seen again.
Using these categories, we can immediately say 1 & 3 are useless. If a model never checked rigorously against data or if a model is never seen again, they can't possibly be useful.

In this case, the theories represent at best mathematical philosophy (as I mentioned at the end of the previous section). It's not really theory in the (established) scientific sense.

But!

Mathematical Principles of Natural Philosophy

Sometimes a little bit of mathematical philosophy will have legs. Isaac Newton's work, when it was proposed, was mathematical philosophy. It says so right in the title. So there's nothing wrong with the proliferation of "theory" (by which we mean mathematical philosophy) in economics. But it shouldn't be treated as "theory" in the same sense as science. Most of it will turn out to be useless, which is fine if you don't take it seriously in the first place. And using economic "theory" for policy would be like using Descartes to build a mag-lev train ...



...

Update 15 May 2017: Nascent versus "soft" science

I made a couple of grammatical corrections and added a "does" and a "though" to the sentence after the first Noah Smith quote in my post above.

But I did also want to add the point that by "established science" vs "nascent science" I don't mean the same thing as many people mean when they say "hard science" vs "soft science". So-called "soft" sciences can be established or nascent. I think of economics as a nascent science (economies and many of the questions about them barely existed until modern nation states came into being). I also think that some portions will eventually become a "hard" science (e.g. questions about the dynamics of the unemployment rate), while others might become a "soft" science with the soft science pieces being consumed by sociology (e.g. questions about what makes a group of people panic or behave as they do in a financial crisis).

I wrote up a post that goes into that in more detail about a year ago. However, the main idea is that economics might be explicable -- as a hard science even -- in cases where the law of large numbers kicks in and agents do not highly correlate (where economics becomes more about the state space itself than the actions of agents in that state space ... Lee Smolin called this "statistical economics" in an analogy with statistical mechanics). 

I think for example psychology is an established soft science. Its theoretical underpinnings are in medicine and neuroscience. That's what makes the replication crisis in psychology a pretty big problem for the field. In economics, it's actually less of a problem (the real problem is not the replication issue, but that we should all be taking the econ studies less seriously than we take psychology studies).

Exobiology or exogeology could be considered nascent hard sciences. Another nascent hard science might be so-called "data science": we don't quite know how to deal with the huge amounts of data that are only recently available to us and the traditional ways we treat data in science may not be optimal.

Monday, May 8, 2017

Government spending and receipts: a dynamic equilibrium?

I was messing around with FRED data and noticed that the ratio of government expenditures to government receipts seems to show a dynamic equilibrium that matches up with the unemployment rate. Note this is government spending and income at all levels (federal + state + local). So I ran it through the model [1] and sure enough it works out:


Basically, the ratio of expenditures to receipts goes up during a recession (i.e. deficits increase at a faster rate) and down in the dynamic equilibrium outside of recessions (i.e. deficits increase at a slower rate or even fall). The dates of the shocks to this dynamic equilibrium match pretty closely with the dates for the shocks to unemployment (arrows).

This isn't saying anything ground-breaking: recessions lower receipts and increase use of social services (so expenditures over receipts will go up). It is interesting, however, that the (relative) rate of improvement towards budget balance is fairly constant from the 1960s to the present date ... independent of major fiscal policy changes. You might think that all the disparate changes in state and local spending are washing out the big federal spending changes, but in fact the federal component is the larger one, so it dominates the graph above. In fact, the data look almost the same with just the federal component (see result below). So we can strengthen the conclusion: the (relative) rate of improvement towards federal budget balance is fairly constant from the 1960s to the present date ... independent of major federal fiscal policy changes.


...

Footnotes:

[1] The underlying information equilibrium model is GE ⇄ GR (expenditures are in information equilibrium with receipts, except during shocks).
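Spelled out a bit more, that relationship implies (away from shocks, and assuming receipts grow at a roughly constant rate):

$$
\frac{dGE}{dGR} = k\, \frac{GE}{GR} \;\;\Rightarrow\;\; GE \sim GR^{k} \;\;\Rightarrow\;\; \frac{d}{dt} \log \frac{GE}{GR} \approx (k - 1)\, \frac{d}{dt} \log GR \approx \text{const}
$$

which is the approximately constant (relative) rate of change of the ratio in the graphs above.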

Friday, May 5, 2017

Dynamic equilibrium in employment-population ratio in OECD countries

John Handley asks on Twitter whether the dynamic equilibrium model works for the employment-population ratio in countries other than the US. So I re-ran the model on some of the shorter OECD time series available on FRED.

As with the US, some countries seem to be undergoing a "demographic transition" with women entering the workforce, so most of the data sets are for men only (I ended up with both the men-only and the combined series for Greece). These are all for 15-64 year olds, in cases where there was data for at least 2000-2017. Some of the series only go back to 2004 or 2005, which is really too short to be conclusive. I left the longer time series for a later update because it was easy to automate the procedure for time series of approximately the same length.

Anyway, the men-only model country list is: Denmark, Estonia, Greece, Ireland, Iceland, Italy, New Zealand, Portugal, Slovenia, Turkey, and South Africa. Combined series (men and women) are included for: France, Greece (again), Poland, and Sweden. I searched FRED manually, so these are just the countries that came up.

Here are the results (some have 1 shock, some have 2):


What is interesting is that while the global financial crisis often seems to be conflated with the Greek debt crisis, the Greek debt crisis appears to have hit much later (centered at 2011.2). For example, the recession in Iceland is centered at 2008.7 (about 2.5 years earlier, closer to the recession center for the US).

...

Update:

Here are the results for Australia, Canada, and Japan which have longer time series:



"You're wrong because I define it differently"

There is a problem in the econoblogosphere, especially among heterodox approaches, where practitioners do not recognize that their approach is non-standard. I'm not trying to single out commenter Peiya, but this comment thread is a teachable moment, and I thought my response had more general application. 

Peiya started off saying:
Many economic theories are based on wrong interpretation on accounting identities and underlying data semantics.
and went on to talk about a term called "NonG". In a response to my question about the definitions of "NonG", Peiya responded:
Traditional definition of the "income accounting identity" (C+I+G = C + S + T or S-I = G-T) is widely-misused with implicit assumption NonG = 0.
So Peiya was using a different definition. My response is what I wanted to promote to a blog post (with one change to link to Paul Romer's blog post on Feynman integrity where I realize the direct quote uses the word "leaning" rather than "bending"):
For the purposes of this blog, we'll stick to the traditional definition unless there is e.g. a model of empirical data that warrants a change of definition. Changing definitions of accounting identities and saying "Many economic theories are based on wrong interpretation on accounting identities" is a bit disingenuous. 
Imagine if I said you were wrong because I define accounting identities as statistical equilibrium potentials? I could say that there is no entropic force associated with your "nonG" term, therefore you have a wrong interpretation of the accounting identities. 
But I don't say that. And you shouldn't say that about the "traditional" definition of accounting identities unless you have a really good reason backed up with some peer-reviewed research or at least open presentations of that research. 
You must always try to "[bend] over backwards" to consider the fact that you might be wrong. Or at least note when you are considering some definition that is non-standard that it is in fact non-standard. In my link above, I admit the approach is speculative. I say "At least if [the equation presented] is a valid way to build an economy." I recognize that it is a non-standard definition of the accounting identities. 
Saying people misunderstand a definition and then presenting a non-standard version of that definition is not maintaining the necessary integrity for intellectual discussion and progress.
I've encountered this many times where people basically assume their own approach is a kind of null hypothesis and other people are wrong because they didn't use the definitions of their model. Even economists with PhDs sometimes do this. However, "You're wrong because I define it differently" is not a valid argument, and it's even worse if you just say "You're wrong", leaving off the part about the definition because you assume everyone is using your definition for some reason. The only people who can assume other people are using their definition are mainstream economists, because that's the only way science and academia operate. The mainstream consensus is the default, and not recognizing the mainstream consensus or mainstream definitions is failing to lean over backwards and show Feynman integrity.

Commenter maiko followed up with something that is also a teachable moment:
maybe by nature he is just harsher on confused post keynesians and more compliant with asylum inmates.
By "he" maiko is referring to me, and by "asylum inmates", maiko is referring to mainstream economists (at least I think so).

And yes, that's exactly right. At least when it comes to definitions. There are thousands of books and thousands of education programs in the world teaching the mainstream approach to economics. Therefore mainstream economic definitions are the default. If you want to deviate from them, that's fine. However, because the mainstream definitions are the default you need to 1) say you are deviating from them, and 2) have a really good reason for doing so (preferably because it allows you to explain some empirical data).

Update:

In my Tweet of this post, I said that in order to have academic integrity, you must recognize the academic consensus. This has applications far beyond the econoblogosphere and basically sums up the problem with Charles Murray (failing to have academic integrity because he fails to recognize that the academic consensus is that his research is flawed) as well as Bret Stephens in the New York Times (in a twitter argument) who not only failed to recognize the scientific consensus but actually put false statements in his OpEd.

Thursday, May 4, 2017

Labor force dynamic equilibrium

Employment data comes out tomorrow and I have some forecasts that will be "marked to market" (here's the previous update). If the unemployment rate continues to fall, then we're probably not seeing the leading edge of a recession.

I thought I'd add a look at the civilian labor force with the dynamic equilibrium model:



In this picture, we have just two major events in the macroeconomy over the last ~70 years: women entering the workforce and the Great Recession (where people left the workforce). This is the same general picture as for inflation and output (see also here). Everything else is a fluctuation.

We'll get a new data point for this series tomorrow as well, so here's a zoomed-in version of the most recent data:

...

Update 5 May 2017

Here's that unemployment rate number. It's looking like the no-recession conditional forecast is the better one:


Tuesday, May 2, 2017

Mathiness in modern monetary theory


Simon Wren-Lewis sends us via Twitter to Medium for an exquisite example of my personal definition of mathiness: using math to obscure rather than enlighten.

Here's the article in a nutshell:
Any proposed government policy is challenged with the same question: “how are you going to pay for it”. 
The answer is: “by spending the money”.
Which may sound counter intuitive, but we can show how by using a bit of mathematics. 
[a series of mathematical definitions] 
And that is why you pay for government expenditure by spending the money [1]. The outlay will be matched by taxation and excess saving to the penny after n transactions. 
Expressing it using mathematics allows you to see what changing taxation rates attempts to do. It is trying to increase and decrease the magnitude of n — the number of transactions induced by the outlay. It has nothing to do with the monetary amount.
I emphasized a sentence that I will go back to in the end. But first let's delve into those mathematical definitions, shall we? And yes, almost every equation in the article is a definition. The first set of equations are definitions of initial conditions. The second is a definition of the relationship between $f$ and $T$ and $S$. The third set of equations define $T$. The fourth defines $S$. The fifth defines $r$. The sixth defines the domain of $f$, $T$, and $S$. Only the seventh isn't a definition. It's just a direct consequence of the previous six as we shall see.

The main equation defined is this:

$$
\text{(1) }\; f(t) \equiv f(0) - \sum_{i}^{t} \left( T_{i} + S_{i}\right)
$$

It's put up on the top of the blog post as if it's $S = k \log W$ on Boltzmann's grave. Already we've started some obfuscation because $f(0)$ is previously set to be $X$, but let's move on. What does this equation say? As yet, not much. For each $i < t$, we take a bite out of $f(0)$ that we arbitrarily separate into $T$ and $S$ which we call taxes and saving because those are things that exist in the real world and so their use may lend some weight to what is really just a definition that:

$$
K(t) \equiv M - N(t)
$$

In fact we can rearrange these terms and say:

$$
\begin{align}
f(t) \equiv & f(0) - \sum_{i}^{t} T_{i} -  \sum_{i}^{t} S_{i}\\
f(t) \equiv & M - T(t) -  S(t)\\
K(t) \equiv & M - N(t)
\end{align}
$$

As you can probably tell, this is about national income accounting identities. In fact, that is Simon Wren-Lewis's point. But let's push forward. The article defines $T$ in terms of a tax rate $0 \leq r < 1$ on $f(t-1)$. However, instead of defining $S$ analogously in terms of a savings rate $0 \leq s < 1$ on $f(t-1)$, the article obfuscates this as a "constraint"

$$
f(t-1) - T_{t} - S_{t} \geq 0
$$

Let's rewrite this with a bit more clarity using a savings rate, substituting the definition of $T$ in terms of a tax rate $r$:

$$
\begin{align}
f(t-1) - r_{t} f(t-1) - S_{t} & \geq 0\\
(1- r_{t}) f(t-1) - S_{t} & \geq 0\\
s_{t} (1- r_{t}) f(t-1) & \equiv S_{t} \; \text{given}\; 0 \leq s_{t} < 1
\end{align}
$$

Let's put both the re-definition of $T_{i}$ and this re-definition of $S_{i}$ in equation (1), where we can now solve the recursion and obtain

$$
f(t) \equiv f(0) \prod_{i}^{t} \left(1-r_{i} \right) \left(1-s_{i} \right)
$$

This equation isn't derived in the Medium article (and it really doesn't simplify the recursive equation without defining the savings rate). Note that both $s_{i}$ and $r_{i}$ are positive numbers less than 1. There's an additional definition that says that $r_{t}$ can't be zero for all times. Therefore the product of (one minus) those numbers is another number $0 < a_{i} < 1$ (my real analysis class did come in handy!) so what we really have is:

$$
\text{(2) }\; f(t) \equiv f(0) \prod_{i}^{t} a_{i}
$$

And as we all know, if you multiply a number by a number that is less than one, it gets smaller. If you do that a bunch of times, it gets smaller still.

In fact, that is the content of all of the mathematical definitions in the Medium post. You can call it the polite cheese theorem. If you put out a piece of cheese at a party, and if people take a non-zero fraction of it each half hour, the remaining piece gets smaller and smaller until eventually there is nothing left (i.e. somebody takes the last bit of cheese when it is small enough). Which is to say that for $t \gg 1$ (with dimensionless time), $X = T(t) + S(t)$ because $f(t) \to 0$.

But that's just an accounting identity and the article just obfuscated that fact by writing it in terms of a recursive function. Anyway, I wrote it all up in Mathematica in footnote [2]. 
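Here's a quick Python version of the same recursion (a sketch with made-up rates, not the Mathematica code in the footnote): whatever non-zero tax and saving rates you pick, the cumulative taxes plus saving converge to the initial outlay.

```python
X = 100.0          # initial government outlay f(0)
r, s = 0.2, 0.1    # made-up tax and saving rates per transaction round

f = X
T_total = S_total = 0.0
for _ in range(200):      # "n transactions"
    T = r * f             # taxes taken out of what's left
    S = s * (f - T)       # saving taken out of the after-tax remainder
    f -= T + S
    T_total += T
    S_total += S

print(f, T_total + S_total)  # f -> 0 and T_total + S_total -> X: the polite cheese theorem
```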

Now back to that emphasized sentence above:
Expressing it using mathematics allows you to see what changing taxation rates attempts to do.
No. No it doesn't. If I write $Y = C + S + T$ per the accounting identities, then a change in $T$ by $\delta T$ means [3]

$$
\text{(3) }\; \delta Y =  \left( \frac{\partial C}{\partial T}+ \frac{\partial S}{\partial T} + 1 \right) \delta T
$$

Does consumption rise or fall with increased taxation rates? Does saving rise or fall with increased taxation rate? Whatever the answer to those questions are, they are either models or empirical regularities. The math just helps you figure out the possibilities; it doesn't specify which occurs (for that you need data). The Medium article claims that all that changes is how fast $f(t)$ falls (i.e. the number of transactions before it reaches zero). However that's just the consequence of the assumptions leading to equation (2). And those assumptions represent assumptions about $\partial C/\partial T$ (and to a lesser extent $\partial S/\partial T$). Let's rearrange equation (3) and use $G = T + S$ [4]:

$$
\begin{align}
\delta Y = &  \frac{\partial C}{\partial T}\delta T + \frac{\partial S}{\partial T}\delta T  + \delta T \\
\delta Y = &  \frac{\partial C}{\partial T}\delta T + \frac{\partial G}{\partial T}\delta T \\
\delta Y = &  \frac{\partial C}{\partial T}\delta T + \delta G
\end{align}
$$

And there's where we see the obfuscation of the original prior. In the Medium article, $f(0) = X$ is first called the "initial government outlay". It's $\delta G$. However, later $f(t-1)$ is called "disposable income". That is to say it's $\delta Y - \delta T$. However, those two statements are impossible to reconcile with the accounting identities unless $X$ is the initial net government outlay, meaning it is $\delta G - \delta T$. In that case we can reconcile the statements, but only if $\partial C/\partial T = 0$ because we've assumed 

$$
\begin{align}
\delta Y - \delta T & = \delta G - \delta T\\
\delta Y & = \delta G
\end{align}
$$

This was a long journey to essentially arrive at the prior behind MMT: government spending is private income, and government spending does not offset private consumption. It was obfuscated by several equations that I clipped out of the quote at the top of this post. And you can see how that prior leads right to the "counterintuitive" statement at the beginning of the quote:
Any proposed government policy is challenged with the same question: “how are you going to pay for it”. 
The answer is: “by spending the money”.
Which may sound counter intuitive, but we can show how by using a bit of mathematics.
No, you don't need the mathematics. If government spending is private income, then (assuming there is only a private and a public sector) private spending is government "income" (i.e. paying the government outlay back by private spending).

Now is this true? For me, it's hard to imagine that $\partial C/\partial T = 0$ or $\delta Y = \delta G$ exactly. The latter is probably a good approximation (effective theory) at the zero lower bound or for low inflation (it's a similar result to the IS-LM model). For small taxation changes, we can probably assume $\partial C/\partial T \approx 0$. Overall, I have no real problem with it. It's probably not a completely wrong collection of assumptions.

What I do have a problem with, however, is the unnecessary mathiness. I think it's there to cover up the founding principle of MMT that government spending is private income. Why? I don't know. Maybe they don't think people will accept that government spending is their income (which could easily be construed as saying we're all on welfare)? Noah Smith called MMT a kind of halfway house for Austrian school devotees, so maybe there's some residual shame about interventionism? Maybe MMT people don't really care about empirical data, and so there's just an effluence of theory? Maybe MMT people don't want to say they're making unfounded assumptions just like mainstream economists (or anyone, really) and so hide them "chameleon model"-style a la Paul Pfleiderer.

Whatever the reason (I like the last one), all the stock-flow analysis, complex accounting, and details of how the monetary system works serve mainly to obscure the primary point that government spending is private income for us as a society. It's really just a consequence of the fact that your spending is my income and vice versa. That understanding is used to motivate a case against austerity: government cutting spending is equivalent to cutting private income. From there, MMT people tell us austerity is bad and fiscal stimulus is good. This advice is not terribly different from what Keynesian economics says. And again, I have no real problem with it.

I'm sure I will get some comments that say I've completely misunderstood MMT and that it's really about something else. If so, please don't forget to tell us all what that "something else" is. Still, the statement here that "money is a tax credit" plus accounting really does say, basically, that government spending is our income.

But with all the definitions and equations, it ends up looking and feeling like this:


There seems to be a substitution of mathematics for understanding. In fact, the Medium article seems to think the derivation it goes through is necessary to derive its conclusion. But how can a series of definitions lead to anything that isn't itself effectively a definition?

Let me give you an analogy. Through a series of definitions (which I have done as an undergrad math major in that same real analysis course mentioned above), I can come to the statement

$$
\frac{df(x)}{dx} = 0
$$

implies $x$ optimizes $f(x)$ (minimum or maximum). There's a bunch of set theory (Dedekind cuts) and some other theorems that can be proven along the way (e.g. the mean value theorem). This really tells us nothing about the real world unless we make some connection to it, however. For example, I could call $f(x)$ tax revenue and $x$ the tax rate, add some other definitions ($f(x) > 0$ except $f(0) = f(1) = 0$), and say that the Laffer curve is something you can clearly see if you just express it in terms of mathematics.

The thing is that the Laffer curve is really just a consequence of those particular definitions. The question of whether or not it's a useful consequence of those definitions depends on comparing the "Laffer theory" to data.

Likewise, whether or not "private spending pays off government spending" is a useful consequence of the definitions in the Medium article critically depend on whether or not the MMT definitions used result in a good empirical description of a macroeconomy.

Without comparing models to data, physics would just be a bunch of mathematical philosophy. And without comparing macroeconomic models to data, economics is just a bunch of mathematical philosophy.

...

Update 5 May 2017:

Here's a graphical depiction of the different ways an identity $G = B + R$ can change depending on assumptions. These would be good pictures to use to try and figure out which one someone has in their head. For example, Neil has the top-right picture in his head. The crowding out picture is the bottom-right. You could call the picture on the bottom-left a "multiplier" picture.


Update 6 May 2017: Fixed the bottom left quadrant of the picture to match the top right quadrant.

...

Footnotes:


[1] This is basically equivalent to what is done in the Medium article.

[2] Here you go:


[3] If someone dares to say something about discrete versus continuous variables I will smack you down with some algebraic topology [pdf].

[4] I think people who reason from accounting identities seem to make the same mistakes that undergrad physics students make when reasoning from thermodynamic potentials. Actually, in the information equilibrium ensemble picture this becomes a more explicit analogy.